Keyword search (4,164 papers available)
"reinforcement learning" Keyword-tagged Publications:
| # | Title | Authors | PubMed ID | Dept |
|---|---|---|---|---|
| 1 | Disentangling prediction error and value in a formal test of dopamine's role in reinforcement learning | Usypchuk AA; Maes EJP; Lozzi M; Avramidis DK; Schoenbaum G; Esber GR; Gardner MPH; Iordanova MD | 40738112 | CSBN |
| 2 | Comprehensive review of reinforcement learning for medical ultrasound imaging | Elmekki H; Islam S; Alagha A; Sami H; Spilkin A; Zakeri E; Zanuttini AM; Bentahar J; Kadem L; Xie WF; Pibarot P; Mizouni R; Otrok H; Singh S; Mourad A | 40567264 | ENCS |
| 3 | Machine learning innovations in CPR: a comprehensive survey on enhanced resuscitation techniques | Islam S; Rjoub G; Elmekki H; Bentahar J; Pedrycz W; Cohen R | 40336660 | ENCS |
| 4 | Computational neuroscience across the lifespan: Promises and pitfalls | van den Bos W; Bruckner R; Nassar MR; Mata R; Eppinger B | 29066078 | PSYCHOLOGY |
| 5 | Does phasic dopamine release cause policy updates? | Carter F; Cossette MP; Trujillo-Pisanty I; Pallikaras V; Breton YA; Conover K; Caplan J; Solis P; Voisard J; Yaksich A; Shizgal P | 38039083 | PSYCHOLOGY |
| 6 | Nonlinear dynamic modeling and model-based AI-driven control of a magnetoactive soft continuum robot in a fluidic environment | Moezi SA; Sedaghati R; Rakheja S | 37932207 | ENCS |
| 7 | Sub-hourly measurement datasets from 6 real buildings: Energy use and indoor climate | Sartori I; Walnum HT; Skeie KS; Georges L; Knudsen MD; Bacher P; Candanedo J; Sigounis AM; Prakash AK; Pritoni M; Granderson J; Yang S; Wan MP | 37153123 | ENCS |
| 8 | Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach | Pan J; Huang J; Cheng G; Zeng Y | 36375347 | ENCS |
| 9 | Trust-Augmented Deep Reinforcement Learning for Federated Learning Client Selection | Rjoub G; Wahab OA; Bentahar J; Cohen R; Bataineh AS | 35875592 | ENCS |
| 10 | Designing a hybrid reinforcement learning based algorithm with application in prediction of the COVID-19 pandemic in Quebec | Khalilpourazari S; Hashemi Doulabi H | 33424076 | ENCS |
| 11 | Cue-Evoked Dopamine Neuron Activity Helps Maintain but Does Not Encode Expected Value | Mendoza JA; Lafferty CK; Yang AK; Britt JP | 31693885 | CSBN |
| 12 | Metacontrol of decision-making strategies in human aging | Bolenz F; Kool W; Reiter AM; Eppinger B | 31397670 | PERFORM |
| 13 | Developmental Changes in Learning: Computational Mechanisms and Social Influences | Bolenz F; Reiter AMF; Eppinger B | 29250006 | PERFORM |
| Field | Value |
|---|---|
| Title: | Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach |
| Authors: | Pan J; Huang J; Cheng G; Zeng Y |
| Link: | https://pubmed.ncbi.nlm.nih.gov/36375347/ |
| DOI: | 10.1016/j.neunet.2022.10.022 |
| Publication: | Neural Networks: The Official Journal of the International Neural Network Society |
| Keywords: | Computational geometry; Mesh generation; Neural networks; Quadrilateral mesh; Reinforcement learning; Soft actor-critic |
| PMID: | 36375347 |
| Category: | |
| Date Added: | 2022-11-15 |
| Dept Affiliation: | ENCS |
| Affiliations: | 1. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, H3G 1M8, Quebec, Canada. 2. Department of Engineering Management & Systems Engineering, Old Dominion University, Norfolk, 23529, Virginia, United States. 3. Department of Engineering Mechanics, Dalian University of Technology, Dalian, 116023, Liaoning, China. 4. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, H3G 1M8, Quebec, Canada. Electronic address: yong.zeng@concordia.ca |
Description:

This paper proposes, implements, and evaluates a reinforcement learning (RL)-based computational framework for automatic mesh generation. Mesh generation plays a fundamental role in numerical simulations in computer-aided design and engineering (CAD/E) and is identified as one of the critical issues in the NASA CFD Vision 2030 Study. Existing mesh generation methods suffer from high computational complexity, low mesh quality on complex geometries, and speed limitations. These methods and tools, including commercial software packages, are typically semiautomatic and require input or assistance from human experts. By formulating mesh generation as a Markov decision process (MDP), the authors apply a state-of-the-art RL algorithm, "soft actor-critic" (SAC), to automatically learn a policy of mesh-generation actions from trial and error. This enables a fully automatic mesh generation system that requires no human intervention or extra clean-up operations, filling a gap in existing mesh generation tools. In experiments comparing against two representative commercial software packages, the system demonstrates promising performance with respect to scalability, generalizability, and effectiveness.
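The MDP formulation described above can be illustrated with a toy sketch. Everything below is hypothetical and greatly simplified relative to the paper's system: `QuadMeshEnv`, the scalar boundary state, the three-rule action set, and the quality reward are all invented for illustration, and a trained soft actor-critic policy would take the place of the simple `policy` callable.

```python
class QuadMeshEnv:
    """Hypothetical toy MDP for element-by-element quad mesh generation.

    The state is reduced to a count of unmeshed boundary vertices, the
    action selects one of three local element-forming rules, and the
    reward is a stand-in for an element-quality metric.
    """

    def __init__(self, n_boundary_vertices=20):
        self.n0 = n_boundary_vertices

    def reset(self):
        self.remaining = self.n0
        return self.remaining

    def step(self, action):
        # Each new quad element consumes part of the boundary; more
        # aggressive rules (larger action) mesh faster but produce
        # lower-quality elements, so the policy faces a trade-off.
        consumed = 1 + action              # action in {0, 1, 2}
        quality = 1.0 - 0.1 * action       # stand-in element-quality reward
        self.remaining = max(0, self.remaining - consumed)
        done = self.remaining == 0
        return self.remaining, quality, done


def run_episode(env, policy):
    """Roll out one meshing episode; a trained SAC policy would supply `policy`."""
    state = env.reset()
    total, done = 0.0, False
    while not done:
        state, reward, done = env.step(policy(state))
        total += reward
    return total
```

For example, `run_episode(QuadMeshEnv(), lambda s: 0)` always picks the highest-quality rule and finishes in more steps, while `lambda s: 2` finishes faster at lower total quality; learning which rule to apply in which state is what the actor-critic training would do.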