"Yang L" Authored Publications:
| # | Title | Authors | PubMed ID | Dept |
|---|---|---|---|---|
| 1 | Identifying personalized barriers for hypertension self-management from TASKS framework | Yang J; Zeng Y; Yang L; Khan N; Singh S; Walker RL; Eastwood R; Quan H | 39143621 | ENCS |
| 2 | 1T-2H Mixed-Phase MoS2 Stabilized with a Hyperbranched Polyethylene Ionomer for Mg2+/Li+ Co-Intercalation Toward High-Capacity Dual-Salt Batteries | Rahmatinejad J; Raisi B; Liu X; Zhang X; Sadeghi Chevinli A; Yang L; Ye Z | 37691015 | ENCS |
| 3 | Energy scheduling for DoS attack over multi-hop networks: Deep reinforcement learning approach | Yang L; Tao J; Liu YH; Xu Y; Su CY | 36848827 | ENCS |
| 4 | Design Principles in mHealth Interventions for Sustainable Health Behavior Changes: Protocol for a Systematic Review | Yang L; Kuang A; Xu C; Shewchuk B; Singh S; Quan H; Zeng Y | 36811938 | ENCS |
| 5 | A Proposed Multi-Criteria Optimization Approach to Enhance Clinical Outcomes Evaluation for Diabetes Care: A Commentary | Wan TTH; Matthews S; Luh H; Zeng Y; Wang Z; Yang L | 35372638 | ENCS |
| Title: | Energy scheduling for DoS attack over multi-hop networks: Deep reinforcement learning approach |
| Authors: | Yang L, Tao J, Liu YH, Xu Y, Su CY |
| Link: | https://pubmed.ncbi.nlm.nih.gov/36848827/ |
| DOI: | 10.1016/j.neunet.2023.02.028 |
| Publication: | Neural Networks: The Official Journal of the International Neural Network Society |
| Keywords: | DoS attack; dueling double Q-network; Kalman filtering; Markov decision process; multi-hop networks |
| PMID: | 36848827 |
| Date Added: | 2023-02-28 |
| Dept Affiliation: | ENCS |

Author affiliations:

1. Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: yanglx@mail2.gdut.edu.cn.
2. Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: taojiedyx@163.com.
3. Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: yonghua.liu@outlook.com.
4. Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: xuyong809@163.com.
5. Department of Mechanical and Industrial Engineering, Concordia University, Montreal,
Description:

This paper studies energy scheduling for Denial-of-Service (DoS) attacks against remote state estimation over multi-hop networks. A smart sensor observes a dynamic system and transmits its local state estimate to a remote estimator. Because the sensor's communication range is limited, relay nodes deliver the data packets from the sensor to the remote estimator, forming a multi-hop network. To maximize the estimation error covariance under an energy constraint, the DoS attacker must determine the energy level applied to each channel. This problem is formulated as a Markov decision process (MDP), and the existence of an optimal deterministic and stationary policy (DSP) for the attacker is proved. Moreover, the optimal policy is shown to have a simple threshold structure, which significantly reduces the computational complexity. A state-of-the-art deep reinforcement learning (DRL) algorithm, the dueling double Q-network (D3QN), is then introduced to approximate the optimal policy. Finally, a simulation example illustrates the developed results and verifies the effectiveness of D3QN for optimal DoS attack energy scheduling.
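The error dynamics the attacker exploits can be illustrated with a minimal sketch (hypothetical scalar parameters, not the paper's code): when a packet is jammed, the remote estimator must predict open-loop, so its error covariance follows the standard Kalman prediction recursion P ← a²P + q, while a delivered packet resets P to the covariance of the sensor's local estimate.

```python
# Hypothetical illustration of remote state estimation under DoS jamming.
# a, q, P_local are made-up values, not parameters from the paper.
a, q = 1.2, 0.5     # unstable scalar dynamics and process-noise variance
P_local = 1.0       # steady-state covariance of the sensor's local estimate

def covariance_trajectory(attack_schedule):
    """Remote error covariance per step; attack_schedule[k] True = packet jammed."""
    P, traj = P_local, []
    for jammed in attack_schedule:
        # Lost packet: open-loop prediction P <- a^2 P + q.
        # Received packet: covariance resets to the local estimate's.
        P = a * a * P + q if jammed else P_local
        traj.append(P)
    return traj

no_attack = covariance_trajectory([False] * 10)
full_attack = covariance_trajectory([True] * 10)
print(no_attack[-1], full_attack[-1])  # sustained jamming inflates the error
```

For an unstable system (|a| > 1), the covariance grows without bound under sustained jamming, which is why the attacker's energy allocation across channels matters and why the problem admits an MDP formulation.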



