
"Xu Y" Authored Publications:

1. Impact of COVID-19 on incidence and trends of adverse events among hospitalised patients in Calgary, Canada: a retrospective chart review study
   Wu G; Eastwood CA; Cheligeer C; Southern DA; Zeng Y; Ghali WA; Bakal JA; Boussat B; Flemons W; Forster A; Xu Y; Quan H
   PMID: 41592994 | Dept: CONCORDIA

2. Preprocessing narrative texts in electronic medical records to identify hospital adverse events: A scoping review
   Jafarpour H; Wu G; Cheligeer CK; Yan J; Xu Y; Southern DA; Eastwood CA; Zeng Y; Quan H
   PMID: 41072367 | Dept: ENCS

3. Two-dimensional nanosheets by liquid metal exfoliation
   Bai Y; Xu Y; Sun L; Ward Z; Wang H; Ratnayake G; Wang C; Zhao M; He H; Gao J; Wu M; Lu S; Bepete G; Peng D; Liu B; Kang F; Terrones H; Terrones M; Lei Y
   PMID: 39707650 | Dept: PHYSICS

4. Energy scheduling for DoS attack over multi-hop networks: Deep reinforcement learning approach
   Yang L; Tao J; Liu YH; Xu Y; Su CY
   PMID: 36848827 | Dept: ENCS

5. Artificial aging induced changes in biochar's properties and Cd2+ adsorption behaviors
   Wang Z; Bian Y; Xu Y; Zheng C; Jiang Q; An C
   PMID: 36251198 | Dept: ENCS

6. Developing EMR-based algorithms to identify hospital adverse events for health system performance evaluation and improvement: Study protocol
   Wu G; Eastwood C; Zeng Y; Quan H; Long Q; Zhang Z; Ghali WA; Bakal J; Boussat B; Flemons W; Forster A; Southern DA; Knudsen S; Popowich B; Xu Y
   PMID: 36197944 | Dept: ENCS

7. Transcriptomic analysis of 3D vasculature-on-a-chip reveals paracrine factors affecting vasculature growth and maturation
   Tan SY; Jing Q; Leung Z; Xu Y; Cheng LKW; Tam SST; Wu AR
   PMID: 36093896 | Dept: ENCS

8. Treatment of decentralized low-strength livestock wastewater using microcurrent-assisted multi-soil-layering systems: Performance assessment and microbial analysis
   Liu C; Huang G; Song P; An C; Zhang P; Shen J; Ren S; Zhao K; Huang W; Xu Y; Zheng R
   PMID: 34999101 | Dept: ENCS

 

Title: Energy scheduling for DoS attack over multi-hop networks: Deep reinforcement learning approach
Authors: Yang L; Tao J; Liu YH; Xu Y; Su CY
Link: https://pubmed.ncbi.nlm.nih.gov/36848827/
DOI: 10.1016/j.neunet.2023.02.028
Publication: Neural Networks: the official journal of the International Neural Network Society
Keywords: DoS attack; Dueling double Q-network; Kalman filtering; Markov decision process; Multi-hop networks
PMID: 36848827
Date Added: 2023-02-28
Dept Affiliation: ENCS
1 Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: yanglx@mail2.gdut.edu.cn.
2 Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: taojiedyx@163.com.
3 Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: yonghua.liu@outlook.com.
4 Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: xuyong809@163.com.
5 Department of Mechanical and Industrial Engineering, Concordia University, Montreal,

Description:

This paper studies energy scheduling for a Denial-of-Service (DoS) attack against remote state estimation over multi-hop networks. A smart sensor observes a dynamic system and transmits its local state estimate to a remote estimator. Because the sensor's communication range is limited, relay nodes are employed to deliver data packets from the sensor to the remote estimator, forming a multi-hop network. To maximize the estimation error covariance under an energy constraint, the DoS attacker must decide the energy level applied to each channel. This problem is formulated as a Markov decision process (MDP), and the existence of an optimal deterministic and stationary policy (DSP) for the attacker is proved. Moreover, the optimal policy is shown to have a simple threshold structure, which significantly reduces the computational complexity. Furthermore, a state-of-the-art deep reinforcement learning (DRL) algorithm, the dueling double Q-network (D3QN), is introduced to approximate the optimal policy. Finally, a simulation example illustrates the developed results and verifies the effectiveness of D3QN for optimal DoS attack energy scheduling.
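The two ingredients the abstract names in D3QN can be sketched in a few lines. This is a hypothetical pure-Python illustration (all numbers and function names are made up for this sketch, not taken from the paper): the dueling head aggregates a state value and per-action advantages as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), and the double-Q target lets the online network select the next action while the target network evaluates it, which curbs the overestimation bias of vanilla Q-learning.

```python
# Toy sketch of the two ideas behind D3QN; real use replaces these lists
# with neural-network outputs over the attacker's channel/energy actions.

def dueling_q(value, advantages):
    """Dueling head: combine scalar V(s) with per-action advantages A(s, .).
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-Q target: online net picks the argmax action, target net scores it."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[a_star]

print(dueling_q(1.0, [0.0, 2.0, 4.0]))                           # [-1.0, 1.0, 3.0]
print(double_q_target(1.0, 0.9, [0.5, 2.0], [3.0, 1.0], False))  # 1.9
```

Note that the dueling aggregation leaves the action ranking unchanged (it shifts all Q-values by the same constant), so greedy action selection, such as the attacker's per-channel energy choice here, depends only on the advantages.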





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026