
Journal of System Simulation

Abstract

In emergency scenarios where ambulances are used to evacuate casualties, the rescue capacity of the ambulances must be fully coordinated with the real-time status of the casualties to achieve the best rescue results. Such problems are generally non-deterministic polynomial (NP) problems, for which traditional deterministic scheduling algorithms are less effective. Focusing on modeling the real-time regulation of ambulances in emergency scenarios, this paper proposes an online reinforcement learning DQN algorithm framework based on a data augmentation method and applies it to solving the ambulance regulation model. To address the poor repeatability of emergency scenarios and the slow agent training caused by the low accumulation rate of learning samples, a DA-DQN method is proposed that combines data augmentation with the traditional DQN algorithm. Simulation results show that several classical DQN methods can be trained online to obtain agents that achieve better scheduling results than deterministic algorithms: the treatment failure rate under the classical first-come, first-served scheduling algorithm is about 45.4%, whereas the treatment failure rate after DQN agent convergence is about 25%. The agent training speed of the DA-DQN method is much faster than that of the traditional DQN method, and the method has practical application potential in the regulation of emergency rescue operations.
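
To make the data-augmentation idea concrete, the following minimal Python sketch illustrates one way each real transition collected online could be expanded into several jittered copies before being stored in a DQN replay buffer, so that learning samples accumulate faster. It is an illustration only, not the paper's actual DA-DQN implementation; the state layout, noise model, and parameter values are assumptions made for this sketch.

import random
from collections import deque

import numpy as np


def augment_transition(state, action, reward, next_state, done,
                       n_copies=4, noise_scale=0.01, rng=None):
    # Return the original transition plus jittered copies; adding small
    # Gaussian noise to the continuous state features lets one real
    # interaction yield several training samples (hypothetical scheme).
    rng = rng or np.random.default_rng()
    transitions = [(state, action, reward, next_state, done)]
    for _ in range(n_copies):
        s_aug = state + rng.normal(0.0, noise_scale, size=state.shape)
        ns_aug = next_state + rng.normal(0.0, noise_scale, size=next_state.shape)
        transitions.append((s_aug, action, reward, ns_aug, done))
    return transitions


class ReplayBuffer:
    # Fixed-size buffer that stores every transition together with its
    # augmented copies, so the DQN learner sees more samples per step.
    def __init__(self, capacity=50_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        for t in augment_transition(state, action, reward, next_state, done):
            self.buffer.append(t)

    def sample(self, batch_size):
        idx = random.sample(range(len(self.buffer)), batch_size)
        states, actions, rewards, next_states, dones = map(
            np.array, zip(*(self.buffer[i] for i in idx)))
        return states, actions, rewards, next_states, dones


if __name__ == "__main__":
    buf = ReplayBuffer()
    s = np.zeros(8)                      # hypothetical 8-dimensional scenario state
    buf.add(s, 1, -0.1, s + 0.05, False)
    print(len(buf.buffer))               # 5: one real transition + 4 augmented copies

In an ambulance-scheduling setting, domain-specific augmentations (for example, permuting interchangeable casualties or vehicles) could be used instead of, or in addition to, simple noise injection; the buffer interface above would stay the same.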

First Page

584

Last Page

594

CLC

TP391.9

Recommended Citation

Zhang Lei, Zhang Xuechao, Wang Chao, et al. An Intelligent Ambulance Regulation Model Based on Online Reinforcement Learning Algorithm[J]. Journal of System Simulation, 2025, 37(3): 584-594.

DOI

10.16182/j.issn1004731x.joss.24-0098
