Journal of System Simulation
Abstract
The accelerated deployment of generative artificial intelligence, particularly large language models, has amplified the social risks of hallucinations, posing systemic threats to the credibility of the information ecosystem, the effectiveness of users’ cognitive decision-making, and governance security in the public domain. Existing research focuses primarily on hallucination-mitigation mechanisms at the technical level or on the design of regulatory frameworks at the policy level, and lacks a systematic theoretical analysis of the evolutionary logic of strategic interactions among large language models, users, and regulators under bounded rationality. By introducing evolutionary game theory into the governance of generative artificial intelligence, a tripartite dynamic game model integrating the honesty strategies of large language models, user feedback behaviors, and regulatory interventions was constructed. The model reveals the dynamic evolutionary paths of strategy selection and their stability conditions for multiple actors under cost-benefit trade-offs. The results show that, under reasonable parameters, the system can converge to the optimal equilibrium of “honest responding by large language models, active feedback from users, and proactive oversight by regulators”. Users’ initial willingness to provide positive feedback simultaneously accelerates the models’ shift toward honesty and intensifies regulatory responses through a dual signal effect. Incentive mechanisms exhibit asymmetric sensitivity: users are most sensitive to positive incentives, regulatory penalties impose rigid constraints on model compliance, and collaborative benefits play a stabilizing role over the long term. Accordingly, it is necessary to strengthen user feedback incentives, advance regulatory technology empowerment, and optimize institutional collaboration mechanisms. These measures aim to build a governance ecosystem characterized by tripartite collaboration, cost hedging, and risk sharing, thereby providing theoretical support and policy paths for building trustworthy AI and governing hallucinations.
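The tripartite dynamic described in the abstract can be illustrated with standard replicator dynamics, where x, y, and z denote the population shares playing "honest responding" (models), "active feedback" (users), and "proactive oversight" (regulators). The sketch below uses entirely hypothetical payoff parameters (reputation gain, penalty, feedback cost, etc.); the abstract does not report the paper's actual payoff matrix, so this only demonstrates the qualitative convergence claim, not the published model.

```python
# Minimal replicator-dynamics sketch of a tripartite evolutionary game.
# All payoff parameters below are illustrative placeholders, NOT values
# from the paper; they are chosen so the "reasonable parameters" case
# (convergence to honest/feedback/oversight) can be observed.

def replicator_step(x, y, z, dt=0.01):
    """One Euler step of the replicator equations.

    x: share of model providers playing "honest responding"
    y: share of users playing "active feedback"
    z: share of regulators playing "proactive oversight"
    """
    R, P, C_h = 2.0, 3.0, 0.5   # reputation gain, regulatory penalty, honesty cost
    B_f, I, C_f = 1.5, 0.6, 0.5 # user benefit from honest answers, incentive, feedback cost
    G, S, C_r = 2.0, 0.5, 1.0   # governance gain from feedback signals, baseline, oversight cost

    # Payoff advantage of each cooperative strategy over its alternative
    # (honesty pays more when users give feedback and regulators penalize;
    # oversight pays more when user feedback supplies signals).
    dU_model = R * y + P * z - C_h
    dU_user = B_f * x + I - C_f
    dU_reg = G * y + S - C_r

    x += dt * x * (1 - x) * dU_model
    y += dt * y * (1 - y) * dU_user
    z += dt * z * (1 - z) * dU_reg
    return x, y, z

def simulate(x0=0.2, y0=0.3, z0=0.2, steps=5000):
    """Iterate the dynamics from low initial cooperation levels."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = replicator_step(x, y, z)
    return x, y, z
```

Under these illustrative payoffs, all three shares drift toward 1, matching the abstract's "honest responding, active feedback, proactive oversight" equilibrium; raising the feedback cost `C_f` or cutting the penalty `P` can instead stall or reverse the trajectory, which is the kind of parameter sensitivity the paper analyzes.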
Recommended Citation
Yan, Qiang; Zhang, Qianyu; and Wei, Na (2026) "Evolutionary Game-based Analysis of Responses to Hallucinations in Generative Artificial Intelligence," Journal of System Simulation: Vol. 38: Iss. 2, Article 12.
DOI: 10.16182/j.issn1004731x.joss.25-0996
Available at: https://dc-china-simulation.researchcommons.org/journal/vol38/iss2/12
First Page: 399
Last Page: 415
CLC: TP391.9; C931.6
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Numerical Analysis and Scientific Computing Commons, Operations Research, Systems Engineering and Industrial Engineering Commons, Systems Science Commons