
Journal of System Simulation

Abstract

General-purpose large language models lack training on X-language-specific corpora, and traditional fine-tuning methods are not adapted to the interdisciplinary integration and multi-module coupling of X language, leading to problems such as non-standard syntax and semantic deviation in generated code. To address these issues, this paper proposes a definition and integrated architecture for a large language model for X language simulation. Modeling subclasses are defined according to the disciplines and classes of X language, and a dedicated adapter is constructed for each subclass; by merging their weights at inference time, multi-domain modeling skills are incrementally integrated without updating the existing model parameters. The model's multi-disciplinary modeling comprehension is enhanced through data augmentation and chain-of-thought reasoning; code standardization is improved by combining syntax-rule constraints with reinforcement learning; and a multi-dimensional evaluation metric covering syntax, semantics, and simulation execution is constructed to measure model quality systematically. Experimental results show that the simplified X language simulation models generated under this framework consistently outperform those produced by general fine-tuning, validating the effectiveness of the scalable adapter-merging architecture and providing theoretical guidance and technical support for intelligent modeling and simulation driven by X language.
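The adapter-merging step described above can be sketched as follows. This is a minimal illustrative sketch only: the function name, the per-subclass merge coefficients, and the simple additive rule are assumptions for exposition, not the paper's actual formulation.

```python
# Hypothetical sketch of inference-time adapter merging: frozen base weights
# plus weighted per-subclass adapter deltas. All names and the additive
# merge rule are illustrative assumptions.

def merge_adapters(base_weights, adapters, alphas):
    """Merge subclass adapter deltas into a copy of the frozen base weights.

    base_weights: dict  param_name -> list of floats (base model, frozen)
    adapters:     dict  subclass   -> {param_name -> delta list}
    alphas:       dict  subclass   -> merge coefficient
    """
    # Copy so the base model's parameters are never updated.
    merged = {name: list(w) for name, w in base_weights.items()}
    for subclass, deltas in adapters.items():
        a = alphas.get(subclass, 1.0)
        for name, delta in deltas.items():
            merged[name] = [w + a * d for w, d in zip(merged[name], delta)]
    return merged

base = {"layer0": [1.0, 2.0]}
adapters = {
    "mechanics":  {"layer0": [0.1, 0.0]},   # delta from one modeling subclass
    "electrical": {"layer0": [0.0, 0.2]},   # delta from another subclass
}
merged = merge_adapters(base, adapters, {"mechanics": 1.0, "electrical": 1.0})
# merged["layer0"] is [1.1, 2.2]; base["layer0"] is still [1.0, 2.0]
```

Because merging only reads the frozen base parameters, a new discipline's adapter can be added incrementally without retraining or modifying the deployed model, which matches the scalability claim in the abstract.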

First Page

869

Last Page

888

CLC

TP391.9

Recommended Citation

Peng Laichunyang, Ye Fei, Guo Xiaoming, et al. Large Language Model for X Language Simulation: Architecture, Key Technologies, and Typical Applications[J]. Journal of System Simulation, 2026, 38(4): 869-888.

Corresponding Author

Ye Fei

DOI

10.16182/j.issn1004731x.joss.25-0905
