Intrusion Detection against Threats in V2X Networks using LLM as World Model
Connected and Automated Vehicles (CAVs) critically depend on secure Vehicle-to-Everything (V2X) communications for cooperative safety, yet they remain vulnerable to sophisticated security threats. Among these, position falsification attacks, in which malicious nodes deliberately falsify their location data, are considered among the most dangerous. Conventional rule-based and classical machine learning (ML) detectors frequently prove inadequate against contextually adaptive or temporally evolving attacks. This research therefore introduces the Reasoning via Planning (RAP) framework, which leverages Large Language Models (LLMs) as both integrated world models and reasoning agents. RAP incorporates Monte Carlo Tree Search (MCTS) to go beyond standard Chain-of-Thought (CoT) prompting: it generates alternative reasoning pathways, simulates future state trajectories, and refines reasoning steps under reward guidance, converging on high-quality reasoning paths through balanced exploration and exploitation. Comprehensive evaluation demonstrates RAP's superior detection efficacy against four supervised ML baselines and shows it outperforms LLaMA-33B with CoT reasoning, confirming the transformer architecture's capacity to capture long-range dependencies and subtle anomalies without manual feature engineering.
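The abstract describes MCTS balancing exploration and exploitation over candidate reasoning paths. A common selection rule for this trade-off (not specified in the abstract itself; shown here only as an illustrative assumption) is UCT, which scores each child node by its average reward plus a visit-count bonus. The node fields and function names below are hypothetical:

```python
import math

def uct_score(q_value, child_visits, parent_visits, c=1.414):
    # UCT (Upper Confidence bound applied to Trees): exploitation term
    # (average reward q_value) plus an exploration bonus that shrinks
    # as a child accumulates visits.
    if child_visits == 0:
        return float("inf")  # unvisited children are expanded first
    return q_value + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children):
    # children: list of dicts with accumulated "reward" and "visits"
    # (hypothetical bookkeeping for candidate reasoning steps).
    parent_visits = sum(ch["visits"] for ch in children) or 1
    return max(
        children,
        key=lambda ch: uct_score(
            ch["reward"] / ch["visits"] if ch["visits"] else 0.0,
            ch["visits"],
            parent_visits,
        ),
    )
```

In an RAP-style loop, `select_child` would be applied repeatedly from the root to pick the next reasoning step to expand, with rewards backed up along the selected path after each simulation.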
Degree Type
- Master of Science in Electrical and Computer Engineering
Department
- Electrical and Computer Engineering
Campus location
- West Lafayette