Purdue University Graduate School

Verifiable Autonomous Learning: Formal Specification, Synthesis, Runtime Verification, and Scoring for Trustworthy Autonomous Systems

thesis
posted on 2025-07-24, 13:36, authored by Zikang Xiong
Autonomous systems, from self-driving cars to mobile robots, are increasingly expected to operate safely and reliably in complex, dynamic environments. However, ensuring that these learning-enabled systems consistently meet their intended specifications while remaining robust against adversarial influences is one of the most pressing challenges in autonomous systems research. This thesis presents a line of work on verifiable autonomous learning that addresses this challenge through four complementary research directions: formal specification, synthesis, runtime verification, and scoring.

The central innovation of this work lies in establishing verifiability as a unifying foundation for trustworthy autonomous systems, bridging multiple aspects of autonomous system development. We demonstrate how formal methods can be leveraged to (1) define desired policy behaviors through temporal logic specifications in reinforcement learning, (2) provide runtime safety guarantees through learned Lyapunov functions, (3) create quantitative scoring functions for evaluating autonomous driving systems, and (4) model malicious behaviors to defend against sophisticated backdoor attacks. This unified approach enables the development of autonomous systems with strong theoretical foundations and practical applicability.

Our contributions span four key areas. First, in formal specification, we develop differentiable temporal logic semantics that enable direct integration of complex specifications into neural network training, significantly reducing sample complexity while ensuring specification compliance. Second, for synthesis, we present novel co-learning algorithms that jointly train planning and control policies under formal constraints, including a Twin Neural Lyapunov Function approach that provides stability guarantees for high-dimensional robotic systems. Third, in runtime verification, we establish continuous monitoring and verification techniques that use learned Lyapunov functions to provide safety guarantees during operation, creating dynamic regions of attraction that ensure collision-free navigation. Fourth, for scoring, we introduce learning-based critics that discover interpretable driving rules from demonstration data, providing explainable evaluation criteria for autonomous driving systems.

Beyond formal methods, this thesis also addresses critical vulnerabilities in learning-based autonomous systems. We investigate defense mechanisms against observation-space adversarial attacks through novel detection and denoising architectures, and we expose potential backdoor vulnerabilities in neural path planners while developing corresponding defense strategies.

The effectiveness of our approaches is demonstrated across diverse autonomous systems, from quadrupedal robots navigating cluttered environments to autonomous vehicles operating in real-world scenarios. Our experimental results show significant improvements in safety, specification compliance, and robustness over existing methods, while maintaining or improving performance in benign conditions.
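To make the first contribution concrete, here is a minimal sketch of what differentiable temporal logic semantics can look like in practice. This is an illustration under my own assumptions, not the thesis's actual formulation: it uses a quantitative (robustness) reading of the `always`/`eventually` operators over a discrete-time trajectory, smoothed with log-sum-exp so that gradients can flow back into policy training. The predicate, the temperature `temp`, and all function names here are hypothetical.

```python
import torch

def soft_max(x: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Smooth, differentiable stand-in for max (log-sum-exp).
    Approaches the hard max as temp grows."""
    return torch.logsumexp(temp * x, dim=-1) / temp

def soft_min(x: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Smooth stand-in for min, via -max(-x)."""
    return -soft_max(-x, temp)

def eventually(robustness: torch.Tensor) -> torch.Tensor:
    """Quantitative semantics of F(phi): the best margin phi
    achieves at any time step along the trajectory."""
    return soft_max(robustness)

def always(robustness: torch.Tensor) -> torch.Tensor:
    """Quantitative semantics of G(phi): the worst margin over time."""
    return soft_min(robustness)

# Toy predicate: "stay at least 0.5 m from an obstacle at the origin".
# positions: a (T, 2) trajectory produced by a differentiable rollout.
positions = torch.randn(50, 2, requires_grad=True)
margin = positions.norm(dim=-1) - 0.5      # > 0 where the predicate holds

# Training signal: maximize robustness of G(distance > 0.5),
# i.e. minimize its negation as a loss term.
loss = -always(margin)
loss.backward()                            # gradients reach the trajectory
```

The design point this sketch tries to convey is that a temporal logic specification becomes an ordinary differentiable loss term, which is what allows specifications to shape neural network training directly rather than serve only as post-hoc checks.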

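The synthesis and runtime verification contributions both rest on learned Lyapunov functions. The sketch below shows one standard way a neural Lyapunov candidate can be trained, again under illustrative assumptions rather than the thesis's Twin Neural Lyapunov Function construction: the network is shaped so that V(0) = 0 and V(x) >= 0 hold by construction, and a hinge loss penalizes sampled transitions that violate the decrease condition. The architecture, `margin`, and the stand-in dynamics are all hypothetical.

```python
import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    """Candidate Lyapunov function with V(0) = 0 and V(x) >= 0 by construction."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 16),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        zero = torch.zeros(1, x.shape[-1])
        # V(x) = ||phi(x) - phi(0)||^2: zero at the origin, nonnegative elsewhere.
        return (self.phi(x) - self.phi(zero)).pow(2).sum(-1)

def lyapunov_loss(V: LyapunovNet, x: torch.Tensor, x_next: torch.Tensor,
                  margin: float = 0.01) -> torch.Tensor:
    """Hinge penalty on violations of the discrete-time decrease condition
    V(x_next) - V(x) <= -margin * ||x||^2 over sampled transitions."""
    violation = V(x_next) - V(x) + margin * x.pow(2).sum(-1)
    return torch.relu(violation).mean()

# Toy usage: sampled states, with a contracting map standing in
# for the true closed-loop dynamics.
V = LyapunovNet(state_dim=4)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
x = torch.randn(256, 4)
x_next = 0.9 * x
loss = lyapunov_loss(V, x, x_next)
opt.zero_grad()
loss.backward()
opt.step()
```

In a runtime verification setting of the kind the abstract describes, a monitor would check the decrease condition on the deployed system online, rather than trusting the training loss alone, which is what turns a learned candidate into a runtime safety guarantee.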
History

Degree Type

  • Doctor of Philosophy

Department

  • Computer Science

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Suresh Jagannathan

Additional Committee Member 2

Lin Tan

Additional Committee Member 3

Xiangyu Zhang

Additional Committee Member 4

Ahmed Qureshi