<p dir="ltr">This dissertation tackles the persistent challenge of simulation-to-reality (sim2real) transfer in computer vision for autonomous systems by developing a comprehensive synthetic vision pipeline. The work spans high-fidelity digital twin construction in Unreal Engine using photogrammetry, procedural modeling, and 3D Gaussian splatting, combined with a custom ROS2-integrated simulator for real-time control and sensor data capture.</p><p dir="ltr">Synthetic datasets generated with this pipeline are benchmarked with object detection models to assess how well training on them transfers to real-world imagery. A novel evaluation method is proposed based on foreground-focused image quality assessment (IQA) metrics, such as SSIM and CW-SSIM, which show strong predictive power for downstream performance on real-world tasks.</p><p dir="ltr">To enhance transferability, a two-stage genetic algorithm framework is introduced: the first stage (PreGA) optimizes rendering parameters inside the simulation to improve background realism, while the second stage (PoGA) applies image-level augmentations to improve foreground realism. Experimental results demonstrate substantial improvements in real-world classification accuracy (up to 36% with PreGA and 19% with PoGA), narrowing the sim2real gap and approaching the performance of models trained on real data.</p><p dir="ltr">Overall, this research provides a practical and effective methodology for improving and evaluating synthetic data utility, advancing the field of sim2real transfer for autonomous vision systems.</p>
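To make the foreground-focused IQA idea concrete, the sketch below computes a simplified, single-window SSIM restricted to a foreground mask. This is an illustrative approximation only: the `global_ssim` function, its `mask` argument, and the constants are assumptions for demonstration, not the dissertation's actual implementation, which uses windowed SSIM and CW-SSIM.

```python
import numpy as np

def global_ssim(x, y, mask=None, data_range=255.0):
    """Simplified SSIM over one global window, optionally restricted to a
    foreground mask. Hypothetical helper for illustration; real evaluations
    would use a windowed SSIM (e.g. scikit-image) or CW-SSIM."""
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    if mask is not None:
        # keep only foreground pixels (e.g. the mowed object of interest)
        x, y = x[mask], y[mask]
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Illustrative usage: identical foregrounds score 1.0; an inverted
# image scores lower, indicating a larger sim2real appearance gap.
synthetic = np.tile(np.arange(16, dtype=np.float64), (16, 1))
real = 255.0 - synthetic
fg_mask = synthetic > 5.0
print(global_ssim(synthetic, synthetic, mask=fg_mask))
print(global_ssim(synthetic, real, mask=fg_mask))
```

Restricting the metric to the foreground is the key design point the abstract describes: background realism is handled separately (via PreGA), so the IQA score is meant to track how closely the rendered objects themselves resemble their real counterparts.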
Funding
Autonomous Mower Pilot Project SPR-4702, Joint Transportation Research Program