Test-time Approaches for Improving Robustness of Deep Learning Techniques to Dataset Shifts in Dynamic Cardiovascular MRI
Dynamic contrast-enhanced cardiac magnetic resonance imaging (DCE-CMRI) is an established medical imaging modality to detect stress-induced myocardial blood flow abnormalities. In recent years, deep learning has demonstrated significant potential in addressing inverse problems related to DCE-CMRI. However, ensuring the robustness and reliability of these deep neural network models remains an ongoing challenge. A key limitation is that deep neural networks trained on internal datasets often exhibit degraded performance when applied to external data. This performance degradation arises due to multiple factors, including differences in acquisition protocols, scanner field strengths, and patient demographics—phenomena collectively known as dataset shifts. Given the impracticality of accounting for every possible MRI hardware and software configuration between training and deployment, developing strategies for test-time adaptation is critical to enhancing model robustness.
In this dissertation, we introduce novel test-time approaches to improve the robustness of deep learning techniques to dataset shifts in solving DCE-CMRI inverse problems. Our contributions do not require modifications to model parameters and are fivefold: In Chapter 2, we devise a test-time uncertainty measure based on patch-level training for segmentation of DCE-CMRI and show its value in uncertainty-guided model selection. In Chapter 3, we extend the devised uncertainty metric to enable dynamic quality control and temporal uncertainty localization in a free-breathing segmentation model. We consider a human-in-the-loop framework under a limited correction budget, representing a practical clinical scenario. In Chapter 4, we incorporate additional domain knowledge into the model selection process based on DCE-CMRI kinetics and focus on a larger external dataset to demonstrate the robustness of the improved model selection framework. In Chapter 5, motivated by the need to leverage magnitude-only image databases for deep learning-based reconstruction, we employ generative diffusion models. Our technique focuses on conditional learning of a score-based diffusion model to synthesize phase maps from magnitude-only images. In Chapter 6, we assess the influence of field-of-view variations on evaluating reconstruction performance at test time, aiming for better interpretation of reconstructed image quality in the presence of dataset shifts, such as lower contrast doses and reduced field strengths.
Degree Type
- Doctor of Philosophy
Department
- Electrical and Computer Engineering
Campus location
- West Lafayette