<p dir="ltr">Artificial Intelligence (AI) has become deeply integrated into modern systems, enhancing capabilities such as classification, authentication, and content generation. While these capabilities have brought significant advancements, the rapid adoption of AI has also introduced substantial privacy concerns. These concerns span three critical dimensions of the AI pipeline: training data, models, and AI-powered systems. This dissertation analyzes these privacy risks and proposes novel frameworks to uncover the potential vulnerabilities across all three areas.</p><p dir="ltr">To expose privacy leakage in training data, we design <i>Mirror</i>, a high-fidelity model inversion attack that reconstructs representative samples for target labels by optimizing in the latent space of generative models. <i>Mirror</i> achieves significantly improved performance compared to prior attacks, effectively exposing sensitive training data in both white-box and black-box settings, even for commercial services.</p><p dir="ltr">To examine the unauthorized use of private or copyrighted data, we introduce <i>Insight</i>, a framework that evaluates the robustness of protection techniques against generative models. <i>Insight</i> demonstrates that existing protections are vulnerable when subjected to physical-world distortions, challenging the current assumptions of digital-only defenses.</p><p dir="ltr">Regarding model privacy, we develop <i>Elijah</i>, the first data-free framework to detect and remove backdoor-based watermarks embedded in diffusion models. By identifying distribution shifts and crafting synthetic clean datasets, <i>Elijah</i> achieves near-perfect detection accuracy while preserving the model’s benign functionality.</p><p dir="ltr">Finally, to uncover vulnerabilities in AI-powered systems, we propose <i>ImU</i>, a physical impersonation attack that generates natural and consistent adversarial modifications that remain effective across various facial poses. <i>ImU</i> successfully bypasses face-recognition-based authentication systems in both white-box and black-box scenarios, exposing the real-world implications of model vulnerabilities.</p><p dir="ltr">Through these contributions, this dissertation highlights the pressing need to reconsider privacy safeguards in AI, accounting for not only cyberspace risks but also the complex physical-world factors inherent in practical deployments.</p>