posted on 2025-08-11, 11:51, authored by Tao Li
<p dir="ltr">As visual data proliferates in surveillance, social media, and machine learning, safeguarding individual privacy while preserving data utility has become an urgent challenge. This dissertation, <i>From Anonymous Faces to Provable Privacy</i>, traces a comprehensive progression in visual data de-identification—from early heuristic methods to rigorous privacy guarantees—spanning diverse representation spaces and biometric modalities. The first technical contribution introduces AnonymousNet, a structured face obfuscation framework operating in a discrete facial attribute space; it demonstrates that facial privacy can be quantified and manipulated while maintaining image realism. Next, DeepBlur offers a lightweight, practical latent-space obfuscation method with strong empirical performance, though without formal guarantees. The dissertation then advances a formal privacy definition tailored to single-image publication, proposing a latent-space mechanism that satisfies this definition under ε-differential privacy; this framework bridges empirical utility and worst-case privacy guarantees. To improve the visual fidelity of privatized images, CAGFace, a facial component-aware super-resolution module, is introduced. Finally, the formal privacy framework is extended beyond facial imagery to additional modalities, including gait, voice, and full-body appearance; in particular, the dissertation introduces the first provably private gait de-identification method based on skeleton data. Together, these contributions lay a theoretically grounded, practically relevant foundation for modality-agnostic, privacy-preserving data publishing.</p>
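The abstract does not spell out the dissertation's latent-space mechanism, but the general shape of an ε-differentially-private perturbation of a latent code can be illustrated with the standard Laplace mechanism applied coordinate-wise. This is a minimal sketch under assumed parameters (`epsilon`, a per-coordinate `sensitivity` bound, and the function names) — not the actual mechanism proposed in the dissertation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) as the difference of two Exp(1) draws."""
    # 1 - random.random() lies in (0, 1], so the logarithm is well-defined.
    e1 = -math.log(1.0 - random.random())
    e2 = -math.log(1.0 - random.random())
    return scale * (e1 - e2)

def privatize_latent(z: list[float], epsilon: float, sensitivity: float) -> list[float]:
    """Laplace mechanism on a latent vector: noise scale = sensitivity / epsilon.

    Hypothetical illustration only; a real image-publication mechanism must
    also bound the latent code's sensitivity and decode the noisy vector.
    """
    scale = sensitivity / epsilon
    return [zi + laplace_noise(scale) for zi in z]

random.seed(0)
z = [0.2, -1.1, 0.5]          # toy latent code
z_priv = privatize_latent(z, epsilon=1.0, sensitivity=1.0)
```

Smaller ε means a larger noise scale and stronger privacy at the cost of fidelity — the utility/privacy trade-off the abstract refers to.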