ADVANCING THE SECURITY AND PRIVACY OF SOCIOTECHNICAL SYSTEMS: HUMAN-CENTERED APPROACHES TO STUDY THREAT ACTORS AND END-USER TRUST
Sociotechnical systems are broadly defined as systems that blend technological components with human elements, such as behaviors and mental models. As these systems integrate increasingly sophisticated technical components, such as extended reality and generative AI, they are seeing widespread adoption among end users. However, sociotechnical systems still face significant limitations in security, privacy, and user trust.
In this dissertation, I explore the security, privacy, and trust challenges that arise from user interactions with diverse sociotechnical systems, ranging from e-commerce and social media platforms to cutting-edge human-AI tools. My thesis highlights the significance of a mixed-methods approach that integrates computational, quantitative, and qualitative user-centered techniques. These methods allow me to develop a comprehensive understanding of user security, privacy, and trust, and to build solutions that address these challenges.
First, I describe my research that identifies the tactics and tools used by threat actors to generate revenue by exploiting sociotechnical systems. These efforts are guided by my application of computational ethnography to study online communities of threat actors. Specifically, I expose how threat actors on YouTube exploitatively monetize content while violating platform guidelines. I also investigate how abusive e-commerce vendors exploit and harm other sellers through deceptive business practices.
Second, I examine how abuse from threat actors can shape end users' security and privacy decisions and perceptions. Specifically, I use a mixed-methods approach to gather the perspectives of 68 refugees and the liaisons who work closely with them, revealing the impact of toxic content targeted at the refugee community.
Third, I evaluate the trustworthiness of sociotechnical systems, specifically whether these systems generate content that end users can trust. I develop two studies for this purpose. First, I design a study to assess the effectiveness of large language models as a source of security and privacy advice. Second, I evaluate how well online mental wellness content aligns with user expectations, through an analysis of those expectations and the design of a system that integrates NLP, signal processing, and formal methods.
Degree Type
- Doctor of Philosophy
Department
- Computer Science
Campus location
- West Lafayette