Accounting for Individual Cognition and Social Influence for Better Human-AI Decision Making
The ubiquitous presence of Artificial Intelligence (AI) technologies in daily life has given rise to a prevalent paradigm of human-AI decision-making (HAIDM), in which AI provides recommendations for a decision task while humans make the final decision. However, the performance of HAIDM often falls short of expectations. Existing research focuses primarily on enhancing the AI's capabilities and the presentation of its recommendations, addressing predominantly AI-related factors. This thesis argues that the under-exploration of human-centered and contextual factors in HAIDM hinders this human-AI collaboration paradigm from realizing its full potential. The dissertation addresses this limitation through two primary lines of investigation, both centered on one of the most critical concepts in human-AI interaction: reliance. The first line focuses on human decision-makers' cognitive processes and how they shape reliance on AI in HAIDM. The second line examines social influence and its impact on human reliance on AI. Each line begins by characterizing how specific factors affect reliance within HAIDM; building on these insights, each line then proposes designs of AI-based systems that influence HAIDM outcomes, from a human-centric perspective, by shaping human reliance on AI.
In the first line, we explore how people's cognitive factors affect their reliance on AI in HAIDM and how to influence HAIDM outcomes by taking these factors into account. We first experimentally identify human-AI agreement on high-confidence tasks as a key signal that people use as a proxy for gauging model performance and, thus, for calibrating their reliance on AI. We then introduce a hidden Markov model that effectively characterizes how people adjust their trust in, and hence reliance on, AI over repeated interactions. Building on these understandings, we design an adversarial attack framework that reveals how targeting high-confidence tasks can significantly erode human trust, and we propose an algorithm that strategically reduces reliance by inferring hidden trust states.
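The dissertation's hidden Markov model is not specified in this abstract, so the following is only a minimal sketch of the general idea, assuming two hidden trust states (low/high) and binary reliance observations. All parameter values and names (`filter_trust`, `trans_prob`, and so on) are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

# A minimal 2-state trust HMM: hidden states are 0 = low trust, 1 = high trust;
# the observation at each step is whether the person relied on the AI (1) or not (0).
# All parameter values are illustrative assumptions, not fitted to real data.
start_prob = np.array([0.5, 0.5])           # initial belief over trust states
trans_prob = np.array([[0.8, 0.2],          # P(next state | low trust)
                       [0.1, 0.9]])         # P(next state | high trust)
emit_prob = np.array([[0.7, 0.3],           # P(rely = 0, rely = 1 | low trust)
                      [0.2, 0.8]])          # P(rely = 0, rely = 1 | high trust)

def filter_trust(reliance_obs):
    """Forward-algorithm filtering: P(trust state | reliance decisions so far)."""
    belief = start_prob * emit_prob[:, reliance_obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for obs in reliance_obs[1:]:
        belief = (trans_prob.T @ belief) * emit_prob[:, obs]
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

# Example: a person relies on the AI at first, then stops relying on it.
for t, p in enumerate(filter_trust([1, 1, 0, 0, 0])):
    print(f"step {t}: P(high trust) = {p[1]:.2f}")
```

Filtering of this kind is also what makes the attack framework described above conceivable: an adversary who can infer the latent trust state can time interventions (e.g., on high-confidence tasks) to maximally shift it.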
In the second line, we investigate how social influences (opinions from individuals other than the primary decision-maker) affect human reliance on AI in HAIDM. Using AI-based credibility indicators in the spread of information as a case study, an experimental study shows that people rely on AI more when social influence is present than when it is absent, regardless of whether the AI is correct. A follow-up study examines more complex social influences coming from both laypeople peers and experts, finding that accurate AI indicators still help reduce the spread of misinformation, but their effects are moderated by contextual factors such as the agreement between AI and expert judgments. Reflecting on the mixed impacts of social influence on reliance, particularly when individuals hold opinions similar to the AI's, we propose an intervention that introduces second opinions sourced independently of the AI. Second opinions effectively reduce over-reliance, and allowing people to actively request second opinions with a high level of agreement further increases appropriate reliance, reducing over-reliance without increasing under-reliance.
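The notions of over-reliance, under-reliance, and appropriate reliance used above are commonly operationalized in HAIDM studies as, respectively, following the AI when it is wrong and rejecting it when it is right. The sketch below shows one such operationalization; the `Trial` schema and function names are hypothetical illustrations, not the thesis's actual measures.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One decision trial (hypothetical schema, for illustration only)."""
    ai_advice: int     # the AI's recommended decision
    human_final: int   # the human's final decision after seeing the advice
    truth: int         # the ground-truth correct decision

def reliance_rates(trials):
    """Over-reliance rate: fraction of wrong-AI trials where the human followed the AI.
    Under-reliance rate: fraction of correct-AI trials where the human rejected the AI."""
    wrong_ai = [t for t in trials if t.ai_advice != t.truth]
    right_ai = [t for t in trials if t.ai_advice == t.truth]
    over = sum(t.human_final == t.ai_advice for t in wrong_ai) / max(len(wrong_ai), 1)
    under = sum(t.human_final != t.ai_advice for t in right_ai) / max(len(right_ai), 1)
    return over, under

# An intervention "increases appropriate reliance" if it lowers the over-reliance
# rate without raising the under-reliance rate, relative to a control condition.
control = [Trial(1, 1, 0), Trial(1, 1, 1), Trial(0, 1, 0)]
print(reliance_rates(control))  # (1.0, 0.5)
```

Under this operationalization, the second-opinion intervention described above would be evaluated by comparing these two rates across conditions.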
These insights emphasize that enhancing decision-making with AI-assisted tools requires improvements not only in AI modeling but also from human-centered and contextual perspectives. This underscores the need to better understand and support how people interact with AI and how such interactions influence collaborative outcomes. Future research should explore human-AI collaboration within a broadly defined space of human-AI interaction, including longer-term engagement in the era of generative AI (GenAI), as well as social constraints such as the security and privacy of collaboration.
In conclusion, this research presents a comprehensive examination of HAIDM, offering new insights into human reliance on AI and strategies to enhance human-AI collaboration. By examining both the cognitive processes of human decision-makers and the social influences at play in HAIDM, this thesis makes a significant contribution to the field, paving the way for the design of more effective HAIDM systems.
Degree Type
- Doctor of Philosophy
Department
- Computer Science
Campus location
- West Lafayette