Enhancing Human-AI Collaboration in AI-Assisted Decision-Making for Individuals and Groups
AI-assisted decision-making systems are increasingly integrated into domains ranging from individual to collective decision-making. However, their effectiveness depends on how well users navigate AI limitations, particularly when models exhibit systematic errors under distribution shifts. Although AI tools aim to improve decision quality, users may struggle to calibrate their reliance on them effectively. This challenge is even more complex in group decision-making, where social dynamics further shape reliance on AI. Despite the widespread adoption of AI in both individual and group settings, little is known about how decision-makers adjust their reliance in response to AI failures. This dissertation investigates: (1) How do individuals interact with AI-assisted tools, particularly when faced with distribution shifts? Can external interventions help individuals rely on these tools appropriately without modifying the AI model? (2) How do group dynamics shape AI reliance compared to individual interactions, and can external interventions reduce the negative effects of group dynamics to promote critical engagement with AI recommendations?
Through human-subject experiments in house price prediction (individuals) and recidivism prediction (groups), we find that individuals, despite recognizing distribution shifts, tend to over-rely on AI because they overestimate its capabilities. Groups exhibit even higher reliance, likely driven by groupthink and a desire to avoid conflict, as they treat AI recommendations as an anchor. Although group decisions are generally less biased than individual ones, groups' increased dependence on flawed AI raises concerns about the robustness of their decisions.
To mitigate these issues, we propose two external interventions: (1) hands-on user education, where individuals explore AI limitations in an interactive sandbox, and (2) an LLM-powered devil's advocate that challenges groupthink by prompting critical discussions. The interactive sandbox allows users to engage with data examples and model behaviors, helping them better understand AI limitations and reduce over-reliance. Meanwhile, the LLM-powered devil's advocate fosters deeper deliberation, helping groups critically evaluate AI recommendations and develop more appropriate reliance on AI, ultimately improving group-AI collaborative decision-making.
These insights emphasize that enhancing decision-making with AI-assisted tools requires not only improving AI models but also refining how humans interact with them. This underscores the importance of understanding and improving how individuals and groups develop appropriate reliance on AI and make decisions based on AI-assisted recommendations. Future research should explore personalized AI interventions that address confidence miscalibration, scalable AI literacy training through adaptive, model-specific tutorials, group-AI interaction in more complex group settings, and LLM-powered facilitators that enhance group deliberation and decision-making. Additionally, future work should ensure that AI-driven decision support remains transparent, accountable, and capable of fostering long-term improvements in human reasoning.
Degree Type
- Doctor of Philosophy
Department
- Consumer Science
Campus location
- West Lafayette