Purdue University Graduate School

INVESTIGATING HUMAN VALUES AND LINGUISTIC PATTERNS IN MENTAL HEALTH CHATBOTS

Thesis, posted on 2025-07-25, authored by Maleeha Sheikh
As access to traditional mental health services remains limited for many populations worldwide, artificial intelligence (AI)-powered chatbots offering emotional support have gained prominence as scalable, on-demand alternatives. These systems leverage natural language processing and affective computing techniques to engage users in emotionally responsive dialogue. Although such chatbots are becoming more common, enabling them to understand and respond to human emotions in a caring and trustworthy way remains a major challenge: beyond detecting emotions, they must act as supportive companions during times of emotional distress. This study presents an observational analysis of how mobile mental health chatbot applications express human values and linguistic patterns, and whether these align with the needs users express for effective emotional support, as reflected in user reviews. We analyzed three widely used mental health chatbots - Wysa, Sintelly, and Youper - through structured interactions and qualitative review data. Using a machine learning framework grounded in the Schwartz Theory of Basic Human Values, we assessed the extent to which these chatbots express core human values. We also employed a well-established linguistic analysis tool to evaluate the chatbots' language patterns and emotional authenticity, providing insight into how these systems simulate empathetic communication. Our findings suggest that the expression of certain human values, particularly Security and Benevolence, is integrated throughout the design of these mobile chatbot applications. User reviews, by contrast, emphasize Achievement and Security, values that are crucial for building user trust and satisfaction. Furthermore, the study reveals substantial variation in how chatbots express emotion through language, which directly affects user experience and therapeutic effectiveness.

These insights carry implications for developing more emotionally intelligent and ethically sound mental health chatbots. By aligning chatbot communication with users' psychological needs and value orientations, this research contributes to the advancement of AI-driven mental health support systems that are both accessible and emotionally resonant.

History

Degree Type

  • Master of Science in Engineering

Department

  • Electrical and Computer Engineering

Campus location

  • Fort Wayne

Advisor/Supervisor/Committee Chair

Chao Chen

Advisor/Supervisor/Committee co-chair

MD Romael Haque

Additional Committee Member 2

Michelle A. Drouin

Additional Committee Member 3

Claudio C. Freitas