Posted on 2021-08-04, 14:09 · Authored by Louis Hickman
Automated video interviews (AVIs) use machine learning algorithms to predict interviewee personality traits and social skills, and they are increasingly being used in industry. The present study examines the possibility of expanding the scope and utility of these approaches by developing and testing AVIs that score ability from interviewee verbal, paraverbal, and nonverbal behavior in video interviews. To advance our understanding of whether AVI ability assessments are useful, I develop AVIs that predict ability (general mental ability, verbal ability, and interviewer-rated intellect) and investigate their reliability (i.e., inter-algorithm reliability, internal consistency across interview questions, and test-retest reliability). Then, I investigate convergent and discriminant validity evidence as well as potential ethnic and gender bias in such predictions. Finally, based on the Brunswik lens model, I compare how ability test scores, AVI ability assessments, and interviewer ratings of ability relate to interviewee behavior. By exploring how ability relates to behavior and how ability ratings from both AVIs and interviewers relate to behavior, the study advances our understanding of how ability affects interview performance and the cues that interviewers use to judge ability.
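To make the three reliability indices named above concrete, the minimal Python sketch below shows one conventional way they could be computed for AVI ability scores. It is not the study's actual code: the data, array shapes, and use of Pearson correlations for inter-algorithm and test-retest reliability are illustrative assumptions.

```python
# Illustrative sketch (not the study's code): conventional computations for the
# three reliability indices described in the abstract, using hypothetical
# per-question machine-learning ability scores.
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency across interview questions.
    scores: 2D array of shape (n_interviewees, n_questions)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each question's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of interviewees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def pearson_r(x, y):
    """Pearson correlation, used here for inter-algorithm and test-retest reliability."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical data: 5 interviewees answering 4 questions, scored by two
# independently trained algorithms, and re-interviewed at a second time point.
rng = np.random.default_rng(0)
alg1_by_question = rng.normal(size=(5, 4))
alg2_by_question = alg1_by_question + rng.normal(scale=0.3, size=(5, 4))
time2_by_question = alg1_by_question + rng.normal(scale=0.5, size=(5, 4))

print("Internal consistency (alpha):", cronbach_alpha(alg1_by_question))
print("Inter-algorithm reliability:", pearson_r(alg1_by_question.mean(axis=1),
                                                alg2_by_question.mean(axis=1)))
print("Test-retest reliability:", pearson_r(alg1_by_question.mean(axis=1),
                                            time2_by_question.mean(axis=1)))
```

In this sketch, internal consistency treats each interview question's predicted score as an "item," while inter-algorithm and test-retest reliability correlate interviewees' mean scores across two algorithms or two occasions, respectively.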
Funding
AI-DCL: Collaborative Research: EAGER: Understanding and Alleviating Potential Biases in Large Scale Employee Selection Systems: The Case of Automated Video Interviews
Directorate for Computer & Information Science & Engineering