Across a range of sectors, including transportation and healthcare, the use of automation to assist humans with increasingly complex tasks also demands that such systems become more interactive with their human users. Given the role of cognitive factors in human decision-making during interactions with automation, models that enable estimation and prediction of human cognitive states could be used by autonomous systems to adapt their behavior appropriately. However, this requires mathematical models of human cognitive state evolution that are suitable for algorithm design. In this thesis, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process (POMDP) that leverages behavioral and self-report data as observations for estimating the cognitive states. An asymmetric structure in the emission probability functions enables labeling and interpretation of the coupled cognitive states. The model is trained and validated on data collected from 340 participants. Analysis of the transition probabilities shows that the model captures nuanced effects of the combination of trust in the automation and self-confidence on participants' decisions to rely on an autonomous system. Implications for the design of human-aware autonomous systems are discussed, particularly in the context of calibrating human trust and self-confidence.
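As a rough illustration of the estimation machinery described above, the sketch below shows recursive Bayesian filtering over a discretized coupled (trust, self-confidence) state using only observed reliance decisions. All names, the two-level discretization, and the numerical values in the transition and emission matrices are hypothetical placeholders, not the thesis's learned parameters; the actual model also conditions on actions and incorporates self-report observations.

```python
import numpy as np

# Hypothetical discretization: trust and self-confidence each take "low"/"high"
# values, giving four coupled cognitive-state configurations.
N_TRUST, N_SC = 2, 2
N_STATES = N_TRUST * N_SC   # coupled (trust, self-confidence) states
N_OBS = 2                   # reliance decision: 0 = rely on self, 1 = rely on automation

rng = np.random.default_rng(0)

# Placeholder transition matrix T[s, s'] = P(s' | s) for one interaction step;
# in the thesis these probabilities are learned from participant data.
T = rng.dirichlet(np.ones(N_STATES), size=N_STATES)

# Placeholder emission matrix O[s, o] = P(o | s). An asymmetric structure
# (e.g., high trust predicts reliance much more strongly than low trust
# predicts self-reliance) is what allows the hidden states to be labeled.
O = np.array([
    [0.8, 0.2],   # low trust,  low self-confidence
    [0.9, 0.1],   # low trust,  high self-confidence
    [0.3, 0.7],   # high trust, low self-confidence
    [0.4, 0.6],   # high trust, high self-confidence
])

def belief_update(belief: np.ndarray, obs: int) -> np.ndarray:
    """One filtering step: predict with T, then correct with the likelihood O."""
    predicted = belief @ T              # prior over the next coupled state
    posterior = predicted * O[:, obs]   # weight by observation likelihood
    return posterior / posterior.sum()  # normalize to a probability vector

# Example: start from a uniform belief and filter a short sequence of
# reliance decisions to estimate the coupled cognitive state.
belief = np.full(N_STATES, 1.0 / N_STATES)
for obs in [1, 1, 0, 1]:
    belief = belief_update(belief, obs)
print("P(trust, self-confidence | observations):", np.round(belief, 3))
```

In a full human-aware autonomous system, a belief of this kind would feed a decision policy that adapts the automation's behavior to the estimated trust and self-confidence levels.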
Funding
CPS: Frontier: Collaborative Research: Cognitive Autonomy for Human CPS: Turning Novices into Experts
Directorate for Computer & Information Science & Engineering