A review of mathematical models of human trust in automation

Front Neuroergon. 2023 Jun 13;4:1171403. doi: 10.3389/fnrgo.2023.1171403. eCollection 2023.

Abstract

Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at understanding and examining it. Although researchers have been developing models of the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insight into the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches and their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel, dynamic approach to modeling trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Given the complex nature of trust in automation, it also suggests combining machine learning and dynamic modeling approaches and incorporating physiological data.
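To make the idea of a dynamic, multi-timescale trust model concrete, the sketch below shows one common first-order form in which a scalar trust state relaxes toward observed automation performance. This is an illustrative assumption, not the authors' specific formulation: the function `simulate_trust` and its parameters (`alpha_fast`, `alpha_slow`, `w_fast`) are hypothetical names chosen for this example, and the two learning rates stand in for the "different timescales" the abstract refers to.

```python
import numpy as np

def simulate_trust(performance, alpha_fast=0.5, alpha_slow=0.05,
                   w_fast=0.6, t0=0.5):
    """Illustrative two-timescale first-order trust update (not the
    paper's model).

    performance : sequence of automation performance observations in
                  [0, 1] (1 = success, 0 = failure).
    alpha_fast  : learning rate of a fast, experience-driven component.
    alpha_slow  : learning rate of a slow, dispositional component.
    w_fast      : weight of the fast component in overall trust.
    t0          : initial trust level for both components.
    """
    fast, slow = t0, t0
    trust = np.empty(len(performance))
    for i, p in enumerate(performance):
        # Each component relaxes toward the observed performance,
        # but at its own rate, i.e., on its own timescale.
        fast += alpha_fast * (p - fast)
        slow += alpha_slow * (p - slow)
        trust[i] = w_fast * fast + (1 - w_fast) * slow
    return trust

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 100 interactions with reliable automation (90% success rate),
    # plus a failure streak to show trust collapse and slow repair.
    perf = (rng.random(100) < 0.9).astype(float)
    perf[40:50] = 0.0
    print(np.round(simulate_trust(perf), 2))
```

In this toy form, the fast component captures moment-to-moment reliance adjustments after each success or failure, while the slow component captures a more stable disposition that recovers gradually after a failure streak; both states are directly computable from logged performance, in the spirit of the measurable components the abstract calls for.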

Keywords: decision-making; dynamical models; human-autonomy teaming; mathematical modeling; reliance; risk dynamics; trust; trust measures.

Publication types

  • Review

Grants and funding

This research was sponsored by the Army Research Office through Cooperative Agreement Number W911NF-20-2-0252, The James S. McDonnell Foundation Twenty-First Century Science Initiative in Studying Complex Systems Scholar Award (UHC Scholar Award 220020472), assistantships from Arizona State University, and the ASU GPSA Publication Fee Grant.