Conference Papers, Year: 2023

Trust in Automation: Analysis and Model of Operator Trust in Decision Aid AI Over Time

Abstract

Understanding how human trust in AI evolves over time is essential to identify the limits of each party and to provide solutions for optimal collaboration. With this goal in mind, we examine the factors that directly or indirectly influence trust, whether they originate from the human, the AI, or the environment. We then summarize subjective and objective methods for measuring trust, highlighting which are best suited to longitudinal studies. Next, we focus on the main driving force behind the evolution of trust: feedback. We explain how feedback from learning can be transposed to trust, and which types of feedback can be applied to shape the evolution of trust over time. Having established which factors influence trust and how to measure it, we propose an application example based on a maritime surveillance tool with an AI-based decision aid.
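The abstract does not specify a computational trust-update rule. As a purely illustrative sketch of how feedback-driven trust evolution is often modeled in longitudinal settings, the snippet below uses a simple exponentially weighted update in which each interaction's feedback nudges the current trust level. The function name `update_trust`, the parameter `learning_rate`, and the update rule itself are assumptions chosen for illustration, not the authors' model.

```python
# Illustrative sketch only: a simple feedback-driven trust update, NOT the
# model proposed by the authors. Trust is kept in [0, 1] and moves toward
# each new feedback signal at a rate controlled by `learning_rate`.

def update_trust(trust: float, feedback: float, learning_rate: float = 0.2) -> float:
    """Move the current trust level toward the latest feedback value.

    trust:         current trust in the decision aid, in [0, 1]
    feedback:      outcome of the latest interaction, in [0, 1]
                   (e.g., 1.0 if the AI recommendation proved correct)
    learning_rate: how strongly a single interaction shifts trust
    """
    new_trust = trust + learning_rate * (feedback - trust)
    return min(1.0, max(0.0, new_trust))  # clamp to [0, 1]


if __name__ == "__main__":
    trust = 0.5  # neutral prior trust
    # Hypothetical sequence of interactions: 1.0 = correct AI advice, 0.0 = error
    for outcome in [1.0, 1.0, 0.0, 1.0, 1.0]:
        trust = update_trust(trust, outcome)
        print(f"feedback={outcome:.1f} -> trust={trust:.3f}")
```

Under such a rule, a single negative outcome lowers trust more sharply when trust is already high, which is one way longitudinal studies characterize trust decline and repair; the model actually proposed in the paper may differ.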
Main file: paper8.pdf (897.89 KB). Origin: Files produced by the author(s).

Dates and versions

hal-04328490, version 1 (07-12-2023)

Identifiers

  • HAL Id: hal-04328490, version 1

Cite

Vincent Fer, Daniel Lafond, Gilles Coppin, Mathias Bollaert, Olivier Grisvard, et al. Trust in Automation: Analysis and Model of Operator Trust in Decision Aid AI Over Time. Conference on Artificial Intelligence for Defense, DGA Maîtrise de l'Information, Nov 2023, Rennes, France. ⟨hal-04328490⟩