
Self-Repairing and/or Buoyant Trust in Artificial Intelligence
Event Type
Lecture
Tracks
Human AI Robot Teaming (HART)
Time
Tuesday, October 11th, 11:30am - 11:45am EDT
Location
A706
Description
Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with the increasing volume of information available to it. Trust is a complex, dynamic phenomenon that drives the adoption (or disuse) of technology. We conducted a naturalistic study with intelligence professionals (planners, collectors, analysts, etc.) to understand the dynamics of trust in AI systems. We found that, on a long enough time scale, trust in AI self-repaired after incidents in which trust was lost, usually based merely on the assumption that the AI had improved since participants last interacted with it. Similarly, trust in AI continued to increase over time after incidents in which trust was gained. We termed this general trend “buoyant trust in AI”: trust in AI tends to rise over time, regardless of previous interactions with the system. Key findings are discussed, along with possible directions for future research.