Estimating Trust in Conversational Agent with Lexical and Acoustic Features
Event Type: Virtual Program Session
Human AI Robot Teaming (HART)
Time: Wednesday, October 12th, 1:45pm - 2:00pm EDT
Description: As NASA moves toward long-duration space exploration operations, there is an increasing need for human-agent cooperation, which requires real-time trust estimation by virtual agents. Our objective was to estimate trust from conversational data, including lexical and acoustic features, using machine learning. A 2 (reliability) × 2 (cycles) × 3 (events) within-subject study was designed to provoke various levels of trust. Participants had trust-related conversations with a conversational agent at the end of each event. To estimate trust, subjective trust ratings were predicted using machine learning models trained on three types of conversational features (i.e., lexical, acoustic, and combined). Results showed that a random forest model trained on the combined lexical and acoustic features best predicted trust in the conversational agent (adjusted R² = 0.67). Comparing models, we showed that trust is reflected not only in lexical cues but also in acoustic cues. These results show the possibility of using conversational data to measure trust unobtrusively and dynamically.
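The modeling approach described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature matrices, dimensionalities, and synthetic trust ratings are all hypothetical stand-ins, and only the general recipe (random forest regression on combined lexical and acoustic features, evaluated with adjusted R²) follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in practice, lexical features would come from
# transcripts (e.g., word-category counts) and acoustic features from the
# audio signal (e.g., pitch, energy); trust ratings from questionnaires.
n = 200
lexical = rng.normal(size=(n, 5))
acoustic = rng.normal(size=(n, 4))
trust = lexical[:, 0] + 0.5 * acoustic[:, 0] + rng.normal(scale=0.1, size=n)

# Combined feature set: concatenate lexical and acoustic features.
X = np.hstack([lexical, acoustic])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], trust[:150])

# Evaluate on held-out samples with adjusted R², which penalizes
# the plain R² for the number of predictors p.
pred = model.predict(X[150:])
r2 = r2_score(trust[150:], pred)
n_test, p = X[150:].shape
r2_adj = 1 - (1 - r2) * (n_test - 1) / (n_test - p - 1)
print(f"adjusted R² = {r2_adj:.2f}")
```

Swapping in the lexical-only or acoustic-only feature matrix for `X` yields the model comparison described in the abstract.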