Can we make people trust Artificial Intelligence (AI) more using attachment security?
Artificial intelligence (AI) is everywhere. In a typical day, people likely use AI multiple times without even knowing it: voice assistants like Alexa and Siri, Google Maps, Uber and Lyft, autopilot on commercial flights, spam filters, smart email categorization (used by Gmail, Yahoo, and Office 365/Outlook), mobile check deposits, plagiarism checkers, online search, personalized recommendations, and platforms like Facebook, Instagram, and Pinterest all rely on AI.
But what happens when people are introduced to a new AI technology? How likely are they to trust it?
With an interdisciplinary team of researchers from the University of Kansas, we set out to answer these questions. The results are published in a new paper in the journal Computers in Human Behavior. We found that people's trust in AI is tied to their relationship or attachment style: people who are anxious about their relationships with humans tend to be less trusting of AI. Importantly, the research also suggests that trust in artificial intelligence can be increased by reminding people of their secure relationships with other humans.
Grand View Research estimated the global artificial intelligence market at $39.9 billion in 2019, and it is projected to expand at a compound annual growth rate of 42.2 percent from 2020 to 2027. However, lack of trust remains a key obstacle to adopting new AI technologies.
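To get a sense of what that growth rate implies, compound annual growth means the market is multiplied by the same factor each year. A back-of-the-envelope projection, assuming the $39.9 billion 2019 figure as the base and eight full years of compounding through 2027 (the report's own base year and rounding may differ):

$$39.9 \times (1 + 0.422)^{8} \approx 39.9 \times 16.7 \approx \$667 \text{ billion}$$

The exact figure depends on the base year chosen, but the order of magnitude makes clear why obstacles to adoption, such as lack of trust, carry real economic weight.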
Our new research suggests ways to boost trust in artificial intelligence.
In three studies, we first showed that attachment style, thought to play a central role in romantic and parent-child relationships, predicts people’s trust in artificial intelligence. In the next two studies, we exposed people to attachment-related cues (via priming or nudging) and showed that such exposure leads to changes in trust levels. Some of our main findings include:
- People’s attachment anxiety predicted lower trust in artificial intelligence.
- Enhancing attachment anxiety reduced trust in artificial intelligence.
- Conversely, enhancing attachment security increased trust in artificial intelligence.
- These effects were unique to attachment security and were not found with exposure to positive-affect cues.
Most research on trust in AI focuses on cognitive ways to boost trust. Here we took a different approach, focusing on a relational, affective route to boosting trust: seeing AI as a partner or a team member rather than a device. Finding associations between one's attachment style (an individual difference representing the way people feel, think, and behave in close relationships) and one's trust in AI paves the way to new understandings and potentially new interventions to induce trust. The findings show that you can predict and increase people's trust in non-humans based on their early relationships with humans. This has the potential to improve the adoption of new technologies and the integration of AI in the workplace.
The research team spans a wide array of disciplines, including psychology, engineering, business, and medicine. This interdisciplinary approach provides a new perspective on AI, trust, and their associations with relational and affective factors.
References
Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2020). Attachment and trust in artificial intelligence. Computers in Human Behavior, 106607. https://doi.org/10.1016/j.chb.2020.106607