Generative AI and Its Implications for Definitions of Trust
- Authors
- Marty J. Wolf, Frances Grodzinsky, and Keith W. Miller
- Subjects
- trust, e-trust, chatbots, generative artificial intelligence, Information technology, T58.5-58.64
- Abstract
In this paper, we undertake a critical analysis of how chatbots built on generative artificial intelligence affect assumptions underlying definitions of trust. We engage a particular definition of trust and the object-oriented model of trust built upon it, and we identify at least four implicit assumptions that may no longer hold: that people generally extend others a default level of trust, that a trusting agent can identify whether the trusted agent is human or artificial, that risk and trust can be readily quantified or categorized, and that agents engaged in trust relationships have no expectation of gain. Based on that analysis, we suggest modifications to the definition and model to accommodate the features of generative AI chatbots. Our changes re-emphasize developers’ responsibility for the impacts of their AI artifacts, no matter how sophisticated the artifact may be. The changes also reflect that trust relationships are more fraught when participants cannot confidently identify the nature of a potential trust partner.
- Published
- 2024