1. Modeling and Validation of Biased Human Trust
- Author
-
Hoogendoorn, M., Jaffry, S.W., van Maanen, P., Treur, J. (Artificial Intelligence, Network Institute, Social AI)
- Subjects
Intelligent agent, Empirical data, Computer science, Multi-agent system, Rationality, Cognition, Artificial intelligence, Data science
- Abstract
When considering intelligent agents that interact with humans, having an estimate of the human's trust levels, for example in other agents or services, can be of great importance. Most existing models of human trust are based on some rationality assumption and do not represent biased behavior, whereas a vast literature in the Cognitive and Social Sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how several variations of biased human trust models have been designed, analyzed, and validated against empirical data. The results show that such biased trust models predict human trust significantly better than their unbiased counterparts. © 2011 IEEE.
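To illustrate the general idea of a biased trust model, the sketch below shows a toy experience-driven trust update in Python in which a bias parameter makes positive and negative experiences count asymmetrically. The function name, parameters, and update rule are illustrative assumptions for this listing, not the model equations published in the paper.

```python
# Illustrative sketch only: a simple experience-driven trust update with a
# bias parameter. This is an assumed toy formulation, not the paper's
# published biased trust model.

def update_trust(trust, experience, rate=0.2, bias=0.5):
    """One discrete-time trust update.

    trust      -- current trust level in [0, 1]
    experience -- observed outcome in [0, 1] (1 = fully positive)
    rate       -- learning rate (speed of adaptation)
    bias       -- in [0, 1]; values > 0.5 overweight positive evidence,
                  values < 0.5 overweight negative evidence
    """
    # Weight the experience asymmetrically depending on whether it is
    # better or worse than the current trust level, so positive and
    # negative evidence are not treated symmetrically (the "bias").
    weight = bias if experience >= trust else (1.0 - bias)
    return trust + rate * weight * (experience - trust)


# Example: a negatively biased human (bias < 0.5) loses trust faster
# after bad experiences than they regain it after good ones.
t = 0.8
for e in [0.1, 0.1, 0.9, 0.9]:
    t = update_trust(t, e, rate=0.3, bias=0.3)
    print(round(t, 3))
```

Under such a formulation, the bias parameter could in principle be fitted per participant against empirical trust data, which is the kind of validation the abstract describes at a high level.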
- Published
- 2011