
Enabling Morally Sensitive Robotic Clarification Requests

Authors :
Jackson, Ryan Blake
Williams, Tom
Publication Year :
2020

Abstract

The design of current natural-language-oriented robot architectures enables certain architectural components to circumvent moral reasoning capabilities. One example of this is the reflexive generation of clarification requests as soon as referential ambiguity is detected in a human utterance. As shown in previous research, this can lead robots to (1) miscommunicate their moral dispositions and (2) weaken human perception or application of moral norms within their current context. We present a solution to these problems by performing moral reasoning on each potential disambiguation of an ambiguous human utterance and responding accordingly, rather than immediately and naively requesting clarification. We implement our solution in the DIARC robot architecture, which, to our knowledge, is the only current robot architecture with both moral reasoning and clarification request generation capabilities. We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.

Comment: Accepted for nonarchival presentation at Advances in Cognitive Systems (ACS) 2020
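The decision procedure the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the DIARC implementation: the function name, the tuple-based response codes, and the `is_permissible` predicate are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of moral reasoning over the disambiguations of an
# ambiguous command, performed *before* any clarification request is made.
# Not the actual DIARC architecture; names and interfaces are illustrative.

def respond_to_ambiguous_command(action, candidates, is_permissible):
    """Choose a response when `candidates` are the possible referents of an
    ambiguous command to perform `action`.

    `is_permissible(action, referent)` is a stand-in for the architecture's
    moral reasoning component.
    """
    permissible = [c for c in candidates if is_permissible(action, c)]
    if not permissible:
        # Every reading violates a norm: reject the command outright
        # rather than asking a clarification question that would implicitly
        # signal willingness to comply.
        return ("reject", None)
    if len(permissible) == 1:
        # Exactly one morally acceptable reading remains: act on it (or
        # confirm it) without legitimizing the impermissible readings.
        return ("act", permissible[0])
    # Multiple permissible readings remain: clarification is now safe,
    # since any answer leads to a permissible action.
    return ("clarify", permissible)
```

A naive architecture would jump straight to the `"clarify"` branch whenever `len(candidates) > 1`; the sketch above shows how filtering through the moral reasoner first avoids miscommunicating the robot's moral dispositions.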

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2007.08670
Document Type :
Working Paper