1. Factors in Crowdsourcing for Evaluation of Complex Dialogue Systems
- Authors
Aicher, Annalena; Hillmann, Stefan; Feustel, Isabel; Michael, Thilo; Möller, Sebastian; Minker, Wolfgang
- Subjects
Computer Science - Human-Computer Interaction
- Abstract
In the last decade, crowdsourcing has become a popular method for conducting quantitative empirical studies in human-machine interaction. Working remotely on a given task in a crowdworking setting suits the character of typical speech- and language-based interactive systems, for instance with regard to argumentative conversations and information retrieval. Crowdworking thus promises a valuable opportunity to study and evaluate the usability and user experience of such interactive systems with real users. In contrast to laboratory studies requiring physical attendance, crowdsourcing studies offer much more flexible and easier access to large numbers of heterogeneous participants with a specific background, e.g., native speakers or domain experts. On the other hand, the experimental and environmental conditions, as well as the participants' compliance and reliability (or at least the monitoring of the latter), are much easier to control in a laboratory. This paper presents a (self-)critical examination of crowdsourcing-based studies in the context of complex (spoken) dialogue systems. It describes and discusses issues observed in crowdsourcing studies involving complex tasks and suggests solutions to improve and ensure the quality of the study results. Thereby, our work contributes to a better understanding of what needs to be considered when designing and evaluating studies with crowdworkers for complex dialogue systems.
- Comment
14 pages, accepted to the 13th International Workshop on Spoken Dialogue Systems (IWSDS), Los Angeles, USA, February 21--24, 2023
- Published
2024