
Help Them Understand: Testing and Improving Voice User Interfaces

Authors :
Guglielmi, Emanuela
Rosa, Giovanni
Scalabrino, Simone
Bavota, Gabriele
Oliveto, Rocco
Source :
ACM Transactions on Software Engineering & Methodology; Jul 2024, Vol. 33, Issue 6, p1-33, 33p
Publication Year :
2024

Abstract

Voice-based virtual assistants are becoming increasingly popular. Such systems provide frameworks that developers can use to build custom apps. End-users interact with these apps through a Voice User Interface (VUI), which lets them issue natural-language commands to perform actions. Testing such apps is not trivial: the same command can be expressed in many semantically equivalent ways. In this article, we introduce VUI-UPSET, an approach that adapts chatbot-testing techniques to VUI testing. We conducted an empirical study to understand how VUI-UPSET compares to two state-of-the-art approaches (i.e., a chatbot-testing technique and ChatGPT) in terms of (i) the correctness of the generated paraphrases and (ii) the capability of revealing bugs. To this aim, we analyzed 14,898 generated paraphrases for 40 Alexa skills. Our results show that VUI-UPSET generates more bug-revealing paraphrases than the two baselines, although ChatGPT generates the highest percentage of correct paraphrases. We also used the generated paraphrases to improve the skills, extending their voice interaction models with either (i) only the bug-revealing paraphrases or (ii) all the valid paraphrases. We observed that including only the bug-revealing paraphrases is sometimes not sufficient to make all the tests pass.
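
To make the testing idea concrete, the following minimal Python sketch illustrates paraphrase-based VUI testing as described in the abstract. It is not the authors' implementation: the InteractionModel class, the GetWeatherIntent name, the find_bug_revealing helper, and the sample utterances are all invented for illustration, and intent resolution is simplified to an exact lookup where a real assistant would apply NLU matching.

from __future__ import annotations

from dataclasses import dataclass

@dataclass
class InteractionModel:
    # Toy stand-in for a skill's voice interaction model: maps sample
    # utterances (lowercased) to intent names. Hypothetical, for illustration.
    utterance_to_intent: dict[str, str]

    def resolve(self, utterance: str) -> str | None:
        # Real assistants use NLU matching; exact lookup keeps the sketch simple.
        return self.utterance_to_intent.get(utterance.lower())

def find_bug_revealing(model: InteractionModel,
                       expected_intent: str,
                       paraphrases: list[str]) -> list[str]:
    # A paraphrase is bug-revealing if the model fails to map it to the
    # intent that the original command triggers.
    return [p for p in paraphrases if model.resolve(p) != expected_intent]

# Hypothetical weather skill that recognizes only one phrasing.
model = InteractionModel({"what is the weather": "GetWeatherIntent"})
paraphrases = [
    "what is the weather",        # original command
    "how is the weather",         # semantically equivalent paraphrase
    "tell me today's forecast",   # semantically equivalent paraphrase
]

for p in find_bug_revealing(model, "GetWeatherIntent", paraphrases):
    print(f"bug-revealing paraphrase: {p!r}")

Under these assumptions, the last two paraphrases would be flagged as bug-revealing, and the abstract's improvement step corresponds to adding such paraphrases back into the skill's interaction model.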

Details

Language :
English
ISSN :
1049-331X
Volume :
33
Issue :
6
Database :
Complementary Index
Journal :
ACM Transactions on Software Engineering & Methodology
Publication Type :
Academic Journal
Accession number :
178356400
Full Text :
https://doi.org/10.1145/3654438