Evaluating emotional and subjective responses in synthetic art-related dialogues: A multi-stage framework with large language models
- Author
- Luna-Jiménez, Cristina; Gil-Martín, Manuel; D'Haro, Luis Fernando; Fernández-Martínez, Fernando; San-Segundo, Rubén
- Subjects
- Language models; Behavioral assessment; Text mining; Data mining; Capacity (Law); Chatbots
- Abstract
The emergence of Large Language Models (LLMs) has brought a qualitative step forward in the performance of conversational agents, and even in the generation of creative texts. However, previous applications of these models to dialogue generation neglected the impact of 'hallucinations' on synthetic dialogues, omitting this central aspect from their evaluations. For this reason, we propose GenEvalGPT, an open-source and flexible framework that implements a comprehensive multi-stage evaluation strategy using diverse metrics. The objective is two-fold: first, to assess the extent to which synthetic dialogues between a chatbot and a human align with the specified commands, determining whether these dialogues were successfully created from the provided specifications; and second, to evaluate various aspects of emotional and subjective responses.

Assuming that the dialogues to be evaluated were synthetically produced from specific profiles, the first evaluation stage uses LLMs to reconstruct the original templates employed in dialogue creation. The success of this reconstruction is then assessed in a second stage using lexical and semantic objective metrics. Crafting a chatbot's behaviors, in turn, demands careful consideration of the diverse range of interactions it is meant to engage in. Synthetic dialogues play a pivotal role in this context, as they can be deliberately synthesized to emulate various behaviors. This is precisely the objective of the third stage: evaluating whether the generated dialogues adhere to the required aspects of emotional and subjective response.

To validate the capabilities of the proposed framework, we applied it to recognize whether the chatbot exhibited one of two distinct behaviors in the synthetically generated dialogues: being emotional and providing subjective responses, or remaining neutral. This evaluation encompasses both traditional metrics and automatic metrics generated by the LLM. In our use case of art-related dialogues, our findings reveal that templates or profiles are recovered more effectively for profile items that are objective and factual than for those related to mental states or subjective facts. For the emotional and subjective behavior assessment, rule-based metrics achieved 79% accuracy in detecting emotions or subjectivity, and the LLM automatic metrics reached 82%. Combining these metrics and stages can help decide which of the generated dialogues should be kept under a given policy, with policies preserving between 57% and 93% of the initial dialogues.

• Large Language Models (LLMs) can generate and evaluate novel dialogues.
• Combining objective metrics with LLM self-reported metrics helps to assess quality.
• The proposed framework allows the generation and evaluation of synthetic dialogues.
• Framework use case: the emotional and subjective characteristics of chatbots.
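The second stage compares each LLM-reconstructed template against the original one with lexical and semantic metrics, and the combined scores then feed a retention policy. The abstract gives no code, so the sketch below is only a minimal illustration of that idea under stated assumptions: the function names, the choice of SequenceMatcher ratio and bag-of-words cosine as stand-ins for the paper's lexical and semantic metrics, and the 0.7 threshold are all illustrative, not the authors' actual GenEvalGPT configuration.

```python
# Illustrative sketch only: metric choices and the threshold are assumptions,
# not the metrics actually used by GenEvalGPT.
from collections import Counter
from difflib import SequenceMatcher
from math import sqrt


def lexical_similarity(original: str, reconstructed: str) -> float:
    """Character-level similarity between the original profile template and
    the template an LLM reconstructed from the synthetic dialogue."""
    return SequenceMatcher(None, original.lower(), reconstructed.lower()).ratio()


def bow_cosine(original: str, reconstructed: str) -> float:
    """Bag-of-words cosine similarity as a crude proxy for the semantic
    metrics applied in the framework's second stage."""
    a = Counter(original.lower().split())
    b = Counter(reconstructed.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def keep_dialogue(original: str, reconstructed: str, threshold: float = 0.7) -> bool:
    """Toy retention policy: keep a synthetic dialogue only if the profile
    template was recovered well enough on both metric families. Varying the
    threshold mimics the stricter or looser policies the paper mentions."""
    return (lexical_similarity(original, reconstructed) >= threshold
            and bow_cosine(original, reconstructed) >= threshold)


if __name__ == "__main__":
    original = ("The user is an art enthusiast who asks emotional, "
                "subjective questions about paintings.")
    reconstructed = ("The user is an art lover asking subjective, "
                     "emotional questions about paintings.")
    print(lexical_similarity(original, reconstructed))
    print(bow_cosine(original, reconstructed))
    print("keep:", keep_dialogue(original, reconstructed))
```

In practice the semantic side would likely rely on sentence embeddings rather than bag-of-words counts, and the lexical side on metrics such as BLEU or ROUGE; the point here is only the two-family comparison plus a tunable keep/discard policy.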
- Published
- 2024