34 results for "Kopp, Stefan"
Search Results
2. Modeling the Semantic Coordination of Speech and Gesture under Cognitive and Linguistic Constraints
- Author
-
Bergmann, Kirsten, Kahl, Sebastian, Kopp, Stefan, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Aylett, Ruth, editor, Krenn, Brigitte, editor, Pelachaud, Catherine, editor, and Shimodaira, Hiroshi, editor
- Published
- 2013
- Full Text
- View/download PDF
3. The Relation Between Cognitive Abilities and the Distribution of Semantic Features Across Speech and Gesture in 4‐year‐olds.
- Author
-
Abramov, Olga, Kern, Friederike, Koutalidis, Sofia, Mertens, Ulrich, Rohlfing, Katharina, and Kopp, Stefan
- Subjects
REASONING in children, SPEECH & gesture, COGNITIVE ability, MENTAL representation, INDIVIDUAL differences, GESTURE - Abstract
When young children learn to use language, they start to use their hands in co-verbal gesturing. There are, however, considerable differences between children, and it is not completely understood what these individual differences are due to. We studied how children at 4 years of age employ speech and iconic gestures to convey meaning in different kinds of spatial event descriptions, and how this relates to their cognitive abilities. Focusing on spontaneous illustrations of actions, we applied a semantic feature (SF) analysis to characterize combinations of speech and gesture meaning and related them to the child's visual-spatial abilities or abstract/concrete reasoning abilities (measured using the standardized SON-R 2½-7 test). Results show that children with higher cognitive abilities convey significantly more meaning via gesture and less solely via speech. These findings suggest that young children's use of co-speech representational gesturing is positively related to their mental representation and reasoning abilities. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
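The semantic feature (SF) analysis described in the abstract above lends itself to a small illustration. The following Python sketch is a hypothetical reconstruction, not the authors' coding scheme: it merely classifies each semantic feature of an event description by whether a child expressed it in speech only, in gesture only, or in both channels.

```python
# Hypothetical sketch of a semantic feature (SF) distribution analysis.
# Feature names and data are invented; the paper's coding scheme is richer.

def sf_distribution(speech_feats, gesture_feats):
    """Classify each expressed semantic feature by the channel carrying it."""
    speech, gesture = set(speech_feats), set(gesture_feats)
    return {
        "speech_only": speech - gesture,
        "gesture_only": gesture - speech,
        "both": speech & gesture,
    }

# One child's description of a spatial event (invented example):
# speech names the entity and the path, gesture adds manner and direction.
dist = sf_distribution(
    speech_feats={"entity", "path"},
    gesture_feats={"path", "manner", "direction"},
)
for channel, feats in dist.items():
    print(f"{channel}: {sorted(feats)}")
```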
4. Pragmatic multimodality: Effects of nonverbal cues of focus and certainty in a virtual human
- Author
-
Freigang, Farina, Klett, Sören, Kopp, Stefan, Beskow, J., Peters, C., Castellano, G., O'Sullivan, C., Leite, I., and Kopp, S.
- Subjects
Communication, Facial expression, Recall, Multimodality, Nonverbal communication, Perception, Psychology, Prosody, Virtual actor, Cognitive psychology, Gesture - Abstract
In pragmatic multimodality, modal (pragmatic) information is conveyed multimodally by cues in gesture, facial expressions, head movements and prosody. We observed these cues in natural interaction data. They can convey positive and negative focus, in that they emphasise or de-emphasise a piece of information, and they can convey uncertainty. In this work, we test the effects on a human user's perception and recall when those cues are carried out by a virtual human. The nonverbal behaviour of the virtual human was modelled using motion capture data, ensuring a fully multimodal appearance. Results of the study show that the virtual human was perceived as very competent and as saying something important. A special case of de-emphasising cues led to lower content recall.
- Published
- 2017
5. Second Language Tutoring using Social Robots: A Large-Scale Study.
- Author
-
Vogt, Paul, van den Berghe, Rianne, de Haas, Mirjam, Hoffman, Laura, Kanero, Junko, Mamus, Ezgi, Montanier, Jean-Marc, Oranç, Cansu, Oudgenoeg-Paz, Ora, García, Daniel Hernández, Papadopoulos, Fotios, Schodde, Thorsten, Verhagen, Josje, Wallbridge, Christopher D., Willemsen, Bram, de Wit, Jan, Belpaeme, Tony, Göksun, Tilbe, Kopp, Stefan, and Krahmer, Emiel
- Subjects
SOCIAL robots, ENGLISH as a foreign language, TUTORS & tutoring, NEW words - Abstract
We present a large-scale study of a series of seven lessons designed to help young children learn English vocabulary as a foreign language using a social robot. The experiment was designed to investigate 1) the effectiveness of a social robot teaching children new words over the course of multiple interactions (supported by a tablet), 2) the added benefit of a robot's iconic gestures on word learning and retention, and 3) the effect of learning from a robot tutor accompanied by a tablet versus learning from a tablet application alone. For reasons of transparency, the research questions, hypotheses and methods were preregistered. With a sample size of 194 children, our study was statistically well-powered. Our findings demonstrate that children are able to acquire and retain English vocabulary words taught by a robot tutor to a similar extent as when they are taught by a tablet application. In addition, we found no beneficial effect of a robot's iconic gestures on learning gains. [ABSTRACT FROM AUTHOR]
- Published
- 2019
6. This Is What’s Important – Using Speech and Gesture to Create Focus in Multimodal Utterance
- Author
-
Freigang, Farina, Kopp, Stefan, Traum, David, Swartout, William, Khooshabeh, Peter, Scherer, Stefan, and Leuski, Anton
- Subjects
Recall, Speech recognition, Empirical research, Perception, Psychology, Competence (human resources), Utterance, Gesture, Natural communication - Abstract
In natural communication, humans enrich their utterances with pragmatic information indicating, e.g., what is important to them or what they are not certain about. We investigate whether and how virtual humans (VH) can employ this kind of meta-communication. In an empirical study we have identified three modifying functions that humans produce and perceive in multimodal utterance, one being to create or attenuate focus. In this paper we test whether such modifying functions are also observed in speech and/or gesture of a VH, and whether this changes the perception of a VH overall. Results suggest that, although the VH’s behaviour is judged rather neutral overall, focusing is distinctively recognised, leads to better recall, and affects perceived competence. These effects are strongest if focus is created jointly by speech and gesture.
- Published
- 2016
7. The effect of an intelligent virtual agent's nonverbal behavior with regard to dominance and cooperativity
- Author
-
Straßmann, Carolin, Rosenthal-von der Pütten, Astrid, Yaghoubzadeh, Ramin, Kaminski, Raffael, Krämer, Nicole, Traum, David, Swartout, William, Khooshabeh, Peter, Kopp, Stefan, Scherer, Stefan, and Leuski, Anton
- Subjects
Social perception, Cooperativity, Affect (psychology), Nonverbal behavior, Nonverbal communication, Dominance (ethology), Perception, Psychology, Social psychology, Gesture - Abstract
In order to design successful human-agent interaction, knowledge about the effects of a virtual agent's behavior is important. The presented study therefore investigates the effect of different nonverbal behaviors on person perception of the agent, with a focus on dominance and cooperativity. An online study with 190 participants was conducted to evaluate the effect of different nonverbal behaviors. 23 nonverbal behaviors across four experimental conditions (dominant, submissive, cooperative and non-cooperative behavior) were compared. Results emphasize that nonverbal behavior can indeed powerfully affect users' person perception. Data analyses reveal that symbolic gestures such as crossing the arms, placing the hands on the hips or touching one's neck most effectively influence dominance perception. Regarding perceived cooperativity, expressivity has the most pronounced effect.
- Published
- 2016
8. Social resonance and embodied coordination in face-to-face conversation with artificial interlocutors
- Author
-
Kopp, Stefan
- Subjects
Linguistics and Language, Embodied conversational agents, Gesture, Perception, Conversation, Communication, Social Resonance, Embodied agent, Embodied cognition, Coordination, Artificial intelligence, Psychology, Humanoid robot - Abstract
Human natural face-to-face communication is characterized by inter-personal coordination. In this paper, phenomena are analyzed that yield coordination of behaviors, beliefs, and attitudes between interaction partners, which can be tied to a concept of establishing social resonance. It is discussed whether these mechanisms can and should be transferred to conversation with artificial interlocutors like ECAs or humanoid robots. It is argued that one major step in this direction is embodied coordination, mutual adaptations that are mediated by flexible modules for the top-down production and bottom-up perception of expressive conversational behavior that ground in and, crucially, coalesce in the same sensorimotor structures. Work on modeling this for ECAs with a focus on coverbal gestures is presented.
- Published
- 2010
9. Synthesizing multimodal utterances for conversational agents
- Author
-
Kopp, Stefan and Wachsmuth, Ipke
- Subjects
computer animation, Modalities, Computer science, motion control, Speech recognition, gesture animation, multimodal conversational agents, model-based, Artificial intelligence, Natural language processing, XML, Gesture - Abstract
Conversational agents are supposed to combine speech with non-verbal modalities for intelligible multimodal utterances. In this paper, we focus on the generation of gesture and speech from XML-based descriptions of their overt form. An incremental production model is presented that combines the synthesis of synchronized gestural, verbal, and facial behaviors with mechanisms for linking them in fluent utterances with natural co-articulation and transition effects. In particular, an efficient kinematic approach for animating hand gestures from shape specifications is presented, which provides fine adaptation to temporal constraints that are imposed by cross-modal synchrony.
- Published
- 2004
10. Giving interaction a hand
- Author
-
Kopp, Stefan
- Subjects
Computational model, Computer science, Multimodal communication, Cognition, Basic research, Human–computer interaction, Embodied cognition, Artificial intelligence, Natural language processing, Humanoid robot, Gesture - Abstract
Humans frequently join words and gestures for multimodal communication. Such natural co-speech gesturing goes far beyond what can currently be processed by gesture-based interfaces, and especially its coordination with speech still poses open challenges for basic research and multimodal interfaces alike. How can we develop computational models for processing and generating natural speech-gesture behavior in a flexible, fast and adaptive manner similar to humans? In this talk I will review approaches and methods applied to this problem, and I will argue that such models need to (and can) be based on a deeper understanding of what shapes co-speech gesturing in a particular situation. I will present work that connects empirical analyses with computational modeling and evaluation to unravel the cognitive, embodied and socio-interactional mechanisms underlying the use of speech-accompanying gestural behavior, and to develop deeper models of these mechanisms for interactive systems such as virtual characters, humanoid robots, or multimodal interfaces.
- Published
- 2013
11. Understanding How Well You Understood – Context-Sensitive Interpretation of Multimodal User Feedback
- Author
-
Buschmeier, Hendrik and Kopp, Stefan
- Subjects
Communication, Facial expression, Interpretation (philosophy), Context (language use), Gaze, Human–computer interaction, Perception, Psychology, Utterance, Meaning (linguistics), Gesture - Abstract
Human interlocutors continuously show behaviour indicative of their perception, understanding, acceptance and agreement of and with the other's utterances [1,4]. Such evidence can be provided in the form of verbal-vocal feedback signals, head gestures, facial expressions or gaze, and often interacts with the current dialogue context. As feedback signals are able to express subtle differences in meaning, we hypothesise that they tend to reflect their producer's mental state quite accurately. To be cooperative and human-like dialogue partners, virtual conversational agents should be able to interpret their user's evidence of understanding and to react appropriately to it by adapting to their needs [2]. We present a Bayesian network model for context-sensitive interpretation of listener feedback for such an 'attentive speaker agent', which takes the user's multimodal behaviour (verbal-vocal feedback, head gestures, gaze) as well as its own utterance and knowledge of the dialogue domain into account to form a model of the user's mental state.
- Published
- 2012
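To illustrate the kind of context-sensitive inference such a model performs, here is a minimal Bayesian sketch using inference by enumeration. All states, signals, and probabilities are invented; the network described in the abstract is considerably richer and also conditions on the agent's own utterance and domain knowledge.

```python
# Minimal sketch of context-sensitive feedback interpretation with a
# tiny Bayesian network and inference by enumeration. States and
# probabilities are invented for illustration.

# P(U | context): prior over the user's understanding, conditioned on
# dialogue context (here: how difficult the agent's last utterance was).
prior = {"easy": {"understood": 0.8, "confused": 0.2},
         "hard": {"understood": 0.5, "confused": 0.5}}

# P(feedback | U) and P(gaze | U): observation likelihoods.
p_feedback = {"understood": {"uh-huh": 0.7, "huh?": 0.1, "none": 0.2},
              "confused":   {"uh-huh": 0.1, "huh?": 0.6, "none": 0.3}}
p_gaze = {"understood": {"at_agent": 0.6, "averted": 0.4},
          "confused":   {"at_agent": 0.3, "averted": 0.7}}

def posterior(context, feedback, gaze):
    """P(U | feedback, gaze, context) by enumeration."""
    joint = {u: prior[context][u] * p_feedback[u][feedback] * p_gaze[u][gaze]
             for u in ("understood", "confused")}
    z = sum(joint.values())
    return {u: p / z for u, p in joint.items()}

print(posterior("hard", "huh?", "averted"))     # confusion dominates
print(posterior("easy", "uh-huh", "at_agent"))  # understanding dominates
```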
12. Individualized Gesture Production in Embodied Conversational Agents
- Author
-
Kopp, Stefan, Bergmann, Kirsten, Zacarias, Marielba, and de Oliveira, José Valente
- Subjects
Communication, Computer science, Embodied agent, Empirical research, Embodied cognition, Human–computer interaction, Dialog system, Competence (human resources), Virtual actor, Gesture - Abstract
Gesturing behavior is subject to great variations across situations, individuals, or cultures. These variations make gestures hard to study and model systematically. However, gesture research on real humans and modeling approaches with virtual agents have made significant progress in recent years. In this chapter we discuss the state of research and present results from an extensive empirical study on human iconic gestures in direction-giving dialogues. We describe how machine learning methods can be employed to extract different speakers' gesturing styles and to generate individualized language and gestures in ECAs. Evaluations show that human observers rate virtual agents better in terms of competence, human-likeness, or likability when a consistent individual gesture style is produced.
- Published
- 2012
13. A second chance to make a first impression? How appearance and nonverbal behavior affect perceived warmth and competence of virtual agents over time
- Author
-
Bergmann, Kirsten, Eyssel, Friederike Anne, Kopp, Stefan, Walker, Marilyn, Neff, Michael, Paiva, Ana, and Nakano, Yukiko
- Subjects
Nonverbal behavior, Agent behavior, Applied research, First impression (psychology), Psychology, Competence (human resources), Social psychology, Gesture - Abstract
First impressions of others are fundamental for the further development of a relationship and are thus of major importance for the design of virtual agents, too. We addressed the question whether there is a second chance for first impressions with regard to the major dimensions of social cognition: warmth and competence. We employed a novel experimental set-up that combined agent appearance (robot-like vs. human-like) and agent behavior (gestures present vs. absent) of virtual agents as between-subject factors with a repeated-measures design. Results indicate that ratings of warmth depend on interaction effects of time and agent appearance, while evaluations of competence seem to depend on the interaction of time and nonverbal behavior. Implications of these results for basic and applied research on intelligent virtual agents will be discussed.
- Published
- 2012
14. How Do Iconic Gestures Convey Visuo-Spatial Information? Bringing Together Empirical, Theoretical, and Simulation Studies
- Author
-
Rieser, Hannes, Bergmann, Kirsten, Kopp, Stefan, Efthimiou, Eleni, and Kouroupetroglou, Georgios
- Subjects
Computer science, Virtual agent, Human–computer interaction, Artificial intelligence, Competence (human resources), Spatial analysis, Natural language processing, Gesture - Abstract
We investigate the question of how co-speech iconic gestures are used to convey visuo-spatial information in an interdisciplinary way, starting with a corpus-based empirical and theoretical perspective on how a typology of gesture form and a partial ontology of gesture meaning are related. Results provide the basis for a computational modeling approach that allows us to simulate the production of speaker-specific gesture forms to be realized with virtual agents. An evaluation of our simulation results and our methodology shows that the model is able to successfully approximate human use of iconic gestures, and moreover, that gestural behavior can improve how humans rate a virtual agent in terms of eloquence, competence, human-likeness, or likeability.
- Published
- 2012
15. Effects of gesture on the perception of psychological anthropomorphism: A case study with a humanoid robot
- Author
-
Salem, Maha, Eyssel, Friederike Anne, Rohlfing, Katharina, Kopp, Stefan, Joublin, F., Mutlu, B., Bartneck, C., Ham, J., Evers, V., and Kanda, T.
- Subjects
Shared reality, Communication, Anthropomorphism, Affect (psychology), Robotic systems, Perception, Robot, Non-verbal Cues and Expressiveness, Multimodal Interaction and Conversational Skills, Psychology, Attribution, Humanoid robot, Cognitive psychology, Gesture - Abstract
Previous work has shown that gestural behaviors affect anthropomorphic inferences about artificial communicators such as virtual agents. In an experiment with a humanoid robot, we investigated to what extent gesture would affect anthropomorphic inferences about the robot. In particular, we examined the effects of the robot's hand and arm gestures on the attribution of typically human traits, likability of the robot, shared reality, and future contact intentions after interacting with the robot. For this, we manipulated the non-verbal behaviors of the humanoid robot in three experimental conditions: (1) no gesture, (2) congruent gesture, and (3) incongruent gesture. We hypothesized higher ratings on all dependent measures in the two gesture (vs. no gesture) conditions. The results confirm our predictions: when the robot used gestures during interaction, it was anthropomorphized more, participants perceived it as more likable, reported greater shared reality with it, and showed greater future contact intentions than when the robot gave instructions without using gestures. Surprisingly, this effect was particularly pronounced when the robot's gestures were partly incongruent with speech. These findings show that communicative non-verbal behaviors in robotic systems affect both anthropomorphic perceptions and the mental models humans form of a humanoid robot during interaction.
- Published
- 2011
16. Towards Meaningful Robot Gesture
- Author
-
Salem, Maha, Kopp, Stefan, Wachsmuth, Ipke, Joublin, Frank, Ritter, Helge, Sagerer, Gerhard, Dillmann, Rüdiger, and Buss, Martin
- Subjects
Engineering, ASIMO, Robot, Artificial intelligence, Dialog system, Humanoid robot, Virtual actor, Gesture - Abstract
Humanoid robot companions that are intended to engage in natural and fluent human-robot interaction are supposed to combine speech with non-verbal modalities for comprehensible and believable behavior. We present an approach to enable the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, while not being limited to a predefined repertoire of motor actions. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for our virtual human Max. Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) has replaced the use of lexicons of canned behaviors with on-the-spot production of flexibly planned behavior representations. As the underlying action generation architecture, we explain how ACE draws upon a tight, bi-directional coupling of ASIMO's perceptuo-motor system with multi-modal scheduling via both efferent control signals and afferent feedback.
- Published
- 2009
17. A Probabilistic Model of Motor Resonance for Embodied Gesture Perception
- Author
-
Sadeghipour, Amir, Kopp, Stefan, Ruttkay, Zsófia, Kipp, Michael, Nijholt, Anton, and Vilhjamsson, Hannes
- Subjects
Communication, Visual perception, Embodied agent, Human–computer interaction, Embodied cognition, Perception, Cognitive robotics, Psychology, Gesture - Abstract
Basic communication and coordination mechanisms of human social interaction are assumed to be mediated by perception-action links. These links ground the observation and understanding of others in one's own action generation system, as evidenced by immediate motor resonances to perceived behavior. We present a model to endow virtual embodied agents with similar properties of embodied perception. With a focus on hand-arm gestures, the model comprises hierarchical levels of motor representation (commands, programs, schemas) that are employed and start to resonate probabilistically to visual stimuli of a demonstrated movement. The model is described and evaluation results are provided.
- Published
- 2009
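A minimal sketch of the resonance idea from the abstract above: belief over candidate motor programs is updated incrementally as samples of an observed movement arrive, so the best-matching program "resonates" most strongly. The programs, trajectories, and Gaussian observation model below are invented for illustration.

```python
# Sketch of "resonance" as incremental Bayesian updating over candidate
# motor programs while a movement is observed. Programs, trajectories,
# and the noise model are invented.
import math

# Each candidate motor program predicts a 1-D wrist trajectory over time.
programs = {
    "wave":  [0.0, 0.5, 1.0, 0.5, 0.0],
    "point": [0.0, 0.3, 0.6, 0.9, 1.2],
}

def gauss(x, mu, sigma=0.2):
    """Unnormalized Gaussian likelihood of observation x given prediction mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

belief = {name: 1.0 / len(programs) for name in programs}

observed = [0.0, 0.45, 0.95]  # first three observed samples
for t, x in enumerate(observed):
    # Resonance: each program's belief is reweighted by how well it
    # predicts the incoming sample, then renormalized.
    belief = {n: b * gauss(x, programs[n][t]) for n, b in belief.items()}
    z = sum(belief.values())
    belief = {n: b / z for n, b in belief.items()}
    print(t, {n: round(b, 3) for n, b in belief.items()})  # "wave" wins here
```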
18. GNetIc – Using Bayesian Decision Networks for Iconic Gesture Generation
- Author
-
Bergmann, Kirsten, Kopp, Stefan, Ruttkay, Zsófia, Kipp, Michael, Nijholt, Anton, and Vilhjálmsson, Hannes
- Subjects
Computer science, Bayesian probability, Context (language use), Referent, Artificial intelligence, Spatial analysis, Human communication, Natural language processing, Gesture - Abstract
Expressing spatial information with iconic gestures is abundant in human communication and requires transforming a referent representation into a resembling gestural form. This task is challenging, as the mapping is determined by the visuo-spatial features of the referent, the overall discourse context, as well as concomitant speech, and its outcome varies considerably across different speakers. We present a framework, GNetIc, that combines data-driven with model-based techniques to model the generation of iconic gestures with Bayesian decision networks. Drawing on extensive empirical data, we discuss how this method allows for simulating speaker-specific vs. speaker-independent gesture production. Modeling results from a prototype implementation are presented and evaluated.
- Published
- 2009
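The mapping from referent features to gesture form that GNetIc learns can be caricatured as conditional probability tables plus a decision. The table below is invented; in GNetIc such distributions are learned from corpus data, either per speaker or pooled across speakers.

```python
# Caricature of a GNetIc-style choice of gesture technique from a
# referent feature. Probabilities are invented for illustration.
import random

# P(technique | referent has a salient 2-D outline)
p_technique = {
    True:  {"drawing": 0.6, "shaping": 0.3, "pointing": 0.1},
    False: {"drawing": 0.1, "shaping": 0.4, "pointing": 0.5},
}

def decide(salient_outline):
    """Decision node: pick the most probable technique."""
    dist = p_technique[salient_outline]
    return max(dist, key=dist.get)

def sample(salient_outline, rng):
    """Sampling instead of argmax reproduces speaker variability."""
    dist = p_technique[salient_outline]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

print(decide(True), decide(False))     # drawing pointing
print(sample(True, random.Random(1)))  # varies with the seed
```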
19. Social Motorics – Towards an Embodied Basis of Social Human-Robot Interaction
- Author
-
Sadeghipour, Amir, Yaghoubzadeh, Ramin, Rüter, Andreas, Kopp, Stefan, Dillmann, Rüdiger, Vernon, David, Nakamura, Yoshihiko, Schaal, Stefan, Ritter, Helge, Sagerer, Gerhard, and Buss, Martin
- Subjects
Embodied cognition, Probabilistic logic, Motor program, Artificial intelligence, Motor learning, Psychology, Human–robot interaction, Gesture - Abstract
In this paper we present a biologically-inspired model for social behavior recognition and generation. Based on a unified sensorimotor representation, it integrates hierarchical motor knowledge structures, probabilistic forward models for predicting observations, and inverse models for motor learning. With a focus on hand gestures, results of initial evaluations against real-world data are presented.
- Published
- 2009
20. Implementing a non-modular theory of language production in an embodied conversational agent
- Author
-
Sowa, Timo, Kopp, Stefan, Duncan, Susan, McNeill, David, Wachsmuth, Ipke, Lenzen, Manuela, and Knoblich, Günther
- Subjects
speech, gesture, embodied conversational agents, Growth Point Theory - Abstract
This chapter discusses and assesses the feasibility of operationalising Growth Point Theory's model of language production in embodied conversational agents (ECAs). First, the chapter outlines the cornerstones of non-modular Growth Point Theory and its empirical basis. It then gives an overview of gesture and speech production models that are currently realised in ECAs, and discusses their potential and limitations with respect to which characteristics of natural speech and gesture they can account for. Finally, it discusses which requirements a technical model must meet in order to be more compatible with Growth Point Theory.
- Published
- 2008
21. Trading Spaces: How Humans and Humanoids Use Speech And Gesture to Give Directions
- Author
-
Kopp, Stefan, Tepper, P., Striegnitz, K., Ferriman, K., Cassell, J., and Nishida, Toyoaki
- Subjects
Computer science, Feature (linguistics), Embodied cognition, Gesture recognition, Conversation, Artificial intelligence, Dialog system, Natural language, Natural language processing, Gesture - Abstract
Humans intuitively accompany direction-giving with gestures. These gestures have been shown to have the same underlying conceptual structure as diagrams and direction-giving language, but the puzzle is how they communicate given that their form is not codified, and may in fact differ from one person or situation to the next. Based on results from a study on language and gesture in direction-giving, we describe a framework to analyze gestural images into semantic units (image description features), and to link these units to morphological features (hand shape, trajectory, etc.). This feature-based framework allows for implementing an integrated microplanner for multimodal directions that derives the form of both natural language and gesture directly from communicative goals. Using this microplanner we developed an embodied conversational agent that can perform appropriate speech and novel gestures in direction-giving conversation with real humans.
- Published
- 2007
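A toy rendering of the feature-based framework described above: image description features (semantic units) license morphological features of the gesture, and a microplanner merges them into one gesture specification. All feature names and mappings here are invented stand-ins, not the framework's actual inventory.

```python
# Sketch of mapping image description features (IDFs) to gesture
# morphology. The feature inventory and mapping are invented.

IDF_TO_MORPHOLOGY = {
    "shape:round":    {"hand_shape": "C-shape", "trajectory": "arc"},
    "direction:left": {"hand_shape": "flat", "trajectory": "straight-left"},
    "landmark:tower": {"hand_shape": "G-point", "trajectory": "hold"},
}

def plan_gesture(idfs):
    """Microplanner step: merge morphological features licensed by each IDF."""
    form = {}
    for idf in idfs:
        form.update(IDF_TO_MORPHOLOGY.get(idf, {}))
    return form

# "Turn left at the tower" -> combine direction and landmark units.
print(plan_gesture(["direction:left", "landmark:tower"]))
```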
22. Simulating the Emotion Dynamics of a Multimodal Conversational Agent
- Author
-
Becker, Christian, Kopp, Stefan, Wachsmuth, Ipke, André, Elisabeth, Dybkjær, Laila, Minker, Wolfgang, and Heisterkamp, Paul
- Subjects
Facial expression, Computer science, Cognition, Cognitive architecture, Boredom, Intelligent agent, Mood, Artificial intelligence, Dialog system, Cognitive psychology, Gesture - Abstract
We describe an implemented system for the simulation and visualisation of the emotional state of a multimodal conversational agent called Max. The focus of the presented work lies on modeling a coherent course of emotions over time. The basic idea of the underlying emotion system is the linkage of two interrelated psychological concepts: an emotion axis – representing short-term system states – and an orthogonal mood axis that stands for an undirected, longer-lasting system state. A third axis was added to realize a dimension of boredom. To enhance the believability and lifelikeness of Max, the emotion system has been integrated into the agent's architecture. As a result, Max's facial expression, gesture, speech, and secondary behaviors as well as his cognitive functions are modulated by the emotion system which, in turn, is affected by information arising at various levels within the agent's architecture.
- Published
- 2004
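The described linkage of a fast emotion axis, a slower mood axis, and a boredom dimension can be sketched as a simple dynamical update loop. The decay rates and thresholds below are invented; the actual system integrates these dynamics into Max's full cognitive architecture.

```python
import math

# Sketch of the described dynamics: a fast-decaying emotion value, a mood
# that slowly follows emotional valence, and boredom that grows while
# nothing happens. All constants are invented.

class EmotionDynamics:
    def __init__(self):
        self.emotion = 0.0  # short-term state, decays quickly
        self.mood = 0.0     # longer-lasting state, drifts toward emotion
        self.boredom = 0.0  # grows in the absence of stimulation

    def step(self, impulse=0.0, dt=0.1):
        self.emotion = (self.emotion + impulse) * math.exp(-2.0 * dt)
        self.mood += (self.emotion - self.mood) * 0.2 * dt
        if impulse == 0.0 and abs(self.emotion) < 0.1:
            self.boredom = min(1.0, self.boredom + 0.05 * dt)
        else:
            self.boredom = 0.0

sim = EmotionDynamics()
sim.step(impulse=0.8)       # a positive event excites the emotion axis
for _ in range(50):
    sim.step()              # no further stimulation: boredom creeps up
print(round(sim.emotion, 3), round(sim.mood, 3), round(sim.boredom, 3))
```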
23. Imitation games with an artificial agent: From mimicking to understanding shape-related iconic gestures
- Author
-
Kopp, Stefan, Sowa, Timo, Wachsmuth, Ipke, Camurri, Antonio, and Volpe, Gualtiero
- Subjects
Communication, Computer science, Principle of compositionality, Gesture recognition, Imitation, Gesture, Meaning (linguistics) - Abstract
We describe an anthropomorphic agent that is engaged in an imitation game with the human user. In imitating natural gestures demonstrated by the user, the agent brings together gesture recognition and synthesis on two levels of representation. On the mimicking level, the essential form features of the meaning-bearing gesture phase (stroke) are extracted and reproduced by the agent. Meaning-based imitation requires extracting the semantic content of such gestures and re-expressing it with possibly alternative gestural forms. Based on a compositional semantics for shape-related iconic gestures, we present first steps towards this higher-level gesture imitation in a restricted domain.
- Published
- 2004
24. Lifelike gesture synthesis and timing for conversational agents
- Author
-
Wachsmuth, Ipke, Kopp, Stefan, and Sowa, Timo
- Subjects
Computer science, Speech recognition, Context (language use), Embodied agent, Gesture recognition, Synchronization (computer science), Dialog system, Computer facial animation, Gesture - Abstract
Synchronization of synthetic gestures with speech output is one of the goals for embodied conversational agents, which have become a new paradigm for the study of gesture and for human-computer interfaces. In this context, this contribution presents an operational model that enables lifelike gesture animations of an articulated figure to be rendered in real-time from representations of spatiotemporal gesture knowledge. Based on various findings on the production of human gesture, the model provides means for motion representation, planning, and control to drive the kinematic skeleton of a figure, which comprises 43 degrees of freedom in 29 joints for the main body and 20 DOF for each hand. The model is conceived to enable cross-modal synchrony with respect to the coordination of gestures with the signal generated by a text-to-speech system.
- Published
- 2002
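Cross-modal synchrony of the kind described above boils down to retiming gesture phases so that the stroke lands on the affiliated speech segment. Below is a minimal sketch, assuming a hypothetical scheduling function and invented phase durations; the real model plans full multi-DOF motion rather than a phase timetable.

```python
# Sketch of cross-modal synchrony: stretch or compress the preparation
# phase so the gesture stroke starts exactly at the affiliate word's
# onset reported by the text-to-speech system. Durations are invented.

def schedule_gesture(word_onset, prep_default=0.4, stroke_dur=0.5,
                     retract_dur=0.6, now=0.0):
    """Return (phase, start, end) triples aligned to the speech signal."""
    prep_start = max(now, word_onset - prep_default)  # compress if needed
    return [
        ("preparation", prep_start, word_onset),
        ("stroke", word_onset, word_onset + stroke_dur),
        ("retraction", word_onset + stroke_dur,
         word_onset + stroke_dur + retract_dur),
    ]

# Suppose TTS reports the accented word starting 1.2 s into the utterance.
for phase, start, end in schedule_gesture(word_onset=1.2):
    print(f"{phase:11s} {start:.2f}-{end:.2f} s")
```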
25. The ALICO corpus: analysing the active listener.
- Author
-
Malisz, Zofia, Włodarczak, Marcin, Buschmeier, Hendrik, Skubisz, Joanna, Kopp, Stefan, and Wagner, Petra
- Subjects
ACTIVE listening, CORPORA, BACKCHANNELS (Social media), SOCIAL interaction, GESTURE - Abstract
The Active Listening Corpus (ALICO) is a multimodal data set of spontaneous dyadic conversations in German with diverse speech and gestural annotations of both dialogue partners. The annotations consist of short feedback expression transcriptions with corresponding communicative function interpretations as well as segmentations of interpausal units, words, rhythmic prominence intervals and vowel-to-vowel intervals. Additionally, ALICO contains head gesture annotations of both interlocutors. The corpus contributes to research on spontaneous human-human interaction, on functional relations between modalities, and timing variability in dialogue. It also provides data that differentiates between distracted and attentive listeners. We describe the main characteristics of the corpus and briefly present the most important results obtained from analyses in recent years. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
26. To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability.
- Author
-
Salem, Maha, Eyssel, Friederike, Rohlfing, Katharina, Kopp, Stefan, and Joublin, Frank
- Subjects
ROBOTS, ANTHROPOMORPHISM, HUMANOID robots, NONVERBAL communication, SENSORY perception, GESTURE - Abstract
Previous work has shown that non-verbal behaviors affect anthropomorphic inferences about artificial communicators such as virtual agents or social robots. In an experiment with a humanoid robot we investigated the effects of the robot's hand and arm gestures on the perception of humanlikeness, likability of the robot, shared reality, and future contact intentions after interacting with the robot. For this purpose, the speech-accompanying non-verbal behaviors of the humanoid robot were manipulated in three experimental conditions: (1) no gesture, (2) congruent co-verbal gesture, and (3) incongruent co-verbal gesture. We hypothesized higher ratings on all dependent measures in the two multimodal (i.e., speech and gesture) conditions compared to the unimodal (i.e., speech only) condition. The results confirm our predictions: when the robot used co-verbal gestures during interaction, it was anthropomorphized more, participants perceived it as more likable, reported greater shared reality with it, and showed increased future contact intentions than when the robot gave instructions without gestures. Surprisingly, this effect was particularly pronounced when the robot's gestures were partly incongruent with speech, although this behavior negatively affected the participants' task-related performance. These findings show that communicative non-verbal behaviors displayed by robotic systems affect anthropomorphic perceptions and the mental models humans form of a humanoid robot during interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
27. Generation and Evaluation of Communicative Robot Gesture.
- Author
-
Salem, Maha, Kopp, Stefan, Wachsmuth, Ipke, Rohlfing, Katharina, and Joublin, Frank
- Subjects
ROBOTICS, ROBOTS, GESTURE, NONVERBAL communication, HUMAN-robot interaction - Abstract
How is communicative gesture behavior in robots perceived by humans? Although gesture is crucial in social interaction, this research question is still largely unexplored in the field of social robotics. Thus, the main objective of the present work is to investigate how gestural machine behaviors can be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform are tackled. We present a framework that enables the humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, while not being limited to a predefined repertoire of motor actions. Secondly, the achieved flexibility in robot gesture is exploited in controlled experiments. To gain a deeper understanding of how communicative robot gesture might impact and shape human perception and evaluation of human-robot interaction, we conducted a between-subjects experimental study using the humanoid robot in a joint task scenario. We manipulated the non-verbal behaviors of the robot in three experimental conditions, so that it would refer to objects by utilizing either (1) unimodal (i.e., speech only) utterances, (2) congruent multimodal (i.e., semantically matching speech and gesture) or (3) incongruent multimodal (i.e., semantically non-matching speech and gesture) utterances. Our findings reveal that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
28. Gesture processing as grounded motor cognition: Towards a computational model
- Author
-
Sadeghipour, Amir, Kopp, Stefan, and Rahbar-Shamskar, Alireza
- Subjects
Modalities, grounded cognition, gestures, Neuropsychology, Cognition, Virtual agent, social interaction, computational model, Motor cognition, Embodied cognition, Psychology, embodied conversational agents, Cognitive psychology, Gesture, Meaning (linguistics), embodiment - Abstract
In this paper, we present an approach to treat and model the processing (i.e. recognition and production) of communicative gestures as grounded motor cognition. We first review cognitive theories and neuropsychological studies on human motor cognition. On this basis, we propose a computational framework that connects the sensorimotor processing of hand gestures with representational structures of meaning (visuospatial imagery), other modalities (language), and communicative intentions. We present an implementation that enables an embodied virtual agent to engage in gesture-based interaction with a human user.
- Full Text
- View/download PDF
29. The Effects of an Embodied Agent's Nonverbal Behavior on User's Evaluation and Behavioral Mimicry.
- Author
-
Simons, Nina, Krämer, Nicole, and Kopp, Stefan
- Subjects
NONVERBAL communication, CONVERSATION, INTERPERSONAL communication, IMITATIVE behavior, GESTURE, BODY language - Abstract
Against the background that recent studies on embodied conversational agents demonstrate the importance of their behavior, an experimental study is presented that assessed the effects of different nonverbal behaviors of an embodied conversational agent on the experiences and evaluations of the user as well as on their behavior. 50 participants conducted a conversation with different versions of the virtual agent Max, whose nonverbal communication was manipulated with regard to eyebrow movements and self-touching gestures. In a 2×2 between-subjects design, each behavior was varied in two levels: occurrence of the behavior compared to the absence of the behavior. The behavior of the participants was analyzed to determine whether the user mimics the agent's behavior. Results show that self-touching gestures, compared to no self-touching gestures, have positive effects on the experiences and evaluations of the user, whereas eyebrow raising evoked less positive experiences and evaluations than no eyebrow raising. The nonverbal behavior of the participants was not affected by the agent's nonverbal behavior. Implications for agent design and basic research on nonverbal communication are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2007
30. Gesture and speech in interaction: An overview.
- Author
-
Wagner, Petra, Malisz, Zofia, and Kopp, Stefan
- Subjects
GESTURE, SPEECH, BODY language, COMMUNICATION, DATA analysis, INTERACTION model (Communication) - Abstract
Gestures and speech interact. They are linked in language production and perception, with their interaction contributing to felicitous communication. The multifaceted nature of these interactions has attracted considerable attention from the speech and gesture community. This article provides an overview of our current understanding of manual and head gesture form and function, and of the principal functional interactions between gesture and speech that aid communication, transport meaning, and produce speech. Furthermore, we present an overview of research on temporal speech-gesture synchrony, including the special role of prosody in speech-gesture alignment. In addition, we provide a summary of tools and data available for gesture analysis, and describe speech-gesture interaction models and simulations in technical systems. This overview also serves as an introduction to a Special Issue covering a wide range of articles on these topics. We provide links to the Special Issue throughout this paper.
- Published
- 2014
- Full Text
- View/download PDF
31. Don’t Scratch! Self-adaptors Reflect Emotional Stability
- Author
-
Neff, Michael, Toothman, Nicholas, Bowmani, Robeson, Fox Tree, Jean E., Walker, Marilyn A., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Vilhjálmsson, Hannes Högni, editor, Kopp, Stefan, editor, Marsella, Stacy, editor, and Thórisson, Kristinn R., editor
- Published
- 2011
- Full Text
- View/download PDF
32. Gestures in Human-Computer Interaction – Just Another Modality?
- Author
-
Pirhonen, Antti, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Kopp, Stefan, editor, and Wachsmuth, Ipke, editor
- Published
- 2010
- Full Text
- View/download PDF
33. Gesture Space and Gesture Choreography in European Portuguese and African Portuguese Interactions: A Pilot Study of Two Cases
- Author
-
Rodrigues, Isabel Galhano, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Kopp, Stefan, editor, and Wachsmuth, Ipke, editor
- Published
- 2010
- Full Text
- View/download PDF