11 results for "Jasmin Steinwender"
Search Results
2. Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions.
- Author
- Stéphane Lallée, Ugo Pattacini, Jean-David Boucher, Séverin Lemaignan, Alexander Lenz, Chris Melhuish, Lorenzo Natale, Sergey Skachek, Katharina Hamann, Jasmin Steinwender, Emrah Akin Sisbot, Giorgio Metta, Rachid Alami, Matthieu Warnier, Julien Guitton, Felix Warneken, and Peter Ford Dominey
- Published
- 2011
- Full Text
- View/download PDF
3. Which one? Grounding the referent based on efficient human-robot interaction.
- Author
- Raquel Ros Espinoza, Séverin Lemaignan, Emrah Akin Sisbot, Rachid Alami, Jasmin Steinwender, Katharina Hamann, and Felix Warneken
- Published
- 2010
- Full Text
- View/download PDF
4. The BERT2 infrastructure: An integrated system for the study of human-robot interaction.
- Author
- Alexander Lenz, Sergey Skachek, Katharina Hamann, Jasmin Steinwender, Tony Pipe, and Chris Melhuish
- Published
- 2010
- Full Text
- View/download PDF
5. Towards a Platform-Independent Cooperative Human Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans.
- Author
- Stéphane Lallée, Ugo Pattacini, Séverin Lemaignan, Alexander Lenz, Chris Melhuish, Lorenzo Natale, Sergey Skachek, Katharina Hamann, Jasmin Steinwender, Emrah Akin Sisbot, Giorgio Metta, Julien Guitton, Rachid Alami, Matthieu Warnier, Tony Pipe, Felix Warneken, and Peter Ford Dominey
- Published
- 2012
- Full Text
- View/download PDF
6. Solving ambiguities with perspective taking.
- Author
- Raquel Ros, Emrah Akin Sisbot, Rachid Alami, Jasmin Steinwender, Katharina Hamann, and Felix Warneken
- Published
- 2010
- Full Text
- View/download PDF
7. Young children's planning in a collaborative problem-solving task
- Author
- Jasmin Steinwender, Michael Tomasello, Felix Warneken, and Katharina Hamann
- Subjects
Relation (database), Action (philosophy), Anticipation (artificial intelligence), Component (UML), Developmental and Educational Psychology, Collaborative Problem Solving, Experimental and Cognitive Psychology, Plan (drawing), Psychology, Developmental change, Social psychology, Task (project management), Cognitive psychology
- Abstract
One important component of collaborative problem solving is the ability to plan one's own action in relation to that of a partner. We presented 3- and 5-year-old peer pairs with two different tool choice situations in which they had to choose complementary tools with which to subsequently work on a collaborative problem-solving apparatus. In the bidirectional condition, exemplars of the two necessary tools appeared in front of each child. In the unidirectional condition, one child had to choose between two different tools first, while the other child had only one tool available. Thus, both conditions required close attention to the actions of the partner, but the unidirectional condition additionally required the anticipation of the partner's constrained tool choice. Five-year-olds were proficient planners in both conditions, whereas 3-year-olds did not consistently make the correct choice. However, 3-year-olds who had first experienced the unidirectional condition chose the correct tool at an above-chance level. Moreover, communication during the tool choice led to greater success among 3-year-olds, but not among 5-year-olds. These results provide the first experimental evidence that between 3 and 5 years of age children develop the ability to plan the division of labor in a collaborative task. We discuss our findings regarding planning for a collaborative task in relation to prior research on planning abilities for individual problem solving that appear to undergo developmental change between 3 and 5 years of age.
- Published
- 2014
- Full Text
- View/download PDF
8. Cooperative human robot interaction systems: IV. Communication of shared plans with naïve humans using gaze and speech
- Author
- Paul F. M. J. Verschure, Katharina Hamann, Maxime Petit, Stéphane Lallée, Giorgio Metta, Hector Barron-Gonzales, Jasmin Steinwender, Ilaria Gori, Uriel Martienz, Ugo Pattacini, Peter Ford Dominey, and Felix Warneken
- Subjects
Robot kinematics, Social robot, Computer science, Cognitive architecture, Human–robot interaction, Social actions, Social cognition, Gesture recognition, Human–computer interaction, Robot, Artificial intelligence, Humanoid robot, Gesture
- Abstract
Cooperation is at the core of human social life. In this context, two major challenges face research on human-robot interaction: the first is to understand the underlying structure of cooperation, and the second is to build, based on this understanding, artificial agents that can successfully and safely interact with humans. Here we take a psychologically grounded and human-centered approach that addresses these two challenges. We test the hypothesis that optimal cooperation between a naive human and a robot requires that the robot can acquire and execute a joint plan, and that it communicates this joint plan through ecologically valid modalities including spoken language, gesture and gaze. We developed a cognitive system that comprises the human-like control of social actions, the ability to acquire and express shared plans and a spoken language stage. In order to test the psychological validity of our approach we tested 12 naive subjects in a cooperative task with the robot. We experimentally manipulated the presence of a joint plan (vs. a solo plan), the use of task-oriented gaze and gestures, and the use of language accompanying the unfolding plan. The quality of cooperation was analyzed in terms of proper turn taking, collisions and cognitive errors. Results showed that while successful turn taking could take place in the absence of the explicit use of a joint plan, its presence yielded significantly greater success. One advantage of the solo plan was that the robot would always be ready to generate actions, and could thus adapt if the human intervened at the wrong time, whereas in the joint plan the robot expected the human to take his/her turn. Interestingly, when the robot represented the action as involving a joint plan, gaze provided a highly potent nonverbal cue that facilitated successful collaboration and reduced errors in the absence of verbal communication.
These results support the cooperative stance in human social cognition, and suggest that cooperative robots should employ joint plans and fully communicate them in order to sustain effective collaboration, while remaining ready to adapt if the human makes a midstream mistake.
- Published
- 2013
- Full Text
- View/download PDF
9. Towards a Platform-Independent Cooperative Human Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans
- Author
- Giorgio Metta, Sergey Skachek, Stéphane Lallée, Jasmin Steinwender, Ugo Pattacini, Alexander Lenz, Chris Melhuish, Emrah Akin Sisbot, Lorenzo Natale, Séverin Lemaignan, Peter Ford Dominey, Rachid Alami, Tony Pipe, Julien Guitton, Katharina Hamann, Matthieu Warnier, and Felix Warneken
- Subjects
Robot kinematics, Computer science, Library and information sciences, Knowledge acquisition, Human–robot interaction, Software portability, Artificial intelligence, Computer-supported cooperative work, Robot, Behavior-based robotics, Software, Humanoid robot
- Abstract
Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, via abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.
- Published
- 2012
- Full Text
- View/download PDF
10. Which One? Grounding the Referent Based on Efficient Human-Robot Interaction
- Author
- Jasmin Steinwender, Felix Warneken, Rachid Alami, Katharina Hamann, Séverin Lemaignan, E. Akin Sisbot, and Raquel Ros
- Subjects
Computer science, Ontology (information science), Object (computer science), Referent, Human–robot interaction, Visualization, Artificial intelligence, Complete information, Robotics, Robot, Set (psychology)
- Abstract
In human-robot interaction, a robot must be prepared to handle possible ambiguities generated by a human partner. In this work we propose a set of strategies that allow a robot to identify the referent when the human partner refers to an object giving incomplete information, i.e., an ambiguous description. Moreover, we propose the use of an ontology to store and reason over the robot's knowledge to ease clarification and, therefore, improve interaction. We validate our work through both simulation and two real robotic platforms performing two tasks: a daily-life situation and a game.
- Published
- 2010
11. Action observation can prime visual object recognition
- Author
- H. B. Helbig, Jasmin Steinwender, Markus Kiefer, and Markus Graf
- Subjects
Adult, Male, Neuroscience (all), Action priming, Object (grammar), Observation, Neuropsychological Tests, Prime (order theory), Task (project management), Discrimination Learning, Young Adult, Motor system, Reaction Time, Humans, Discrimination learning, Communication, General Neuroscience, Cognitive neuroscience of visual object recognition, Action observation, Recognition, Psychology, Object recognition, Pattern Recognition, Visual, Action (philosophy), Female, Psychology, Priming (psychology), Photic Stimulation, Psychomotor Performance, Research Article, Cognitive psychology
- Abstract
Observing an action activates action representations in the motor system. Moreover, the representations of manipulable objects are closely linked to the motor systems at a functional and neuroanatomical level. Here, we investigated whether action observation can facilitate object recognition using an action priming paradigm. As prime stimuli we presented short video movies showing hands performing an action in interaction with an object (where the object itself was always removed from the video). The prime movie was followed by a (briefly presented) target object affording motor interactions that are either similar (congruent condition) or dissimilar (incongruent condition) to the prime action. Participants had to decide whether an object name shown after the target picture corresponds with the picture or not (picture–word matching task). We found superior accuracy for prime–target pairs with congruent as compared to incongruent actions across two experiments. Thus, action observation can facilitate recognition of a manipulable object typically involving a similar action. This action priming effect supports the notion that action representations play a functional role in object recognition.
- Full Text
- View/download PDF