8 results on '"Meyrueis, Vincent"'
Search Results
2. Communication invitée
- Author
-
Boutet, Dominique, Jégo, Jean-François, Meyrueis, Vincent, Dynamique du Langage In Situ (DYLIS), Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Normandie Université (NU)-Institut de Recherche Interdisciplinaire Homme et Société (IRIHS), Normandie Université (NU)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU), Arts des Images et Art Contemporain (AIAC), Université Paris 8 Vincennes-Saint-Denis (UP8), Image Numérique et Réalité Virtuelle (INREV), Université Paris 8 Vincennes-Saint-Denis (UP8)-Université Paris 8 Vincennes-Saint-Denis (UP8), and Viadrina Universität
- Subjects
[SHS.LANGUE] Humanities and Social Sciences/Linguistics, [SHS.ANTHRO-SE] Humanities and Social Sciences/Social Anthropology and ethnology, ComputingMilieux_MISCELLANEOUS
- Abstract
International audience
- Published
- 2019
3. Body of the gestures / Gestures of the body
- Author
-
Boutet, Dominique, Jégo, Jean-François, and Meyrueis, Vincent
- Subjects
[SHS.ANTHRO-SE] Humanities and Social Sciences/Social Anthropology and ethnology, [SHS.LANGUE] Humanities and Social Sciences/Linguistics, ComputingMilieux_MISCELLANEOUS
- Published
- 2019
4. POLIMOD Pipeline: step-by-step tutorial for Motion Capture, Visualization & Data Analysis for gesture studies
- Author
-
Boutet, Dominique, Jégo, Jean-François, Meyrueis, Vincent, Dynamique du Langage In Situ (DYLIS), Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Normandie Université (NU)-Institut de Recherche Interdisciplinaire Homme et Société (IRIHS), Normandie Université (NU)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU), Université Paris 8 Vincennes-Saint-Denis (UP8), Université de Rouen, Université Paris 8, Moscow State Linguistic University, and POLIMOD
- Subjects
[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC], [MATH.MATH-NA] Mathematics [math]/Numerical Analysis [math.NA], [INFO.INFO-CL] Computer Science [cs]/Computation and Language [cs.CL], [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR]
- Abstract
This open-access tutorial describes step by step how to use motion capture for gesture studies. We propose a pipeline to collect, visualize, annotate and analyze motion capture (mocap) data for gesture studies. A pipeline is "an implementation of a workflow specification. The term comes from computing, where it means a set of serial processes, with the output of one process being the input of the subsequent process. A production pipeline is not generally perfectly serial because real workflows usually have branches and iterative loops, but the idea is valid: A pipeline is the set of procedures that need to be taken in order to create and hand off deliverables" (Okun, 2010). The pipeline designed here comprises two main parts and three subparts. The first part describes the data collection process, including the setup and its prerequisites, the protocol to follow, and how to export the data for analysis. The second part focuses on data analysis, describing the main steps of data processing, data analysis itself following different gesture descriptors, and data visualization for understanding complex or multidimensional gesture features. We design the pipeline as blocks connected by arrows: each block presents a specific step using hardware or software, arrows represent the flow of data between blocks, and the attached three-letter acronyms refer to the data file formats.
- Published
- 2018
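The block-and-arrow pipeline described in the abstract above can be sketched as a small directed graph, where each block is a processing step and each arrow carries a file-format label. The block names and formats below are illustrative assumptions, not the authors' actual pipeline.

```python
# Toy sketch of a block-and-arrow pipeline: blocks are steps, arrows carry
# a 3-letter file-format label. Names and formats are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    outputs: list = field(default_factory=list)  # (format, next_block) pairs

    def connect(self, fmt, nxt):
        self.outputs.append((fmt, nxt))

capture = Block("Motion capture")
export = Block("Export")
analysis = Block("Data analysis")
viz = Block("Visualization")

capture.connect("bvh", export)    # e.g. a skeleton animation file
export.connect("csv", analysis)   # e.g. tabular joint positions
analysis.connect("mp4", viz)      # e.g. a rendered video for annotation

def describe(block):
    """List the outgoing arrows of a block as 'source --fmt--> target' strings."""
    return [f"{block.name} --{fmt}--> {nxt.name}" for fmt, nxt in block.outputs]

print(describe(capture))  # ['Motion capture --bvh--> Export']
```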
5. POLIMOD Pipeline: documentation. Motion Capture, Visualization & Data Analysis for gesture studies
- Author
-
Boutet, Dominique, Jégo, Jean-François, Meyrueis, Vincent, Dynamique du Langage In Situ (DYLIS), Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Normandie Université (NU)-Institut de Recherche Interdisciplinaire Homme et Société (IRIHS), Normandie Université (NU)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU), Université Paris 8 Vincennes-Saint-Denis (UP8), and Université de Rouen, Université Paris 8, Moscow State Linguistic University
- Subjects
[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [INFO] Computer Science [cs], [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC], [INFO.INFO-NA] Computer Science [cs]/Numerical Analysis [cs.NA], [INFO.INFO-CL] Computer Science [cs]/Computation and Language [cs.CL], [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR]
- Abstract
We propose a pipeline to collect, visualize, annotate and analyze motion capture (mocap) data for gesture studies. A pipeline is "an implementation of a workflow specification. The term comes from computing, where it means a set of serial processes, with the output of one process being the input of the subsequent process. A production pipeline is not generally perfectly serial because real workflows usually have branches and iterative loops, but the idea is valid: A pipeline is the set of procedures that need to be taken in order to create and hand off deliverables" (Okun, 2010). The pipeline designed here (table 1) comprises two main parts and three subparts. The first part describes the data collection process, including the setup and its prerequisites, the protocol to follow, and how to export the data for analysis. The second part focuses on data analysis, describing the main steps of data processing, data analysis itself following different gesture descriptors, and data visualization for understanding complex or multidimensional gesture features. We design the pipeline as blocks connected by arrows: each block presents a specific step using hardware or software, arrows represent the flow of data between blocks, and the attached three-letter acronyms refer to the data file formats (table 2). The development of the pipeline raises three main questions: how to synchronize data? How to pick data and transform it? And what does it change in annotation? To solve the question of data synchronization, we design a protocol that details how hardware has to be selected according to the type of measures; the protocol also implies specific steps for the participant, such as adopting a T-pose or clapping their hands once at the beginning and the end of the recording, to facilitate data synchronization and export in the next steps.
Regarding the picking and transformation of relevant data, we propose to select and prepare files for export according to the intended analysis and software. Thus, mocap files can be converted to videos to be visualized, for instance in Elan, to enhance gesture coding, or converted to text files to be analyzed in Excel, or processed in Unity to explore the flow of movement, new gesture features, or kinematics. We detail all these processes in a step-by-step tutorial available in open access. Finally, we question what a pipeline involving mocap changes in annotation. We note that mocap no longer imposes a single point of view but offers as many as required, since virtual cameras can be used to study gesture from the "skeleton" of the participant. For example, we show it is possible to adopt a first-person point of view to embody and thus better understand participants' gestures. We also propose an augmented reality tool developed in the Unity3D software to visualize in real time multidimensional gesture features (such as velocity, acceleration and jerk), or a combination of them, in simpler curve or surface representations. As a future direction, the data collected here could be used by a machine learning algorithm to extract gesture properties automatically, or to detect and tag the aspectuality of gestures automatically. At last, an embodied visualization tool using virtual reality could offer new possibilities for coding and understanding gestures beyond using a 2D video as a reference or study material.
- Published
- 2018
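The gesture descriptors named in the abstract above (velocity, acceleration, jerk) are successive time derivatives of position. A minimal sketch, assuming uniformly sampled 1-D mocap data with a made-up sampling rate and trajectory:

```python
# Illustrative sketch (not the authors' code): deriving velocity, acceleration
# and jerk from sampled positions by successive finite differences.
def finite_diff(samples, dt):
    """Estimate the first derivative between consecutive samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.1  # assumed 10 Hz sampling, for illustration only
position = [0.0, 0.1, 0.4, 0.9, 1.6]       # toy 1-D trajectory
velocity = finite_diff(position, dt)        # 1st derivative of position
acceleration = finite_diff(velocity, dt)    # 2nd derivative
jerk = finite_diff(acceleration, dt)        # 3rd derivative
```

In a real pipeline the same differencing would be applied per joint and per axis of the exported mocap tables, typically after smoothing to suppress sensor noise, which higher derivatives amplify.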
6. Modification interactive de formes en réalité virtuelle : application à la conception d'un produit
- Author
-
Meyrueis, Vincent, Centre de Robotique (CAOR), MINES ParisTech - École nationale supérieure des mines de Paris, Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL), École Nationale Supérieure des Mines de Paris, and Philippe Fuchs
- Subjects
Conception assistée par ordinateur (CAO), Réalité virtuelle, Environnement immersif, Real-time 3D object deformation, Immersive environment, Virtual reality, Déformation en temps réel d'objets 3D, [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR], Computer-aided design (CAD)
- Abstract
The context of this research is the use of virtual reality for product design and development. At the industrial level, virtual integration reviews are currently limited to static project reviews, with no possibility of changing the original design. In this work, we present a new method for virtual design review, which aims to enable the user to modify the digital model, as naturally as possible, from within the virtual environment. This method, called D3, is based on three steps: a selection step by drawing, a deformation step by manipulating the selected area, and a final refining step that allows engineers to adjust the modifications. For the method to serve as a means of communication, it has to be simple, intuitive and usable by all project actors. An experiment was performed to evaluate the method in terms of learning curve and the errors made by subjects in a modification task. Finally, in order to facilitate the refining step, the method offers several ways for engineers to reflect the modifications on the CAD model.
- Published
- 2011
7. D3: an Immersive aided design deformation method
- Author
-
Meyrueis, Vincent, Paljic, Alexis, Fuchs, Philippe, Centre de Robotique (CAOR), MINES ParisTech - École nationale supérieure des mines de Paris, Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL), équipe RV&RA, Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-MINES ParisTech - École nationale supérieure des mines de Paris, and ACM
- Subjects
Computer-Aided Design (CAD), Real-time 3D Object Deformation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling--Curve, surface, solid, and object representations, I.3.6 [Computer Graphics]: Methodology and Techniques--Interaction techniques, Virtual Reality, Immersive Environment, [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR]
- Abstract
International audience
In this paper, we introduce a new deformation method adapted to immersive design. The use of Virtual Reality (VR) in the design process implies a physical displacement of project actors and data between the virtual reality facility and the design office. Decisions taken in the immersive environment are manually reflected on the Computer-Aided Design (CAD) system, which increases design time and breaks the continuity of the data workflow. On this basis, there is a clear demand in industry for tools adapted to immersive design, yet few existing methods address CAD concerns in VR. For this purpose, we propose a new method, called D3 for "Draw, Deform and Design", based on a two-step manipulation paradigm, consisting of 1) area selection and 2) path drawing, followed by a final refining and fitting phase. Our method is discussed on the basis of a set of CAD deformation scenarios.
- Published
- 2009
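The select-then-deform idea behind D3 can be illustrated with a toy surface deformation: vertices inside a selected area are displaced along the dragged direction, with a smooth falloff so the change blends into the untouched geometry. The cosine falloff, the radius and the vertex data below are assumptions for illustration, not the method's actual implementation.

```python
# Toy sketch of select-then-deform with smooth falloff (hypothetical,
# not the D3 implementation): vertices within `radius` of `center` are
# translated by `offset`, weighted so the effect fades out at the edge.
import math

def deform(vertices, center, offset, radius):
    """Translate vertices near `center` by `offset`, scaled by a cosine falloff."""
    out = []
    for v in vertices:
        d = math.dist(v, center)
        if d < radius:  # step 1: the vertex lies in the selected area
            # step 2: weight is 1 at the center, 0 at the selection boundary
            w = 0.5 * (1 + math.cos(math.pi * d / radius))
            out.append(tuple(c + w * o for c, o in zip(v, offset)))
        else:
            out.append(v)  # outside the selection: untouched
    return out

flat = [(x * 0.5, 0.0, 0.0) for x in range(5)]  # a toy row of vertices
bumped = deform(flat, center=(1.0, 0.0, 0.0), offset=(0.0, 1.0, 0.0), radius=1.0)
```

A refining step like the one the abstract describes would then fit the displaced vertices back onto editable CAD surfaces, which this sketch does not attempt.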
8. Tâches de conception en environnement immersif par les techniques de réalité virtuelle
- Author
-
Meyrueis, Vincent, Paljic, Alexis, Fuchs, Philippe, Centre de Robotique (CAOR), MINES ParisTech - École nationale supérieure des mines de Paris, Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL), équipe RV&RA, and Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-MINES ParisTech - École nationale supérieure des mines de Paris
- Subjects
[INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC], ComputingMilieux_MISCELLANEOUS, [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR]
- Abstract
International audience
- Published
- 2007
Discovery Service for Jio Institute Digital Library