130 results for "Caplier, Alice"
Search Results
2. Motion-based countermeasure against photo and video spoofing attacks in face recognition
- Author
- Edmunds, Taiamiti and Caplier, Alice
- Published
- 2018
3. How to predict the global instantaneous feeling induced by a facial picture?
- Author
- Lienhard, Arnaud, Ladret, Patricia, and Caplier, Alice
- Published
- 2015
4. Unsupervised joint face alignment with gradient correlation coefficient
- Author
- Ni, Weiyuan, Vu, Ngoc-Son, and Caplier, Alice
- Published
- 2016
5. Lucas–Kanade based entropy congealing for joint face alignment
- Author
- Ni, Weiyuan, Vu, Ngoc-Son, and Caplier, Alice
- Published
- 2012
6. Face recognition using the POEM descriptor
- Author
- Vu, Ngoc-Son, Dee, Hannah M., and Caplier, Alice
- Published
- 2012
7. EOG-based drowsiness detection: Comparison between a fuzzy system and two supervised learning classifiers
- Author
- Picot, Antoine, Charbonnier, Sylvie, and Caplier, Alice
- Published
- 2011
8. Retina enhanced SURF descriptors for spatio-temporal concept detection
- Author
- Strat, Sabin Tiberius, Benoit, Alexandre, Lambert, Patrick, and Caplier, Alice
- Published
- 2014
9. Video viewing: do auditory salient events capture visual attention?
- Author
- Coutrot, Antoine, Guyader, Nathalie, Ionescu, Gelu, and Caplier, Alice
- Published
- 2014
10. A belief-based sequential fusion approach for fusing manual signs and non-manual signals
- Author
- Aran, Oya, Burger, Thomas, Caplier, Alice, and Akarun, Lale
- Published
- 2009
11. Lip contour segmentation and tracking compliant with lip-reading application constraints
- Author
- Stillittano, Sébastien, Girondel, Vincent, and Caplier, Alice
- Published
- 2013
12. Using retina modelling to characterize blinking: comparison between EOG and video analysis
- Author
- Picot, Antoine, Charbonnier, Sylvie, Caplier, Alice, and Vu, Ngoc-Son
- Published
- 2012
13. Multimodal focus attention and stress detection and feedback in an augmented driver simulator
- Author
- Benoit, Alexandre, Bonnaud, Laurent, Caplier, Alice, Ngo, Phillipe, Lawson, Lionel, Trevisan, Daniela G., Levacic, Vjekoslav, Mancas, Céline, and Chanel, Guillaume
- Published
- 2009
14. Cued Speech Gesture Recognition: A First Prototype Based on Early Reduction
- Author
- Burger, Thomas, Caplier, Alice, and Perret, Pascal
- Published
- 2008
15. Image and Video for Hearing Impaired People
- Author
- Caplier, Alice, Stillittano, Sébastien, Aran, Oya, Akarun, Lale, Bailly, Gérard, Beautemps, Denis, Aboutabit, Nouredine, and Burger, Thomas
- Published
- 2008
16. Multimodal signal processing and interaction for a driving simulator: Component-based architecture
- Author
- Benoit, Alexandre, Bonnaud, Laurent, Caplier, Alice, Jourde, Frédéric, Nigay, Laurence, Serrano, Marcos, Damousis, Ioannis, Tzovaras, Dimitrios, and Lawson, Jean-Yves Lionel
- Published
- 2007
17. Accurate and quasi-automatic lip tracking
- Author
- Eveno, Nicolas, Caplier, Alice, and Coulon, Pierre-Yves
- Subjects
- Image processing -- Research, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
Lip segmentation is an essential stage in many multimedia systems such as videoconferencing, lip reading, or low-bit-rate coding communication systems. In this paper, we propose an accurate and robust quasi-automatic lip segmentation algorithm. First, the upper mouth boundary and several characteristic points are detected in the first frame by using a new kind of active contour: the "jumping snake." Unlike classic snakes, it can be initialized far from the final edge, and the adjustment of its parameters is easy and intuitive. Then, to achieve the segmentation, we propose a parametric model composed of several cubic curves. Its high flexibility enables accurate lip contour extraction even in the challenging case of a very asymmetric mouth. Compared to existing models, it brings a significant improvement in accuracy and realism. The segmentation in the following frames is achieved by using an interframe tracking of the keypoints and the model parameters. However, we show that, with a usual tracking algorithm, the keypoints' positions become unreliable after a few frames. We therefore propose an adjustment process that enables accurate tracking even after hundreds of frames. Finally, we show that the mean keypoint tracking errors of our algorithm are comparable to manual point selection errors. Index Terms: Active contour, deformable model, points tracking, segmentation.
- Published
- 2004
18. Computational Analysis of Correlations between Image Aesthetic and Image Naturalness in the Relation with Image Quality.
- Author
- Le, Quyet-Tien, Ladret, Patricia, Nguyen, Huu-Tuan, and Caplier, Alice
- Subjects
- STATISTICAL correlation, AESTHETICS, INTELLIGENCE levels, VISUAL perception
- Abstract
The main purpose of this paper is the study of the correlations between Image Aesthetic (IA) and Image Naturalness (IN) and the analysis of the influence of IA and IN on Image Quality (IQ) in different contexts. The first contribution is a study about the potential relationships between IA and IN. For that study, two sub-questions are considered. The first one is to validate the idea that IA and IN are not correlated to each other. The second one is about the influence of IA and IN features on Image Naturalness Assessment (INA) and Image Aesthetic Assessment (IAA), respectively. Secondly, it is obvious that IQ is related to IA and IN, but the exact influence of IA and IN on IQ has not been evaluated. Besides that, the context impact on those influences has not been clarified, so the second contribution is to investigate the influence of IA and IN on IQ in different contexts. The results obtained from rigorous experiments prove that although there are moderate and weak correlations between IA and IN, they are still two different components of IQ. It also appears that viewers' IQ perception is affected by some contextual factors, and the influence of IA and IN on IQ depends on the considered context. [ABSTRACT FROM AUTHOR]
- Published
- 2022
19. A Human Body Analysis System
- Author
- Girondel, Vincent, Bonnaud, Laurent, and Caplier, Alice
- Published
- 2006
20. Cued Speech Gesture Recognition: A First Prototype Based on Early Reduction
- Author
- Caplier, Alice, Burger, Thomas, and Perret, Pascal
- Subjects
- Electronics, TK7800-8360
- Abstract
Cued Speech is a specific linguistic code for hearing-impaired people. It is based on both lip reading and manual gestures. In the context of THIMP (Telephony for the Hearing-IMpaired Project), we work on automatic cued speech translation. In this paper, we only address the problem of automatic cued speech manual gesture recognition. Such a gesture recognition issue is really common from a theoretical point of view, but we approach it with respect to its particularities in order to derive an original method. This method is essentially built around a bioinspired method called early reduction. Prior to a complete analysis of each image of a sequence, the early reduction process automatically extracts a restricted number of key images which summarize the whole sequence. Only the key images are studied from a temporal point of view, with lighter computation than the complete sequence.
- Published
- 2007
21. Image and Video for Hearing Impaired People
- Author
- Aran, Oya, Akarun, Lale, Burger, Thomas, Bailly, Gérard, Beautemps, Denis, Aboutabit, Nouredine, Caplier, Alice, and Stillittano, Sébastien
- Subjects
- Electronics, TK7800-8360
- Abstract
We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people that involve SL and CS video synthesis.
- Published
- 2007
22. Image and Video Processing for Disability
- Author
- Pun, Thierry, Tzovaras, Dimitrios, and Caplier, Alice
- Subjects
- Electronics, TK7800-8360
- Published
- 2007
23. Time- and Resource-Efficient Time-to-Collision Forecasting for Indoor Pedestrian Obstacles Avoidance.
- Author
- Urban, David and Caplier, Alice
- Subjects
- AUTONOMOUS vehicles, CONVOLUTIONAL neural networks, CAMCORDERS, DEEP learning, OPTICAL head-mounted displays
- Abstract
As difficult vision-based tasks like object detection and monocular depth estimation are making their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems are emerging, obstacle detection and collision prediction are two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, made of a convolutional neural network, to predict the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt, via supervised learning, to new sceneries with different types of obstacles. [ABSTRACT FROM AUTHOR]
- Published
- 2021
24. Deep Learning for Spatio-Temporal Modeling of Dynamic Spontaneous Emotions.
- Author
- Al Chanti, Dawood and Caplier, Alice
- Abstract
Facial expressions involve dynamic morphological changes in a face, conveying information about the expresser's feelings. Each emotion has a specific spatial deformation over the face and a temporal profile with distinct time segments. We aim at modeling the human dynamic emotional behavior by taking into consideration the visual content of the face and its evolution. But emotions can both speed up and slow down, therefore it is important to incorporate information from the local neighborhood frames (short-term dependencies) and the global setting (long-term dependencies) to summarize the segment context despite its time variations. A 3D-Convolutional Neural Network (3D-CNN) is used to learn early local spatiotemporal features. The 3D-CNN is designed to capture subtle spatiotemporal changes that may occur on the face. Then, a Convolutional Long Short-Term Memory (ConvLSTM) network is designed to learn semantic information by taking into account longer spatiotemporal dependencies. The ConvLSTM network helps considering the global visual saliency of the expression, that is, locating and learning features in space and time that stand out from their local neighbors in order to signify distinctive facial expression features along the entire sequence. Non-variant representations based on aggregating global spatiotemporal features at increasingly fine resolutions are then obtained using a weighted Spatial Pyramid Pooling layer. [ABSTRACT FROM AUTHOR]
- Published
- 2021
25. Image and Video Processing for Disability
- Author
- Caplier, Alice, Pun, Thierry, and Tzovaras, Dimitrios
- Published
- 2008
26. Image Aesthetic Assessment Based on Image Classification and Region Segmentation.
- Author
- Le, Quyet-Tien, Ladret, Patricia, Nguyen, Huu-Tuan, and Caplier, Alice
- Subjects
- IMAGE segmentation, AESTHETICS, LANDSCAPES, PHOTOGRAPHY, FLOWERS
- Abstract
The main goal of this paper is to study Image Aesthetic Assessment (IAA), classifying images as high or low aesthetic. The main contributions concern three points. Firstly, following the idea that photos in different categories (human, flower, animal, landscape, etc.) are taken with different photographic rules, image aesthetic should be evaluated in a different way for each image category. Large field images and close-up images are two typical categories of images with opposite photographic rules, so we want to investigate the intuition that prior Large field/Close-up Image Classification (LCIC) might improve the performance of IAA. Secondly, when a viewer looks at a photo, some regions receive more attention than other regions. Those regions are defined as Regions Of Interest (ROI), and it might be worthy to identify those regions before IAA. The question "Is it worthy to extract some ROIs before IAA?" is considered by studying Region Of Interest Extraction (ROIE) before investigating IAA based on each feature set (global image features, ROI features and background features). Based on the answers, a new IAA model is proposed. The last point is about a comparison between the efficiency of handcrafted and learned features for the purpose of IAA. [ABSTRACT FROM AUTHOR]
- Published
- 2021
27. Training dataset optimization for deep learning applied to optical proximity correction on non-regular hole masks.
- Author
- Urard, Mathis, Paquet, Clément, Beylier, Charlotte, Pena, Jean-Noël, Caplier, Alice, Dalla Mura, Mauro, Bange, Romain, and Guizzetti, Roberto
- Published
- 2024
28. Driver Head Movements While Using a Smartphone in a Naturalistic Context
- Author
- García-García, Miguel, Caplier, Alice, and Rombaut, Michèle
- Subjects
- [INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Abstract
Smartphone usage while driving is a dangerous activity, directly linked to 17% of deadly accidents in France in 2015. While it can potentially affect every road user, professional drivers are perhaps the most affected group, as they spend a large amount of time in their vehicles on a daily basis, whether they deliver goods or transport people. Detecting smartphone usage is interesting for many reasons, from legal controls to forensic investigation of accidents, without forgetting the possibility of alerting the driver to the danger he is engaging in. In this study we evaluate the pertinence of using driver head rotation movements to automatically predict smartphone usage at the wheel. In order to fit the system to the particularities of professional drivers, a naturalistic driving study has been conducted. 15 operational vehicles from two private French transport companies were equipped with near-infrared cameras, embedded real-time computer vision, and machine learning techniques. Common behavioural patterns have been revealed. Head rebounds are a constant (letting the driver switch gaze between the road and the device). Additionally, the time a driver spends looking down from a reference neutral direction can be used as a reliable parameter to predict smartphone usage. On the other hand, some divergences between van and bus drivers' head movements have been noticed. A real-time smartphone usage detection system has been implemented from the results of this study. Preliminary results are encouraging, and a prototype of the system is already being tested by professional drivers in a naturalistic context.
- Published
- 2017
29. Fake Face Detection Based on Radiometric Distorsions
- Author
- Edmunds, Taiamiti and Caplier, Alice
- Subjects
- [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Published
- 2016
30. Patch-based Local Phase Quantization of Monogenic components for Face Recognition
- Author
- Nguyen, Huu-Tuan and Caplier, Alice
- Subjects
- Local Phase Quantization, Monogenic filter based face recognition, Patch based, [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Abstract
In this paper, we propose a novel feature extraction method for face recognition called patch-based Local Phase Quantization of Monogenic components (PLPQMC). From the input image, the directional Monogenic bandpass components are generated. Then, each pixel of a bandpass image is replaced by the mean value of its rectangular neighborhood. Next, LPQ histogram sequences are computed upon those images. Finally, these histogram sequences are concatenated for constituting a global representation of the face image. Using the proposed method for feature extraction, we construct a new face recognition system with Whitened Principal Component Analysis (WPCA) for dimensionality reduction, a k-nearest neighbor classifier, and weighted angle distance for classification. Performance evaluations on two public face databases, FERET and SCface, show that our method is efficient against some challenging issues, e.g. expressions, illumination, time lapse, low resolution, and it is competitive with state-of-the-art methods.
- Published
- 2014
31. Inner lip segmentation by combining active contours and parametric models
- Author
- Stillittano, Sébastien and Caplier, Alice
- Subjects
- stomatognathic diseases, stomatognathic system, [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Abstract
Lip reading applications require accurate information about lip movement and shape, and both outer and inner contours are useful. In this paper, we introduce a new method for inner lip segmentation. From the outer lip contour given by a pre-existing algorithm, we use some key points to initialize an active contour called the "jumping snake". According to some optimal information of luminance and chrominance gradient, this active contour fits the position of two parametric models: a first one composed of two cubic curves and a broken line in the case of a closed mouth, and a second one composed of four cubic curves in the case of an open mouth. These parametric models give a flexible and accurate final inner lip contour. Finally, we present several experimental results demonstrating the effectiveness of the proposed algorithm.
- Published
- 2008
32. Facial expression recognition based on the belief theory: comparison with different classifiers
- Author
- Hammal, Zakia, Couvreur, Laurent, Caplier, Alice, and Rombaut, Michèle
- Subjects
- [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Published
- 2005
33. Face Recognition using Multi-modal Binary Patterns - Proceedings
- Author
- Nguyen, T.P., Vu, Son, and Caplier, Alice
- Subjects
- [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Abstract
A new descriptor called Multi-modal Binary Patterns (MMBP) is proposed for face recognition. It balances well the important requirements for real-world applications, including robustness, discriminative power, and low computational cost. The proposed algorithm has several desirable properties: 1) it captures information from the face image in any direction, as it is an oriented feature; 2) being a spatial multi-scale structure, the descriptor catches not only local but also more global information about the object; 3) it is robust to image transformations like variations of lighting and expressions; and 4) it is computationally efficient. In more detail, to catch information in a given direction, a Local Line Binary Pattern (LLBP) based operator is first applied. The MMBP feature is then built by applying an LBP-based self-similarity operator on the values calculated by LLBP operators across different directions. A Whitened PCA dimensionality reduction technique is applied to get a more compact and efficient descriptor. Experimental results achieved on the comprehensive FERET data set, comparable to the state of the art, validate the efficiency of our method.
- Published
- 2012
34. DYNEMO: A Corpus of dynamic and spontaneous emotional facial expressions
- Author
- Meillon, Brigitte, Tcherkassov, Anna, Mandran, Nadine, Adam, Jean-Michel, Dubois, Michel, Benoit, Anne-Marie, Guérin-Dugué, Anne, and Caplier, Alice
- Subjects
- [INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing, [SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing
- Abstract
DynEmo is a publicly available database of significant size containing dynamic and authentic emotional facial expressions (EFE) of annoyance, astonishment, boredom, cheerfulness, disgust, fright, curiosity, being moved, pride, and shame. All EFEs' affective states are identified both by the expresser and by observers, with all methodological, contextual, etc. elements at the disposal of the scientific community. This database was elaborated by a multidisciplinary team. This multimodal corpus meets psychological, technical, and ethical criteria. 358 EFE videos (1 to 15 min long) of ordinary people (aged from 25 to 65, half women and half men), recorded in natural (but experimental) conditions, are associated with two types of data: first, the affective state of the expresser (self-reported once the emotion-inducing task was completed), and second, the timeline of observers' assessments regarding the emotions displayed all along the recording. This timeline allows easy emotion segmentation for any researcher interested in human nonverbal behavior analysis.
- Published
- 2010
35. DynEmo: A Database of Dynamic and Spontaneous Emotional Facial Expressions
- Author
-
Tcherkassov, Anna, Dupré, D., Dubois, Michel, Mandran, Nadine, Meillon, Brigitte, Boussard, Gwenn, Adam, Jean-Michel, Caplier, Alice, Guérin-Dugué, Anne, Benoit, Anne-Marie, Mermillod, Martial, Laboratoire de Psychologie Sociale (LPS), Laboratoire Inter-universitaire de Psychologie : Personnalité, Cognition, Changement Social (LIP-PC2S), Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Savoie Mont Blanc (USMB [Université de Savoie] [Université de Chambéry])-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Savoie Mont Blanc (USMB [Université de Savoie] [Université de Chambéry]), Laboratoire d'Informatique de Grenoble (LIG), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF), GIPSA - Géométrie, Perception, Images, Geste (GIPSA-GPIG), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Grenoble Images Parole Signal Automatique (GIPSA-lab), and Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 
3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,ComputingMilieux_MISCELLANEOUS - Abstract
International audience
- Published
- 2009
36. A Dempster-Shafer Theory Based Combination of Classifiers for Hand Gesture Recognition
- Author
-
BURGER, Thomas, ARAN, Oya, URANKAR, Alexandra, CAPLIER, Alice, AKARUN, Lale, Department of Computer Engineering [Bogazici], Boǧaziçi üniversitesi = Boğaziçi University [Istanbul], France Télécom Recherche & Développement (FT R&D), France Télécom, GIPSA - Géométrie, Perception, Images, Geste (GIPSA-GPIG), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS), and Boğaziçi University [Istanbul]
- Subjects
[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] - Abstract
International audience; As part of our work on hand gesture interpretation, we present our results on hand shape recognition. Our method is based on attribute extraction and multiple partial classifications. The novelty lies in the way the fusion of all the partial classification results is performed. This fusion is (1) more efficient in terms of information theory and leads to more accurate results, and (2) general enough to allow heterogeneous sources of information to be taken into account: each classifier output is transformed into a belief function, and all the corresponding functions are fused together with other external evidential sources of information.
- Published
- 2009
- Full Text
- View/download PDF
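The evidential fusion described in this abstract rests on Dempster's rule of combination. The following is an illustrative reconstruction, not the authors' code: two mass functions over focal sets of class labels are combined, with conflicting mass renormalised away.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of class
    labels to masses) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:  # non-empty intersection: mass goes to it
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:      # empty intersection: conflicting mass
                conflict += wa * wb
    k = 1.0 - conflict  # normalise by the non-conflicting mass
    return {s: w / k for s, w in combined.items()}

# Two hypothetical classifier outputs over hand shapes {A, B},
# each converted to a belief (mass) function beforehand
m1 = {frozenset({"A"}): 0.7, frozenset({"A", "B"}): 0.3}
m2 = {frozenset({"B"}): 0.4, frozenset({"A", "B"}): 0.6}
fused = dempster_combine(m1, m2)
```

Unlike a weighted vote, mass assigned to the whole frame {A, B} expresses ignorance rather than support, which is what makes the fusion well suited to heterogeneous, partially informative sources.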
37. Estimation of facial expression intensity based on the belief theory
- Author
-
Ghamen, Khadoudja, Caplier, Alice, Laboratory Lire (LIRE), Mentoury university, GIPSA - Géométrie, Perception, Images, Geste (GIPSA-GPIG), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Grenoble Images Parole Signal Automatique (GIPSA-lab), and Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; This article presents a new method to estimate the intensity of a human facial expression. Assuming an expression occurring on a face has been recognized as one of the six universal emotions (joy, disgust, surprise, sadness, anger, fear), the estimation of the expression's intensity is based on the degree of geometrical deformation of some facial features and on the analysis of several distances computed on skeletons of expressions. These skeletons are the result of a contour segmentation of the permanent facial features (eyes, brows, mouth). The proposed method uses belief theory for data fusion. The intensity of the recognized expression is scored on a three-point ordinal scale: "low intensity", "medium intensity" or "high intensity". Experiments on a large number of images validate our method and give good estimates of facial expression intensity. We have implemented and tested the method on the following three expressions: joy, surprise and disgust.
- Published
- 2008
38. Estimation of Anger, Sadness and Fear Expression Intensity based on the Belief Theory
- Author
-
Ghamen, Khadoudja, Caplier, Alice, Laboratory Lire (LIRE), Mentoury university, GIPSA - Géométrie, Perception, Images, Geste (GIPSA-GPIG), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Grenoble Images Parole Signal Automatique (GIPSA-lab), and Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; This article presents a new method to estimate the intensity of a human facial expression. Assuming an expression occurring on a face has been recognized as one of the six universal emotions (joy, disgust, surprise, sadness, anger, fear), the estimation of the expression's intensity is based on the degree of geometrical deformation of some facial features and on the analysis of several distances computed on skeletons of expressions. These skeletons are the result of a contour segmentation of the permanent facial features (eyes, brows, mouth). The proposed method uses belief theory for data fusion. The intensity of the recognized expression is scored on a three-point ordinal scale: "low intensity", "medium intensity" or "high intensity". Experiments on a large number of images validate our method and give good estimates of facial expression intensity. We have implemented and tested the method on joy, surprise and disgust, and we now apply the same method to the following expressions: anger, fear and sadness.
- Published
- 2008
39. Classification of Facial Expressions Based on Transient Features
- Author
-
Ghamen, Khadoudja, Caplier, Alice, Laboratory Lire (LIRE), Mentoury university, GIPSA - Géométrie, Perception, Images, Geste (GIPSA-GPIG), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Grenoble Images Parole Signal Automatique (GIPSA-lab), and Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Stendhal - Grenoble 3-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; Over the last decades, most of the automatic facial expression recognition methods and systems presented in the literature have been based on the motions and deformations of permanent facial features, using transient features only as a post-processing step to improve some results. In this paper we present a new method that takes the opposite approach: it first uses the transient facial features produced by an expression for expression recognition, and then uses the permanent features in the post-processing phase. The results obtained are compared with those of the first approach and turn out to be almost the same.
- Published
- 2008
40. Emotion Detection in the Loop from Brain Signals and Facial Images
- Author
-
Savran, Arman, Ciftci, Koray, Chanel, Guillaume, Mota, Javier, Hong Viet, Luong, Sankur, Bülent, Akarun, Lale, Caplier, Alice, and Rombaut, Michele
- Subjects
near-infrared spectroscopy ,emotion detection ,EEG ,video - Abstract
In this project, we intended to develop techniques for multimodal emotion detection, one modality being brain signals via fNIRS, the second modality being face video and the third modality being the scalp EEG signals. EEG and fNIRS provided us with an "internal" look at the emotion generation processes, while the video sequence gave us an "external" look at the "same" phenomenon. Fusions of fNIRS with video and of EEG with fNIRS were considered. Fusion of all three modalities was not considered due to the extensive noise on the EEG signals caused by facial muscle movements, which are required for emotion detection from video sequences. Besides the techniques mentioned above, peripheral signals, namely respiration, cardiac rate and galvanic skin resistance, were also measured from the subjects during "fNIRS + EEG" recordings. These signals provided us with extra information about the emotional state of the subjects. The critical point in the success of this project was to be able to build a "good" database. Good data acquisition means synchronous data and requires the definition of specific experimental protocols for emotion elicitation. Thus, we devoted much of our time to data acquisition throughout the workshop, which resulted in a database large enough for making the first analyses. Results presented in this report should be considered preliminary. However, they are promising enough to extend the scope of the research.
- Published
- 2006
41. Multimodal Focus Attention Detection in an Augmented Driver Simulator
- Author
-
Benoit, Alexandre, Bonnaud, Laurent, Caplier, Alice, Ngo, Philippe, Lawson, L., Treviesan, D., Levacic, V., Mancas, C., Chanel, G., Laboratoire des images et des signaux (LIS), Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF)-Centre National de la Recherche Scientifique (CNRS), Communications and Remote Sensing Laboratory [Louvain], Université Catholique de Louvain = Catholic University of Louvain (UCL), Faculty of Electrical Engineering and Computing [Zagreb] (FER), University of Zagreb, Computer Science Dpt, and Université de Genève (UNIGE)
- Subjects
OpenInterface ,stress ,data fusion ,driver simulator ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,facial movements analysis ,fission ,attention level ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,physiological signals - Abstract
International audience; This project proposes to develop a driver simulator which takes into account information about the user's state of mind (level of attention, fatigue state, stress state). The analysis of the user's state of mind is based on video data and physiological signals. Facial movements such as eye blinking, yawning and head rotations are detected on the video data and used to evaluate the fatigue and attention level of the driver. The user's electrocardiogram and galvanic skin response are recorded and analysed in order to evaluate the stress level of the driver. A driver simulator software is modified so as to react appropriately to these critical situations of fatigue and stress: visual messages are sent to the driver, wheel vibrations are generated, and the driver is supposed to react to the alertness messages. A flexible and efficient multi-threaded server architecture is proposed to support multiple messages sent by different modalities. Strategies for data fusion and fission are also provided. Some of these components are integrated within the first prototype of OpenInterface (the Multimodal Similar platform).
- Published
- 2006
- Full Text
- View/download PDF
42. Enhancing Facial expressions Classification By Information Fusion
- Author
-
Buciu, I., Hammal, Zakia, Caplier, Alice, Nikolaidis, N., Pitas, I., Cieren, Isabelle, Laboratoire des images et des signaux (LIS), and Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF)
- Subjects
ComputingMilieux_MISCELLANEOUS - Abstract
International audience
- Published
- 2006
43. Face spoofing detection based on colour distortions.
- Author
-
Edmunds, Taiamiti and Caplier, Alice
- Abstract
Securing face recognition systems against spoofing attacks has been recognised as a real challenge. Spoofing attacks are conducted by printing or displaying a digital acquisition of the captured subject (target user) in front of the sensor. These extra reproduction stages generate colour distortions between face artefacts and real faces. In this work, the problem of spoof detection is addressed by modelling the radiometric distortions generated by the recapturing process. The spoof detection process takes advantage of enrolment data and occurs after face identification, so that for each client the authors have at their disposal at least one genuine face sample as a reference. Once the client is identified, they compute the colour transformation between the observed face and its enrolment counterpart. A compact parametric representation is proposed to model these radiometric transforms, and it is used as a feature vector for classification. They evaluate the proposed method on the Replay-Attack, CASIA and MSU public databases and show its competitiveness with state-of-the-art countermeasures. Limitations of the proposed method are clearly identified and discussed through experiments in adversarial evaluation conditions where colour distortions are generated not only by the recapturing process but also by natural illumination variations. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
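The idea of using the probe-to-enrolment colour transformation as a classification feature can be sketched as follows. This is a minimal illustration under assumed simplifications (a per-channel affine gain/offset model fit by least squares), not the paper's actual parametric representation.

```python
import numpy as np

def colour_transform_features(probe, reference):
    """Fit a per-channel affine map reference -> probe (gain + offset)
    by least squares, over N sampled face pixels given as N x 3 arrays.
    The 6 fitted parameters form the feature vector: genuine accesses
    should sit near the identity (gain ~ 1, offset ~ 0), while
    recaptured artefacts deviate due to the extra reproduction stage."""
    feats = []
    for c in range(3):  # one affine fit per colour channel
        x, y = reference[:, c], probe[:, c]
        A = np.stack([x, np.ones_like(x)], axis=1)
        (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
        feats.extend([gain, offset])
    return np.array(feats)
```

A downstream binary classifier (e.g. an SVM) would then be trained on these feature vectors to separate genuine faces from recaptured ones.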
44. A belief theory-based static posture recognition system for real-time videosurveillance applications
- Author
-
Girondel, Vincent, Caplier, Alice, Bonnaud, Laurent, Laboratoire des images et des signaux (LIS), Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF), and Girondel, Vincent
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,[SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; This paper presents a system that can automatically recognize four different static human body postures for video surveillance applications. The considered postures are standing, sitting, squatting and lying. The data come from the persons' 2D segmentation and from their face localization, and consist of distance measurements relative to a reference posture (standing, arms stretched horizontally). The recognition is based on data fusion using belief theory, because this theory allows the modelling of imprecision and uncertainty. The efficiency and the limits of the recognition system are highlighted through the processing of several thousand frames. One target application is the monitoring of elderly people in hospitals or at home. The system allows real-time processing.
- Published
- 2005
45. Quiet versus agitated: vocal classification system
- Author
-
Hammal, Zakia, Bozkurt, Baris, Couvreur, Laurent, Unay, D., Caplier, Alice, Dutoit, T., Laboratoire des images et des signaux (LIS), Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF), Faculté polytechnique de Mons, Université de Mons (UMons), and TCTS-FPMs
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,ComputingMilieux_MISCELLANEOUS - Abstract
International audience
- Published
- 2005
46. Comparison of 2D and 3D Analysis for Automated Cued Speech Gesture Recognition
- Author
-
Caplier, Alice, Bonnaud, Laurent, Malassiotis, Sotiris, Strintzis, Michael, Laboratoire des images et des signaux (LIS), Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF), and Bonnaud, Laurent
- Subjects
[INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,[SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; This paper deals with the problem of the automated classification of cued speech gestures. Cued speech is a specific gesture language (different from sign language) used for communication between deaf people and others, and it uses only 8 different hand configurations. The aim of this work is to apply a simple classifier on 3 image data sets in order to answer two main questions: is 3D data needed, and how important is the hand segmentation quality? The first data set consists of images acquired with a single camera in a controlled lighting environment, with a segmentation (called "2D segmentation") based on luminance information. The second data set is acquired with a 3D camera that can produce a depth map; a segmentation (called "3D segmentation") of the hand configurations based on the video and the depth map is performed. The third data set consists of 3D-segmented masks where the resulting hand mask is warped to compensate for hand pose variations. For classification purposes, hand configurations are characterized by the seven Hu moment invariants, and a supervised classification using a multi-layer perceptron is then performed. The classification performances based on 2D and 3D information are compared.
- Published
- 2004
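The Hu moment invariants mentioned in this abstract are translation-, scale- and rotation-invariant shape descriptors built from normalized central moments. As a sketch (pure NumPy, first two of the seven invariants only; the paper presumably uses a standard library implementation such as OpenCV's):

```python
import numpy as np

def hu_invariants(mask):
    """First two Hu moment invariants of a binary hand mask.
    eta(p, q) is the normalized central moment mu_pq / m00^(1+(p+q)/2)."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                 # zeroth moment = area in pixels
    xbar, ybar = xs.mean(), ys.mean()    # centroid

    def eta(p, q):
        mu = ((xs - xbar) ** p * (ys - ybar) ** q).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

A 7-dimensional vector of such invariants per hand mask is a natural input for the multi-layer perceptron classifier described above.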
47. Real-time tracking of multiple persons by Kalman filtering and face pursuit for multimedia applications
- Author
-
Girondel, Vincent, Caplier, Alice, Bonnaud, Laurent, Laboratoire des images et des signaux (LIS), Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF), and Girondel, Vincent
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,[SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; We present an algorithm that can track multiple persons and their faces simultaneously in a video sequence, even when they are completely occluded from the camera's point of view. The algorithm is based on the detection and tracking of person masks and their faces. Face localization uses skin detection based on colour information with adaptive thresholding. In order to handle occlusions, a Kalman filter is defined for each person, which allows the prediction of the person's bounding box, of the face bounding box and of its speed. In case of incomplete measurements (for instance, a partial occlusion), partial Kalman filtering is done. Several results show the efficiency of this method, and the algorithm allows real-time processing.
- Published
- 2004
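The per-person Kalman filter in this abstract can be sketched with a standard constant-velocity model: during an occlusion only the prediction step runs, and the update step is applied whenever a measurement is available. The state layout and noise values below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

# State: bounding-box centre and velocity (x, y, vx, vy), one frame step.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)   # constant-velocity transition
H = np.eye(2, 4)                       # only position is measured
Q = 1e-2 * np.eye(4)                   # process noise (assumed)
R = 1e-1 * np.eye(2)                   # measurement noise (assumed)

def predict(x, P):
    """Time update: propagate state and covariance one frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Measurement update with an observed position z = (x, y)."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```

During a full occlusion the tracker would loop on `predict` alone, letting the covariance P grow until the person reappears and `update` re-anchors the track.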
48. Hands detection and tracking for interactive multimedia applications
- Author
-
Girondel, Vincent, Bonnaud, Laurent, Caplier, Alice, Laboratoire des images et des signaux (LIS), Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique de Grenoble (INPG)-Université Joseph Fourier - Grenoble 1 (UJF), and Girondel, Vincent
- Subjects
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[INFO.INFO-TS] Computer Science [cs]/Signal and Image Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,[SPI.SIGNAL] Engineering Sciences [physics]/Signal and Image processing - Abstract
International audience; The context of this work is the European project art.live, which aims at mixing real and virtual worlds for multimedia applications. This paper focuses on an algorithm for the detection and tracking of the face and both hands of segmented persons standing in front of a camera. The first step consists of the detection of skin pixels based on skin colour: the HSI and YCbCr colour spaces are compared, and the colour space that allows both fast detection and accurate results is selected. The second step is the identification of the face and both hands among all detected skin patches; this involves spatial criteria related to human morphology and temporal tracking. The third step consists of parameter adaptation of the skin detection algorithm. Several results show the efficiency of the method, which has been integrated and validated in a global real-time interactive multimedia system.
- Published
- 2002
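Skin-colour detection in YCbCr, the first step described in this abstract, typically thresholds the chrominance channels only, since they are fairly stable across skin tones. A minimal sketch, using the widely cited Chai-Ngan Cb/Cr ranges as an illustrative default (the paper's adapted thresholds would differ):

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Classify pixels of an H x W x 3 uint8 RGB image as skin using
    fixed Cb/Cr thresholds (Cb in [77, 127], Cr in [133, 173])."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 RGB -> YCbCr chrominance (luma Y is not needed)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The parameter-adaptation step of the paper would then tighten or shift these ranges per user and per lighting condition, rather than keeping them fixed.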
49. Approche Spatio-Temporelle Pour l'analyse de Séquences d'images. Application En Détection de Mouvement
- Author
-
Caplier, Alice, Luthon, Franck, Laboratoire Informatique de l'Université de Pau et des Pays de l'Adour (LIUPPA), and Université de Pau et des Pays de l'Adour (UPPA)
- Subjects
[INFO]Computer Science [cs] ,ComputingMilieux_MISCELLANEOUS - Abstract
International audience
- Published
- 1997
50. Fully automated facial picture evaluation using high level attributes.
- Author
-
Lienhard, Arnaud, Caplier, Alice, and Ladret, Patricia
- Published
- 2015
- Full Text
- View/download PDF