514 results on '"InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI)"'
Search Results
2. Multimodal Creative Inquiry: Theorising a New Approach for Children’s Science Meaning-Making in Early Childhood Education
- Author
-
Nikolay Veresov and Sarika Kewalramani
- Subjects
Early childhood education, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), ComputingMilieux_PERSONALCOMPUTING, Science education, Social semiotics, Education, Documentation, Pedagogy, ComputingMilieux_COMPUTERSANDEDUCATION, Meaning-making, Curiosity, Semiotics, Early childhood, Sociology
- Abstract
This paper discusses how multimodal creative inquiry might be conceptualised and implemented for children’s meaning-making in science. We consider Halliday’s (1978) and Vygotsky’s (1987, 2016) theoretical ideas to show how the most important characteristics of social semiotics are connected to imagination and to play-based and creative inquiry for children’s science meaning-making. Qualitative data were analysed from two preschool classroom video observations of 40 children’s playful interactions with technologies such as robotic toys and semiotic artefacts, from two teachers’ reflective journal documentation, and from children’s artefacts. Findings show that children participate in and discuss elements of scientific concepts in inquiry-based dialogues and make sense of science concepts whilst becoming creators of multimodal representations arising from their interests and curiosity. The semiotic resources that operate through technologies such as apps provide a medium for creative inquiry, affording communication spaces and multimodal (visual, haptic [digital touch], text) meaning-making around everyday science phenomena. Practical implications lie in upskilling educators’ integration of semiotic resources such as robotic toys and in deploying a multimodal creative inquiry approach to reconfigure children’s science learning opportunities in early childhood educational practices.
- Published
- 2021
- Full Text
- View/download PDF
3. Selecting effectively contributes to the mnemonic benefits of self-generated cues
- Author
-
Scott H. Fraundorf and Jonathan G. Tullis
- Subjects
Cued recall, Recall, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Metacognition, Experimental and Cognitive Psychology, Mnemonic, InformationSystems_MODELSANDPRINCIPLES, Neuropsychology and Physiological Psychology, Arts and Humanities (miscellaneous), Selection (linguistics), Optimal distinctiveness theory, Psychology, Generation effect, Cognitive psychology
- Abstract
Self-generated memory cues support recall of target information more robustly than memory cues generated by others. Across two experiments, we tested whether the benefit of self-generated cues in part reflects a meta-mnemonic effect rather than a pure generation effect. In other words, can learners select better memory cues for themselves than others can? Participants generated as many possible memory cues for each to-be-remembered target as they could and then selected the cue they thought would be most effective. Self-selected memory cues elicited better cued recall than cues the generator did not select and cues selected by observers. Critically, this effect cannot be attributed to the process of generating a cue itself because all of the cues were self-generated. Further analysis indicated that differences in cue selection arise because generators and observers valued different cue characteristics; specifically, observers valued the commonality of the cue more than the generators, while generators valued the distinctiveness of a cue more than observers. Together, results suggest that self-generated cues are effective at supporting memory, in part, because learners select cues that are tailored to their specific memory needs.
- Published
- 2021
- Full Text
- View/download PDF
4. Visual–haptic integration, action and embodiment in virtual reality
- Author
-
Ken I. McAnally and Guy Wallis
- Subjects
Proprioception, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer science, Experimental and Cognitive Psychology, General Medicine, Virtual reality, Touchscreen, InformationSystems_MODELSANDPRINCIPLES, Arts and Humanities (miscellaneous), Action (philosophy), Human–computer interaction, Developmental and Educational Psychology, Sensory cue, Avatar, Haptic technology
- Abstract
The current generation of virtual reality (VR) technologies has improved substantially from legacy systems, particularly in the resolution and latency of their visual display. The presentation of haptic cues remains challenging, however, because haptic systems do not generalise well over the range of stimuli (both tactile and proprioceptive) normally present when interacting with objects in the world. This study investigated whether veridical tactile and proprioceptive cues lead to more efficient interaction with a virtual environment. Interaction in the world results in spatial and temporal correlation of tactile, proprioceptive and visual cues. When cues in VR are similarly correlated, observers experience a sense of embodiment and agency of their avatars. We investigated whether sensorimotor performance was mediated by embodiment of the avatar hands. Participants performed a Fitts’ tapping task in different conditions (VR with no haptics, active haptics, passive haptics, and on a real touchscreen). The active-haptic condition provided abstract tactile cues and the passive haptic condition provided veridical tactile and proprioceptive cues. An additional (hybrid haptics) condition simulated an ideal passive haptic system. Movement efficiency (throughput) and embodiment were higher for the passive than for the active and no-haptics conditions. However, components of embodiment (perceived agency and ownership) did not predict unique variance in throughput. Improved sensorimotor performance and ratings of presence and realism support the use of passive haptics in VR environments where objects are in known and stable locations, regardless of whether performance was mediated by the sense of embodiment.
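The throughput measure reported above is conventionally computed, per the Shannon formulation of Fitts' law, as the index of difficulty divided by movement time. A minimal sketch follows; the distances, widths, and times are illustrative values, not data from the study:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits/s: index of difficulty over mean movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical condition: 200 mm target distance, 50 mm target width,
# 0.6 s mean movement time.
id_bits = index_of_difficulty(200, 50)  # log2(5) ≈ 2.32 bits
tp = throughput(200, 50, 0.6)           # ≈ 3.87 bits/s
```

Comparing such throughput values across haptic conditions is the standard way a Fitts' tapping task quantifies movement efficiency.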
- Published
- 2021
- Full Text
- View/download PDF
5. Evaluation of Pedestrian Space Sequences according to Landscape Elements: Focus on Large-Scale Residential Complexes
- Author
-
Yu Jun Kim, Je Jin Park, Youn Won Kang, and Jong Gu Kim
- Subjects
Sequence, Empirical data, ComputingMethodologies_SIMULATIONANDMODELING, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, ComputerApplications_COMPUTERSINOTHERSYSTEMS, Pedestrian, Space, Psychological evaluation, Transport engineering, ComputingMethodologies_COMPUTERGRAPHICS, Civil and Structural Engineering
- Abstract
One of the most important ways to improve pedestrian spaces is to quantitatively analyze the preferences of pedestrians through concrete and empirical data. This requires a psychological evaluation of pedestrians and an analysis of correlations between pedestrian space elements and pedestrians’ preferences. In this study, the relationship between landscape elements in pedestrian space and the preferences of pedestrians was analyzed. The quantitative information on landscape elements was obtained from landscape sequences, because pedestrians experience the landscape as a sequence rather than as individual scenes. The analysis was used to develop a comprehensive index of how pedestrians view pedestrian spaces. A psychological evaluation was performed to obtain pedestrian preferences for different spaces and the influencing factors. This study clarified the landscape elements preferred by pedestrians, and the results can be used to develop basic guidelines for the planning and design of pedestrian spaces.
- Published
- 2021
- Full Text
- View/download PDF
6. Alleviating the cold-start playlist continuation in music recommendation using latent semantic indexing
- Author
-
Ali Yürekli, Cihan Kaleli, and Alper Bilge
- Subjects
Information retrieval, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer science, Matrix, Library and Information Sciences, Recommender system, Weighting, Metadata, Cold start, Singular value decomposition, Similarity, Media Technology, Information Systems
- Abstract
The cold-start problem is a grand challenge in music recommender systems aiming to provide users with a better and continuous music listening experience. When a new user creates a playlist, the recommender system remains in a cold-start state until enough information is collected to identify the user’s musical taste. In such cases, playlist metadata, such as title or description, have been successfully employed to create intent recommendation models. In this paper, we propose a multi-stage retrieval system utilizing user-generated titles to alleviate the cold-start problem in automatic playlist continuation. Initially, playlists are clustered to form a music document collection. Then, the system applies latent semantic indexing to the collection to discover hidden patterns between tracks and playlist titles. For similarity calculation, singular value decomposition is performed on a track-cluster matrix. When the system is given a new playlist as a cold-start instance, it first retrieves neighboring clusters and then produces a ranked list of recommendations by weighting candidate tracks in these clusters. We scrutinize the performance of the proposed system on a large, real-world music playlist dataset supplied by the Spotify platform. Our empirical results show that the proposed system outperforms state-of-the-art approaches and significantly improves recommendation accuracy on three primary evaluation metrics.
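The latent semantic indexing step described in this abstract (truncated SVD on a track-cluster matrix, then ranking candidate tracks by similarity in the latent space) can be sketched roughly as follows. The matrix contents, the latent dimensionality, and the query fold-in are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

# Illustrative track-by-cluster occurrence matrix (rows: tracks, columns:
# playlist clusters). A real system would build this from clustered playlists;
# the values here are made up for demonstration.
track_cluster = np.array([
    [3, 0, 1],
    [2, 0, 0],
    [0, 4, 1],
    [0, 3, 0],
    [1, 1, 2],
], dtype=float)

# Latent semantic indexing: truncated SVD keeps the top-k latent dimensions.
k = 2
U, s, Vt = np.linalg.svd(track_cluster, full_matrices=False)
tracks_latent = U[:, :k] * s[:k]  # one k-dimensional vector per track

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A cold-start playlist expressed as cluster weights (e.g., from title
# matching), folded into the latent space.
query = np.array([1.0, 0.0, 0.5]) @ Vt[:k, :].T

# Rank candidate tracks by similarity to the query in the latent space.
ranking = sorted(range(len(tracks_latent)),
                 key=lambda i: cosine(tracks_latent[i], query),
                 reverse=True)
```

The truncation to `k` dimensions is what lets the system relate tracks and titles that never co-occur directly but share latent usage patterns.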
- Published
- 2021
- Full Text
- View/download PDF
7. Evaluation of Caregiver Training Procedures to Teach Activities of Daily Living Skills
- Author
-
Elizabeth J. Preas and Therese L. Mathews
- Subjects
Alternative methods, Medical education, Activities of daily living, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Education, Training, General Medicine, Training methods, Skills training, Autism spectrum disorder, ComputingMilieux_COMPUTERSANDSOCIETY, Psychology, Research Article
- Abstract
Caregivers of children with an autism spectrum disorder are often responsible for assisting their children to complete activities of daily living skills. Effective and efficient caregiver training methods are needed to train caregivers. The present study used two concurrent multiple-baseline across-participants designs to evaluate the effects of real-time feedback and behavioral skills training on training eight caregivers to implement teaching procedures for activities of daily living skills with their child. We assessed caregivers’ accuracy and correct implementation of the six-component teaching procedure after they received either real-time feedback or behavioral skills training. Caregivers from both groups mastered and maintained correct implementation of the teaching procedures with their child. The overall results suggest that real-time feedback and behavioral skills training are efficacious to train caregivers to implement activities of daily living skills procedures with their children, and that real-time feedback may be an efficient alternative method to train caregivers. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s40617-020-00513-z.
- Published
- 2021
- Full Text
- View/download PDF
8. Distributed Scaffolding: Scaffolding Students in Classroom Environments
- Author
-
Sadhana Puntambekar
- Subjects
Scaffold, Zone of proximal development, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Education, ComputingMilieux_PERSONALCOMPUTING, Educational psychology, Context (language use), Key features, GeneralLiterature_MISCELLANEOUS, ComputingMilieux_COMPUTERSANDEDUCATION, Developmental and Educational Psychology, Mathematics education, Psychology, Construct (philosophy)
- Abstract
This paper traces the origins of the scaffolding construct, placing it in its theoretical-historical context. The paper discusses the connection between Vygotsky’s Zone of Proximal Development (ZPD) and the notion of scaffolding, and explicates the differences between scaffolding and scaffolds. The paper then presents a discussion of the changes that the notion of scaffolding has undergone, especially when it comes to supporting students in classroom contexts. In classrooms where one teacher supports multiple students, scaffolding is distributed across various tools and social scaffolds; the notion of distributed scaffolding is introduced to describe how students may be supported in this way. The paper then examines the kinds of distribution and the interactions between tools and social scaffolds that need to be considered to support multiple students in classroom contexts. Finally, distributed scaffolding is discussed with reference to the key features of scaffolding, especially fading and transfer of responsibility.
- Published
- 2021
- Full Text
- View/download PDF
9. TELE’DRAMA—International sociometry in the virtual space
- Author
-
Andrea Wilches and Daniela Simmons
- Subjects
Sociometry, Multimedia, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Behavioral therapy, ComputingMilieux_PERSONALCOMPUTING, Virtual space, Online video, Experiential learning, Action (philosophy), Industrial and organizational psychology, Psychology, Drama
- Abstract
This article aims to offer an overview of Tele’Drama as a method and its pioneering role in implementing action and experiential methods via online communication, as well as its purpose in applying J. L. Moreno’s view of international sociometry. The main author of this article is the creator of the method and believes that Tele’Drama is a particularly important part of the future of action methods, as it also serves as a bridge between locations and cultures.
- Published
- 2021
- Full Text
- View/download PDF
10. Robot and virtual reality-based intervention in autism: a comprehensive review
- Author
-
Fadi Abu-Amara, Heba Mohammad, Hatem Tamimi, and Ameur Bensefia
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer Networks and Communications, Computer science, Applied Mathematics, Virtual reality, Computer Science Applications, Developmental disorder, Nonverbal communication, Computational Theory and Mathematics, Artificial Intelligence, Autism spectrum disorder, Intervention (counseling), ComputingMilieux_COMPUTERSANDSOCIETY, Autism, Electrical and Electronic Engineering, Imitation, Affordance, Information Systems, Cognitive psychology
- Abstract
Autism Spectrum Disorder is a neurological and developmental disorder. Children diagnosed with this disorder have persistent deficits in their social-emotional reciprocity skills, in nonverbal communication, and in developing, maintaining, and understanding relationships. In addition, autistic children usually have motor deficits that affect their imitation and gesture production ability. The present study aims to review and analyze current research findings on using robot-based and virtual reality-based interventions to support therapy for improving the social, communication, emotional, and academic deficits of children with autism. Experimental data from the surveyed works are analyzed with regard to the target behaviors and how each technology, robot or virtual reality, was used during therapy sessions to improve the targeted behaviors. Furthermore, this study explores the different therapeutic roles that robots and virtual reality were observed to play. Finally, this study shares perspectives on the affordances and challenges of applying these technologies.
- Published
- 2021
- Full Text
- View/download PDF
11. Multi-touch gesture recognition of Braille input based on Petri Net and RBF Net
- Author
-
Juxiao Zhang and Xiaoqin Zeng
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer Networks and Communications, Computer science, Multi-touch, Petri net, Braille, Hardware and Architecture, Gesture recognition, Human–computer interaction, Media Technology, ComputingMilieux_COMPUTERSANDSOCIETY, Input method, Software, Gesture
- Abstract
The development of information accessibility is receiving more and more attention. One challenging task for blind users is entering Braille on touch screens, where they have no way to sense location information. Existing Braille input methods suffer from inaccurate positioning and a lack of interactive prompts. In this paper, touch gestures are recognized by a trained RBF network, while combined gestures are modeled by a Petri net that introduces descriptions of logic, timing, and spatial relationships. On this basis, Braille input based on multi-touch gesture recognition is implemented. The experimental results show that the method is effective and that blind users can input Braille comfortably with near real-time interaction. The input method makes full use of the inherent logic of Braille, making it easy for blind users to learn and remember, and provides a new method for human-computer interaction between blind users and the touch screen.
- Published
- 2021
- Full Text
- View/download PDF
12. Filling in the gaps: observing gestures conveying additional information can compensate for missing verbal content
- Author
-
Megan Phillips, Naomi Sweller, and Nicole Dargue
- Subjects
Recall, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Filling-in, Developmental and Educational Psychology, Educational psychology, Narrative, Affect (psychology), Psychology, Education, Cognitive psychology, Gesture
- Abstract
While observing gesture has been shown to benefit narrative recall and learning, research has yet to show whether gestures that provide information missing from speech benefit narrative recall. This study explored whether observing gestures that relay the same information as speech and gestures that provide information missing from speech differentially affect narrative recall in university students. Participants were presented with a videotaped narrative told in one of four conditions: with gestures and no missing verbal information, with gestures and missing verbal information, with no gestures and no missing verbal information, or with no gestures and missing verbal information. Results showed that observing gestures that provided additional information to speech (i.e., when the speech was missing vital information) enhanced narrative recall compared to observing no gestures, while observing gestures that did not provide additional information to speech was no more beneficial than observing no gestures at all. Findings from the current study provide valuable insight into the beneficial effect of iconic gesture on narrative recall, with important implications for education and learning.
- Published
- 2021
- Full Text
- View/download PDF
13. Pitch contours curve frequency domain fitting with vocabulary matching based music generation
- Author
-
Runnan Lang, Songhao Zhu, and Dongsheng Wang
- Subjects
Vocabulary, Artificial neural network, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer Networks and Communications, Computer science, Pattern recognition, Hardware and Architecture, Frequency domain, Media Technology, Artificial intelligence, Time domain, Pitch contour, Software
- Abstract
In this paper, we present a whole new perspective on generating music. The proposed method is the first to use the frequency-domain characteristics of the pitch contour curve to generate music melodies with a controllable long-term structure. The music generated by this method has a good long-term structure that music generated by other basic methods lacks. The method has great development potential and application ability: it can be combined with other music generation methods to improve the long-term structure of their output. The method first uses a neural network to fit the pitch contour curve in the frequency domain, then combines a vocabulary matching method to refine the detailed characteristics of the generated melody in the time domain and to control the long-term trend of the generated notes with respect to label information, and finally generates a music melody with a real and controllable long-term structure. A large number of experiments show that, compared with music generated by an LSTM baseline, the music generated by the proposed method has a better long-term structure and similar statistical characteristics.
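One way to read "fitting the pitch contour curve in the frequency domain" is that the low-frequency Fourier components of the contour encode its long-term trend. The sketch below illustrates that idea only: the contour is synthetic, the low-pass cutoff is arbitrary, and the paper fits these components with a neural network rather than zeroing them directly:

```python
import numpy as np

# Hypothetical pitch contour: MIDI note numbers over 64 time steps,
# a slow melodic arc plus note-to-note noise.
t = np.arange(64)
pitch = 60 + 5 * np.sin(2 * np.pi * t / 64) + np.random.default_rng(0).normal(0, 1, 64)

# Keep only the lowest-frequency Fourier coefficients: these capture the
# contour's long-term trend, discarding short-term detail.
spectrum = np.fft.rfft(pitch)
cutoff = 4                      # arbitrary illustrative cutoff
spectrum[cutoff:] = 0
long_term_contour = np.fft.irfft(spectrum, n=len(pitch))
```

In the paper's pipeline, a melody generated to follow such a long-term contour would then be refined note-by-note in the time domain via vocabulary matching.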
- Published
- 2021
- Full Text
- View/download PDF
14. Tangible interfaces in early years’ education: a systematic review
- Author
-
Andrina Granić and Lea Dujic Rodic
- Subjects
Knowledge management, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer science, ComputingMilieux_PERSONALCOMPUTING, Mobile computing, Context (language use), Management Science and Operations Research, Library and Information Sciences, Computer Science Applications, child, early years’ education, interactions, interfaces, tangible user interface (TUI), systematic review, Empirical research, Hardware and Architecture, Application domain, User interface, Curriculum
- Abstract
This paper presents a systematic review of the literature on Tangible User Interfaces (TUIs) and interactions in young children’s education by identifying 155 studies published between 2001 and 2019. The review was based on a set of clear research questions addressing application domains, forms of tangible objects, TUI design and assessment. The results indicate that (i) the form of tangible object is closely related to the application domain, (ii) manipulatives are the most dominant form of tangible object, (iii) the majority of studies addressed all three stages of TUI development (design, implementation and evaluation) and declared a small sample of young children as a major shortcoming, and (iv) additional empirical research is required to collect evidence that TUIs are truly beneficial for children’s acquisition of knowledge. This review also identifies gaps in the current work, thus providing suggestions for future research on TUI applications in educational contexts, expected to be beneficial for researchers, curriculum designers and practitioners in early years’ education. To the authors’ knowledge, this is the first systematic review specific to TUI studies in early years’ education and is an asset to the scientific community.
- Published
- 2021
- Full Text
- View/download PDF
15. Working memory and handwriting share a common resource: An investigation of shared attention
- Author
-
Mitchell G Longstaff and Richard Tindle
- Subjects
Recall, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Working memory, Cognition, Fluency, Handwriting, Finger tapping, Tapping, Psychology, General Psychology, Cognitive psychology
- Abstract
Working memory and writing share a common resource. The authors investigated whether increasing the complexity of a finger-tapping task, performed while simultaneously writing and completing a serial verbal recall task, would decrease performance on recall, writing fluency, and tapping speed. Participants completed three verbal serial recall tasks that incrementally increased attentional load. Participants recalled word lists after listening to words while (1) producing pseudo-handwriting movements without finger tapping (no tapping), (2) producing pseudo-handwriting movements while tapping with a single finger (single tapping), or (3) producing pseudo-handwriting movements while tapping with two fingers (double tapping). The results showed that the double-tapping condition caused a decrease in performance on recall, handwriting fluency, and tapping speed compared to the no-tapping and single-tapping conditions. However, no differences occurred between the no-tapping and single-tapping conditions. The authors concluded that incrementally increasing the complexity of a concurrent tapping task can produce a decrease in performance across multiple cognitive processes. The results provide support for a central pool of shared resources utilised both by tasks reliant on working memory and by non-working-memory tasks. The observed decreases in cognitive performance depended on task complexity rather than on merely performing a secondary task. The findings have implications for how multi-tasking while taking notes is detrimental to memory retention and handwriting fluency.
- Published
- 2021
- Full Text
- View/download PDF
16. Andrew Henderson, a Boundless Zeal for Palms
- Author
-
Rodrigo Bernal
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Art history, Plant Science, Art, Plant ecology, Reminiscence, Palm, Ecology, Evolution, Behavior and Systematics
- Abstract
A short reminiscence of Andrew Henderson’s interaction with the author is provided, as well as a general overview of his palm research.
- Published
- 2021
- Full Text
- View/download PDF
17. Affording embodied cognition through touchscreen and above-the-surface gestures during collaborative tabletop science learning
- Author
-
Alice Darrow, Carrie Schuman, Schuyler Gleaves, Hannah Neff, Lisa Anthony, Brittani Kirkland, Kathryn A. Stofer, Peter Chang, Amanda Morales, Nikita Soni, Annie Luc, and Jeremy Alexandre
- Subjects
Cooperative learning, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Computer science, Educational technology, Collaborative learning, Interaction design, Education, Human-Computer Interaction, Touchscreen, Embodied cognition, Human–computer interaction, Learning theory, Gesture
- Abstract
This paper draws upon the theory of embodied cognition to provide a robust account of how gestural interactions with and around multi-touch tabletops can play an important role in facilitating collaborative meaning-making, particularly in the context of science data visualizations. Embodied cognition is a theory of learning that implies that thinking and perception are shaped by interactions with the physical environment. Previous research has used embodied cognition as a theoretical framework to inform the design of large touchscreen learning applications such as for multi-touch tabletops. However, this prior work has primarily assumed that learning is occurring during any motion or interaction, without considering how specific interactions may be linked to particular instances of collaborative learning supported by embodiment. We investigate this question in the context of collaborative learning from data visualizations of global phenomena such as ocean temperatures. We followed a user-centered, iterative design approach to build a tabletop prototype that facilitated collaborative meaning-making and used this prototype as a testbed in a laboratory study with 11 family groups. We qualitatively analyzed learner groups’ co-occurring utterances and gestures to identify the nature of gestural interactions groups used when their utterances signaled the occurrence of embodiment during collaborative meaning-making. Our findings present an analysis of both touchscreen and above-the-surface gestural interactions that were associated with instances of embodied cognition. We identified four types of gestural interactions that promote scientific discussion and collaborative meaning-making through embodied cognition: (T1) gestures for orienting the group; (T2) cooperative gestures for facilitating group meaning-making; (T3) individual intentional gestures for facilitating group meaning-making; and (T4) gestures for articulating conceptual understanding to the group. 
Our work illustrates interaction design opportunities for affording embodied cognition and will inform the design of future interactive tabletop experiences in the domain of science learning.
- Published
- 2021
- Full Text
- View/download PDF
18. Commonalities of visual and auditory working memory in a spatial-updating task
- Author
-
Tomoki Maezawa and Jun Kawahara
- Subjects
Modalities, Modality (human–computer interaction), InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI), Working memory, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Virtual array, Experimental and Cognitive Psychology, Neuropsychology and Physiological Psychology, Arts and Humanities (miscellaneous), Perception, Psychology, Spatial analysis, Cognitive psychology
- Abstract
Although visual and auditory inputs are initially processed in separate perception systems, studies have built on the idea that these modalities share a component of working memory for maintaining spatial information. The present study used working memory navigation tasks to examine functional similarities and dissimilarities in the performance of updating tasks. Participants mentally updated the spatial location of a target in a virtual array in response to sequential pictorial and auditory directional cues before identifying the target’s final location. We predicted that if working memory representations are modality-specific, mixed-modality cues would demonstrate a cost of modality switching relative to unimodal cues. The results indicate that updating performance using visual unimodal cues positively correlated with that using auditory unimodal cues. Task performance using unimodal cues was comparable to that using mixed-modality cues. The results of a subsequent experiment involving updating of target traces were consistent with those of the preceding experiments and support the view of modality-nonspecific memory.
- Published
- 2021
- Full Text
- View/download PDF
19. Examining the usability of touchscreen gestures for adults with DS
- Author
-
Janio Jadán-Guerrero, Ployplearn Ravivanpong, Washington Caraguay, Leonardo Arellano, Doris Cáliz, and Andrea Schankin
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Renewable Energy, Sustainability and the Environment ,Computer Networks and Communications ,business.industry ,Computer science ,Best practice ,010401 analytical chemistry ,020206 networking & telecommunications ,Usability ,Special needs ,02 engineering and technology ,01 natural sciences ,0104 chemical sciences ,Computer Science Applications ,law.invention ,Software ,Touchscreen ,Artificial Intelligence ,Human–computer interaction ,law ,Vocational education ,0202 electrical engineering, electronic engineering, information engineering ,business ,Mobile device ,Gesture - Abstract
This document is part of a global investigation that aims to establish best practices in usability testing of mobile applications that fit the specific needs of persons with cognitive disabilities. The motivating factor is to improve the quality of life of people with special needs. As a first step, we want to discover the skills of people with Down syndrome (DS) when using a mobile device. Thanks to its direct-manipulation interaction style, multi-touch technology is an ideal mechanism for learning activities aimed at the social inclusion of people with DS. This paper investigates the most common touchscreen gestures in existing commercial software. A commercial analysis of 103 free software apps running on a mobile multi-touch tablet was carried out to discover the touchscreen gestures they use. The commercial analysis showed that most applications support tap and drag operations on multi-touch technology. Additionally, the research sought to discover the ability of adults with DS (19–54 years of age) to perform other gestures on multi-touch surfaces. A DS user skill study was performed to assess the ability of this user segment to interact with multi-touch surfaces. The analysis involved 53 participants, aged between 19 and 54 years, from two vocational training centers attended by people with DS in Madrid. The authors used the Gesture Games app for experimenting with multi-touch gestures such as tap, double tap, long press, drag, scale up, scale down, and one-finger rotation. Tap, double tap, and drag were the three gestures that most participants could use. In contrast, participants had difficulty performing the one-finger rotation and long-press gestures. Our statistical analysis showed that the ability to perform each gesture was independent of a participant's gender, age group, and previous touchscreen experience.
- Published
- 2021
- Full Text
- View/download PDF
20. Use of an internet camera system in the neonatal intensive care unit: parental and nursing perspectives and its effects on stress
- Author
-
L. Berbert, Bonnie H. Arzuaga, E. Zahr, P. Clark, D. Williams, and Zuzanna Kubicka
- Subjects
Parents ,Neonatal intensive care unit ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,health care facilities, manpower, and services ,education ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Burnout ,Article ,Stress level ,InformationSystems_GENERAL ,03 medical and health sciences ,Medical research ,0302 clinical medicine ,Nursing ,Intensive Care Units, Neonatal ,Neonatal Nursing ,Surveys and Questionnaires ,030225 pediatrics ,Health care ,Stress (linguistics) ,Humans ,Medicine ,Prospective Studies ,030212 general & internal medicine ,Quality of care ,Internet ,business.industry ,Infant, Newborn ,Infant ,Obstetrics and Gynecology ,Pediatrics, Perinatology and Child Health ,Neonatal nursing ,Parental stress ,business - Abstract
Objective The objective of this study is to evaluate associations between webcam use in the neonatal intensive care unit (NICU) with parental stress and nursing work-related stress and burnout. Design Prospective validated and de novo questionnaires administered to NICU parents and nurses during two observation periods: (1) no webcam access (off webcam) and (2) webcam access (on webcam). Results Seventy-nine “off webcam” parents, 80 “on webcam” parents, and 35 nurses were included. Parental stress levels were significantly lower “on webcam” and perceptions of the technology were overwhelmingly positive. There were no significant differences in nursing stress levels and burnout between periods. Only 14% of nurses believed that webcam use improves infants’ quality of care. The majority of nurses felt that webcams increase parental and nursing stress. Conclusions Webcam use in the NICU is associated with lower parental stress levels and has no effect on nursing stress levels or work-related burnout. These findings contradict nurses’ beliefs that webcams increase parent and nurse stress.
- Published
- 2021
- Full Text
- View/download PDF
21. Proprioceptively displayed interfaces: aiding non-visual on-body input through active and passive touch
- Author
-
Peter Presti, Clint Zeagler, Elizabeth D. Mynatt, Melody Moore Jackson, and Thad Starner
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Active touch ,Computer science ,Interface (computing) ,Mobile computing ,020206 networking & telecommunications ,02 engineering and technology ,Management Science and Operations Research ,Computer Science Applications ,Task (project management) ,Hardware and Architecture ,Human–computer interaction ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Visual attention - Abstract
On-body input interfaces that can be used accurately without visual attention could have a wide range of applications where vision is needed for a primary task: emergency responders, pilots, astronauts, and people with vision impairments could all benefit from such accessible interfaces. This paper describes a between-participant study (104 participants) to determine how well users can locate discrete target touch points on an e-textile forearm interface without visual attention. We examine whether the addition of active-touch embroidery and passive-touch nubs (metal snaps with vibro-tactile stimulation) helps in locating input touch points accurately. We found that touch points towards the middle of the interface on the forearm were more difficult to touch accurately than those at the ends. We also found that the addition of vibro-tactile stimulation improves the accuracy of touch interactions by over 9% on average, and by almost 17% in the middle of the interface.
- Published
- 2021
- Full Text
- View/download PDF
22. Personal reminders: Self-generated reminders boost memory more than normatively related ones
- Author
-
Di Zhang and Jonathan G. Tullis
- Subjects
Point (typography) ,Recall ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,InformationSystems_INFORMATIONSYSTEMSAPPLICATIONS ,05 social sciences ,Metacognition ,Experimental and Cognitive Psychology ,GeneralLiterature_MISCELLANEOUS ,050105 experimental psychology ,03 medical and health sciences ,InformationSystems_MODELSANDPRINCIPLES ,0302 clinical medicine ,Neuropsychology and Physiological Psychology ,Arts and Humanities (miscellaneous) ,0501 psychology and cognitive sciences ,Psychology ,030217 neurology & neurosurgery ,Cognitive psychology - Abstract
People generate reminders in a variety of ways (e.g., putting items in special places or creating to-do lists) to support their memories. Successful remindings can result in retroactive facilitation of earlier information; in contrast, failures to remind can produce interference between memories for related information. Here, we compared the efficacy of different kinds of reminders, including participants’ self-generated reminders, reminders created by prior participants, and normatively associated reminders. Self-generated reminders boosted memory for the earlier target words more than normatively associated reminders in recall tests. Reminders generated by others enhanced memory as much as self-generated reminders when we controlled output order during recall. The results suggest that self-generated reminders boost memory for earlier studied information because they distinctly point towards the target information.
- Published
- 2021
- Full Text
- View/download PDF
23. Exploring playlist titles for cold-start music recommendation: an effectiveness analysis
- Author
-
Ali Yürekli, Alper Bilge, and Cihan Kaleli
- Subjects
General Computer Science ,Point (typography) ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,business.industry ,Usability ,02 engineering and technology ,Recommender system ,Space (commercial competition) ,Task (project management) ,World Wide Web ,Cold start ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Active listening ,business ,Theme (computing) - Abstract
In music recommender systems, automatic playlist continuation is an emerging task that aims to improve users’ listening experience by recommending music in line with their musical taste. The typical approach towards this goal is to identify playlist characteristics by inspecting the existing tracks (i.e., seeds) in target playlists. However, seeds are not always available, especially when users create new playlists. For such cold-start situations, user-generated titles can be a good starting point for understanding the user’s intended purpose. This paper investigates the effectiveness of titles as an auxiliary data source for playlists suffering from the cold-start problem. Employing three naive recommendation models, we conduct experiments on one million music playlists from the Spotify platform. Our analyses show that the prevalent attitude in naming playlists results in highly accurate recommendations for playlists concerning a specific theme, such as albums, artists, and soundtracks. As the title space moves away from a particular theme, recommendation accuracy drops. Furthermore, the correlation between how commonly a title is preferred and its usefulness for recommendation is quite weak; a title without a commonly shared sense may be useless in recommender systems even though many users favor it. Consequently, our findings serve as a guideline for developing title-aware recommendation approaches that can provide coherent continuations to cold-start playlists.
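The paper does not spell out its three naive models here, but the simplest title-based approach it implies can be sketched as follows: treat playlists whose titles share tokens with the cold-start title as neighbors, and recommend their most frequent tracks. The function and corpus names below are illustrative, not from the paper.

```python
from collections import Counter

def normalize(title):
    """Lowercase and tokenize a playlist title into a set of words."""
    return set(title.lower().split())

def recommend_by_title(cold_title, corpus, k=3):
    """Recommend the k most frequent tracks among corpus playlists whose
    titles share at least one token with the cold-start title.
    corpus: list of (title, [track_ids]) pairs."""
    query = normalize(cold_title)
    counts = Counter()
    for title, tracks in corpus:
        if query & normalize(title):   # token overlap = same "theme"
            counts.update(tracks)
    return [track for track, _ in counts.most_common(k)]

# Toy corpus: themed titles yield focused, hence accurate, continuations.
corpus = [
    ("workout mix", ["t1", "t2"]),
    ("morning workout", ["t2", "t3"]),
    ("study beats", ["t4"]),
]
print(recommend_by_title("Workout", corpus, k=2))
```

A title like "Workout" pools two thematically matching playlists, so the shared track surfaces first; a vague title matching nothing would return an empty list, mirroring the accuracy drop the authors report for non-thematic titles.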
- Published
- 2021
- Full Text
- View/download PDF
24. A new haptic interaction with a visual tracker: implementation and stability analysis
- Author
-
Ahmad Mashayekhi, Ali Meghdari, Hamed Mohtasham Shad, and Ali Nahvi
- Subjects
0209 industrial biotechnology ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,media_common.quotation_subject ,010401 analytical chemistry ,Stiffness ,02 engineering and technology ,Workspace ,Collision ,01 natural sciences ,0104 chemical sciences ,Computer Science Applications ,020901 industrial engineering & automation ,Operator (computer programming) ,Artificial Intelligence ,medicine ,Contrast (vision) ,medicine.symptom ,Actuator ,Stylus ,Simulation ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common ,Haptic technology - Abstract
In this paper, a new haptic interaction is presented where the operator is in contact with the haptic device (HD) only when she/he is in contact with the virtual environment (VE). This is in contrast with traditional haptic systems, where the operator is always in contact with the HD, even if she/he is out of the VE. In this haptic interaction, a visual tracking system is used to track the operator’s finger. When the finger is out of the VE, the HD tracks the finger so that the stylus of the HD keeps a constant distance of about 2 cm from the finger. When the finger gets close to the VE, the stylus slows down and stops upon reaching the VE; it then waits until the operator touches the stylus and feels the VE. Advantages of this haptic interaction include greater immersivity, higher margins of stability, a bigger workspace, smaller actuators, and more feasible impact simulation. The speed of the HD at the onset of contacting the VE plays a significant role in the stability of the haptic system: the lower the collision speed, the greater the maximum stiffness of the VE can be. The stability improvement of the presented haptic interaction is compared with the traditional one for both low and medium collision speeds over several time delays. For low collision speeds, theoretical and experimental results show increases of 72% and 40%, respectively, in the maximum stiffness of the VE. Similarly, for medium collision speeds, increases of 44% and 28% in the maximum stiffness of the VE are achieved theoretically and experimentally, respectively.
- Published
- 2021
- Full Text
- View/download PDF
25. Comparison of various controller design for the speed control of DC motors used in two wheeled mobile robots
- Author
-
Shahida Khatoon, Prerna Gaur, and Huma Khan
- Subjects
Electronic speed control ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,Applied Mathematics ,Open-loop controller ,PID controller ,020206 networking & telecommunications ,Mobile robot ,02 engineering and technology ,Motion control ,DC motor ,Computer Science Applications ,InformationSystems_MODELSANDPRINCIPLES ,Computational Theory and Mathematics ,Artificial Intelligence ,Control theory ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems - Abstract
This work describes the modeling and controller design for the motors driving the wheels of a wheeled mobile robot (WMR). Given the structure and design of a WMR, DC motors are best suited for motion control, and a kinematic model is required in the wheel design process. The mathematical model is analysed in terms of the angle and velocity of the DC motors built into the WMR, because the motor parameters are critical for stability. The main focus of the work is to develop an efficient controller for the speed of the DC motors driving the robot's wheels. PID tuning is applied in designing the speed controller. The open-loop and closed-loop performance of a two-wheeled mobile robot with PID and LQR controllers is obtained and compared using MATLAB programs and simulations.
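The discrete PID law the abstract refers to can be sketched in a few lines. This is a generic textbook controller against a toy first-order motor model, not the paper's MATLAB implementation; the gains and the motor constants below are illustrative placeholders, not tuned values from the study.

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt            # accumulate steady-state correction
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order DC-motor speed model towards 100 rad/s.
pid = PID(kp=0.8, ki=0.5, kd=0.01, dt=0.01)
speed = 0.0
for _ in range(2000):                               # simulate 20 s
    u = pid.update(100.0, speed)
    speed += (u - 0.1 * speed) * 0.01               # toy motor dynamics with friction
```

The integral term is what removes the steady-state error that a proportional-only (open-loop-like) controller would leave against the friction load.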
- Published
- 2021
- Full Text
- View/download PDF
26. Non-informative vision improves spatial tactile discrimination on the shoulder but does not influence detection sensitivity
- Author
-
Luca Brayda, Sara Nataletti, and Fabrizio Leo
- Subjects
Shoulder ,030506 rehabilitation ,medicine.medical_specialty ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Visuo-tactile ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Tactile sensitivity ,Audiology ,Somatosensory system ,GeneralLiterature_MISCELLANEOUS ,Judgment ,03 medical and health sciences ,InformationSystems_MODELSANDPRINCIPLES ,0302 clinical medicine ,Lateral inhibition ,medicine ,Humans ,Sensitivity (control systems) ,Tactile acuity ,Visual enhancement of touch ,Human Body ,Numerosity judgment ,Sensory stimulation therapy ,Tactile discrimination ,General Neuroscience ,Numerosity adaptation effect ,Hand ,Somatosensory cortex ,Touch ,Receptive field ,Fixation (visual) ,0305 other medical science ,Psychology ,030217 neurology & neurosurgery ,Research Article - Abstract
Vision of the body has been reported to improve tactile acuity even when vision is not informative about the actual tactile stimulation. However, it is currently unclear whether this effect is limited to body parts such as the hand, forearm or foot that can be normally viewed, or whether it also generalizes to body locations, such as the shoulder, that are rarely before our own eyes. In this study, subjects consecutively performed a detection threshold task and a numerosity judgment task of tactile stimuli on the shoulder. Meanwhile, they watched either a real-time video showing their shoulder or simply a fixation cross as a control condition. We show that non-informative vision improves tactile numerosity judgment, which might involve tactile acuity, but not tactile sensitivity. Furthermore, the improvement in tactile accuracy modulated by vision seems to be due to an enhanced ability to discriminate the number of adjacent active electrodes. These results are consistent with the view that bimodal visuotactile neurons sharpen tactile receptive fields in an early somatosensory map, probably via top-down modulation of lateral inhibition.
- Published
- 2020
- Full Text
- View/download PDF
27. An innovative method of algorithmic composition using musical tension
- Author
-
Chih-Fang Huang
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Pitch interval ,Computer science ,020207 software engineering ,02 engineering and technology ,Musical ,Hardware and Architecture ,Human–computer interaction ,Narratology ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Chord (music) ,Musical composition ,Algorithmic composition ,Software - Abstract
According to narratology, or narrative theory, a piece of artwork should tell a story through its various tensions. In this study, an automated music composition algorithm using musical tension energy was proposed; this algorithm can generate a musical piece by changing the musical tension. The proposed innovative Algorithmic Composition Musical Tension Energy (ACMTE) method uses the level of musical tension, which is determined primarily by the chord progression and also by the musical parameters of pitch interval and rhythm. The effects of musical tension energy on those parameters were analyzed, and this paper presents a formula that unifies all generated parts. The experimental results demonstrate that thousands of pieces can easily be made without the use of a music database. This algorithmic composition method can be applied both in streaming media and on portable music devices, such as smartphones, notebooks, and MP3 players.
- Published
- 2020
- Full Text
- View/download PDF
28. Digitized Braille cell: a novel solution for visually impaired
- Author
-
Nishanth S. Murthy, C. Sandesh, S. Gayathri, Nikhil S. Joshi, Mukundh Balabhadra, and B. A. Sujathakumari
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,Visually impaired ,First language ,Tactile device ,02 engineering and technology ,InformationSystems_MODELSANDPRINCIPLES ,Artificial Intelligence ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Electrical and Electronic Engineering ,business.industry ,Applied Mathematics ,Process (computing) ,020206 networking & telecommunications ,Usability ,Braille ,Computer Science Applications ,Computational Theory and Mathematics ,Braille reading ,ComputingMilieux_COMPUTERSANDSOCIETY ,020201 artificial intelligence & image processing ,business ,Information Systems - Abstract
Braille is a system of raised dots. Sighted people read Braille code with their eyes, whereas visually impaired people read it with their fingers. Braille is not a language but a code: many languages, such as English, French, and Spanish, may be written and read with it. All over the world many people use this code in their own native language, thereby providing literacy to one and all. Braille is traditionally written on embossed paper, which is static in nature; advanced Braille allows people to read digital documents with refreshable Braille displays. The aim is to design and implement a system capable of making the process of Braille reading easier and more economical for visually impaired users. In this paper a new and improved design for a Braille cell is proposed and implemented using the Arduino UNO and electromagnetic relays as the tactile device. The proposed device outperforms existing devices in terms of manufacturing complexity, cost and ease of use. A prototype was developed and tested with several visually impaired people to validate its operation. If implemented on a larger scale, the product promises to be an efficient replacement for existing Braille books.
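The character-to-dot mapping at the core of any digitized Braille cell can be sketched as follows. Dot numbering follows the standard 2×3 cell (dots 1–3 down the left column, 4–6 down the right); the relay-driving hardware layer is only hinted at in comments, and the function names are illustrative, not from the paper.

```python
# Standard 6-dot Braille patterns for the first few letters (dot numbers 1-6).
BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def cell_state(char):
    """Return a 6-tuple of 0/1 states for one refreshable Braille cell.
    Index i corresponds to dot i+1; a 1 would energize that
    electromagnetic relay to raise the corresponding pin."""
    dots = BRAILLE.get(char.lower(), set())
    return tuple(1 if d in dots else 0 for d in range(1, 7))

print(cell_state("c"))  # dots 1 and 4 raised
```

On the microcontroller side, each element of this tuple would map to one digital output pin switching one relay, so refreshing the display is just recomputing the tuple for the next character.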
- Published
- 2020
- Full Text
- View/download PDF
29. A comparison of methods of assessing cue combination during navigation
- Author
-
Timothy P. McNamara and Phillip Newman
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,05 social sciences ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Cue integration ,Experimental and Cognitive Psychology ,050105 experimental psychology ,03 medical and health sciences ,InformationSystems_MODELSANDPRINCIPLES ,0302 clinical medicine ,Arts and Humanities (miscellaneous) ,Human–computer interaction ,Developmental and Educational Psychology ,Spatial cues ,0501 psychology and cognitive sciences ,Psychology (miscellaneous) ,030217 neurology & neurosurgery ,General Psychology - Abstract
Mobile organisms make use of spatial cues, such as visual and self-motion cues, to navigate effectively in the world. Over the past decade, researchers have investigated how human navigators combine spatial cues, and whether cue combination is optimal according to statistical principles, by varying the number of cues available in homing tasks. The methodological approaches employed by researchers have varied, however. One important methodological difference lies in the number of cues available to the navigator during the outbound path on single-cue trials. In some studies, navigators have access to all spatial cues on the outbound path and all but one cue is eliminated prior to execution of the return path in the single-cue conditions; in other studies, navigators only have access to one spatial cue on the outbound and return paths in the single-cue conditions. If navigators can integrate cues along the outbound path, single-cue estimates may be contaminated by the undesired cue, which will in turn affect the predictions of models of optimal cue integration. In the current experiment, we manipulated the number of cues available during the outbound path for single-cue trials, while keeping dual-cue trials constant. This variable did not affect performance in the homing task; in particular, homing performance was better in dual-cue conditions than in single-cue conditions and was statistically optimal. Both methodological approaches to measuring spatial cue integration during navigation are appropriate.
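The "statistically optimal" combination referred to here is standardly modeled as inverse-variance (maximum-likelihood) weighting of independent cues, where the single-cue variances supply the model's predictions. A sketch with illustrative numbers (the estimates and variances below are made up, not the paper's data):

```python
def combine_cues(est1, var1, est2, var2):
    """Maximum-likelihood combination of two independent cue estimates:
    each cue is weighted by its inverse variance (its reliability)."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    w2 = 1 - w1
    combined_est = w1 * est1 + w2 * est2
    combined_var = 1 / (1 / var1 + 1 / var2)  # never larger than either cue alone
    return combined_est, combined_var

# Homing example: the visual cue says the home direction is 10 deg off,
# self-motion says 14 deg; vision is twice as reliable here.
est, var = combine_cues(10.0, 4.0, 14.0, 8.0)
```

The prediction tested in such studies is exactly this pair: the dual-cue response should sit between the single-cue responses, weighted towards the more reliable cue, with lower variance than either single-cue condition. Contaminated single-cue estimates would distort var1 and var2, and hence the predicted weights.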
- Published
- 2020
- Full Text
- View/download PDF
30. The effects of spatial auditory and visual cues on mixed reality remote collaboration
- Author
-
Jing Yang, Gábor Sörös, Amit Barde, Prasanth Sasikumar, Huidong Bai, and Mark Billinghurst
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Object (computer science) ,Mixed reality ,Task (project management) ,Human-Computer Interaction ,Human–computer interaction ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Eye tracking ,0501 psychology and cognitive sciences ,Augmented reality ,Sensory cue ,050107 human factors ,Gesture - Abstract
Collaborative Mixed Reality (MR) technologies enable remote people to work together by sharing communication cues intrinsic to face-to-face conversations, such as eye gaze and hand gestures. While the role of visual cues has been investigated in many collaborative MR systems, the use of spatial auditory cues remains underexplored. In this paper, we present an MR remote collaboration system that shares both spatial auditory and visual cues between collaborators to help them complete a search task. Through two user studies in a large office, we found that compared to non-spatialized audio, the spatialized remote expert’s voice and auditory beacons enabled local workers to find small occluded objects with significantly stronger spatial perception. We also found that while the spatial auditory cues could indicate the spatial layout and a general direction to search for the target object, visual head frustum and hand gestures intuitively demonstrated the remote expert’s movements and the position of the target. Integrating visual cues (especially the head frustum) with the spatial auditory cues significantly improved the local worker’s task performance, social presence, and spatial perception of the environment.
- Published
- 2020
- Full Text
- View/download PDF
31. Improving the tactile perception of image textures based on adjustable amplitude, frequency, and waveform
- Author
-
Xiaoying Sun, Xuezhi Yan, Wu Qiushuang, and Guohong Liu
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Friction force ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Rendering algorithms ,Tactile perception ,Computer Graphics and Computer-Aided Design ,Rendering (computer graphics) ,Computer graphics ,Amplitude ,Perception ,0202 electrical engineering, electronic engineering, information engineering ,Waveform ,020201 artificial intelligence & image processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common - Abstract
In this paper, we present a tactile rendering algorithm applied to an electrostatic tactile display that adjusts three parameters of the driving signal (amplitude, frequency, and waveform) to modulate the tangential friction force between a user’s finger and a touch screen. The aim of this work is to find an effective electrostatic tactile rendering algorithm to improve the tactile perception of image textures. The key idea is to jointly adjust the three parameters of the driving signal to increase the perceptual difference interval between image textures. We first explore the tactile representation characteristics of amplitude, frequency and waveform through subjective perception experiments. Based on these characteristics, we establish tactile mapping models between the three parameters of the driving signal and image textures. Finally, we use subjective evaluation experiments to verify the effectiveness of the proposed rendering method. The results show that the proposed rendering method can achieve a better tactile experience compared with rendering methods that realize tactile representation by only varying amplitude.
- Published
- 2020
- Full Text
- View/download PDF
32. A Style-Specific Music Composition Neural Network
- Author
-
Bai Yong, Xin Lv, Yun Tie, Shouxun Liu, and Cong Jin
- Subjects
Structure (mathematical logic) ,0209 industrial biotechnology ,Neutral network ,Artificial neural network ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,business.industry ,General Neuroscience ,Computational intelligence ,02 engineering and technology ,Style (sociolinguistics) ,Sequence (music) ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Musical composition ,Subjective validation ,Artificial intelligence ,business ,Software - Abstract
Automatic music composition could dramatically decrease music production costs, lower the threshold for non-professionals to compose, and improve the efficiency of music creation. In this paper, we propose an intelligent music composition neural network to automatically generate music in a specific style. The advantage of our model is its innovative structure: we obtain the music sequence through an actor long short-term memory (LSTM) network, then adjust the sequence probabilities with a reward-based procedure that serves as feedback to improve the performance of music composition. Music-theoretical rules are introduced to constrain the style of the generated music. We also used subjective validation in our experiments to confirm the superiority of our model compared with state-of-the-art works.
- Published
- 2020
- Full Text
- View/download PDF
33. Integration of eye tracking and lip motion for hands-free computer access
- Author
-
Huei Sheng Shiue, Yi Ying Tsai, Chien Ming Lin, Bo Han Zeng, Gwo Ching Chang, and Ing Shiou Hwang
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,Computer access ,business.industry ,05 social sciences ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,050301 education ,Eye movement ,Input device ,Standard deviation ,Human-Computer Interaction ,Eye tracking ,Keypad ,0501 psychology and cognitive sciences ,Computer vision ,Control chart ,Artificial intelligence ,business ,0503 education ,050107 human factors ,Software ,Smoothing ,Information Systems - Abstract
Healthy people use a keyboard and mouse as standard input devices for controlling a computer. However, these input devices are usually not suitable for people with severe physical disabilities. This study aims to design and implement a suitable and reliable human–computer interaction (HCI) interface for disabled users by integrating eye tracking technology with lip motion recognition. Eye movements control the cursor position on the computer screen, and an eye gaze followed by a mouth-opening lip motion serves as a mouse click. Seven lip motion features were extracted to discriminate mouth openings from mouth closures with the cumulative sum (CUSUM) control chart algorithm. A novel smoothing technique, the threshold-based Savitzky–Golay smoothing filter, was proposed to stabilize cursor movement against the inherent jittery motions of the eyes and to reduce eye tracking latencies. In this study, a fixation experiment with nine dots was carried out to evaluate the efficacy of eye gaze data smoothing, and a Chinese text entry experiment based on an on-screen keyboard with four keypad sizes was designed to evaluate the influence of keypad size on the Chinese text entry rate. The results of the fixation experiment indicated that the threshold-based Savitzky–Golay smoothing filter, with a threshold of two standard deviations, a polynomial order of 3, and a window length of 61, can significantly improve the stability of eye cursor movements by 44.86% on average. The average Chinese text entry rate reached 4.41 wpm with the dynamically enlargeable keypads. These results encourage us to deploy the proposed HCI interface for disabled users in the future.
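The threshold-based idea can be sketched as follows: smooth fixation jitter with a Savitzky–Golay convolution, but bypass the filter when the samples jump past a threshold (a saccade), so the cursor is not dragged behind fast eye movements. For brevity this sketch hardcodes the classic quadratic SG weights for a 5-point window rather than the paper's order-3, window-61 filter, and uses a plain numeric threshold in place of the two-standard-deviation criterion; the function names are illustrative.

```python
# Quadratic Savitzky-Golay smoothing weights for a 5-sample window.
SG5 = [-3, 12, 17, 12, -3]   # divide the weighted sum by 35

def sg_smooth(window):
    """Apply the 5-point SG filter to a window of gaze samples."""
    return sum(c * x for c, x in zip(SG5, window)) / 35.0

def threshold_sg(samples, threshold):
    """Smooth a 1-D gaze trace, but pass raw samples through whenever the
    spread inside the window exceeds the threshold (i.e., a saccade)."""
    out = list(samples[:4])              # not enough history yet: pass through
    for i in range(4, len(samples)):
        window = samples[i - 4:i + 1]
        if max(window) - min(window) > threshold:
            out.append(samples[i])       # saccade: follow the eye immediately
        else:
            out.append(sg_smooth(window))  # fixation: suppress jitter
    return out

# Jittery fixation near 100, a saccade, then fixation near 400.
trace = [100, 101, 99, 100, 101, 400, 401, 399, 400, 401]
smoothed = threshold_sg(trace, threshold=50)
```

During the two fixations the output is a smoothed, low-jitter value; at the saccade the raw sample is passed through with no filter lag, which is the latency reduction the abstract describes.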
- Published
- 2020
- Full Text
- View/download PDF
34. The influence of the Nearpod application on learning social geography in a grammar school in Czechia
- Author
-
Tomáš Mekota and Miroslav Marada
- Subjects
Cooperative learning ,Class (computer programming) ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Teaching method ,05 social sciences ,Social geography ,Educational technology ,050301 education ,Grammar school ,Library and Information Sciences ,Education ,0502 economics and business ,Human geography ,ComputingMilieux_COMPUTERSANDEDUCATION ,Mathematics education ,Technology integration ,050211 marketing ,Psychology ,0503 education - Abstract
Digital technologies are essential for almost all human activities, so children should learn to work with them at school. Some studies of the effect of using computers, tablets or cell phones in schools have been carried out, but in Czechia, where this research was conducted, the number of such studies is very low. We examined how tablets help high-school pupils learn social geography in the Czech educational system. We worked with two classes of a high school in Prague. Two lessons were taught in each class, one with the Nearpod application on tablets, the other without it. Pupils wrote pre-tests before each lesson and post-tests after it, and interviews were conducted with some of the pupils after the tablet lesson. We then analysed the results of both classes together and the results of each class separately. We found that pupils enjoyed the tablet lessons, felt more motivated, and believed they had learnt more with the tablet than without it. The statistics, however, told a different story: post-test results were better when pupils had used the tablet, but this finding did not hold for one of the two classes, where the difference between the tablet and non-tablet lessons was not statistically significant. The classes differed in the pupils' relationship to digital technologies and in their level of collaboration, and the level of collaboration provides a good explanation of the differing success of the two classes.
- Published
- 2020
- Full Text
- View/download PDF
35. Touch vs. click: how computer interfaces polarize consumers’ evaluations
- Author
-
Hean Tat Keh, Hongrui Zhao, Yijie Ai, and Xiaoyu Wang
- Subjects
Marketing ,Economics and Econometrics ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,Interface (computing) ,media_common.quotation_subject ,05 social sciences ,Object (computer science) ,050105 experimental psychology ,Field (computer science) ,law.invention ,Touchscreen ,Human–computer interaction ,law ,Perception ,Visual information processing ,0502 economics and business ,050211 marketing ,0501 psychology and cognitive sciences ,Business and International Management ,Stylus ,Haptic technology ,media_common - Abstract
Increasingly powerful computer technologies have enabled the development and widespread growth of touchscreen devices such as computers, tablets, and smartphones in consumers’ daily lives, including online shopping. Nonetheless, it is not clear to what extent and how direct-touch (i.e., finger) vs. indirect-touch (e.g., mouse click or stylus) interfaces have differential effects on consumers’ evaluations toward the object on the screen. Results from a lab experiment and a field study indicate that a direct-touch (vs. indirect-touch) interface has a polarizing effect on consumer evaluations. For an object about which consumers have a prior positive attitude, a direct-touch interface enhances consumer evaluations; for an object about which consumers have a prior negative attitude, a direct-touch interface lowers consumer evaluations. We find that consumers’ visual information processing style can moderate the polarizing effect. In addition, the polarizing effect can be explained by consumers’ vividness perception. These findings make useful contributions to the literature on haptic effects and human-computer interactions, as well as have significant managerial implications.
- Published
- 2020
- Full Text
- View/download PDF
36. Beginning to Multiply (with) Dynamic Digits: Fingers as Physical–Digital Hybrids
- Author
-
Sandy Bakos and David Pimm
- Subjects
Video recording ,Focus (computing) ,Touchscreen ,Relation (database) ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Human–computer interaction ,law ,education ,Educational technology ,Multiplication ,Affordance ,GeneralLiterature_MISCELLANEOUS ,law.invention - Abstract
The development of touchscreen technology is providing alternative ways for learners to conceptualise, visualise, experiment with and communicate about mathematical ideas and relationships. While the multi-touch affordances of touchscreens enable children to produce and transform ‘screen objects’ with their fingers (by means of varied forms of pressure and propulsion), they also invoke an intricate interrelationship between the user’s fingers and the surface of the device itself. Drawing on a half-hour video recording of two primary school children using the TouchTimes iPad app (about multiplication) for the first time, we examine how mutually interactive the children’s fingers were, both with each other and with this particular touchscreen technology, not least their combining in ways which challenge the seemingly clear distinction between digital and physical tools, when viewed as discrete and disjoint entities. We also explore fingers being used as objects in themselves, while examining ways of doing multiplication digitally with fingers, with a particular focus on the singular role of fingers as physical intermediaries. Our aim is to consider possible ways to develop well-educated fingers in relation to engaging with mathematics.
- Published
- 2020
- Full Text
- View/download PDF
37. Enhanced tactile identification of musical emotion in the deaf
- Author
-
Andréanne Sharp, François Champoux, and Benoit-Antoine Bacon
- Subjects
Adult ,Male ,Melody ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,media_common.quotation_subject ,Emotions ,Happiness ,Deafness ,behavioral disciplines and activities ,GeneralLiterature_MISCELLANEOUS ,050105 experimental psychology ,03 medical and health sciences ,0302 clinical medicine ,Functional neuroimaging ,Neuroplasticity ,otorhinolaryngologic diseases ,Humans ,0501 psychology and cognitive sciences ,Modality (semiotics) ,media_common ,General Neuroscience ,05 social sciences ,Cognition ,Middle Aged ,Tactile perception ,Touch Perception ,ComputingMilieux_COMPUTERSANDSOCIETY ,Female ,Identification (psychology) ,Psychology ,Music ,030217 neurology & neurosurgery ,Cognitive psychology - Abstract
Functional neuroimaging studies have demonstrated that following deafness, auditory regions can respond to tactile stimuli. However, research to date has not conclusively demonstrated the behavioral correlates of these functional changes, with most studies showing normal-like tactile capabilities in the deaf. It has recently been suggested that more cognitive and complex tactile processes, such as music perception, could help to uncover superior tactile capabilities in the deaf. Indeed, following deafness, music seems to be perceived through vibration, but the extent to which deaf individuals can perceive musical features through the tactile modality remains undetermined. The goal of this study was to investigate tactile identification of musical emotion in the deaf. Participants had to rate melodies based on their emotional perception; stimuli were presented through a haptic glove. Data suggest that deaf and control participants were comparable in the identification of three of the four emotions tested (sadness, fear/threat, peacefulness). However, and most importantly, for the simplest emotion (happiness), significant differences emerged between groups, suggesting improved tactile identification of musical emotion in the deaf. Results support the hypothesis that brain plasticity following deafness can lead to improved complex tactile ability.
- Published
- 2020
- Full Text
- View/download PDF
38. Touching the audience: musical haptic wearables for augmented and participatory live music performances
- Author
-
Travis West, Marcelo M. Wanderley, and Luca Turchet
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,business.industry ,Computer science ,Wearable computer ,020206 networking & telecommunications ,02 engineering and technology ,Musical ,Management Science and Operations Research ,Computer Science Applications ,Personalization ,Hardware and Architecture ,Electronic music ,Human–computer interaction ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Active listening ,business ,Wearable technology ,Haptic technology ,Gesture - Abstract
This paper introduces musical haptic wearables for audiences (MHWAs), a class of wearable devices for musical applications targeting audiences of live music performances. MHWAs are characterized by embedded intelligence, wireless connectivity to local and remote networks, a system to deliver haptic stimuli, and tracking of gestures and/or physiological parameters. They aim to enrich musical experiences by leveraging the sense of touch as well as providing new capabilities for creative participation. The embedded intelligence enables communication with other external devices, processes input data, and generates music-related haptic stimuli. We validate our vision with two concert-experiments. The first involved a duo of electronic music performers and twenty audience members: half of the audience used an armband-based MHWA prototype delivering vibro-tactile feedback in response to performers' actions on their digital musical instruments, and the other half served as a control group. In the second experiment, a smart mandolin performer played live for twelve audience members wearing a gilet-based MHWA, which provided vibro-tactile sensations in response to the performed music. Overall, results from both experiments suggest that MHWAs have the potential to enrich the experience of listening to live music in terms of arousal, valence, enjoyment, and engagement. Nevertheless, the audio-haptic experience was not homogeneous across participants, who could be grouped into those appreciative of the vibrations and those less appreciative of them. The main causes of a poor haptic experience were unpleasant sensations caused by the vibrations in certain parts of the body and a lack of comprehension of the relation between what was felt and what was heard.
Based on these results, we offer suggestions for practitioners interested in designing wearables that enrich the musical experience of live-music audiences via the sense of touch. Such suggestions point towards the need for personalization mechanisms, systems able to minimize the latency between the sound and the vibrations, and a period of adaptation to the vibrations.
- Published
- 2020
- Full Text
- View/download PDF
39. Integration of a music generator and a song lyrics generator to create Spanish popular songs
- Author
-
Hugo Gonçalo Oliveira, María Navarro-Cáceres, Amílcar Cardoso, and Pedro Martins
- Subjects
Melody ,General Computer Science ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,05 social sciences ,Context (language use) ,02 engineering and technology ,Lyrics ,050105 experimental psychology ,Linguistics ,Field (computer science) ,Classical music ,Rhythm ,Popular music ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Generator (mathematics) - Abstract
The automatic generation of music is an emerging field of research that has attracted wide attention in computer science. However, most works are centered on classical music. This work develops ETHNO-MUSIC, an intelligent system that generates melodies based on popular music. ETHNO-MUSIC generates melodies with Markov models that learn from a corpus of Spanish popular music. Then, given the importance of lyrics in this context, ETHNO-MUSIC was integrated with Tra-La-Lyrics, an existing system that generates lyrics following a melody, which has been specifically adapted to suit this purpose. Several experiments were carried out to evaluate the quality of the results, based on human opinions of generated pieces of music and lyrics. Overall, results are positive: they reflect that, on the one hand, the melodies transmit a feeling of Spanish popular music and, on the other, the text of the lyrics is related to the topics analyzed and the rhythm follows the melodic aspects of the music.
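The abstract does not specify the order or configuration of ETHNO-MUSIC's Markov models, so the following is only a minimal first-order sketch of melody generation from a corpus: transition counts between successive pitches are collected from training melodies, then a new melody is sampled by walking the transition table. The corpus values are illustrative MIDI pitch numbers, not the paper's data.

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Build a first-order Markov model: for each pitch, collect the
    pitches that followed it in the training melodies (with repeats,
    so sampling is frequency-weighted)."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=None):
    """Sample a melody of at most `length` notes from the model."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        followers = transitions.get(note)
        if not followers:          # dead end: no observed successor
            break
        note = rng.choice(followers)
        out.append(note)
    return out
```

A higher-order model (conditioning on the last two or three notes) and separate chains for rhythm would bring the sketch closer to what a system like this typically needs.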
- Published
- 2020
- Full Text
- View/download PDF
40. Embedding Naturalistic Communication Teaching Strategies During Shared Interactive Book Reading for Preschoolers with Developmental Delays: A Guide for Caregivers
- Author
-
Hedda Meadan, Yusuf Akemoglu, and Jacqueline A. Towson
- Subjects
Medical education ,Shared reading ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Teaching method ,media_common.quotation_subject ,05 social sciences ,050301 education ,Grandparent ,Interpersonal communication ,Education ,Vignette ,Intervention (counseling) ,Reading (process) ,Developmental and Educational Psychology ,ComputingMilieux_COMPUTERSANDSOCIETY ,0501 psychology and cognitive sciences ,Sociology of Education ,Psychology ,0503 education ,050104 developmental & child psychology ,media_common - Abstract
Shared interactive book reading (SIBR) is a broad term used to describe the act of adults reading aloud to children, while encouraging interaction by asking questions and engaging in a discussion about the book. SIBR can be used to embed naturalistic communication teaching strategies, creating learning opportunities to promote a child’s language and communication skills. This article presents practical information on how caregivers can use naturalistic communication teaching strategies during SIBR with their child with developmental delays or disabilities (DD). The intended audiences are caregivers (e.g., parents, grandparents) and practitioners (e.g., classroom teachers, speech-language pathologists, reading specialists) who work with caregivers of children with DD. We explain the importance of early communication skills, naturalistic communication teaching strategies, and SIBR, and describe how to embed naturalistic communication teaching strategies within SIBR. Through a vignette we illustrate how caregivers can implement the techniques and strategies and how practitioners can support caregivers’ implementation.
- Published
- 2020
- Full Text
- View/download PDF
41. Mapping for meaning: the embodied sonification listening model and its implications for the mapping problem in sonic information design
- Author
-
Stephen Roddy and Brian Bridges
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,Conceptual metaphor ,Context (language use) ,06 humanities and the arts ,Information design ,060401 art practice, history & theory ,060404 music ,Data mapping ,Human-Computer Interaction ,Image schema ,Conceptual blending ,Sonification ,Embodied cognition ,Human–computer interaction ,Signal Processing ,0604 arts - Abstract
This is a theoretical paper that considers the mapping problem, a foundational issue which arises when designing a sonification, as it applies to sonic information design. We argue that this problem can be addressed by using models from the field of embodied cognitive science, including embodied image schema theory, conceptual metaphor theory and conceptual blends, and from research which treats sound and musical structures using these models, when mapping data to sound. However, there are currently very few theoretical frameworks for applying embodied cognition principles in a sonic information design context. This article describes one such framework, the embodied sonification listening model, which provides a theoretical description of sonification listening in terms of conceptual metaphor theory.
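As a concrete baseline for the mapping problem the paper addresses, conventional parameter-mapping sonification assigns each data value to an auditory dimension such as pitch, often following the "more is up" image schema (larger value, higher pitch). This minimal sketch is not from the paper; the frequency range (two octaves above A3) is an arbitrary illustrative choice.

```python
def map_to_pitch(values, lo_hz=220.0, hi_hz=880.0):
    """Linearly map data values onto a frequency range.

    Implements the 'more is up' mapping: the minimum data value maps to
    lo_hz, the maximum to hi_hz, everything else in between.
    """
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for flat data
    return [lo_hz + (v - vmin) / span * (hi_hz - lo_hz) for v in values]
```

The embodied sonification listening model argues that the meaning a listener recovers from such a mapping depends on metaphors like this one, which is why the choice of auditory dimension is a design decision rather than a neutral technicality.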
- Published
- 2020
- Full Text
- View/download PDF
42. Embracing indigenous metaphors: a new/old way of thinking about sustainability
- Author
-
John Reid and Matthew Rout
- Subjects
Sustainable development ,Global and Planetary Change ,Health (social science) ,010504 meteorology & atmospheric sciences ,Sociology and Political Science ,Ecology ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Metaphor ,media_common.quotation_subject ,ComputingMethodologies_MISCELLANEOUS ,Geography, Planning and Development ,Environmental ethics ,010501 environmental sciences ,Management, Monitoring, Policy and Law ,01 natural sciences ,Indigenous ,Sustainability ,Sociology ,Landscape ecology ,0105 earth and related environmental sciences ,Nature and Landscape Conservation ,media_common - Abstract
This paper explores the role of metaphor in cognition, outlining how the machine metaphor came to dominate, the problematic assumptions about culture/nature relationships that underpin it, and the consequences for the environment. It then argues that the machine metaphor needs to be complemented by an animistic metaphor, which embodies and underpins indigenous relationships to nature, using examples from the New Zealand Māori.
- Published
- 2020
- Full Text
- View/download PDF
43. A high-capacity performance-preserving blind technique for reversible information hiding via MIDI files using delta times
- Author
-
Da-Chun Wu and Yi-Hsin Liu
- Subjects
Cover (telecommunications) ,MIDI ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Event (computing) ,Computer science ,Payload (computing) ,Byte ,020207 software engineering ,02 engineering and technology ,computer.file_format ,Musical ,Hardware and Architecture ,Information hiding ,Data_FILES ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Algorithm ,computer ,Software - Abstract
A new high-capacity information hiding method for embedding secret messages into MIDI files is proposed. The method preserves the original musical performance of the cover MIDI file. A property of the variable-length quantity, which expresses the magnitude of the delta time before every event in a MIDI file, is exploited for secret bit embedding: the delta times are padded with different numbers of leading constant bytes of 0x80 to represent the secret bits. The method is both reversible and blind, because the original cover MIDI file can be restored completely by extracting the embedded data from the stego-MIDI file without referencing the original cover file. A large hiding capacity is achieved since the delta time is a basic parameter that appears before every event in the MIDI file. Good experimental results, as well as a comparison with five existing performance-preserving methods in terms of stego-file quality, payload capacity, and data security, show the superiority and feasibility of the proposed method.
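The padding trick rests on a property of the MIDI variable-length quantity: a leading 0x80 byte carries seven zero payload bits with the continuation flag set, so prepending it does not change the decoded delta time. This sketch is not the authors' implementation, only an illustration of that property; note that the Standard MIDI File specification caps delta times at four bytes, which a real embedder must respect.

```python
def encode_vlq(value, pad=0):
    """Encode a MIDI delta time as a variable-length quantity,
    optionally prefixed with `pad` redundant 0x80 bytes that leave the
    decoded value unchanged (the hiding channel in the scheme above)."""
    chunks = [value & 0x7F]          # last byte: continuation bit clear
    value >>= 7
    while value:
        chunks.append((value & 0x7F) | 0x80)
        value >>= 7
    body = bytes(reversed(chunks))
    return bytes([0x80]) * pad + body

def decode_vlq(data):
    """Decode a VLQ; return (value, number of bytes consumed)."""
    value = 0
    for i, b in enumerate(data):
        value = (value << 7) | (b & 0x7F)
        if not b & 0x80:             # continuation bit clear: last byte
            return value, i + 1
    raise ValueError("unterminated VLQ")
```

Because the decoded delta time is identical for every padding length, a standard MIDI player renders the stego file exactly like the cover file, which is what makes the method performance-preserving.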
- Published
- 2020
- Full Text
- View/download PDF
44. Principles of Constructing Systems for Monitoring the Vibrational State of Power Equipment Using IEEE 1451.X Smart Sensors
- Author
-
A. V. Erpalov, D. V. Taraday, S. A. Belousova, V. I. Sirotkin, D. V. Melkov, V. A. Vasiliev, and A. Yu. Nitsky
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,business.industry ,Computer science ,ComputingMilieux_PERSONALCOMPUTING ,Electrical engineering ,Energy Engineering and Power Technology ,Thermal power station ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,IEEE 1451 ,Vibration ,Physics::Atomic and Molecular Clusters ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,State (computer science) ,Physics::Chemical Physics ,business ,Power equipment - Abstract
The principles for developing smart sensors for monitoring the vibrational state and for vibration diagnostics of power equipment in thermal power plants are analyzed.
- Published
- 2020
- Full Text
- View/download PDF
45. A portable three-degrees-of-freedom force feedback origami robot for human–robot interactions
- Author
-
Alexandre Cherpillod, Stefano Mintchev, Jamie Paik, Marco Salerno, and Simone Scaduto
- Subjects
0301 basic medicine ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,Kinesthetic learning ,Human–robot interaction ,Human-Computer Interaction ,03 medical and health sciences ,Software portability ,030104 developmental biology ,0302 clinical medicine ,Artificial Intelligence ,Human–computer interaction ,Joystick ,Teleoperation ,Robot ,Computer Vision and Pattern Recognition ,Mobile device ,030217 neurology & neurosurgery ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Haptic technology - Abstract
Haptic interfaces can recreate the experience of touch and are necessary to improve human–robot interactions. However, at present, haptic interfaces are either electromechanical devices eliciting very limited touch sensations or devices that may provide more comprehensive kinesthetic cues but at the cost of their large volume: there is a clear trade-off between the richness of feedback and the device size. The design and manufacturing challenges in creating complex touch sensations from compact platforms still need to be solved. To overcome the physical limitation of miniaturizing force feedback robots, we adapted origami design principles to achieve portability, accuracy and scalable manufacturing. The result is Foldaway, a foldable origami robot that can render three-degrees-of-freedom force feedback in a compact platform that can fit in a pocket. This robotic platform can track the movement of the user’s fingers, apply a force of up to 2 newtons and render stiffness up to 1.2 newtons per millimetre. We experimented with different human–machine interactions to demonstrate the broad applicability of Foldaway prototypes: a portable interface for the haptic exploration of an anatomy atlas; a handheld joystick for interacting with virtual objects; and a bimanual controller for intuitive and safe teleoperation of drones. Haptic interfaces are important for the development of immersive human–machine interactions. To create a compact design with rich touch-sensitive functions, a robotic device called Foldaway, which folds flat, has been designed that can render three-degrees-of-freedom force feedback.
- Published
- 2019
- Full Text
- View/download PDF
46. Review of Ear Biometrics
- Author
-
Ying Zhu, Zhaobin Wang, and Jing Yang
- Subjects
Matching (statistics) ,Biometrics ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Process (engineering) ,Computer science ,Applied Mathematics ,Speech recognition ,Fingerprint (computing) ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer Science Applications ,Identification (information) ,ComputingMethodologies_PATTERNRECOGNITION ,Face (geometry) ,Preprocessor - Abstract
As one of the most important biometrics, ear biometrics is attracting more and more attention. Ear recognition has unique advantages and, together with other biometrics (e.g. face and fingerprint), can make identification more secure and reliable. We therefore survey ear recognition and organize the entire process into detection, preprocessing, unimodal recognition (feature extraction and classification or matching decisions), and multimodal recognition based on inter-level and intra-level fusion. Unimodal and multimodal recognition are reviewed comprehensively, and inter-level and intra-level fusion are broken down into their different fusion strategies. We also compare recognition results on the same datasets and analyze the difficulty of some datasets. Finally, challenges and the outlook for ear recognition are discussed, to give readers some guidance on future directions and on the problems that remain to be overcome.
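As an illustration of the intra-level fusion mentioned above, a common score-level baseline combines normalized match scores from each modality (e.g. ear and face) by weighted sum before thresholding. This is a generic sketch, not a method from the review; the weights are illustrative.

```python
def score_fusion(scores, weights):
    """Weighted-sum score-level fusion of normalized match scores.

    `scores`: one match score in [0, 1] per modality (e.g. ear, face).
    `weights`: relative reliability of each modality.
    Returns the fused score, still in [0, 1].
    """
    if len(scores) != len(weights):
        raise ValueError("one weight per modality required")
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total
```

In practice the weights are tuned on a validation set, and score normalization (min-max or z-score) across modalities is the step that makes the sum meaningful.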
- Published
- 2019
- Full Text
- View/download PDF
47. The relationship between the positivity effect and facial-cue based trustworthiness evaluations in older adults
- Author
-
Siyao Wang, Zhibin Guo, Shen Liu, Shangfeng Han, Xiujuan Wang, Lin Zhang, and Ye Xu
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Negative information ,05 social sciences ,ComputingMilieux_PERSONALCOMPUTING ,Information processing ,050109 social psychology ,050105 experimental psychology ,Developmental psychology ,InformationSystems_MODELSANDPRINCIPLES ,Trustworthiness ,Younger adults ,ComputingMilieux_COMPUTERSANDSOCIETY ,0501 psychology and cognitive sciences ,Positivity effect ,Young adult ,Psychology ,General Psychology - Abstract
This study investigates whether there is an age-related positivity effect on how older adults in eastern cultures perform trustworthiness evaluations based on facial cues, and explores how information processing during such evaluations is potentially affected by the positivity effect. The results show that the level of trustworthiness older adults in China assign to faces is significantly higher than that of younger adults; that older adults' trustworthiness evaluations based on facial cues in the valid cueing condition are significantly higher than in the invalid cueing condition; and that, for younger adults, trustworthiness evaluations of untrustworthy faces in the valid cueing condition were significantly lower than in the invalid cueing condition. Overall, these results show that positive information processing in older adults is regulated by attention capacity, while negative information processing in young adults is regulated by attention capacity, which indicates that the age-related positivity effect in older adults is due to increased positive information processing.
- Published
- 2019
- Full Text
- View/download PDF
48. Multimodal Literacy and Social Interaction: Young Children’s Literacy Learning
- Author
-
Sheryl V. Taylor and Cynthia B. Leung
- Subjects
Language arts ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,media_common.quotation_subject ,05 social sciences ,050301 education ,Social semiotics ,Literacy ,Social relation ,Education ,Interpersonal relationship ,ComputerApplications_MISCELLANEOUS ,Sociocultural perspective ,ComputingMilieux_COMPUTERSANDEDUCATION ,Developmental and Educational Psychology ,Mathematics education ,0501 psychology and cognitive sciences ,Multilingualism ,Sociology of Education ,Psychology ,0503 education ,050104 developmental & child psychology ,media_common - Abstract
For young children, literacy is multimodal. Visual images, oral language, gestures, numbers, and other signs are intermingled with printed words during language arts activities in preschool and kindergarten. All children, in particular children from multilingual/multicultural backgrounds, draw on the social, cultural, and emotional roles and structures they observe and experience daily in their homes and communities when presented with multiple modes of literacy in meaningful classroom contexts. This article employs social semiotics as a framework for presenting multimodal literacies used by young children to make and create meaning in their social and learning interactions. Following an overview of multimodal literacy related to children’s literacy learning, topics explored in this article include multimodal literacy activities that embed social interaction, classroom multimodal literacy events that reflect the sociocultural patterns children bring to the classroom, and approaches to implementing a culturally responsive pedagogy in order to make multimodal literacy meaningful to young learners. Classroom examples are provided throughout to illustrate scenarios with teachers who are knowledgeable about multimodal literacy experiences and social interaction, embrace a sociocultural perspective, and are committed to implementing culturally responsive pedagogy.
- Published
- 2019
- Full Text
- View/download PDF
49. Learning Stories Through Gesture: Gesture’s Effects on Child and Adult Narrative Comprehension
- Author
-
Naomi Sweller and Nicole Dargue
- Subjects
Age differences ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,05 social sciences ,050301 education ,Educational psychology ,Variety (linguistics) ,050105 experimental psychology ,Task (project management) ,Narrative comprehension ,Nonverbal communication ,Free recall ,Developmental and Educational Psychology ,0501 psychology and cognitive sciences ,Psychology ,0503 education ,Cognitive psychology ,Gesture - Abstract
Through providing an external support to speech, gesture observation may benefit a student’s learning in a variety of areas, including narrative comprehension. Across two studies, we investigated factors that could moderate when gestures are most beneficial to narrative comprehension, including gesture type, task difficulty, and age, in order to help determine when gestures benefit narrative comprehension most. Crucially, observing typical gestures significantly benefited narrative comprehension (measured by specific questions relating to gesture points) compared with atypical or no gestures, which did not differ significantly. When measured through free recall and specific questions, these effects were not moderated by total task difficulty or age. This finding suggests that how beneficial a gesture is for narrative comprehension may depend more on the kind of gesture observed rather than on age or the difficulty of the task. It may be that typical gestures benefit narrative comprehension more than atypical gestures due to the typical gestures being more semantically related to accompanying speech. In a second study, typical gestures were rated as more semantically related to the speech than were atypical gestures. We argue that educators’ use of typical, frequently produced iconic gestures that are adequately semantically related to speech may enhance student understanding.
- Published
- 2019
- Full Text
- View/download PDF
50. User-defined gesture interaction for in-vehicle information systems
- Author
-
Jiali Qiu, Yu Wang, Jiayi Liu, Xiaolong Zhang, and Huiyue Wu
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer Networks and Communications ,Computer science ,020207 software engineering ,User defined ,02 engineering and technology ,Field (computer science) ,Hardware and Architecture ,Human–computer interaction ,Participatory design ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,In vehicle ,Information system ,Set (psychology) ,Software ,Gesture - Abstract
The gesture elicitation study, a technique emerging from the field of participatory design, has been extensively applied to emerging interaction and sensing technologies in recent years. However, traditional gesture elicitation often suffers from gesture disagreement and legacy bias, and may not generate optimal gestures for a target system. This paper reports a research project on user-defined gestures for interacting with in-vehicle information systems. The main contribution of our research lies in a 3-stage participatory design method we propose for deriving more reliable gestures than traditional elicitation methods. Using this method, we generated a set of user-defined gestures for secondary tasks in an in-vehicle information system. Drawing on our research, we develop a set of design guidelines for freehand gesture design and highlight the implications of this work for gesture elicitation across gestural interfaces.
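Gesture elicitation studies commonly quantify the disagreement problem mentioned above with an agreement rate per referent: the probability that two randomly chosen participants proposed the same gesture for a command. The abstract does not say which metric the authors used, so this is a generic sketch of the standard formulation.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent (command).

    `proposals`: the gesture label each participant proposed.
    Returns sum over identical-gesture groups of c*(c-1) / (n*(n-1)),
    i.e. the chance two random participants agree.
    """
    n = len(proposals)
    if n < 2:
        return 1.0
    counts = Counter(proposals).values()
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

Low agreement rates across referents are exactly the symptom a multi-stage participatory method tries to remedy, by iterating on the candidate gesture set before fixing it.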
- Published
- 2019
- Full Text
- View/download PDF