520 results for "Woo, Woontack"
Search Results
502. The Design of Software Architecture for E-Learning Platforms
- Author
-
Zhou, Dongdai, Zhang, Zhuo, Zhong, Shaochun, Xie, Pan, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Pan, Zhigeng, editor, Zhang, Xiaopeng, editor, El Rhalibi, Abdennour, editor, Woo, Woontack, editor, and Li, Yi, editor
- Published
- 2008
- Full Text
- View/download PDF
503. A Model for Knowledge Innovation in Online Learning Community
- Author
-
Zhan, Qinglong, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Pan, Zhigeng, editor, Zhang, Xiaopeng, editor, El Rhalibi, Abdennour, editor, Woo, Woontack, editor, and Li, Yi, editor
- Published
- 2008
- Full Text
- View/download PDF
504. u-Teacher: Ubiquitous Learning Approach
- Author
-
Fernando, Zacarías F., Rosalba, Cuapa C., Francisco, Lozano T., Andres, Vazquez F., Dionicio, Zacarías F., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Pan, Zhigeng, editor, Zhang, Xiaopeng, editor, El Rhalibi, Abdennour, editor, Woo, Woontack, editor, and Li, Yi, editor
- Published
- 2008
- Full Text
- View/download PDF
505. WRITE: Writing Revision Instrument for Teaching English
- Author
-
Lo, Jia-Jiunn, Wang, Ying-Chieh, Yeh, Shiou-Wen, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Pan, Zhigeng, editor, Zhang, Xiaopeng, editor, El Rhalibi, Abdennour, editor, Woo, Woontack, editor, and Li, Yi, editor
- Published
- 2008
- Full Text
- View/download PDF
506. Physiological evidence for a dual process model of the social effects of emotion in computers.
- Author
-
Choi, Ahyoung, de Melo, Celso M., Khooshabeh, Peter, Woo, Woontack, and Gratch, Jonathan
- Subjects
-
COMPUTER systems, SOCIAL informatics, EMPIRICAL research, HEART rate monitoring, HUMAN-computer interaction
- Abstract
There has been recent interest in the impact of emotional expressions of computers on people's decision making. However, despite a growing body of empirical work, the mechanism underlying such effects is still not clearly understood. To address this issue, the paper explores two kinds of processes studied by emotion theorists in human–human interaction: inferential processes, whereby people retrieve information from emotion expressions about others' beliefs, desires, and intentions; and affective processes, whereby emotion expressions evoke emotions in others, which then influence their decisions. To tease apart these two processes as they occur in human–computer interaction, we looked at physiological measures (electrodermal activity and heart rate deceleration). We present two experiments where participants engaged in social dilemmas with embodied agents that expressed emotion. Our results show, first, that people's decisions were influenced by affective and cognitive processes and, according to the prevailing process, people behaved differently and formed contrasting subjective ratings of the agents; second, we show that an individual trait known as electrodermal lability, which measures people's physiological sensitivity, predicted the extent to which affective or inferential processes dominated the interaction. We discuss implications for the design of embodied agents and decision making systems that use emotion expression to enhance interaction between humans and computers. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
507. Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
- Author
-
Kim, Seungwon, Lee, Gun A., Huang, Weidong, Kim, Hayun, Woo, Woontack, Billinghurst, Mark, and CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, UK, 4-9 May 2019
- Subjects
Computer science, Usability, Mixed reality, Sketch, Task (project management), Feeling, Human–computer interaction, remote collaboration, Visual communication, co-presence, Gesture
- Abstract
Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and reported significantly higher usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
- Published
- 2019
508. The effect of avatar appearance on social presence in an augmented reality remote collaboration
- Author
-
Yoon, Boram, Kim, Hyung-il, Lee, Gun A., Billinghurst, Mark, Woo, Woontack, and VR 2019: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, 23–27 March 2019
- Subjects
telepresence, Computer science, Upper body, Visibility (geometry), avatars, Context (language use), collaboration, Style (sociolinguistics), User studies, three-dimensional displays, Human–computer interaction, Perception, task analysis, Augmented reality
- Abstract
This paper investigates the effect of avatar appearance on Social Presence and users' perception in an Augmented Reality (AR) telepresence system. Despite the development of various commercial 3D telepresence systems, there has been little evaluation of or discussion about the appearance of collaborators' avatars. We conducted two user studies comparing the effect of avatar appearances with three levels of body part visibility (head & hands, upper body, and whole body) and two different character styles (realistic and cartoon-like) on Social Presence while performing two different remote collaboration tasks. We found that a realistic whole body avatar was perceived as best for remote collaboration, but an upper body or cartoon style could be considered as a substitute depending on the collaboration context. We discuss these results and suggest guidelines for designing future avatar-mediated AR remote collaboration systems.
- Published
- 2019
509. A usability study of multimodal input in an augmented reality environment
- Author
-
Lee, Minkyung, Billinghurst, Mark, Baek, Woonhyuk, Green, Richard, and Woo, Woontack
- Subjects
Multimedia, Computer science, Interface (computing), satisfaction, effectiveness, Usability, Virtual reality, Computer Graphics and Computer-Aided Design, augmented reality, Human-Computer Interaction, Computer graphics, efficiency, Systems architecture, multimodal interface, Software, Gesture
- Abstract
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.
- Published
- 2013
- Full Text
- View/download PDF
510. Advances in Tangible Interaction and Ubiquitous Virtual Reality
- Author
-
Hong, Dongpyo, Höllerer, Tobias, Haller, Michael, Takemura, Haruo, Cheok, Adrian David, Kim, Gerard Jounghyun, Billinghurst, Mark, Woo, Woontack, Hornecker, Eva, Jacob, Robert J. K., Hummels, Caroline, Ullmer, Brygg, Schmidt, Albrecht, van den Hoven, Elise, and Mazalek, Ali
- Subjects
Focus (computing), Ubiquitous computing, Tangible, Event (computing), Computer science, Conference, Usability, Augmented reality, Virtual reality, Development theory, Computer Science Applications, World Wide Web, Embedded, Software, Computational Theory and Mathematics, Ubiquitous virtual reality, Human–computer interaction, TUI
- Abstract
The first article reports on context-sensitive augmented-reality research presented at the 2007 International Symposium on Ubiquitous Virtual Reality. This student-organized event explored the use of contextual information, design principles, and effective user evaluation for developing AR applications for ubiquitous computing environments. The second article reports on the International Conference on Tangible and Embedded Interaction, the first conference series worldwide to focus on tangible and embedded interaction. The conference is interdisciplinary, covering the arts, hardware design, software toolkits for prototyping, user studies, and theory development.
- Published
- 2008
- Full Text
- View/download PDF
511. OzCHI 2016 workshop proposal: The first international workshop on mixed and augmented reality innovations (MARI)
- Author
-
Kiyokawa, Kiyoshi, Saito, Hideo, Thomas, Bruce H., Woo, Woontack, and 28th Australian Conference on Computer-Human Interaction, Tasmania, Australia, 29 November - 2 December 2016
- Subjects
User studies, Engineering management, Engineering, Software, Early results, Human–computer interaction, Research community, Augmented reality
- Abstract
As the advancement of mixed and augmented reality technologies accelerates, not only mature, complete studies but also early results of innovative ideas and unique experiences from case studies are becoming more important to the research community. Examples of such studies include those with innovative concepts, hardware, software, or applications but lacking rigorous user studies; case studies in industry and education; cultural and artistic installations using AR/MR technologies; and interdisciplinary studies that do not fit well into existing conference venues. The First International Workshop on Mixed and Augmented Reality Innovations (MARI) provides an opportunity to share and discuss such innovative ideas and precious experiences with researchers from all over the world.
- Published
- 2016
512. International Symposium on Ubiquitous Virtual Reality 2009
- Author
-
Lee, Wonwoo, Krüger, Antonio, Thomas, Bruce H., Wagner, Daniel, Pulli, Kari, and Woo, Woontack
- Subjects
mobile phone, Ubiquitous robot, Context-aware pervasive systems, Ubiquitous computing, Multimedia, Computer science, Virtual reality, Computer-mediated reality, augmented reality, Computer Science Applications, Computational Theory and Mathematics, Human–computer interaction, ubiquitous virtual reality, Mobile device, Software
- Abstract
We report on the current state and future direction of ubiquitous VR, based on presentations and discussions during the 2009 International Symposium on Ubiquitous Virtual Reality. At the symposium, enabling technologies and applications for realizing ubiquitous VR on mobile devices were discussed, and several approaches were proposed for improving ubiquitous VR applications with pervasive and ubiquitous computing technologies.
- Published
- 2010
- Full Text
- View/download PDF
513. An interactive 3D movement path manipulation method in an augmented reality environment
- Author
-
Ha, Taejin, Billinghurst, Mark, and Woo, Woontack
- Subjects
Computer science, immersive augmented reality, movement path editing, tangible user interface, 3D object selection and manipulation, Translation (geometry), Human-Computer Interaction, Computer graphics (images), augmented reality authoring, Line (geometry), Control point, Path (graph theory), Augmented reality, Point (geometry), Software, Interpolation
- Abstract
In this paper, we evaluate a path editing method that uses a tangible user interface to generate and manipulate the movement path of a 3D object in an Augmented Reality (AR) scene. To generate the movement path, each translation point of a real 3D manipulation prop is examined to determine which point should be used as a control point for the path. Interpolation using splines is then used to reconstruct the path as a smooth line. A dynamic score-based selection method is also used to effectively select small and dense control points on the path. In an experimental evaluation, our method took the same time and generated a similar number of errors as a more traditional approach; however, the number of control points needed was significantly reduced. For control point manipulation, the task completion time was quicker and less hand movement was needed. Our method can be applied to drawing or curve editing in AR educational, gaming, and simulation applications.
- Published
- 2012
514. A sensor-based interaction for ubiquitous virtual reality systems
- Author
-
Hong, Dongpyo, Looser, Julian, Seichter, Hartmut, Billinghurst, Mark, Woo, Woontack, and International Symposium on Ubiquitous Virtual Reality (ISUVR), Gwangju, South Korea, 10-13 July 2008
- Subjects
Ubiquitous computing, Computer science, Mobile computing, Computer-mediated reality, Virtual reality, sensory data, Human–computer interaction, sensor-based interaction, Augmented reality, User interface, Graphical user interface, Virtual prototyping
- Abstract
In this paper, we propose a sensor-based interaction method for ubiquitous virtual reality (U-VR) systems that lets users interact implicitly or explicitly through a sensor. Thanks to advances in sensor technology, sensory data can be utilized as a means of user interaction. To show the feasibility of the proposed method, we extend the Composar augmented reality (AR) authoring tool to add support for sensor-based interaction. In this way, the user can write simple scripts to rapidly prototype interaction with virtual 3D content through a sensor. We believe that the proposed method provides natural user interactions for U-VR systems.
- Published
- 2008
515. TMAR: Extension of a Tabletop Interface Using Mobile Augmented Reality
- Author
-
Na, Sewon, Billinghurst, Mark, Woo, Woontack, and 3rd International Conference on E-Learning and Games, Nanjing, China, 25-27 June 2008
- Subjects
3D interaction, Multimedia, Computer science, Digital content, Interface (computing), mobile augmented reality, Input device, tangible user interface, mobile interaction, Mixed reality, Human–computer interaction, tabletop user interface, Augmented reality, tabletop mobile augmented reality
- Abstract
Recently, many researchers have worked on tabletop systems. One issue with tabletop interfaces is how to control the table without using conventional desktop input devices such as a keyboard or mouse. A second issue is allowing multiple users to simultaneously share the tabletop system. In this paper, we explore how Augmented/Mixed Reality (AR/MR) technology can be used to address these issues. One advantage of AR technology is that it brings 3D virtual objects into the real world without needing a desktop monitor, allowing users to intuitively interact with the objects. We describe a Tabletop Mobile AR system that combines a tabletop and a mobile interface. The tabletop system can recognize user gestures and objects so that users can intuitively manipulate and control them. In addition, multiple users have equal access to information on the table, enabling them to easily share digital content. This makes unique entertainment and education applications possible. We describe the technology, sample entertainment interfaces, and initial user feedback.
- Published
- 2008
- Full Text
- View/download PDF
516. The Effects of Spatial Complexity on Narrative Experience in Space-Adaptive AR Storytelling.
- Author
-
Shin JE, Yoon B, Kim D, and Woo W
- Abstract
A critical yet unresolved challenge in designing space-adaptive narratives for Augmented Reality (AR) is providing consistently immersive user experiences anywhere, regardless of the physical features specific to a space. To this end, we present a comprehensive analysis of a series of user studies investigating how the size, density, and layout of real indoor spaces affect users playing Fragments, a space-adaptive AR detective game. Based on the studies, we assert that moderate levels of traversability and visual complexity, afforded by counteracting combinations of size and complexity, are beneficial for narrative experience. To confirm our argument, we combined the experimental data of the studies (n=112) to compare how five different spatial complexity conditions impact narrative experience when applied to contrasting room sizes. Results show that whereas factors of narrative experience are rated significantly higher in relatively simple settings for a small space, they are less affected by complexity in a large space. Ultimately, we establish guidelines on the design and placement of space-adaptive augmentations in location-independent AR narratives to compensate for the lack or excess of affordances in various real spaces and enhance user experiences therein.
- Published
- 2023
- Full Text
- View/download PDF
517. Effects of Avatar Transparency on Social Presence in Task-Centric Mixed Reality Remote Collaboration.
- Author
-
Yoon B, Shin JE, Kim HI, Young Oh S, Kim D, and Woo W
- Abstract
Despite the importance of avatar representation for user experience in Mixed Reality (MR) remote collaboration involving various device environments and large amounts of task-related information, studies on how controlling visual parameters for avatars can benefit users in such situations have been scarce. Thus, we conducted a user study comparing the effects of three avatars with different transparency levels (Nontransparent, Semi-transparent, and Near-transparent) on social presence for users in Augmented Reality (AR) and Virtual Reality (VR) during task-centric MR remote collaboration. Results show that avatars with a strong visual presence are not required in situations where accomplishing the collaborative task is prioritized over social interaction. However, AR users preferred more vivid avatars than VR users did. Based on our findings, we suggest guidelines on how different levels of avatar transparency should be applied based on the context of the task and device type for MR remote collaboration.
- Published
- 2023
- Full Text
- View/download PDF
518. Visualizing Hand Force with Wearable Muscle Sensing for Enhanced Mixed Reality Remote Collaboration.
- Author
-
Kim HI, Yoon B, Oh SY, and Woo W
- Subjects
- Computer Graphics, Muscles, Augmented Reality, Wearable Electronic Devices
- Abstract
In this paper, we present a prototype system for sharing a user's hand force in mixed reality (MR) remote collaboration on physical tasks, where hand force is estimated using a wearable surface electromyography (sEMG) sensor. In remote collaboration between a worker and an expert, hand activity plays a crucial role. However, the force exerted by the worker's hand has not been extensively investigated. Our sEMG-based system reliably captures the worker's hand force during physical tasks and conveys this information to the expert through hand force visualization, overlaid on the worker's view or on the worker's avatar. A user study was conducted to evaluate the impact of visualizing a worker's hand force on collaboration, employing three distinct visualization methods across two view modes. Our findings demonstrate that sensing and sharing hand force in MR remote collaboration improves the expert's awareness of the worker's task, significantly enhances the expert's perception of the collaborator's hand force and the weight of the interacting object, and promotes a heightened sense of social presence for the expert. Based on the findings, we provide design implications for future mixed reality remote collaboration systems that incorporate hand force sensing and visualization.
- Published
- 2023
- Full Text
- View/download PDF
519. Spatial transition management for improving outdoor cinematic augmented reality experience of the TV show.
- Author
-
Park H, Shakeri M, Jeon I, Kim J, Sadeghi-Niaraki A, and Woo W
- Abstract
There have been attempts to provide new cinematic experiences by connecting TV or movie content to suitable locations through augmented reality (AR). However, few studies have suggested a method to manage breakdowns in continuity due to spatial transitions. Thus, we propose a method to manage the spatial transitions that occur when a TV show trajectory is created by mapping TV show scenes with spatiotemporal information onto the real world. Our approach involves two steps. The first step is to reduce the spatial transition by considering the sequence, location, and importance of TV show scenes when creating the TV show trajectory in the authoring tool. The second is to fill the spatial transition with additional TV show scenes, considering sequence, importance, and user interest, when providing the TV show trajectory in the mobile application. The user study results showed that reducing spatial transition increases narrative engagement by allowing participants to see important content within the trajectory. The additional content in spatial transition decreased the physical demand and effort in terms of perceived workload, although it increased the task completion time. Integrated spatial transition management improved the overall cinematic augmented reality (CAR) experience of the TV show. Furthermore, we suggest design implications for realizing the CAR of TV shows based on our findings.
- Published
- 2022
- Full Text
- View/download PDF
520. Instant Panoramic Texture Mapping with Semantic Object Matching for Large-Scale Urban Scene Reproduction.
- Author
-
Park J, Jeon IB, Yoon SE, and Woo W
- Abstract
This paper proposes a novel panoramic texture mapping-based rendering system for real-time, photorealistic reproduction of large-scale urban scenes at street level. Various image-based rendering (IBR) methods have recently been employed to synthesize high-quality novel views, although they require an excessive number of adjacent input images or detailed geometry just to render local views. While the development of global data, such as Google Street View, has accelerated interactive IBR techniques for urban scenes, such methods have hardly been aimed at high-quality street-level rendering. To provide users with free walk-through experiences on global urban streets, our system effectively covers large-scale scenes by using sparsely sampled panoramic street-view images and simplified scene models, which are easily obtainable from open databases. Our key concept is to extract semantic information from the given street-view images and to deploy it in the proper intermediate steps of the suggested pipeline, which results in enhanced rendering accuracy and performance. Furthermore, our method supports real-time semantic 3D inpainting to handle occluded and untextured areas, which often appear when the user's viewpoint changes dynamically. Experimental results validate the effectiveness of this method in comparison with state-of-the-art approaches. We also present real-time demos on various urban streets.
- Published
- 2021
- Full Text
- View/download PDF