1,446 results for "Virtual actor"
Search Results
2. A Roadmap to Emotionally Intelligent Creative Virtual Assistants
- Author
Eidlin, Alexander A., Samsonovich, Alexei V., Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Samsonovich, Alexei V., editor, and Klimov, Valentin V., editor
- Published
- 2018
- Full Text
- View/download PDF
3. Before Body Scanning There was Looker: Building the Proto-Digital Hollywood Actor, circa 1981.
- Author
Staiti, Alana
- Subjects
MANUFACTURING processes, COMPUTER simulation, ACTORS, AUTOMATION, INTERNATIONAL business enterprises
- Abstract
This paper uses the film Looker (Michael Crichton, 1981) to highlight how a group of filmmakers and technologists imagined the sinister side of computer-automated biometrics would unfold in the late twentieth-century United States. The film depicts a scene in which a young female model has her naked body scanned for a multinational corporation that will capitalize on the 3D computer model created from her likeness. An analysis of the body-scanning scene and behind-the-scenes production processes raises new questions about the legacy of biometric imaginaries in the United States and helps us see in new ways how a set of concerns crystallized around the computerization of personally identifiable information, including physical bodies. While fears of computer automation existed in previous decades, by the early 1980s the stakes had risen as biometric and photosensing technologies could sense and gather data computationally. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
4. Creative virtual composer assistant based on the eBICA framework.
- Author
Kostenko, Dmitry O., Mashtak, Ivan A., Fomin, Danila D., Mashtak, Darya V., Razheva, Anastasia V., Kim, Natalia V., and Samsonovich, Alexei V.
- Subjects
COMPOSERS, EMOTIONAL state, INTELLIGENT personal assistants, AFFECTIVE computing, ARTIFICIAL intelligence, UNDERGRADUATES, MUSIC software, COMPUTER composition, EMOTIONAL intelligence
- Abstract
The topic addressed here is the role of emotions in music creation. The fact that musical fragments can be evaluated on a number of emotional scales allows one to create a semantic map of fragments and to use this map as guidance in music creation. The work presents results of a study of a virtual composer assistant based on the semantic map and on a similarity graph (computed using LibROSA) of 158 selected musical fragments (Apple Loops). The assistant was built on the eBICA cognitive architecture framework (Samsonovich, 2013, 2018). Participants were 16 NRNU MEPhI undergraduate students. Fragments were combined into compositions by participants guided by the assistant, and the compositions were then evaluated on a number of scales by other participants. Four experimental conditions were compared: with and without guidance by the map, and with and without guidance by the similarity graph. Results indicate that the assistant's use of similarity data significantly improves the quality of the resulting compositions. The effect of the semantic map, by contrast, was significant only in the absence of hints based on the similarity data. Nevertheless, the obtained results support the significance of the semantic map in the composer assistant's function. One of the next goals of this study is to generate music automatically, following the dynamics of actual or expected human emotional states, e.g., during a videogame. In general, the advantage of this new technology lies in its generality: it will have broad applications in many areas of artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. Virtual Listener: A Turing-like test for behavioral believability.
- Author
Chubarov, Arthur A., Tikhomirova, Daria V., Shirshova, Anastasia V., Veselov, Nikolai O., and Samsonovich, Alexei V.
- Subjects
FACIAL expression, BODY language, SOCIAL interaction, VIRTUAL prototypes, EYE movements
- Abstract
Virtual Listener (VL) is a generalized prototype of a virtual character based on the principles of the cognitive architecture eBICA. It uses facial expressions and "body language" (eye movements, head rotation) to maintain social and emotional contact with the user. Such contact also implies that VL needs to perceive the user's facial expression and gaze, and in the long term also the intonation of the user's voice and the sentiment and content of the user's speech. In this work, we present an approach to modeling a perceptive 3D virtual listener with emotional capabilities. The virtual character has a 3D face that performs realistic and believable facial expression dynamics in real time. Our primary goal in this study was to evaluate the concept: i.e., to find out whether virtual-agent-generated behavior can engender feelings of rapport in human speakers comparable to those that a real human listener can cause. At the same time, this article serves a limited purpose and only describes our progress so far in addressing this question. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. ArmSym: A Virtual Human–Robot Interaction Laboratory for Assistive Robotics
- Author
Bustamante Gomez, Samuel, Peters, Jan, Schölkopf, Bernhard, Grosse-Wentrup, Moritz, and Jayaram, Vinay
- Subjects
Computer Networks and Communications, Computer science, Human Factors and Ergonomics, Virtual reality, simulated laboratory, ArmSym, Artificial Intelligence, Human–computer interaction, human–robot interaction, Rehabilitation robotics, Virtual actor, Prosthetics, Focus (computing), business.industry, Testbed, Usability, Computer Science Applications, Human-Computer Interaction, robot arm, Control and Systems Engineering, Signal Processing, Task analysis, virtual reality, Robot, Laboratories, Human-robot interaction, business, Robotic arm
- Abstract
Research in human–robot interaction for assistive robotics usually presents many technical challenges for experimenters, forcing researchers to split their time between solving technical problems and conducting experiments. In addition, previous work in virtual reality setups tends to focus on a single assistive robotics application. To alleviate these problems, we present ArmSym, a virtual reality laboratory with a fully simulated and developer-friendly robot arm. The system is intended as a testbed for running many sorts of experiments on human control of a robotic arm in a realistic environment, ranging from an upper limb prosthesis to a wheelchair-mounted robotic manipulator. To highlight the possibilities of this system, we perform a study comparing different types of prosthetic control. Looking at nonimpaired subjects, we study different psychological metrics that evaluate the interaction of the user with the robot under different control conditions. Subjects report a perception of embodiment in the absence of realistic cutaneous touch, supporting previous studies on the topic. We also find interesting correlations between control and perceived ease of use. Overall, our results confirm that ArmSym can be used to gather data on immersive prosthetics experiences, opening the door to closer collaboration between device engineers and experience designers in the future.
- Published
- 2021
7. Use of a virtual human cadaver to improve knowledge of human anatomy in nursing students: research article
- Author
Melanie Neumeier and Yuwaraj (Raj) Narnaware
- Subjects
Class (computer programming), Research and Theory, Leadership and Management, business.industry, education, Gold standard, Cornerstone, Nursing, Cadaver, Human anatomy, Health care, Fundamentals and skills, Prosection, Psychology, business, Virtual actor
- Abstract
Anatomy is regarded as a cornerstone of health care education and is normally a prerequisite for clinicians. Even though cadaver dissection and prosection are perceived as the "gold standard," in recent years their use has increasingly been replaced by a myriad of innovative teaching technologies. The present study incorporated a three-dimensional (3D) virtual human cadaver, the Anatomage Table (AT), in teaching human anatomy to first-year nursing students in a quasi-experimental design. The results show that the class average in the mid-term and final examinations and the overall Grade Point Average (GPA) were significantly higher in students taught with the AT than in students taught without it. On a satisfaction survey, 84.0% of students reported a positive experience with the AT, and 85.4% indicated they would recommend this teaching tool to other students. For nursing programs without cadaveric dissection, the AT may serve as an effective teaching tool to increase knowledge of anatomy and may enhance students' long-term knowledge retention.
- Published
- 2021
8. The virtual human resource development (VHRD) approach: an integrative literature review
- Author
Somaye Rahimi, Abasalt Khorasani, Morteza Rezaei-Zadeh, and John Waterworth
- Subjects
Organizational Behavior and Human Resource Management, Knowledge management, Scope (project management), Descriptive statistics, Computer science, business.industry, media_common.quotation_subject, 05 social sciences, Socialization, 050301 education, Popularity, Adult education, 0502 economics and business, Conceptual model, business, Human resources, 0503 education, 050203 business & management, media_common, Virtual actor
- Abstract
Purpose
Given the growing popularity of virtual human resource development (VHRD) in organizations and among human resource development (HRD) professionals, it is essential to examine in depth the nature and scope of the affective dimensions of the VHRD approach. Over the past decade, VHRD has become an important part of the HRD process.
Design/methodology/approach
The present study used an integrative literature review to investigate the nature of VHRD in the literature, present a descriptive analysis of that literature and categorize the existing VHRD research.
Findings
The results indicated three major themes: VHRD and socialization, VHRD and learning, and VHRD and the psychological characteristics of the work environment. In addition, a new conceptual model was developed based on the findings.
Research limitations/implications
This study reviewed the main concepts of VHRD. Potential actions that HRD researchers can take to address the identified challenges are discussed.
Originality/value
This integrative literature review could provide a roadmap for future research. Based on the model, the position of VHRD within the organizational context and the different tools and processes in constant interaction are introduced. Finally, a general view of the VHRD approach is provided, which can help human resources experts deal with a wide range of technologies in the organization.
- Published
- 2021
9. Mixed Reality Tabletop Gameplay: Social Interaction With a Virtual Human Capable of Physical Influence
- Author
Gregory F. Welch, Pamela Wisniewski, Myungho Lee, Nahal Norouzi, and Gerd Bruder
- Subjects
Adult, Male, Time Factors, Adolescent, Computer science, Movement, Social Interaction, Context (language use), 02 engineering and technology, Virtual reality, Young Adult, Human–computer interaction, Computer Graphics, 0202 electrical engineering, electronic engineering, information engineering, Humans, Virtual actor, Augmented Reality, Perceived realism, Virtual Reality, 020207 software engineering, Computer Graphics and Computer-Aided Design, Mixed reality, Social relation, Video Games, Signal Processing, Female, Smart Glasses, Augmented reality, Computer Vision and Pattern Recognition, Software
- Abstract
In this article, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in a mixed reality environment. In Experiment 1, participants played a tabletop game with a VH in which each player takes a turn and moves their own token along designated spots on the shared table. We compared two conditions: the VH in the virtual condition moves a virtual token that can only be seen through augmented reality (AR) glasses, while the VH in the physical condition moves a physical token as the participants do, so the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table; the actuator moves a magnet under the table, which in turn moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition and assessed that VH as a more physical entity than the VH in the virtual condition. We further observed transference effects, in which participants attributed the VH's ability to move physical objects to other elements in the real world. The VH's physical influence also improved participants' overall experience with the VH. In Experiment 2, we examined how physical-virtual latency in movements affected the perceived plausibility of the VH's interaction with the real world. Our results indicate that a slight temporal delay between the virtual hand's movement and the physical token's reaction increased the perceived realism and causality of the mixed reality interaction. We discuss potential explanations for these findings and implications for future shared mixed reality tabletop setups.
- Published
- 2021
10. Conclusions: The Ends of Classical Death
- Author
Hagin, Boaz
- Published
- 2010
- Full Text
- View/download PDF
11. Technology-Enhanced Role-Play for Intercultural Learning Contexts
- Author
Lim, Mei Yii, Kriegel, Michael, Aylett, Ruth, Enz, Sibylle, Vannini, Natalie, Hall, Lynne, Rizzo, Paola, Leichtenstern, Karin, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Natkin, Stéphane, editor, and Dupire, Jérôme, editor
- Published
- 2009
- Full Text
- View/download PDF
12. Collisions in Time: Twenty-First-Century Actors Explore Delsarte on the Holodeck
- Author
Carnicke, Sharon Marie, Riley, Shannon Rose, editor, and Hunter, Lynette, editor
- Published
- 2009
- Full Text
- View/download PDF
13. An Empirical Study of Bringing Audience into the Movie
- Author
Lin, Tao, Maejima, Akinobu, Morishima, Shigeo, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Butz, Andreas, editor, Fisher, Brian, editor, Krüger, Antonio, editor, Olivier, Patrick, editor, and Christie, Marc, editor
- Published
- 2008
- Full Text
- View/download PDF
14. MagicChem: a MR system based on needs theory for chemical experiments
- Author
Mingmin Zhang, Jinda Miao, Tianren Luo, Zheng Li, Jijian Lu, Ning Cai, Yongheng Li, Pan Zhigeng, Yuze Shen, and Zhipeng Pan
- Subjects
Maslow's hierarchy of needs, Chemistry education, Need theory, Computer science, Virtual-real occlusion, Chemical education, Virtual reality, Computer Graphics and Computer-Aided Design, Mixed reality, Fundamental human needs, Human-Computer Interaction, Human–computer interaction, Original Article, Multi-camera collaboration, Virtual-real interaction, Software, Virtual actor, Gesture
- Abstract
Real chemical experiments may be dangerous or pollute the environment, and the preparation of drugs and reagents is time-consuming. For these reasons, few experiments can actually be performed by students, which hinders chemistry learning and the understanding of chemical phenomena and principles. Recently, due to the impact of Covid-19, many schools have adopted online teaching, which is even more detrimental to students' learning of chemistry. Fortunately, MR (mixed reality) technology offers the possibility of solving the safety issues and breaking the space-time constraints, while the theory of human needs (Maslow's hierarchy of needs) provides a way to design a comfortable and stimulating MR system with realistic visual presentation and interaction. This paper draws on the theory of human needs to propose a new needs model for virtual experiments. Based on this needs model, we design and develop a comprehensive MR system called MagicChem, which offers a robust 6-DoF interactive and illumination-consistent experimental space with virtual-real occlusion, supporting realistic visual interaction, tangible interaction, gesture interaction with touching, voice interaction, temperature interaction, olfactory interaction and virtual human interaction. A user study shows that MagicChem satisfies the needs model better than other MR experimental environments that only partially meet it. In addition, we explore the application of the needs model in VR environments. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-021-00560-z.
- Published
- 2021
15. The effects of virtual human gesture frequency and reduced video speed on satisfaction and learning outcomes
- Author
Joseph Vincent, Robert O. Davis, Li Li Wan, and Yong Jik Lee
- Subjects
Design framework, 050101 languages & linguistics, 05 social sciences, Applied psychology, Foreign language, Educational technology, 050301 education, Persona, Moderation, Education, Listening comprehension, 0501 psychology and cognitive sciences, Psychology, 0503 education, Gesture, Virtual actor
- Abstract
Educators use various strategies to increase listening comprehension for nonnative English speakers in the classroom and in multimedia environments. Research on audio reduction has shown mixed results, whereas a study that enhanced (doubled) virtual human gesturing found increased listening comprehension with procedural information (Davis and Vincent, British Journal of Educational Technology 50:3252–3263, 2019). This research examined the effects of virtual human gesture frequency (enhanced, average, none) and video speed (normal, reduced 25%) on participant satisfaction and learning outcomes with procedural information. Analysis based on 234 multinational university students indicated that normal video speed significantly increased satisfaction compared to reduced speed; satisfaction was rated significantly higher with agents that gestured compared with the no-gesture condition; and enhancing the gesture frequency significantly increased learning outcomes compared to the average and no-gesture conditions. These findings support previous studies indicating that enhanced gestures significantly increase the learning of procedural information. Agent gesturing also increased satisfaction with the agent, which supports systematic review findings that gesturing is a principal moderator of agent persona (Davis et al., Journal of Research on Technology in Education 53:89–106, 2021). However, this research provides evidence that a 25% reduction in video speed may be too slow to maintain satisfaction among advanced foreign language users and that smaller reductions, such as 15% or 10%, should be considered. Finally, this research puts forth a gesture design framework for designers to create gesturing virtual humans in multimedia environments.
- Published
- 2021
16. Developments in Affect Detection from Text in Open-Ended Improvisational E-Drama
- Author
Zhang, Li, Barnden, John A., Hendley, Robert J., Wallington, Alan M., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Dough, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Pan, Zhigeng, editor, Aylett, Ruth, editor, Diener, Holger, editor, Jin, Xiaogang, editor, Göbel, Stefan, editor, and Li, Li, editor
- Published
- 2006
- Full Text
- View/download PDF
17. A Knowledge-Driven Approach for Korean Traditional Costume (Hanbok) Modeling
- Author
Nam, Yang-Hee, Lee, Bo-Ran, Oh, Crystal S., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Dough, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Aizawa, Kiyoharu, editor, Nakamura, Yuichi, editor, and Satoh, Shin’ichi, editor
- Published
- 2005
- Full Text
- View/download PDF
18. Structural Self-Similarity Framework for Virtual Human’s Whole Posture Generation
- Author
Ying Xie, Zheng Guolei, Shiying Wu, Wu Zhenfa, Rongbin Xu, and Zhao Huangjin
- Subjects
Generation process, Multidisciplinary, Self-similarity, Computer science, 010102 general mathematics, Process (computing), Degrees of freedom (mechanics), computer.software_genre, 01 natural sciences, Simulation software, 0101 mathematics, computer, Simulation, Virtual actor
- Abstract
Many virtual human joints and degrees of freedom are involved in man–machine simulation; however, commonly used man–machine simulation software cannot provide enough constraints to solve the whole posture of a virtual human. This study proposes a novel framework for whole-posture calculation and generation. First, a new database of typical postures is created. Based on these typical postures, an approximate whole posture of the virtual human is generated according to the posture-generation conditions and selection rules. Second, a novel method for the accurate calculation of partial postures is presented. This method can calculate and adjust the partial posture of the virtual human and drives whole-posture generation through the designed calculation and generation process. The study then proposes a calculation method of anthropomorphism that can accurately evaluate the generated posture. The framework is verified with abundant simulation examples and compared against three state-of-the-art methods. The results show that the proposed framework can reduce interaction effort and improve the effectiveness of posture generation, while the anthropomorphism evaluation reaches 0.91 (Level 1), indicating that the generated posture meets simulation requirements.
- Published
- 2021
19. Direct 3D model extraction method for color volume images
- Author
Jianxin Zhang, Qifeng Wang, Shujun Liu, Bin Liu, Liang Yang, Yanjie Chen, Guanning Shang, and Xiaolei Niu
- Subjects
Computer science, volume data segmentation, 0206 medical engineering, Biomedical Engineering, Biophysics, Health Informatics, Bioengineering, 02 engineering and technology, Virtual human, computer.software_genre, Image (mathematics), Biomaterials, Imaging, Three-Dimensional, matting components, Voxel, 0202 electrical engineering, electronic engineering, information engineering, Humans, Segmentation, Computer vision, Eigenvalues and eigenvectors, Virtual actor, 3D organ models, business.industry, Process (computing), 020601 biomedical engineering, 020201 artificial intelligence & image processing, Artificial intelligence, Laplacian matrix, business, computer, Algorithms, Research Article, Information Systems, Volume (compression)
- Abstract
BACKGROUND: There is great demand in clinical diagnosis and treatment for the extraction of organ models from three-dimensional (3D) medical images.
OBJECTIVE: We aimed to help doctors see the real shape of human organs more clearly and vividly.
METHODS: The method uses the minimum eigenvectors of the Laplacian matrix to automatically calculate a group of basic matting components that can properly define the volume image. These matting components can then be used to build foreground images with the help of a few user marks.
RESULTS: We propose a direct 3D model segmentation method for volume images. This is a process of extracting foreground objects from volume images and estimating the opacity of the voxels covered by the objects.
CONCLUSIONS: The results of segmentation experiments on different parts of the human body prove the applicability of this method.
- Published
- 2021
20. The Visible Korean: movable surface models of the hip joint
- Author
Yong-Wook Jung, Chung Yoh Kim, and Jin Seo Park
- Subjects
Models, Anatomic, musculoskeletal diseases, Surface (mathematics), Students, Medical, Stereoscopy, projects, Pathology and Forensic Medicine, law.invention, law, Republic of Korea, medicine, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Range of Motion, Articular, Simulation Training, Joint (geology), Virtual actor, Visible human project, Movement (music), business.industry, Visible Human Projects, projects.project, medicine.anatomical_structure, Muscle relaxation, Hip Joint, Surgery, Muscles of the hip, Artificial intelligence, Anatomy, business, Software, Computer-Assisted Instruction, Education, Medical, Undergraduate
- Abstract
In this study, we presented movable surface models to help medical students understand the multiaxial movements of the hip joint. A secondary objective was to demonstrate a simple method for making movable surface models for other researchers. We used 166 surface models of the virtual human, and commercial software for all the processes described in this study. Virtual joints were created for the hip joint of the surface models to simulate realistic joint movements. Bone surface models were processed to maintain the original shape of the bones during movement, and muscle surface models were processed to express the deformation of muscle shapes during movement. Next, the muscle and bone surface models were moved through six movements of the hip joint (flexion, extension, abduction, adduction, lateral rotation, and medial rotation). The surface models of these six movements were saved and packaged in a PDF file, which enables users to see the stereoscopic shapes of the bones and muscles of the hip joint and to scrutinize the six movements on the X, Y, and Z axes of the joint. The movable surface models of the hip joint will help medical students learn its multiaxial movements. We expect to develop simulations of other joints for the education of medical students using the materials and methods described in this study.
- Published
- 2021
21. Pengembangan Animasi Virtual Karakter Anak dengan Autisme dengan Model ADDIE [Development of a Virtual Animation of a Child Character with Autism Using the ADDIE Model]
- Author
Restu Rakhmawati, Rahadian Kurniawan, and Febriana Kurniasari
- Subjects
addie, animasi, Animation, Virtual reality, medicine.disease, Special education, Engineering (General). Civil engineering (General), Test (assessment), autisme, SAFER, medicine, Mathematics education, Autism, gesture, TA1-2040, Psychology, ADDIE Model, Virtual actor, 3d
- Abstract
The rising number of people with autism in Indonesia has been accompanied by a growing need for teachers. The main problem concerning teachers of children with autism is a lack of skill in handling these children. One method that prospective teachers of children with autism use to improve their teaching skills is role-playing: one person plays the teacher, while the other plays a child with autism. Interviews with several students of special education for autism indicated that this method is not very effective for learning to understand and respond to the behavior of children with autism. Today, Virtual Reality (VR) allows prospective teachers to practice and improve their teaching skills in a safer environment. Learning simulations in VR commonly employ virtual humans with specific features and capabilities. This paper presents the use of the ADDIE model to develop virtual human animations that can serve as learning agents for prospective teachers of children with autism. The animations represent movements frequently performed by children with autism and can be used as modules in VR. Testing showed that the developed animations were judged capable of representing the behavior of children with autism.
- Published
- 2021
22. Virtual Reality
- Author
Vince, John
- Published
- 2004
- Full Text
- View/download PDF
23. BEAT: the Behavior Expression Animation Toolkit
- Author
Cassell, Justine, Vilhjálmsson, Hannes Högni, Bickmore, Timothy, Gabbay, M., editor, Siekmann, Jörg, editor, Prendinger, Helmut, editor, and Ishizuka, Mitsuru, editor
- Published
- 2004
- Full Text
- View/download PDF
24. Accurate Modeling and NN to Reproduce Human Like Motion Trajectories
- Author
Cerveri, Pietro, Andreoni, Giuseppe, Ferrigno, Giancarlo, Kacprzyk, Janusz, editor, Bonarini, Andrea, editor, Masulli, Francesco, editor, and Pasi, Gabriella, editor
- Published
- 2003
- Full Text
- View/download PDF
25. New Degree Programmes at Augsburg University: Bachelor’s/Masters for 'Informatics and Multimedia'
- Author
André, Elisabeth, Sachs-Hombach, Klaus, editor, Rehkämper, Klaus, editor, Schneider, Jochen, editor, Strothotte, Thomas, editor, and Marotzki, Winfried, editor
- Published
- 2003
- Full Text
- View/download PDF
26. Natural Language Communication with Virtual Actors
- Author
Cavazza, Marc and Pazienza, Maria Teresa, editor
- Published
- 2003
- Full Text
- View/download PDF
27. A New Architecture for Simulating the Behavior of Virtual Agents
- Author
Luengo, F., Iglesias, A., Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Sloot, Peter M. A., editor, Abramson, David, editor, Bogdanov, Alexander V., editor, Dongarra, Jack J., editor, Zomaya, Albert Y., editor, and Gorbachev, Yuriy E., editor
- Published
- 2003
- Full Text
- View/download PDF
28. Animating Behavior of Virtual Agents: The Virtual Park
- Author
Luengo, F., Iglesias, A., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Kumar, Vipin, editor, Gavrilova, Marina L., editor, Tan, Chih Jeng Kenneth, editor, and L'Ecuyer, Pierre, editor
- Published
- 2003
- Full Text
- View/download PDF
29. Virtual Actors in Interactivated Storytelling
- Author
Iurgel, Ido A., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Rist, Thomas, editor, Aylett, Ruth S., editor, Ballin, Daniel, editor, and Rickel, Jeff, editor
- Published
- 2003
- Full Text
- View/download PDF
30. Interacting with Virtual Agents in Mixed Reality Interactive Storytelling
- Author
Cavazza, Marc, Martin, Olivier, Charles, Fred, Mead, Steven J., Marichal, Xavier, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Rist, Thomas, editor, Aylett, Ruth S., editor, Ballin, Daniel, editor, and Rickel, Jeff, editor
- Published
- 2003
- Full Text
- View/download PDF
31. Learning with virtual humans: Introduction to the special issue
- Author
-
Noah L. Schroeder and Scotty D. Craig
- Subjects
050101 languages & linguistics ,Computer science ,05 social sciences ,050301 education ,computer.software_genre ,Computer Science Applications ,Education ,Embodied agent ,Embodied cognition ,Human–computer interaction ,0501 psychology and cognitive sciences ,0503 education ,computer ,Virtual actor - Abstract
Virtual humans are embodied agents with a human-like appearance. In educational contexts, virtual humans are often designed to help people learn. In this special issue, we see six representations o...
- Published
- 2021
32. The Impacts of Visual Effects on User Perception With a Virtual Human in Augmented Reality Conflict Situations
- Author
-
Jae-In Hwang, Myungho Lee, Hanseob Kim, and Gerard Jounghyun Kim
- Subjects
Visual perception ,General Computer Science ,Computer science ,media_common.quotation_subject ,Illusion ,Augmented reality ,02 engineering and technology ,virtual human ,Space (commercial competition) ,visual effects ,Human–computer interaction ,Perception ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,General Materials Science ,perceptual issue ,Electrical and Electronic Engineering ,050107 human factors ,media_common ,Virtual actor ,05 social sciences ,General Engineering ,020207 software engineering ,Object (philosophy) ,pervasive AR ,physicality conflict ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,lcsh:TK1-9971 ,Social behavior - Abstract
Virtual humans (VHs) in augmented reality (AR) can provide users with an illusory sense of being together in the real space. However, such an illusion easily breaks when the augmented VH conflicts (or is overlaid) with real objects. Recent spatial-understanding technology is beginning to make VHs respond physically plausibly to collisions, but there are still limitations (e.g., resolution, accuracy) and inevitable conflict situations (e.g., an unexpected passer-by), especially in daily life. Moreover, depending on the situation, a VH's plausible collision-avoidance behavior may instead interfere with the original interaction with the user. In this paper, we investigate three such situations: (1) when the VH appears in a room through a closed door, (2) when the VH's body overlaps with static real objects, and (3) when a real moving object passes through the VH. We considered (2) an avoidable situation in which physically plausible VH behaviors might be required, whereas (1) and (3) were considered inevitable situations (e.g., the VH appears out of nowhere, or a passer-by cannot be aware of a virtual being) in which plausible VH behaviors cannot be presented, so alternatives are required. Thus, for each of these notable situations in AR, we tested different visual effects as presentation methods for physical conflicts between a VH and real objects. Our findings indicate that visual effects improve the VH's social presence, co-presence, and physicality depending on the situation and effect type, and also influence users' attention and social behaviors. We discuss the implications of our findings and future research directions.
- Published
- 2021
33. Toward a socially acceptable model of emotional artificial intelligence
- Author
-
Alexei V. Samsonovich, Vladimir S. Tsarkov, and Vladislav A. Enikeev
- Subjects
Cognitive model ,Cognitive science ,Computer science ,Virtual machine ,General Earth and Planetary Sciences ,Emotional behavior ,Robot ,Cognitive architecture ,computer.software_genre ,computer ,Social relation ,General Environmental Science ,Virtual actor - Abstract
The framework of emotional Biologically Inspired Cognitive Architecture (eBICA) is used to define a cognitive model, producing believable socially emotional behavior in a social interaction paradigm in a virtual environment. The paradigm selected for this study is a virtual pet interacting with a human. Empirical results indicate that the combination of somatic factors, moral schemas and rational constraints in one model has the potential to make behavior of a virtual actor more believable, humanlike and socially acceptable. Implications concern future intelligent collaborative robots and virtual assistants.
- Published
- 2021
34. Direct 3D Organ Extraction Method in Virtual Human-Body Color Volume Image
- Author
-
Qifeng Wang, Bin Liu, Liang Yang, Shujun Liu, Xiaolei Niu, Yanjie Chen, and Zongge Yue
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Health Informatics ,Radiology, Nuclear Medicine and imaging ,Extraction methods ,Computer vision ,Artificial intelligence ,business ,Virtual actor ,Volume (compression) ,Image (mathematics) - Abstract
Extracting 3D structures from voxel-based images lets doctors observe the target more directly in the clinic, making it easier to diagnose the condition and making medical teaching more direct and easier to understand. For this purpose, we propose a 3D volume image segmentation method based on the max-flow/min-cut algorithm. Our segmentation method can be applied directly to the 3D volume image. After users mark a small number of tags (foreground and background pixels), we represent the volume image as a directed connected graph. To speed up segmentation in subsequent steps, we assign each voxel node in the graph to a color range, and each color range is matched with an auxiliary node. To divide the color ranges more finely, we propose a method for calculating color similarity. We then use the max-flow/min-cut algorithm to segment the directed connected graph. Experiments performed on multiple sets of slice images show that the proposed method improves efficiency and reduces human error in the 3D volume image segmentation task, and that the results are complete and accurate.
- Published
- 2020
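The abstract above describes seeded graph-cut segmentation: user-marked foreground/background voxels become hard links to the source and sink, neighbouring voxels are linked by colour similarity, and an s-t min cut separates the organ. A minimal pure-Python sketch of that core idea follows; the function names, the Gaussian similarity kernel, and the omission of the paper's auxiliary colour-range nodes are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import deque

def add_edge(cap, u, v, w):
    # Ensure both directions exist so the residual graph is well-defined.
    cap.setdefault(u, {}).setdefault(v, 0.0)
    cap.setdefault(v, {}).setdefault(u, 0.0)
    cap[u][v] += w

def max_flow_min_cut(cap, s, t):
    # Edmonds-Karp; returns the set of nodes on the source side of the min cut.
    flow = {u: {v: 0.0 for v in nbrs} for u, nbrs in cap.items()}
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if v not in parent and c - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        bottleneck, v = float("inf"), t       # smallest residual capacity on path
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while parent[v] is not None:          # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    seen, q = {s}, deque([s])                 # residual reachability = source side
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if v not in seen and c - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return seen

def min_cut_segment(intensity, edges, fg_seeds, bg_seeds, sigma=10.0):
    # intensity: voxel -> grey value; edges: pairs of neighbouring voxels.
    S, T, INF = "S", "T", 1e9
    cap = {}
    for u, v in edges:                        # n-links: similar colours bind strongly
        w = math.exp(-((intensity[u] - intensity[v]) ** 2) / (2 * sigma ** 2))
        add_edge(cap, u, v, w)
        add_edge(cap, v, u, w)
    for v in fg_seeds:                        # t-links: hard seed constraints
        add_edge(cap, S, v, INF)
    for v in bg_seeds:
        add_edge(cap, v, T, INF)
    return max_flow_min_cut(cap, S, T) - {S}
```

On a toy four-voxel chain with a foreground seed at voxel 0 and a background seed at voxel 3, the cut falls on the weak link between the dissimilar voxels 1 and 2, labelling voxels 0 and 1 as foreground.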
35. Characters in Search of an Author: AI-Based Virtual Storytelling
- Author
-
Cavazza, Marc, Charles, Fred, Mead, Steven J., Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Balet, Olivier, editor, Subsol, Gérard, editor, and Torguet, Patrice, editor
- Published
- 2001
- Full Text
- View/download PDF
36. The VISIONS Project
- Author
-
Balet, Olivier, Kafno, Paul, Jordan, Fred, Polichroniadis, Tony, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Balet, Olivier, editor, Subsol, Gérard, editor, and Torguet, Patrice, editor
- Published
- 2001
- Full Text
- View/download PDF
37. Agents’ Interaction in Virtual Storytelling
- Author
-
Cavazza, Marc, Charles, Fred, Mead, Steven J., Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, de Antonio, Angélica, editor, Aylett, Ruth, editor, and Ballin, Daniel, editor
- Published
- 2001
- Full Text
- View/download PDF
38. Creating Emotive Responsive Characters Within Virtual Worlds
- Author
-
Perlin, Ken, Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, and Heudin, Jean-Claude, editor
- Published
- 2000
- Full Text
- View/download PDF
39. The MIRACloth Software
- Author
-
Volino, Pascal, Magnenat-Thalmann, Nadia, Volino, Pascal, and Magnenat-Thalmann, Nadia
- Published
- 2000
- Full Text
- View/download PDF
40. VirtualActor: Endowing Virtual Characters with a Repertoire for Acting
- Author
-
Iurgel, Ido A., Spierling, Ulrike, editor, and Szilas, Nicolas, editor
- Published
- 2008
- Full Text
- View/download PDF
41. Multi-Platform Expansion of the Virtual Human Toolkit: Ubiquitous Conversational Agents
- Author
-
Arno Hartholt, Sharon Mozgai, Adam Reilly, Ed Fast, Matt Liewer, and Wendy R. Whitcup
- Subjects
Linguistics and Language ,020205 medical informatics ,Computer Networks and Communications ,Computer science ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Computer Science Applications ,Artificial Intelligence ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Character animation ,Augmented reality ,Multi platform ,Software ,Information Systems ,Range (computer programming) ,Virtual actor - Abstract
We present an extension of the Virtual Human Toolkit to include a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The Toolkit uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation and rendering. It has been extended to support computing platforms beyond Windows by leveraging microservices. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback leveraging mobile sensors in headset AR.
- Published
- 2020
42. Empirical evaluation and pathway modeling of visual attention to virtual humans in an appearance fidelity continuum
- Author
-
Bart P. Knijnenburg, Roshan Venkatakrishnan, Sabarish V. Babu, Rohith Venkatakrishnan, Matias Volonte, Reza Ghaiumy Anaraky, and Andrew T. Duchowski
- Subjects
Failure to rescue ,Eye tracking system ,Computer science ,media_common.quotation_subject ,05 social sciences ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Fidelity ,020207 software engineering ,02 engineering and technology ,Gaze ,050105 experimental psychology ,Rendering (computer graphics) ,Human-Computer Interaction ,Human–computer interaction ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Medical training ,Visual attention ,0501 psychology and cognitive sciences ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common ,Virtual actor - Abstract
In this contribution we studied how different rendering styles of a virtual human impacted users’ visual attention in an interactive medical training simulator. In a mixed-design experiment, 78 participants interacted with a virtual human representing a sample from the non-photorealistic (NPR) to photorealistic (PR) rendering continuum. We presented five rendering-style scenarios, namely All Pencil Shaded (APS), Pencil Shaded (PS), All Cartoon Shaded (ACT), Cartoon Shaded (CT), and Human-Like (HL), and compared how visual attention differed between groups of users. For this study, we employed an eye tracking system for collecting and analyzing users’ gaze during interaction with the virtual human in a failure-to-rescue medical training simulation. Results show that users spent more total time in the APS and ACT conditions, but visually attended more to the virtual humans in the PS, CT and HL appearance conditions.
- Published
- 2020
43. Design of reliable virtual human facial expressions and validation by healthy people
- Author
-
Patricia Fernández-Sotos, Guillermo Lahera, Arturo S. García, Roberto Rodriguez-Jimenez, Antonio Fernández-Caballero, and Miguel A. Vicente-Querol
- Subjects
Facial expression ,Computational Theory and Mathematics ,Artificial Intelligence ,Human–computer interaction ,Computer science ,Software ,Computer Science Applications ,Theoretical Computer Science ,Virtual actor - Published
- 2020
44. Empirical and modeling study of emotional state dynamics in social videogame paradigms
- Author
-
Alexei V. Samsonovich, Arthur A. Chubarov, and Daria V. Tikhomirova
- Subjects
Facial expression ,Process (engineering) ,Cognitive Neuroscience ,Experimental and Cognitive Psychology ,Cognitive architecture ,computer.software_genre ,Social relation ,Facial muscles ,medicine.anatomical_structure ,Artificial Intelligence ,Virtual machine ,Dynamics (music) ,medicine ,Psychology ,computer ,Software ,Cognitive psychology ,Virtual actor - Abstract
The objective of this work was to study the dynamics of human emotional states during social interaction in a virtual environment. The prototypes of the virtual actor (NPC) and its virtual-environment simulator “Teleport”, previously developed for this purpose, underwent significant re-design and modification. The experimental platform was re-implemented and used in experiments with college-student participants, combining electromyography, emotion recognition in facial recordings, and model-based game-log analysis in a social videogame paradigm. Participants interacted with two virtual actors implemented based on the eBICA cognitive architecture (Samsonovich, 2013, 2018). Positive correlations were found between eBICA model predictions and participant affects extracted from their facial expressions and facial muscle activity. The affective dynamics of social phenomena, such as the establishment of partnership or an act of betrayal, were characterized and found consistent with the model predictions. Other findings include a gradually developing emotional reaction, possibly due to the integration of appraisals of game events. Overall, the obtained results support the eBICA model and suggest its further extension and refinement.
- Published
- 2020
45. Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments
- Author
-
Joseph J. LaViola, Austin Erickson, Kangsoo Kim, Nahal Norouzi, Gregory F. Welch, and Gerd Bruder
- Subjects
Adult ,Male ,Adolescent ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Fixation, Ocular ,02 engineering and technology ,Cursor (databases) ,Task (project management) ,Young Adult ,Human–computer interaction ,Task Performance and Analysis ,Computer Graphics ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Eye-Tracking Technology ,Virtual actor ,Augmented Reality ,Cursor (user interface) ,020207 software engineering ,Computer Graphics and Computer-Aided Design ,Gaze ,Visualization ,Identification (information) ,Signal Processing ,Female ,Augmented reality ,Computer Vision and Pattern Recognition ,Software - Abstract
Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of the visualization, and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaborations. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, where participants identified targets within a dynamically walking crowd. First, our results show that there is a significant difference in performance for the two gaze visualizations ray and cursor in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors compared to the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.
- Published
- 2020
46. A closed-loop healthcare processing approach based on deep reinforcement learning
- Author
-
Khan Muhammad, Shuai Liu, Guojun Wang, and Yinglong Dai
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Information processing ,020207 software engineering ,02 engineering and technology ,Construct (python library) ,Human body ,Hardware and Architecture ,Human–computer interaction ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Reinforcement learning ,State (computer science) ,business ,Wireless sensor network ,Software ,Virtual actor - Abstract
In healthcare, the human body is a controlled input-output system that generates different observations as external interventions vary. The intervention acts as the input, and the output is the phenotype observation that reflects the latent health state of the body system. The objective of healthcare is to determine effective intervention strategies that can nurse an unhealthy human body to a healthy state. With the advances of the Internet of Things (IoT) and body sensor networks, it has become convenient to observe multimedia data of the human body anywhere and anytime. To aid healthcare decision making, we propose constructing human body simulators based on deep neural networks (DNNs) for healthcare research. First, we formulate a model of the human body system based on DNNs. During our analysis, we find that DNN-based models can simulate practical situations, e.g., that some health states are unreachable. We then combine deep reinforcement learning (DRL) with conceptual embedding techniques to explore effective healthcare strategies for simulated human bodies. We implement a virtual human body simulator, which can take interventions and represent its hidden states as high-dimensional images, and a DRL-based treatment module, which can diagnose the latent health state from the image observations and choose interventions to nurse the simulated body to a target state. By combining the body simulator and the treatment module, we create a dynamic closed loop for healthcare information processing. Experimental simulations validate the feasibility of the proposed approach.
- Published
- 2020
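The closed loop described in the abstract above, a body simulator that responds to interventions plus a treatment module that learns to nurse the latent state toward a target, can be sketched at toy scale. The sketch below substitutes a 1-D tabular stand-in for the paper's DNN simulator and tabular Q-learning for its DRL module; the class, function names, state range, and reward are all illustrative assumptions.

```python
import random

class BodySimulator:
    """Toy stand-in for a DNN-based human-body simulator: a 1-D latent
    health state in [0, 10], nudged up or down by interventions."""
    def __init__(self, state=2):
        self.state = state

    def step(self, action):                      # action in {-1, 0, +1}
        self.state = max(0, min(10, self.state + action))
        return self.state

def train_treatment_policy(target=8, episodes=500, alpha=0.5, gamma=0.9,
                           eps=0.2, seed=0):
    """Tabular Q-learning stand-in for the DRL treatment module."""
    rng = random.Random(seed)
    actions = (-1, 0, 1)
    Q = {(s, a): 0.0 for s in range(11) for a in actions}
    for _ in range(episodes):
        body = BodySimulator(rng.randrange(11))  # random initial health state
        for _ in range(30):
            s = body.state
            if rng.random() < eps:               # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = body.step(a)
            r = -abs(s2 - target)                # reward: closeness to target state
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
    return Q

def nurse_to_target(Q, start, target=8, max_steps=20):
    """Greedily apply learned interventions; True if the target is reached."""
    body = BodySimulator(start)
    for _ in range(max_steps):
        if body.state == target:
            return True
        body.step(max((-1, 0, 1), key=lambda act: Q[(body.state, act)]))
    return body.state == target
```

Trained on the toy simulator, the greedy policy nurses any initial state to the target within a few steps, illustrating the simulate-diagnose-intervene loop the abstract describes.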
47. Three-dimensional organ extraction method for color volume image based on the closed-form solution strategy
- Author
-
Bin Liu, Jianxin Zhang, Liang Yang, and Xiaohui Zhang
- Subjects
business.industry ,Computer science ,020207 software engineering ,02 engineering and technology ,Surgical training ,Robustness (computer science) ,Technical Note ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Extraction methods ,Artificial intelligence ,Closed-form expression ,business ,Organ Model ,Image based ,Computer technology ,Virtual actor - Abstract
With the rapid development of computer technology, surgical training and the digitalized teaching of human body morphology are gaining prominence in medical education. Accurate, true organ models are essential digital material for these computer-assisted systems. However, no direct three-dimensional (3D) true-organ-model acquisition method currently exists. Thus, direct extraction of the organ models of interest from the existing Virtual Human Project (VHP) image set is urgently needed. In this paper, a closed-form-solution-based volume matting method is proposed. Using a small number of scribbles in the foreground and background, target 3D regions can be extracted by closed-form-solution computing. An upper-triangular storage strategy and the preconditioned conjugate-gradient (PCG) method also promote robustness. Four image data sets (2 virtual-human male and 2 virtual-human female) from the United States National Library of Medicine (including brain, eye, lung, heart, liver, kidney, spine, arm, vastus, and foot slices) were selected to extract the 3D volume organ models. The experimental results show that the extracted 3D organs were acceptable and satisfactory. This method may provide technical support for medical and other scientific research fields.
- Published
- 2020
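The matting step sketched in the abstract above amounts to solving a sparse symmetric positive-definite linear system under scribble constraints. The following is a minimal 1-D illustration, assuming a simple similarity-weighted graph Laplacian and a plain (unpreconditioned) conjugate-gradient solver in place of the paper's matting Laplacian and PCG; all names and parameters are hypothetical.

```python
import math

def conjugate_gradient(A, b, iters=500, tol=1e-12):
    # Plain CG for a dense SPD system A x = b (the paper uses PCG on sparse data).
    n = len(b)
    x = [0.0] * n
    r = list(b)                                  # residual, since x starts at 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        step = rs / sum(p[i] * Ap[i] for i in range(n))
        for i in range(n):
            x[i] += step * p[i]
            r[i] -= step * Ap[i]
        rs_new = sum(ri * ri for ri in r)
        for i in range(n):
            p[i] = r[i] + (rs_new / rs) * p[i]
        rs = rs_new
    return x

def matte_1d(intensity, fg, bg, lam=100.0, sigma=10.0):
    # Minimise alpha^T L alpha + lam * sum over scribbles (alpha_i - g_i)^2,
    # where L is a chain-graph Laplacian with colour-similarity weights.
    n = len(intensity)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i in range(n - 1):
        w = math.exp(-((intensity[i] - intensity[i + 1]) ** 2) / (2 * sigma ** 2))
        A[i][i] += w; A[i + 1][i + 1] += w
        A[i][i + 1] -= w; A[i + 1][i] -= w
    for i in fg:                                  # foreground scribbles: alpha -> 1
        A[i][i] += lam
        b[i] = lam
    for i in bg:                                  # background scribbles: alpha -> 0
        A[i][i] += lam
    return conjugate_gradient(A, b)
```

On a four-voxel chain with a foreground scribble at one end and a background scribble at the other, the recovered alpha values cluster near 1 on the bright-similar side and near 0 on the other, splitting at the dissimilar pair, which is the behaviour the full 3D method relies on.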
48. Real Time Virtual Human Hand for Diagnostic Robot (DiagBot) Arm Using IOT
- Author
-
R. Arivoli and R. Satheeshkumar
- Subjects
General Computer Science ,Computer science ,Human–computer interaction ,business.industry ,General Engineering ,Robot ,Internet of Things ,business ,Virtual actor - Published
- 2020
49. The Ability of Place
- Author
-
Tom Boellstorff
- Subjects
Archeology ,Interface (Java) ,business.industry ,Context (language use) ,Participant observation ,Placemaking ,Digital media ,Conceptual framework ,Human–computer interaction ,Anthropology ,The Internet ,Sociology ,business ,Virtual actor - Abstract
In this article I explore the relationship between digital place and disability through an ethnographic study of disability experience in the virtual world Second Life. I discuss how forms of landscape and interface shape disability experience, how building relates to “being-inworld” in digital place, and how proximity and collaboration relate to disability embodiment in a virtual context. “Participant building” on a virtual island created for this research, “Ethnographia,” complements participant observation and other methods to investigate these questions of digital place. Through these lines of analysis, I develop a notion of “digital topography” to illuminate the implications of digital place for disability and human experience more generally. This allows for differentiating digital places from digital media and thus forging conceptual frameworks that reflect how the internet is not a unitary entity. It also allows for considering digital emplacement as related to, but distinct from, digital embodiment. This helps draw attention to questions of digital placemaking alongside the better-known phenomena of avatars. Avatars are important, but it is crucial to highlight the virtual geographies without which the emplacement of those avatars would be impossible. These materials speak to broad questions regarding embodiment, ability, the digital, and the real.
- Published
- 2020
50. (Re)Animating Stanislavsky: realizing aliveness in the virtual actor
- Author
-
Jon Weinbren
- Subjects
Character (mathematics) ,Visual Arts and Performing Arts ,Computer science ,Human–computer interaction ,Stanislavski's system ,Realization (linguistics) ,Depiction ,ComputingMethodologies_COMPUTERGRAPHICS ,Virtual actor ,Practice research - Abstract
This paper describes my initial attempts to create a believable Virtual Actor: an autonomous computer graphic character depiction that can convincingly portray actions and emotions in-the-moment, i...
- Published
- 2020