692 results on '"video annotation"'
Search Results
2. Towards Semi-Automated Game Analytics: An Exploratory Study on Deep Learning-Based Image Classification of Characters in Auto Battler Games
- Author
-
Thiele, Jeannine, Thiele, Elisa, Roschke, Christian, Heinzig, Manuel, Ritter, Marc, and Fang, Xiaowen, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Accelerated Video Annotation Driven by Deep Detector and Tracker
- Author
-
Price, Eric, Ahmad, Aamir, Lee, Soon-Geul, editor, An, Jinung, editor, Chong, Nak Young, editor, Strand, Marcus, editor, and Kim, Joo H., editor
- Published
- 2024
- Full Text
- View/download PDF
4. A dataset of text prompts, videos and video quality metrics from generative text-to-video AI models
- Author
-
Iya Chivileva, Philip Lynch, Tomás E. Ward, and Alan F. Smeaton
- Subjects
Generative AI, Video annotation, Video naturalness, Video perception, Video alignment, Computer applications to medicine. Medical informatics, R858-859.7, Science (General), Q1-390 - Abstract
Evaluating the quality of videos which have been automatically generated from text-to-video (T2V) models is important if the models are to produce plausible outputs that convince a viewer of their authenticity. This paper presents a dataset of 201 text prompts used to automatically generate 1,005 videos using 5 very recent T2V models, namely Tune-a-Video, VideoFusion, Text-To-Video Synthesis, Text2Video-Zero and Aphantasia. The prompts are divided into short, medium and longer lengths. We also include the results of some commonly used metrics for automatically evaluating the quality of those generated videos. These include each video's naturalness, the text similarity between the original prompt and an automatically generated text caption for the video, and the inception score, which measures how realistic each generated video is. Each of the 1,005 generated videos was manually rated by 24 different annotators for alignment between the videos and their original prompts, as well as for the perception and overall quality of the video. The data also includes the Mean Opinion Scores (MOS) for alignment between the generated videos and the original prompts. The dataset of T2V prompts, videos and assessments can be reused by those building or refining text-to-video generation models to compare the accuracy, quality and naturalness of their new models against existing ones.
- Published
- 2024
- Full Text
- View/download PDF
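One of the automatic metrics mentioned in the entry above is the text similarity between each original prompt and an automatically generated caption for the generated video. The dataset's exact metric is not specified here, so the following is only a minimal stand-in that scores a hypothetical prompt/caption pair with TF-IDF cosine similarity; the caption is assumed to come from some captioning model.

```python
# Illustrative prompt-caption similarity score (not the dataset's actual metric):
# cosine similarity between TF-IDF vectors of the prompt and a generated caption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompt = "a dog surfing on a wave at sunset"            # hypothetical T2V prompt
caption = "a dog rides a surfboard on an ocean wave"    # hypothetical generated caption

vectorizer = TfidfVectorizer().fit([prompt, caption])
vectors = vectorizer.transform([prompt, caption])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"prompt/caption similarity: {similarity:.3f}")
```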
5. Analyzing Information Leakage on Video Object Detection Datasets by Splitting Images Into Clusters With High Spatiotemporal Correlation
- Author
-
Ravi B. D. Figueiredo and Hugo A. Mendes
- Subjects
Data preprocessing, clustering, information leakage, supervised training, video annotation, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
A random splitting strategy is a common approach for creating training, testing, and validation sets for deep learning-based object detection. Datasets often contain images extracted from video sources, in which some frames are highly spatially correlated, i.e., frames showing rotated positions or different viewing angles of the same object. If these highly correlated frames are not well distributed, they may lead to information leakage during training. This work shows that datasets built from highly correlated frames of the same video suffer from information leakage when the random splitting strategy distributes individual images into the sub-datasets. A clustering-based dataset split algorithm is proposed in which images are distributed randomly into the sub-datasets in packs, or clusters, instead of one image at a time. The clusters are created by extracting image features from each video in the dataset with an image-text pre-trained model, CLIP, and reducing the feature vector dimensionality with t-Distributed Stochastic Neighbor Embedding (t-SNE). In this reduced-dimensional representation, images are separated into clusters using clustering algorithms such as DBSCAN, OPTICS, and Agglomerative Clustering. These clusters are then distributed randomly into the training, test, and validation sets to avoid information leakage from highly correlated frames. YOLOv8 is used as the object detection algorithm to evaluate the dataset splitting.
- Published
- 2024
- Full Text
- View/download PDF
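For orientation, here is a minimal sketch of the cluster-based split described in the abstract above: frame features are reduced with t-SNE, grouped with DBSCAN, and whole clusters (rather than single frames) are assigned to the train/test/validation sets. The CLIP feature extraction step is assumed to have happened already and is replaced by random placeholder vectors; the parameter values are illustrative only.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 512))      # placeholder for per-frame CLIP embeddings

# 1. Reduce dimensionality so density-based clustering is tractable.
embedded = TSNE(n_components=2, random_state=0).fit_transform(features)

# 2. Group highly correlated frames into clusters.
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(embedded)

# 3. Shuffle whole clusters (not single frames) into the splits.
clusters = np.unique(labels)
rng.shuffle(clusters)
n_train, n_test = int(0.7 * len(clusters)), int(0.2 * len(clusters))
split = {"train": clusters[:n_train],
         "test": clusters[n_train:n_train + n_test],
         "val": clusters[n_train + n_test:]}
masks = {name: np.isin(labels, ids) for name, ids in split.items()}
print({name: int(mask.sum()) for name, mask in masks.items()})   # images per split
```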
6. Using expansive learning to design and implement video-annotated peer feedback in an undergraduate general education module
- Author
-
Gatrell, Dave, Mark, KaiPan, Au-Yeung, Cypher, and Leung, Ka Yee
- Published
- 2024
- Full Text
- View/download PDF
7. Formación para la competencia argumentativa con anotaciones multimedia [Training for argumentative competence with multimedia annotations].
- Author
-
Cebrián-Robles, Violeta, Raposo-Rivas, Manuela, and Cebrián-de-la-Serna, Manuel
- Subjects
BASIC education, TAGS (Metadata), SOCIAL skills, DIGITAL video, GRADUATE students
- Published
- 2024
- Full Text
- View/download PDF
8. Video Indexing and Retrieval Techniques: A Review
- Author
-
Poovaraghan, R. J., Prabhavathy, P., Tanwar, Sudeep, editor, Wierzchon, Slawomir T., editor, Singh, Pradeep Kumar, editor, Ganzha, Maria, editor, and Epiphaniou, Gregory, editor
- Published
- 2023
- Full Text
- View/download PDF
9. Screening the financial crisis: A case study for ontology-based film analytical video annotations.
- Author
-
Bakels, Jan-Hendrik, Grotkopp, Matthias, Scherer, Thomas J.J., and Stratil, Jasper
- Subjects
FINANCIAL crises, ANNOTATIONS, VIDEOS - Published
- 2023
- Full Text
- View/download PDF
10. cometrics: A New Software Tool for Behavior-analytic Clinicians and Machine Learning Researchers
- Author
-
Arce, Walker S., Walker, Seth G., Hurtz, Morgan L., and Gehringer, James E.
- Published
- 2023
- Full Text
- View/download PDF
11. Participant’s Video Annotations as a Database to Measure Professional Development
- Author
-
Steffen, Bianca, Pouta, Maikki, Goller, Michael, editor, Kyndt, Eva, editor, Paloniemi, Susanna, editor, and Damşa, Crina, editor
- Published
- 2022
- Full Text
- View/download PDF
12. Semantic Video Entity Linking
- Author
-
Grams, Tim, Li, Honglin, Tong, Bo, Shaban, Ali, Weller, Tobias, Groth, Paul, editor, Rula, Anisa, editor, Schneider, Jodi, editor, Tiddi, Ilaria, editor, Simperl, Elena, editor, Alexopoulos, Panos, editor, Hoekstra, Rinke, editor, Alam, Mehwish, editor, Dimou, Anastasia, editor, and Tamper, Minna, editor
- Published
- 2022
- Full Text
- View/download PDF
13. Semantic Annotation of Videos Based on Mask RCNN for a Study of Animal Behavior
- Author
-
Hammouda, Nourelhouda, Mahfoudh, Mariem, Cherif, Mohamed, Memmi, Gerard, editor, Yang, Baijian, editor, Kong, Linghe, editor, Zhang, Tianwei, editor, and Qiu, Meikang, editor
- Published
- 2022
- Full Text
- View/download PDF
14. E-spect@tor for performing arts.
- Author
-
Chantraine Braillon, Cécile and Idmhand, Fatiha
- Subjects
PERFORMING arts, HUMANITIES, SCIENTIFIC community, PUBLIC meetings, EDUCATION - Abstract
In the framework of DiMPAH (Digital Methods Platform for Arts and Humanities), an online course, "e-spect@tor for performing arts", has been designed to make the digital methods created by the Digital Humanities project "The spectator's school" available to the scientific community and to students. The main aim of this course is to train learners to analyse performative works from video recordings. These analyses are carried out with the digital tool e-spect@tor, developed by "The spectator's school", which is used to annotate live art videos with features that allow performative aspects to be studied. While "Unit I" introduces the "Performing arts", "Unit II" is dedicated to the analysis of case studies proposed to serve as models. The course can be delivered online or in a hybrid teaching format, following a possible course structure and aims that have been described and tested with students. Furthermore, by using digital technology, this course can be at the forefront of new stories for Europe in terms of pedagogy, as it fosters new media literacy and trains learners to analyse any staged activity, such as political discourses, public meetings, or interviews. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Video Region Annotation with Sparse Bounding Boxes.
- Author
-
Xu, Yuzheng, Wu, Yang, binti Zuraimi, Nur Sabrina, Nobuhara, Shohei, and Nishino, Ko
- Subjects
ANNOTATIONS, GLOBAL optimization - Abstract
Video analysis has been moving towards more detailed interpretation (e.g., segmentation) with encouraging progress. These tasks, however, increasingly rely on densely annotated training data both in space and time. Since such annotation is labor-intensive, few densely annotated video datasets with detailed region boundaries exist. This work aims to resolve this dilemma by learning to automatically generate region boundaries for all frames of a video from sparsely annotated bounding boxes of target regions. We achieve this with a Volumetric Graph Convolutional Network (VGCN), which learns to iteratively find keypoints on the region boundaries using the spatio-temporal volume of surrounding appearance and motion. We show that the global optimization of VGCN leads to more accurate annotation that generalizes better. Experimental results on three recent datasets (two real and one synthetic), including ablation studies, demonstrate the effectiveness and superiority of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. AffecTube — Chrome extension for YouTube video affective annotations
- Author
-
Daniel Kulas and Michal R. Wrobel
- Subjects
Emotion recognition, Dataset, Video annotation, Computer software, QA76.75-76.765 - Abstract
The shortage of emotion-annotated video datasets suitable for training and validating machine learning models for facial expression-based emotion recognition stems primarily from the significant effort and cost required for manual annotation. In this paper, we present AffecTube as a comprehensive solution that leverages crowdsourcing to annotate videos directly on the YouTube platform, resulting in ready-to-use emotion-annotated datasets. AffecTube provides a low-resource environment with an intuitive interface and customizable options, making it a versatile tool applicable not only to emotion annotation, but also to various video-based behavioral annotation processes.
- Published
- 2023
- Full Text
- View/download PDF
17. Cognitive and affective effects of teachers' annotations and talking heads on asynchronous video lectures in a web development course.
- Author
-
Garcia, Manuel B. and Yousef, Ahmed Mohamed Fahmy
- Subjects
WEB development, CLUSTER randomized controlled trials, STREAMING video & television, VIDEOS, COGNITIVE learning theory, WEB design - Abstract
When it comes to asynchronous online learning, the literature recommends multimedia content like videos of lectures and demonstrations. However, the lack of emotional connection and the absence of teacher support in these video materials can be detrimental to student success. We proposed incorporating talking heads and annotations to alleviate these weaknesses. In this study, we investigated the cognitive and affective effects of integrating these solutions in asynchronous video lectures. Guided by the theoretical lens of the Cognitive Theory of Multimedia Learning and the Cognitive-Affective Theory of Learning with Media, we produced a total of 72 videos (average = four videos per subtopic) with a mean duration of 258 seconds (range = 193 to 318 seconds). To comparatively assess our video treatments (i.e., regular videos, videos with face, videos with annotation, or videos with face and annotation), we conducted an education-based cluster randomized controlled trial within a 14-week academic period with four cohorts of students enrolled in an introductory web design and development course. We recorded a total of 42,425 page views (212.13 page views per student) for all web browsing activities within the online learning platform. Moreover, 39.92% (16,935 views) of these page views were attributed to the video pages, accumulating a total of 47,665 minutes of watch time. Our findings suggest that combining talking heads and annotations in asynchronous video lectures yielded the highest learning performance, longest watch time, and highest satisfaction, engagement, and attitude scores. These discoveries have significant implications for designing video lectures for online education to support students' activities and engagement. Therefore, we concluded that academic institutions, curriculum developers, instructional designers, and educators should consider these findings before relocating face-to-face courses to online learning systems to maximize the benefits of video-based learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Revisión de la literatura sobre anotaciones de vídeo en la formación docente [A review of the literature on video annotations in teacher training].
- Author
-
Cebrián-Robles, Violeta, Pérez-Torregrosa, Ana-Belén, and Cebrián de la Serna, Manuel
- Subjects
LITERATURE reviews, TEACHER education, TEACHER training, FOLKSONOMIES, CLASS groups (Mathematics), DIGITAL video
- Published
- 2023
- Full Text
- View/download PDF
19. Machine learning architectures for video annotation and retrieval
- Author
-
Markatopoulou, Foteini
- Subjects
006.3, Electronic Engineering and Computer Science, Machine Learning Architectures, video annotation - Abstract
In this thesis we design machine learning methodologies for solving the problem of video annotation and retrieval using either pre-defined semantic concepts or ad-hoc queries. Concept-based video annotation refers to the annotation of video fragments with one or more semantic concepts (e.g. hand, sky, running) chosen from a predefined concept list. Ad-hoc queries refer to textual descriptions that may contain objects, activities, locations etc., and combinations of the former. Our contributions are: i) A thorough analysis of extending and using different local descriptors towards improved concept-based video annotation, and a stacking architecture that uses, in its first layer, concept classifiers trained on local descriptors and improves their prediction accuracy by implicitly capturing concept relations in the last layer of the stack. ii) A cascade architecture that orders and combines many classifiers, trained on different visual descriptors, for the same concept. iii) A deep learning architecture that exploits concept relations at two different levels. At the first level, we build on ideas from multi-task learning and propose an approach to learn concept-specific representations that are sparse, linear combinations of representations of latent concepts. At the second level, we build on ideas from structured output learning and propose the introduction, at training time, of a new cost term that explicitly models the correlations between the concepts. By doing so, we explicitly model the structure in the output space (i.e., the concept labels). iv) A fully-automatic ad-hoc video search architecture that combines concept-based video annotation and textual query analysis, and transforms concept-based keyframe and query representations into a common semantic embedding space. Our architectures have been extensively evaluated on the TRECVID SIN 2013, the TRECVID AVS 2016, and other large-scale datasets, demonstrating their effectiveness compared to similar approaches.
- Published
- 2018
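The stacking idea in contribution (i) of the entry above can be illustrated with a toy two-layer setup: first-layer classifiers score each keyframe for every concept, and a second-layer model refines one concept's score from the full vector of first-layer outputs, implicitly capturing concept relations. The data, the number of concepts, and the choice of logistic regression are placeholders rather than the thesis' actual configuration; a real stack would also use held-out first-layer predictions to avoid leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(1000, 128))                     # placeholder visual descriptors
concept_labels = (rng.random((1000, 5)) > 0.7).astype(int)     # 5 toy concepts

# First layer: one independent classifier per concept.
first_layer = [LogisticRegression(max_iter=1000).fit(descriptors, concept_labels[:, c])
               for c in range(5)]
scores = np.column_stack([clf.predict_proba(descriptors)[:, 1] for clf in first_layer])

# Second layer: refine concept 0 using the score vector of all concepts.
second_layer = LogisticRegression(max_iter=1000).fit(scores, concept_labels[:, 0])
print("refined accuracy for concept 0:", second_layer.score(scores, concept_labels[:, 0]))
```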
20. t-EVA: Time-Efficient t-SNE Video Annotation
- Author
-
Poorgholi, Soroosh, Kayhan, Osman Semih, van Gemert, Jan C., Del Bimbo, Alberto, editor, Cucchiara, Rita, editor, Sclaroff, Stan, editor, Farinella, Giovanni Maria, editor, Mei, Tao, editor, Bertini, Marco, editor, Escalante, Hugo Jair, editor, and Vezzani, Roberto, editor
- Published
- 2021
- Full Text
- View/download PDF
21. Intelligent and Interactive Video Annotation for Instance Segmentation Using Siamese Neural Networks
- Author
-
Schneegans, Jan, Bieshaar, Maarten, Heidecker, Florian, Sick, Bernhard, Del Bimbo, Alberto, editor, Cucchiara, Rita, editor, Sclaroff, Stan, editor, Farinella, Giovanni Maria, editor, Mei, Tao, editor, Bertini, Marco, editor, Escalante, Hugo Jair, editor, and Vezzani, Roberto, editor
- Published
- 2021
- Full Text
- View/download PDF
22. ARiana: Augmented Reality Based In-Situ Annotation of Assembly Videos
- Author
-
Truong an Pham, Tim Moesgen, Sanni Siltanen, Joanna Bergstrom, and Yu Xiao
- Subjects
Augmented reality, first-person videos, multimodal interaction, process documentation, video annotation, workflow extraction, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Annotated videos are commonly produced for documenting assembly and maintenance processes in the manufacturing industry. However, according to a semi-structured interview we conducted with industrial experts, the current process of creating annotated assembly videos, in which the annotator annotates the video capturing the expert's demonstration of the assembly and maintenance process, is cumbersome and time-consuming. The key challenges are three problems in annotation: (1) unnecessary extra communication between field workers and annotators, (2) lack of suitable camera gear, and (3) time wasted on manually removing non-informative portions of captured videos. Because annotation always follows video capture, problem 1 remains out of scope for state-of-the-art video annotation tools, and the assumption of a perfectly captured video, free of occlusion and containing only relevant assembly or maintenance information, causes problems 2 and 3. We have therefore developed ARiana, a wearable augmented reality-based in-situ video annotation tool that guides field experts to create annotations efficiently while conducting assembly or maintenance tasks. ARiana has three key features: context-awareness enabled by hand-object interaction, multimodal interaction for annotation on the fly, and real-time audiovisual guidance enabled by edge offloading. We have implemented ARiana on Android-based smart glasses equipped with a first-person camera and microphone. In a usability test based on attempting to assemble a toy model and to annotate the recorded video simultaneously, ARiana demonstrated higher efficiency and effectiveness compared to one of the state-of-the-art video annotation tools, in which the assembly process is followed by the annotation process. In particular, ARiana helps users finish annotation tasks four times faster and increases annotation accuracy by 23%.
- Published
- 2022
- Full Text
- View/download PDF
23. Semi-automation of gesture annotation by machine learning and human collaboration.
- Author
-
Ienaga, Naoto, Cravotta, Alice, Terayama, Kei, Scotney, Bryan W., Saito, Hideo, and Busà, M. Grazia
- Subjects
GESTURE, ANNOTATIONS, MACHINE learning, ACTIVE learning - Abstract
Gesture and multimodal communication researchers typically annotate video data manually, even though this can be a very time-consuming task. In the present work, a method to detect gestures is proposed as a fundamental step towards a semi-automatic gesture annotation tool. The proposed method can be applied to RGB videos and requires annotations of part of a video as input. The technique deploys a pose estimation method and active learning. In the experiment, it is shown that if about 27% of the video is annotated, the remaining parts of the video can be annotated automatically with an F-score of at least 0.85. Users can run this tool with a small number of annotations first. If the predicted annotations for the remainder of the video are not satisfactory, users can add further annotations and run the tool again. The code has been released so that other researchers and practitioners can use the results of this research. This tool has been confirmed to work in conjunction with ELAN. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
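The semi-automatic workflow in the entry above (annotate part of a video, let a model label the rest, then hand the most uncertain frames back to the annotator) can be sketched roughly as below. Pose features from a real pose estimator are assumed and replaced with synthetic data, and a random forest stands in for whatever classifier the authors actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pose_features = rng.normal(size=(3000, 50))        # placeholder per-frame pose descriptors
labels = (pose_features[:, 0] > 0).astype(int)     # placeholder gesture / no-gesture labels

annotated = np.arange(800)                         # roughly the first 27% annotated manually
unlabeled = np.arange(800, 3000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(pose_features[annotated], labels[annotated])

# Predict the rest and rank frames by uncertainty (probability closest to 0.5).
proba = clf.predict_proba(pose_features[unlabeled])[:, 1]
uncertainty = np.abs(proba - 0.5)
query = unlabeled[np.argsort(uncertainty)[:100]]   # frames to send back for manual annotation
print("frames suggested for further annotation:", query[:10])
```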
24. Open Source Multipurpose Multimedia Annotation Tool
- Author
-
da Silva, Joed Lopes, Tabata, Alan Naoto, Broto, Lucas Cardoso, Cocron, Marta Pereira, Zimmer, Alessandro, Brandmeier, Thomas, Campilho, Aurélio, editor, Karray, Fakhri, editor, and Wang, Zhou, editor
- Published
- 2020
- Full Text
- View/download PDF
25. Massive Semantic Video Annotation in High-End Customer Service : Example in Airline Service Value Assessment
- Author
-
Fukuda, Ken, Vizcarra, Julio, Nishimura, Satoshi, Nah, Fiona Fui-Hoon, editor, and Siau, Keng, editor
- Published
- 2020
- Full Text
- View/download PDF
26. Hybrid Technology in Video Annotation by Using the APP and Raspberry Pi—Applied in Agricultural Surveillance System
- Author
-
Tan, Yong-Kok, Wang, Lin-Lin, Theng, Deng-Yuan, Hung, Jason C., editor, Yen, Neil Y., editor, and Hui, Lin, editor
- Published
- 2019
- Full Text
- View/download PDF
27. Development of a Promotion System for Home-Based Squat Training for Elderly People
- Author
-
Hirasawa, Yuki, Ishioka, Takuya, Gotoda, Naka, Hirata, Kosuke, Akagi, Ryota, Yamamoto, Sakae, editor, and Mori, Hirohiko, editor
- Published
- 2019
- Full Text
- View/download PDF
28. Multimodal Video Annotation for Retrieval and Discovery of Newsworthy Video in a News Verification Scenario
- Author
-
Nixon, Lyndon, Apostolidis, Evlampios, Markatopoulou, Foteini, Patras, Ioannis, Mezaris, Vasileios, Kompatsiaris, Ioannis, editor, Huet, Benoit, editor, Mezaris, Vasileios, editor, Gurrin, Cathal, editor, Cheng, Wen-Huang, editor, and Vrochidis, Stefanos, editor
- Published
- 2019
- Full Text
- View/download PDF
29. Wearables in sociodrama: An embodied mixed-methods study of expressiveness in social interactions.
- Author
-
El-Raheb, Katerina, Kalampratsidou, Vilelmini, Issari, Philia, Georgaca, Eugenie, Koliouli, Flora, Karydi, Evangelia, Skali, Theodora (Dora), Diamantides, Pandelis, and Ioannidis, Yannis
- Subjects
SOCIODRAMA, WEARABLE technology, SOCIAL interaction, HEART beat, INTERPERSONAL relations - Abstract
This mixed-methods study investigates the use of wearable technology in embodied psychology research and explores the potential of incorporating bio-signals to focus on the bodily impact of the social experience. The study relies on scientifically established psychological methods of studying social issues, collective relationships and emotional overloads, such as sociodrama, in combination with participant observation to qualitatively detect and observe verbal and nonverbal aspects of social behavior. We evaluate the proposed method through a pilot sociodrama session and reflect on the outcomes. By utilizing an experimental setting that combines video cameras, microphones, and wearable sensors measuring physiological signals, specifically, heart rate, we explore how the synchronization and analysis of the different signals and annotations enables a mixed-method that combines qualitative and quantitative instruments in studying embodied expressiveness and social interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Wearables in sociodrama: An embodied mixed-methods study of expressiveness in social interactions
- Author
-
Katerina El-Raheb, Vilelmini Kalampratsidou, Philia Issari, Eugenie Georgaca, Flora Koliouli, Evangelia Karydi, Theodora (Dora) Skali, Pandelis Diamantides, and Yannis Ioannidis
- Subjects
embodiment, heart-rate, mixed-methods, sociodrama, video annotation, wearables, Mechanical engineering and machinery, TJ1-1570, Electronics, TK7800-8360 - Abstract
This mixed-methods study investigates the use of wearable technology in embodied psychology research and explores the potential of incorporating bio-signals to focus on the bodily impact of the social experience. The study relies on scientifically established psychological methods of studying social issues, collective relationships and emotional overloads, such as sociodrama, in combination with participant observation to qualitatively detect and observe verbal and nonverbal aspects of social behavior. We evaluate the proposed method through a pilot sociodrama session and reflect on the outcomes. By utilizing an experimental setting that combines video cameras, microphones, and wearable sensors measuring physiological signals, specifically, heart rate, we explore how the synchronization and analysis of the different signals and annotations enables a mixed-method that combines qualitative and quantitative instruments in studying embodied expressiveness and social interaction.
- Published
- 2022
- Full Text
- View/download PDF
31. Participatory Media Literacy in Collaborative Video Annotation.
- Author
-
Howard, Craig D.
- Subjects
PARTICIPATORY media, MEDIA literacy, ANNOTATIONS, VIDEOS, SEMIOTICS, EDUCATORS, STREAMING video & television - Abstract
Collaborative Video Annotation (CVA) is a kludge where learners annotate video together, experiencing both the video and each other's annotations in a dynamic discussion. Three scenes from small group CVA discussions were selected for analysis from 14 CVA discussions where 8–12 learners interacted via the annotation tool on top of a video. The twenty-second scenes were analyzed for semiotic meaning-making practices and this revealed a variety of participatory media literacy levels among these undergraduates. Topics of discussions were related but not identical, and the selected exemplars showed a range of attention to communicative features of the media. Discussions evolved in dramatically different ways due to the interplay of images, text, and learner choices. Results suggest that converged media require new literacies educators would be wise to explore and wiser still to educate our learners about. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. Current Trends and Future Directions of Large Scale Image and Video Annotation: Observations From Four Years of BIIGLE 2.0
- Author
-
Martin Zurowietz and Tim W. Nattkemper
- Subjects
marine imaging, image annotation, video annotation, environmental monitoring, machine learning, Science, General. Including nature conservation, geographical distribution, QH1-199.5 - Abstract
Marine imaging has evolved from small, narrowly focussed applications to large-scale applications covering areas of several hundred square kilometers or time series covering observation periods of several months. The analysis and interpretation of the accumulating large volume of digital images or videos will continue to challenge the marine science community to keep this process efficient and effective. It is safe to say that any strategy will rely on some software platform supporting manual image and video annotation, either for a direct manual annotation-based analysis or for collecting training data to deploy a machine learning–based approach for (semi-)automatic annotation. This paper describes how computer-assisted manual full-frame image and video annotation is currently performed in marine science and how it can evolve to keep up with the increasing demand for image and video annotation and the growing volume of imaging data. As an example, observations are presented how the image and video annotation tool BIIGLE 2.0 has been used by an international community of more than one thousand users in the last 4 years. In addition, new features and tools are presented to show how BIIGLE 2.0 has evolved over the same time period: video annotation, support for large images in the gigapixel range, machine learning assisted image annotation, improved mobility and affordability, application instance federation and enhanced label tree collaboration. The observations indicate that, despite novel concepts and tools introduced by BIIGLE 2.0, full-frame image and video annotation is still mostly done in the same way as two decades ago, where single users annotated subsets of image collections or single video frames with limited computational support. We encourage researchers to review their protocols for education and annotation, making use of newer technologies and tools to improve the efficiency and effectivity of image and video annotation in marine science.
- Published
- 2021
- Full Text
- View/download PDF
33. What Do I Annotate Next? An Empirical Study of Active Learning for Action Localization
- Author
-
Heilbron, Fabian Caba, Lee, Joon-Young, Jin, Hailin, Ghanem, Bernard, Ferrari, Vittorio, editor, Hebert, Martial, editor, Sminchisescu, Cristian, editor, and Weiss, Yair, editor
- Published
- 2018
- Full Text
- View/download PDF
34. Life beneath the ice: jellyfish and ctenophores from the Ross Sea, Antarctica, with an image-based training set for machine learning.
- Author
-
Verhaegen, Gerlien, Cimoli, Emiliano, and Lindsay, Dhugal
- Subjects
JELLYFISHES, CTENOPHORA, MACHINE learning, BIODIVERSITY, ZOOPLANKTON - Abstract
Background Southern Ocean ecosystems are currently experiencing increased environmental changes and anthropogenic pressures, urging scientists to report on their biodiversity and biogeography. Two major taxonomically diverse and trophically important gelatinous zooplankton groups that have, however, stayed largely understudied until now are the cnidarian jellyfish and ctenophores. This data scarcity is predominantly due to many of these fragile, soft-bodied organisms being easily fragmented and/or destroyed with traditional net sampling methods. Progress in alternative survey methods, including, for instance, optics-based methods, is slowly starting to overcome these obstacles. As video annotation by human observers is both time-consuming and financially costly, machine learning techniques should be developed for the analysis of in situ/in aqua image-based datasets. This requires taxonomically accurate training sets for correct species identification, and the present paper is the first to provide such data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Using Hypervideo to support undergraduate students’ reflection on work practices: a qualitative study
- Author
-
Marco Perini, Alberto A. P. Cattaneo, and Giuseppe Tacconi
- Subjects
Hypervideo, Video annotation, Reflective activities, Crossing boundaries, Educational technologies, Special aspects of education, LC8-6691, Information technology, T58.5-58.64 - Abstract
Abstract According to several exploratory studies, hypervideo seems to be particularly useful in highlighting the connections between school-based and work-based contexts, and between authentic work situations and their theoretical underpinnings. This tool and its features, in particular video annotation, seem to constitute an instrument that facilitates students' reflection on work practices. Even though several researchers have already studied the efficacy of hypervideo, studies concerning the qualitative differences between reflection processes activated with or without its use are still missing. The present contribution therefore focuses on the reflective processes activated by two groups of students engaged in a higher education course while they carry out a reflective activity on work practices, with or without hypervideo. The aim is to investigate whether hypervideo can help students connect theoretical concepts and work practices. Through a multi-step qualitative analysis combining Thematic Qualitative Text Analysis and Grounded Theory, a sample of reflective reports drafted by a group of students who employed hypervideo to make a video interview on a work practice and to reflect on it (Group A) was compared with a sample of reflective reports drafted by a group who did not use it to complete the same task (Group B). The comparison of coding frequencies between the two groups shows that hypervideo can support students' reflective processes by better connecting theory and professional practice.
- Published
- 2019
- Full Text
- View/download PDF
36. Was it worth the effort? An exploratory study on the usefulness and acceptance of video annotation for in-service teachers training in VET sector
- Author
-
Boldrini Elena, Cattaneo Alberto, and Evi-Colombo Alessia
- Subjects
video annotation, in-service teacher training, vocational education and training, feedback, professional practices, Education (General), L7-991, Communication. Mass media, P87-96 - Abstract
In the field of teacher training at different levels (primary and secondary) and of different types (in-service and pre-service), exploiting video support for the analysis of teaching practices is a well-established training method to foster reflection on professional practices, self- and hetero-observation, and ultimately to improve teaching. While video has long been used to capture microteaching episodes, illustrate classroom cases and practices, and review teaching practices, recent developments in video annotation tools may help to extend and augment the potential of video viewing. A number of studies, although limited, have explored this field of research, especially with respect to in-service teacher training; this is less the case for Vocational Education and Training. The study presented here is a pilot experience in in-service teacher training in the vocational sector. A two-year training programme using video annotation has been evaluated and analysed. The dimensions investigated are teachers' perceptions of the usefulness, acceptance and sustainability of video annotation in the analysis of teaching practices. Results show very good acceptance and usefulness of video annotation for reflecting on practice and for delivering feedback. Implications for the integration of a structural programme of analysis of practices based on video annotation are presented.
- Published
- 2019
- Full Text
- View/download PDF
37. Interactive video retrieval using implicit user feedback
- Author
-
Vrochidis, Stefanos
- Subjects
621.38, Electronic Engineering, Video annotation, Video retrieval - Abstract
In recent years, the rapid development of digital technologies and the low cost of recording media have led to a great increase in the availability of multimedia content worldwide. This availability creates demand for the development of advanced search engines. Traditionally, manual annotation of video was one of the usual practices to support retrieval. However, the vast amounts of multimedia content make such practices very expensive in terms of human effort. At the same time, the availability of low-cost wearable sensors delivers a plethora of user-machine interaction data. Therefore, there is an important challenge in exploiting implicit user feedback (such as user navigation patterns and eye movements) during interactive multimedia retrieval sessions with a view to improving video search engines. In this thesis, we focus on automatically annotating video content by exploiting aggregated implicit feedback of past users expressed as click-through data and gaze movements. Towards this goal, we have conducted interactive video retrieval experiments in order to collect click-through and eye movement data in not strictly controlled environments. First, we generate semantic relations between multimedia items by proposing a graph representation of aggregated past interaction data and exploit them to generate recommendations, as well as to improve content-based search. Then, we investigate the role of user gaze movements in interactive video retrieval and propose a methodology for inferring user interest by employing support vector machines and gaze movement-based features. Finally, we propose an automatic video annotation framework which combines query clustering into topics, by constructing gaze movement-driven random forests and temporally enhanced dominant sets, with video shot classification for predicting the relevance of viewed items with respect to a topic. The results show that exploiting heterogeneous implicit feedback from past users is of added value for future users of interactive video retrieval systems.
- Published
- 2013
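The gaze-based interest inference mentioned in the entry above (support vector machines on gaze movement-based features) might look roughly like the toy example below. The feature set, labels, and kernel settings are placeholders; the thesis' actual features and evaluation protocol are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy per-shot gaze features: fixation count, mean fixation duration,
# total dwell time, mean saccade amplitude.
gaze_features = rng.normal(size=(400, 4))
relevant = (gaze_features[:, 2] + 0.5 * gaze_features[:, 0] > 0).astype(int)  # synthetic labels

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(svm, gaze_features, relevant, cv=5).mean())
```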
38. Annotating Movement Phrases in Vietnamese Folk Dance Videos
- Author
-
Ma-Thi, Chau, Tabia, Karim, Lagrue, Sylvain, Le-Thanh, Ha, Bui-The, Duy, Nguyen-Thanh, Thuy, Benferhat, Salem, editor, Tabia, Karim, editor, and Ali, Moonis, editor
- Published
- 2017
- Full Text
- View/download PDF
39. Comparative video annotation and visual literacy: performance analysis of Rina Yerushalmi's theatre language.
- Author
-
Aronson-Lehavi, Sharon, Skop, Natan, and Via Dorembus, Yael
- Subjects
THEATERS, VISUAL literacy, ANNOTATIONS, VIDEO recording, COVID-19, STREAMING video & television - Abstract
The growing availability of video recordings of theatre performances, a phenomenon that has increased during Covid19 as theatres worldwide share videos online, affects the field of theatre and performance studies. Video recordings of theatre performances are archival documents and mediated 'performance texts' that enable new kinds of performance analysis and studying bodily practices. This paper addresses annotative methodologies as part of the research project, 'The Art of Adaptation: The Theatre of Rina Yerushalmi and the Itim Ensemble.' The project studies the video archive of the ensemble, which includes recordings of full productions and rehearsal processes. We discuss three kinds of digital comparative annotative methodologies to show how annotation can be used as a research tool for performance analysis: (1) Accumulative annotation: analysis of multi-layered theatrical sequences, revealing theatrical moments in their multiplicity. (2) Annotation of different scenes in a corpus of works: juxtaposing scenes from different productions enables the articulation of repetitive performative patterns, embodied practices, and visual images that construct a theatre language. (3) Annotation of the same scene at different phases: juxtaposing video recordings of the same scene at different moments of its development reveals nuanced sets of information about directorial choices, acting, movement, duration, and more. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Differentiating (an)notation practices: an artist-scholar's observation.
- Author
-
Stancliffe, Rebecca
- Subjects
ANNOTATIONS - Abstract
Video annotation is an emergent practice and not (yet) a common method in dance studies or research. Subsequently, there are limited accounts that detail the practice of using annotation in dance but those that are available point to how annotation serves diverse and particular purposes. However, a common understanding of what annotation is does not theoretically cohere. Furthermore, the tendency to use the terms annotation and notation synonymously conflates these practices and risks overlooking the significant contributions of each. In discussing my experience, reflections, and observations of working with four different approaches to annotation I offer an understanding of what it offers in analysing and transmitting ideas about dance from an artist-scholar's perspective. Crucially, drawing from Bernard Stiegler's philosophy of technology, I position annotations as technical memories created in dialogue with existing mnemotechnical forms, or technical objects. Such characterisation illuminates how annotation helps to overcome limitations of documentary forms and highlight information otherwise missing or previously unnoticed. To further emphasise annotation as a method of amplification I compare my experience of annotation and of Labanotation to highlight the similarities and differences between these distinctive methodological tools. While the examples primarily focus on dance the insight developed in this article is valuable for other fields working with time-based media. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. ancestors: an illuminated video.
- Author
-
Spatz, Ben, Erçin, Nazlıhan Eda, and Mendel, Agnieszka
- Subjects
VIDEOS, VIDEO editing, JEWISH identity, ANCESTORS - Abstract
This video article consists of three repetitions or cycles of a single audiovisual fragment. The underpinning fragment is just longer than three minutes. In the first cycle, the video fragment is presented with only subtitles added to clarify the recorded dialog. The second cycle augments the first by adding a set of textual 'illuminations' that provide the basic details of what is happening and begin to reveal the interactive dynamics at play in this recorded moment. In the third cycle, yet another layer of textual illumination is added, this time bringing to bear a range of critical scholarly sources that link the dynamics of the moment to larger contexts of history, memory, and nation. An accompanying research statement defines the form of illuminated video and imagines its possible futures. Together, the video and the statement are conceived as a teaching tool, introducing some of the potential that video editing brings to the analysis and publication of embodied research. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Effectiveness of reflective learning in skill-based teaching among postgraduate anesthesia students: An outcome-based study using video annotation tool.
- Author
-
Gayathri, Balasubrmaniam, Vedavyas, Raksha, Sharanya, P., and Karthik, K.
- Subjects
REFLECTIVE learning, GRADUATE students, STUDENT attitudes, UNIVERSITY faculty - Abstract
Medical education all over the world is undergoing a paradigm shift. Video recording of students' performance and self-annotation are emerging as valuable tools for self-directed learning among students. A study was conducted to find the effectiveness of a video annotation tool in reflective learning. The learning outcome was to determine whether video annotation helps in critical reflection and improves students' perception of guideline compliance while learning the technique of epidural insertion. In phase 1, following a pretest, the students observed three epidural insertions and performed one epidural insertion. In phase 2, following a posttest, two faculty members analyzed the depth of reflection using the Reflection Rubric. Students' perceptions were recorded using the Reflective Practice Survey. The average pretest score was 76%; the posttest score was 84% (p value 0.003). In-depth analysis using the reflection rubric found that 52.38% of the reflections had a score of two, showing they were at the introspection level only; 25.71% had a score of one, showing that they were just habitual answers; and only 21.9% had a score of three, with none reaching critical reflection. All the students (18/18) agreed that recording the session was meaningful. The art of critical reflection is learnt through relentless effort, yet it helps students reflect on the whole process, introspecting and understanding what went wrong. Video annotation turns out to be a valuable tool in reflective learning. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. A dataset of text prompts, videos and video quality metrics from generative text-to-video AI models.
- Author
-
Chivileva I, Lynch P, Ward TE, and Smeaton AF
- Abstract
Evaluating the quality of videos which have been automatically generated from text-to-video (T2V) models is important if the models are to produce plausible outputs that convince a viewer of their authenticity. This paper presents a dataset of 201 text prompts used to automatically generate 1,005 videos using 5 very recent T2V models, namely Tune-a-Video, VideoFusion, Text-To-Video Synthesis, Text2Video-Zero and Aphantasia. The prompts are divided into short, medium and longer lengths. We also include the results of some commonly used metrics for automatically evaluating the quality of those generated videos. These include each video's naturalness, the text similarity between the original prompt and an automatically generated text caption for the video, and the inception score, which measures how realistic each generated video is. Each of the 1,005 generated videos was manually rated by 24 different annotators for alignment between the videos and their original prompts, as well as for the perception and overall quality of the video. The data also includes the Mean Opinion Scores (MOS) for alignment between the generated videos and the original prompts. The dataset of T2V prompts, videos and assessments can be reused by those building or refining text-to-video generation models to compare the accuracy, quality and naturalness of their new models against existing ones.
- Published
- 2024
- Full Text
- View/download PDF
44. Content-based digital video processing : digital videos segmentation, retrieval and interpretation
- Author
-
Chen, Juan, Jiang, Jianmin, and Ipson, Stanley S.
- Subjects
005.3, Shot boundary detection, Video copy detection, Video annotation, Intellectual property rights (IPR), Content-based indexing, Retrieval, Digital video processing, Video highlights indexing - Abstract
Recent research approaches in semantics-based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and an FSM (finite state machine). Thirdly, shot detection is implemented using local and global indicators. Fourthly, a context awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies. It is robust to complicated distortions and capable of locating copies of segments inside original videos. Then, objects and events are extracted from MPEG sequences for video highlights indexing and retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
- Published
- 2009
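As a rough illustration of the "simple shot cut detection algorithm designed for real-time implementation" mentioned in the entry above, the snippet below declares a cut whenever the colour-histogram distance between consecutive frames exceeds a threshold. The video path and threshold are hypothetical, and this is a generic baseline rather than the thesis' exact method.

```python
import cv2

def detect_cuts(path, threshold=0.5):
    """Return frame indices where an abrupt shot change is detected."""
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Large Bhattacharyya distance indicates a sudden content change.
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                cuts.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return cuts

# cuts = detect_cuts("example.mp4")   # hypothetical input file
```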
45. Revisión de la literatura sobre anotaciones de vídeo en la formación docente [A review of the literature on video annotations in teacher training]
- Author
-
Ana-Belén Pérez-Torregrosa, Manuel Cebrián de la Serna, Violeta Cebrián-Robles, and Junta de Andalucía
- Subjects
Initial teacher training, Computer Networks and Communications, Digital video, Video annotation, Formación inicial de docentes, Computer Science Applications, Education, Formación del personal docente, Teacher training, Vídeo digital, In-service teacher training, Anotaciones de video, Formación de docentes en activo, Information Systems - Abstract
There is a tradition of using digital video in teacher training, where video annotations are an emerging technology that offers a more active use of videos available on the Internet and of those created in classroom practices. Its technical possibilities include creating annotations on videos individually for a portfolio or shared in class groups and professional networks. As an evolving technology it offers new possibilities for teacher training through social tagging and hyperlinks that can be analyzed. The present study is a review of the recent literature from 2018 to 2022 on the use of multimedia annotations in teacher training, following the PRISMA protocol: 244 references were retrieved from databases (WOS, Scopus and ERIC), and after applying exclusion and inclusion criteria, 25 references specific to teacher training were retained. A list of the most used platforms, the research methods used, the profile of the participants and the topics of application is provided. The usefulness of video annotations in teacher training is widely shared, especially for the analysis of, and reflection on, practices.
- Published
- 2023
46. 基于知识注释的 MOOC 视频快速检索系统研究 [Research on a fast MOOC video retrieval system based on knowledge annotation].
- Author
-
许邓艳, 卢民荣, and 王 莹
- Published
- 2020
- Full Text
- View/download PDF
47. A Distributed Automatic Video Annotation Platform.
- Author
-
Islam, Md Anwarul, Uddin, Md Azher, and Lee, Young-Koo
- Subjects
DESCRIPTOR systems, DIGITAL cameras, ANNOTATIONS, VIDEOS, VIDEO on demand, VIDEO processing - Abstract
Featured Application: This work can be applied to automatic video annotation. In the era of digital devices and the Internet, thousands of videos are taken and shared over the Internet. Similarly, CCTV cameras in the digital city produce a large amount of video data that carry essential information. To handle the increased video data and generate knowledge, there is an increasing demand for distributed video annotation. Therefore, in this paper, we propose a novel distributed video annotation platform that explores spatial and temporal information and provides higher-level semantic information. The proposed framework is divided into two parts: spatial annotation and spatiotemporal annotation. We propose a spatiotemporal descriptor, namely volume local directional ternary pattern-three orthogonal planes (VLDTP–TOP), in a distributed manner using Spark. Moreover, we implemented several state-of-the-art appearance-based and spatiotemporal feature descriptors on top of Spark. We also provide distributed video annotation services so that end-users can easily use the annotation tools and development APIs to produce new video annotation algorithms. Due to the lack of a spatiotemporal video annotation dataset that provides ground truth for both spatial and temporal information, we introduce a video annotation dataset, namely STAD, which provides such ground truth. An extensive experimental analysis was performed to validate the performance and scalability of the proposed feature descriptors, demonstrating the strength of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
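The distribution pattern described in the entry above (computing spatiotemporal descriptors on top of Spark) reduces, in its simplest form, to mapping a descriptor function over video segments in parallel. The sketch below shows only that pattern; the actual VLDTP-TOP descriptor is replaced by a placeholder function and the segment paths are hypothetical.

```python
from pyspark.sql import SparkSession

def extract_descriptor(segment_path):
    # Placeholder: a real implementation would decode the segment and compute
    # a spatiotemporal descriptor (VLDTP-TOP in the paper).
    return segment_path, [0.0] * 64

spark = SparkSession.builder.appName("distributed-video-annotation").getOrCreate()
segments = ["clip_000.mp4", "clip_001.mp4", "clip_002.mp4"]      # hypothetical video segments
descriptors = spark.sparkContext.parallelize(segments).map(extract_descriptor).collect()
spark.stop()
print(len(descriptors), "descriptors computed")
```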
48. Semi-Automatic Cloud-Native Video Annotation for Autonomous Driving.
- Author
-
Sánchez-Carballido, Sergio, Senderos, Orti, Nieto, Marcos, and Otaegui, Oihana
- Subjects
DRIVER assistance systems, HIGH performance computing, ANNOTATIONS, COMPUTER workstation clusters - Abstract
An innovative solution named Annotation as a Service (AaaS) has been specifically designed to integrate heterogeneous video annotation workflows into containers and take advantage of a cloud-native, highly scalable and reliable design based on Kubernetes workloads. Using the AaaS as a foundation, the execution of automatic video annotation workflows is addressed in the broader context of a semi-automatic video annotation business logic for ground truth generation for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). The document presents design decisions, innovative developments, and tests conducted to provide scalability to this cloud-native ecosystem for semi-automatic annotation. The solution has proven to be efficient and resilient at AD/ADAS scale, specifically in an experiment with 25 TB of input data to annotate, 4,000 concurrent annotation jobs, and 32 worker nodes forming a high performance computing cluster with a total of 512 cores and 2,048 GB of RAM. Automatic pre-annotations with the proposed strategy reduce human annotation time by up to 80%, and by 60% on average. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
49. Impact of video annotation on undergraduate nursing students' communication performance and commenting behaviour during an online peer-assessment activity.
- Author
-
Chin-Yuan Lai, Li-Ju Chen, Yung-Chin Yen, and Kai-Yin Lin
- Subjects
NURSING students, PROFESSIONALISM, UNDERGRADUATES, RATING of students, STUDENT development, ACCELEROMETERS, VIDEOS - Abstract
This article reports on the implementation of a web-based video-annotation system that supports online peer-assessment activities in a nursing communication training scenario. A quasi-experimental design was applied to investigate the effects of using video annotation on communication skills and professional attitudes. The participants were fourth-year students from two classes at a nursing college in Taiwan. One class of 50 students served as the experimental group, who used the video-annotation tool we designed to add their comments to videos. The other class of 50 students served as the control group and used YouTube to add comments. Although YouTube also provides video-annotation features, these are not often used. Two rounds of peer-assessment activities indicated that the video-annotation tool notably enhanced nursing students' communication performance. Specifically, the tool was helpful in promoting students' development of communication skills, but not their professional attitudes. The students using the video-annotation tool provided more suggestions in their peer comments than those who did not use it. Moreover, video annotation resulted in closer agreement between peer and expert ratings of students' communication. The use of a video-annotation feature could improve the effectiveness of online peer assessment and thus promote student performance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. Conclusions
- Author
-
Fisher, Robert B., Fisher, Robert B., editor, Chen-Burger, Yun-Heh, editor, Giordano, Daniela, editor, Hardman, Lynda, editor, and Lin, Fang-Pang, editor
- Published
- 2016
- Full Text
- View/download PDF