9 results for "Caitlin Mills"
Search Results
2. Design Recommendations for Using Textual Aids in Data-Science Programming Courses
- Author
- Heeryung Choi, Caitlin Mills, Christopher Brooks, Stephen Doherty, and Anjali Singh
- Published
- 2022
3. Towards Computational Identification of Visual Attention on Interactive Tabletops
- Author
- Caitlin Mills, Alberta Ansah, Andrew L. Kun, and Orit Shaer
- Subjects
Artificial neural network, Computer science, Machine learning, Interactive displays, Standard deviation, Visual attention, Eye tracking, Artificial intelligence
- Abstract
There is growing interest in the ability to detect where people are looking in real time to support learning, collaboration, and efficiency. Here we present an overview of computational methods for accurately classifying the area of visual attention on a horizontal surface that we use to represent an interactive display (i.e., a tabletop). We propose a new model that uses a neural network to estimate the area of visual attention, and we closely examine the factors that contribute to the model's accuracy. Additionally, we discuss the use of this technique to model joint visual attention during collaboration. When the model was trained on data from four participants and tested on the fifth, we achieved a mean classification accuracy of 75.75% with a standard deviation of 0.14. When varying amounts of the overall data were used to test the model, we achieved a mean classification accuracy of 98.8% with a standard deviation of 0.02.
- Published
- 2020
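The evaluation scheme this abstract describes, training on four participants and testing on the held-out fifth, is a leave-one-participant-out protocol. A minimal sketch of that protocol follows; the nearest-centroid "model" and the synthetic gaze samples are stand-ins for illustration only, not the paper's neural network or data.

```python
# Sketch of leave-one-participant-out evaluation: train on gaze features
# from four participants, test on the fifth, and repeat for each hold-out.
# Feature values, labels, and the classifier are all synthetic placeholders.
import random
from statistics import mean, stdev

random.seed(0)

def make_participant(n=40):
    """Synthetic (gaze_x, gaze_y) samples labeled by screen quadrant."""
    data = []
    for _ in range(n):
        x, y = random.random(), random.random()
        label = (x > 0.5, y > 0.5)  # one of four attention areas
        data.append(((x, y), label))
    return data

participants = [make_participant() for _ in range(5)]

def centroid_classify(train, test):
    """Nearest-centroid classifier: a minimal placeholder model."""
    sums, counts = {}, {}
    for (x, y), lab in train:
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    cents = {lab: (sx / counts[lab], sy / counts[lab])
             for lab, (sx, sy) in sums.items()}
    correct = 0
    for (x, y), lab in test:
        pred = min(cents, key=lambda c: (cents[c][0] - x) ** 2 +
                                        (cents[c][1] - y) ** 2)
        correct += (pred == lab)
    return correct / len(test)

# Leave-one-participant-out: hold out each participant exactly once.
accs = []
for i in range(5):
    train = [s for j, p in enumerate(participants) if j != i for s in p]
    accs.append(centroid_classify(train, participants[i]))

print(f"mean acc = {mean(accs):.3f}, sd = {stdev(accs):.3f}")
```

Reporting the mean and standard deviation across hold-out folds, as the abstract does, reflects how much the model's accuracy varies from one unseen participant to another.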
4. Eyes on URLs: Relating Visual Behavior to Safety Decisions
- Author
- Ross Koppel, Andrew L. Kun, Sean W. Smith, Caitlin Mills, Jim Blythe, Vijay H. Kothari, and Niveta Ramkumar
- Subjects
Parsing, Computer science, Internet privacy, Phishing, Vetting, Reading (process), Cognitive resource theory, Pupillary response, Eye tracking
- Abstract
Individual and organizational computer security rests on how people interpret and use the security information they are presented. One challenge is determining whether a given URL is safe. This paper explores the visual behaviors that users employ to gauge URL safety. We conducted a user study in which 20 participants classified URLs as safe or unsafe while wearing an eye tracker that recorded eye gaze (where they look) and pupil dilation (a proxy for cognitive effort). Among other things, our findings suggest that users have a cap on the amount of cognitive resources they are willing to expend on vetting a URL; that they tend to believe the presence of www in the domain name indicates a URL is safe; and that they do not carefully parse the URL beyond what they perceive as the domain name.
- Published
- 2020
5. Where You Are, Not What You See
- Author
- Caitlin Mills, Trish L. Varao-Sousa, and Alan Kingstone
- Subjects
Learning environment, Social learning, Display format, Mind-wandering, Student learning, Psychology
- Abstract
Online lectures are an increasingly popular tool for learning, yet instructor visibility during an online lecture, and students' environmental settings, have not been well explored. The current study addresses this gap by experimentally manipulating the online display format and the social learning setting to understand their influence on student learning and mind-wandering (MW) experiences. Results suggest that instructor visibility within an online lecture does not impact students' MW or retention performance. However, we found some evidence that students' social setting during viewing has an impact on MW (p = .05): students who watched the lecture in a classroom with others reported significantly more MW than students who watched the lecture alone. Finally, social setting also moderated the negative relationship between MW and material retention. Our results demonstrate that learning experiences during online lectures can vary based on where, and with whom, the lectures are watched.
- Published
- 2019
6. Are You Talking to Me?
- Author
- Danielle S. McNamara, Laura K. Allen, Caitlin Mills, and Cecile A. Perret
- Subjects
Computer science, Comprehension, Text mining, Corpus linguistics, Reading (process), Artificial intelligence, Language analysis, Natural language processing
- Abstract
This study examines the extent to which instructions to self-explain vs. other-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146) typed either self-explanations or other-explanations while reading a science text. The linguistic properties of these explanations were calculated using three automated text analysis tools. Machine learning classifiers in combination with the features were used to predict instruction condition (i.e., self- or other-explanation). The best machine learning model performed at rates above chance (kappa = .247; accuracy = 63%). Follow-up analyses indicated that students in the self-explanation condition generated explanations that were more cohesive and that contained words that were more related to social order (e.g., ethics). Overall, the results suggest that natural language processing techniques can be used to detect subtle differences in students' processing of complex texts.
- Published
- 2019
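The abstract above reports both raw accuracy and Cohen's kappa (kappa = .247), a metric that corrects agreement for what would be expected by chance given each class's frequency. A minimal sketch of that computation follows; the label sequences are illustrative, not the study's data.

```python
# Cohen's kappa alongside raw accuracy for a two-class prediction task
# (self- vs. other-explanation). Labels below are made up for illustration.
from collections import Counter

def kappa_and_accuracy(y_true, y_pred):
    n = len(y_true)
    # Observed agreement: fraction of exact matches.
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Expected agreement under chance, given the label marginals.
    pe = sum(true_counts[c] * pred_counts.get(c, 0)
             for c in true_counts) / n ** 2
    return (po - pe) / (1 - pe), po

y_true = ["self", "self", "other", "other", "self", "other", "self", "other"]
y_pred = ["self", "other", "other", "other", "self", "self", "self", "other"]
k, acc = kappa_and_accuracy(y_true, y_pred)
print(f"kappa = {k:.3f}, accuracy = {acc:.0%}")  # kappa = 0.500, accuracy = 75%
```

A kappa of 0, like the abstract's chance baseline, means the classifier does no better than guessing at the class base rates, which is why kappa is reported in addition to accuracy.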
7. 'Out of the Fr-Eye-ing Pan'
- Author
- Sidney K. D'Mello, Kristina Krasich, James R. Brockmole, Stephen Hutt, Nigel Bosch, and Caitlin Mills
- Subjects
Predictive validity, Computer science, School classroom, Gaze, Human–computer interaction, Mind-wandering, Eye tracking
- Abstract
Attention is critical to learning. Hence, advanced learning technologies should benefit from mechanisms to monitor and respond to learners' attentional states. We study the feasibility of integrating commercial off-the-shelf (COTS) eye trackers to monitor attention during interactions with a learning technology called GuruTutor. We tested our implementation on 135 students in a noisy, computer-enabled high school classroom and collected a median of 95% valid eye gaze data in 85% of the sessions where gaze data was successfully recorded. Machine learning methods were employed to develop automated detectors of mind wandering (MW), a phenomenon involving a shift in attention from task-related to task-unrelated thoughts that is negatively correlated with performance. Our student-independent, gaze-based models could detect MW with an accuracy (F1 of MW = 0.59) significantly greater than chance (F1 of MW = 0.24). Predicted rates of mind wandering were negatively related to posttest performance, providing evidence for the predictive validity of the detector. We discuss next steps toward developing gaze-based, attention-aware learning technologies that can be deployed in noisy, real-world environments.
- Published
- 2017
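The detector above is evaluated with a class-specific F1 score (F1 of MW = 0.59 vs. 0.24 for chance): the harmonic mean of precision and recall computed for the mind-wandering class alone, which is informative when that class is the rarer one. A minimal sketch follows; the label sequences are illustrative, not the study's data.

```python
# F1 for one target class (here "MW"): harmonic mean of precision and
# recall over that class only. Labels below are made up for illustration.
def f1_for_class(y_true, y_pred, positive="MW"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

y_true = ["MW", "MW", "on-task", "on-task", "MW", "on-task", "on-task", "MW"]
y_pred = ["MW", "on-task", "on-task", "on-task", "MW", "MW", "on-task", "MW"]
print(f"F1(MW) = {f1_for_class(y_true, y_pred):.2f}")  # F1(MW) = 0.75
```

Unlike overall accuracy, this score drops whenever the detector either misses MW episodes (low recall) or raises false alarms (low precision), which is why it is the metric compared against chance in the abstract.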
8. Put your thinking cap on
- Author
- Walid Soussou, Sidney K. D'Mello, Disha Waghray, Andrew Olney, Caitlin Mills, and Igor Fridman
- Subjects
Electroencephalography, Machine learning, Intelligent tutoring system, Mental effort, Artificial intelligence, Psychology, Cognitive load, Cognitive psychology
- Abstract
Current learning technologies have no direct way to assess students' mental effort: are they in deep thought, struggling to overcome an impasse, or are they zoned out? To address this challenge, we propose the use of EEG-based cognitive load detectors during learning. Despite its potential, EEG has not yet been utilized as a way to optimize instructional strategies. We take an initial step towards this goal by assessing how experimentally manipulated (easy and difficult) sections of an intelligent tutoring system (ITS) influenced EEG-based estimates of students' cognitive load. We found a main effect of task difficulty on EEG-based cognitive load estimates, which were also correlated with learning performance. Our results show that EEG can be a viable source of data to model learners' mental states across a 90-minute session.
- Published
- 2017
9. Investigating boredom and engagement during writing using multiple sources of information
- Author
- Laura K. Allen, Danielle S. McNamara, Caitlin Mills, Scott A. Crossley, Sidney K. D'Mello, and Matthew E. Jacovina
- Subjects
Individual difference, Boredom, Keystroke logging, Affect (psychology), Personalization, Corpus linguistics, Pedagogy, Psychology, Cognitive psychology
- Abstract
Writing training systems have been developed to provide students with instruction and deliberate practice on their writing. Although generally successful in providing accurate scores, a common criticism of these systems is their lack of personalization and adaptive instruction. In particular, these systems tend to place the strongest emphasis on delivering accurate scores and therefore tend to overlook additional indices that may contribute to students' success, such as their affective states during writing practice. This study takes an initial step toward addressing this gap by building a predictive model of students' affect using information that can potentially be collected by computer systems. We used individual difference measures, text indices, and keystroke analyses to predict engagement and boredom in 132 writing sessions. The results suggest that these three categories of indices were successful in modeling students' affective states during writing. Taken together, indices related to students' academic abilities, text properties, and keystroke logs were able to classify high and low engagement and boredom in writing sessions with accuracies between 76.5% and 77.3%. These results suggest that information readily available in writing training systems can inform affect detectors and ultimately improve student models within intelligent tutoring systems.
- Published
- 2016
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library