19 results for "Scott Mishler"
Search Results
2. Effectiveness of Lateral Auditory Collision Warnings: Should Warnings Be Toward Danger or Toward Safety?
- Author
- Jing Chen 0005, Edin Sabic, Scott Mishler, Cody Parker, and Motonori Yamaguchi
- Published
- 2022
- Full Text
- View/download PDF
3. Automation Error Type and Methods of Communicating Automation Reliability Affect Trust and Performance: An Empirical Study in the Cyber Domain.
- Author
- Jing Chen 0005, Scott Mishler, and Bin Hu 0014
- Published
- 2021
- Full Text
- View/download PDF
4. Recognition of Car Warnings: An Analysis of Various Alert Types.
- Author
- Edin Sabic, Scott Mishler, Jing Chen 0005, and Bin Hu 0014
- Published
- 2017
- Full Text
- View/download PDF
5. The description-experience gap in the effect of warning reliability on user trust and performance in a phishing-detection context.
- Author
- Jing Chen 0005, Scott Mishler, Bin Hu 0014, Ninghui Li, and Robert W. Proctor
- Published
- 2018
- Full Text
- View/download PDF
6. Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign
- Author
- Katherine R. Garcia, Scott Mishler, Yanru Xiao, Cong Wang, Bin Hu, Jeremiah D. Still, and Jing Chen
- Subjects
- Human Factors and Ergonomics, Engineering (miscellaneous), Applied Psychology, Computer Science Applications
- Abstract
Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function performed by AI is computer vision for detecting roadway signs. The AI, though, is not always reliable and sometimes requires the human’s intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human’s perception of the AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised, and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.
- Published
- 2022
- Full Text
- View/download PDF
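The abstract above (entry 6; see also entries 9, 12, and 13) concerns drivers' perception of AI when a stop sign has been maliciously manipulated. The papers do not specify the attack used; purely as a hedged illustration of the underlying idea, the toy sketch below shows how a small, deliberately crafted perturbation can flip the decision of a simple image classifier (a gradient-sign-style attack on a linear model). The classifier, weights, and "image" are all assumptions made up for the sketch, not material from the studies.

```python
# Illustrative only: a tiny gradient-sign ("FGSM-style") perturbation against a
# linear classifier, showing why a visually minor change to a sign image can
# flip an AI model's decision. All values here are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "stop sign" image: 16x16 grayscale, flattened to a 256-element vector.
x = rng.uniform(0.4, 0.6, size=256)

# Hypothetical linear classifier: score > 0 means "stop sign".
w = rng.normal(0, 1, size=256)
b = -float(w @ x) + 1.0          # chosen so the clean image scores exactly +1.0

def predict(img):
    return "stop sign" if float(w @ img) + b > 0 else "not a stop sign"

# For a linear model, the gradient of the score with respect to the input is w.
# Nudge every pixel a small step against the "stop sign" score.
epsilon = 0.05                    # small per-pixel change
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("clean image:     ", predict(x))       # -> stop sign
print("perturbed image: ", predict(x_adv))   # -> not a stop sign
print("max pixel change:", float(np.abs(x_adv - x).max()))
```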
7. Automation Error Type and Methods of Communicating Automation Reliability Affect Trust and Performance: An Empirical Study in the Cyber Domain
- Author
- Scott Mishler, Jing Chen, and Bin Hu
- Subjects
- Computer Networks and Communications, Computer science, Calibration (statistics), Reliability (computer networking), Human Factors and Ergonomics, Computer security, Electronic mail, Empirical research, Artificial Intelligence, Testbed, Automation, Phishing, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Signal Processing
- Abstract
Antiphishing aid systems, among other automated systems, are not perfectly reliable. Automated systems can make errors, thereby resulting in false alarms or misses. An automated system's capabilities need to be communicated to the users to maintain proper user trust. System capabilities can be learned through an explicit description or from experience. Using a phishing-detection system as a testbed in this article, we systematically varied automation error type and the method of communicating system reliability in a factorial design and measured their effects on human performance and trust in the automation. Participants were asked to classify emails as legitimate or phishing with assistance from the phishing-detection system. The results from 510 participants suggest that learning through experience with feedback improved trust calibration for both objective and subjective trust measures in most conditions. Moreover, false alarms lowered trust more than misses for both unreliable and reliable systems, and false alarms turned out to be beneficial for proper trust calibration when using unreliable systems. Design implications of the results include using feedback whenever possible and choosing false alarms over misses for unreliable systems.
- Published
- 2021
- Full Text
- View/download PDF
8. Human-Automation Interaction for Semi-Autonomous Driving: Risk Communication and Trust
- Author
- Jing Chen, Scott Mishler, Shelby Long, Sarah Yahoodik, Katherine Garcia, and Yusuke Yamani
- Published
- 2022
- Full Text
- View/download PDF
9. Human Perception of AI Capabilities in Identifying Malicious Roadway Signs
- Author
- Katherine R. Garcia, Yanru Xiao, Scott Mishler, Cong Wang, Bin Hu, and Jing Chen
- Published
- 2022
- Full Text
- View/download PDF
10. Effect of automation failure type on trust development in driving automation systems
- Author
- Scott Mishler and Jing Chen
- Subjects
- Automobile Driving, Automation, Humans, Physical Therapy, Sports Therapy and Rehabilitation, Human Factors and Ergonomics, Safety, Risk, Reliability and Quality, Trust, Engineering (miscellaneous)
- Abstract
The performance of a driving automation system (DAS) can influence human drivers' trust in the system. This driving-simulator study examined how different types of DAS failures affected drivers' trust. The automation-failure type (no-failure, takeover-request, system-malfunction) was manipulated among 122 participants, with the failure occurring during a critical hazard event. The dependent measures included participants' trust ratings after each of seven drives and their takeover performance following the hazard. Results showed that trust improved before any automation failure occurred, demonstrating proper trust calibration toward the errorless system. In the takeover-request and system-malfunction conditions, trust decreased similarly in response to the automation failures, although the takeover-request condition yielded better takeover performance. For the drives after the automation failure, trust was gradually repaired but did not recover to its original level. This study demonstrated how trust develops and responds to DAS failures, informing future research on trust-repair interventions in the design of DASs.
- Published
- 2022
11. Countering Driver Vigilance Decrement for Partially Automated Vehicles
- Author
- Scott Mishler and Jing Chen
- Published
- 2022
- Full Text
- View/download PDF
12. Identifying Perturbed Roadway Signs: Perception of AI Capabilities
- Author
- Katherine Garcia, Yanru Xiao, Scott Mishler, Cong Wang, Bin Hu, and Jing Chen
- Published
- 2022
- Full Text
- View/download PDF
13. Predicting a Malicious Stop Sign: Knowledge, Exposure, Trust in AI
- Author
- Jeremiah D. Still, Katherine R. Garcia, Jing Chen, Bin Hu, Cong Wang, Scott Mishler, and Erin Fuller-Jakaitis
- Subjects
- Computer science, Stop sign, Computer security
- Published
- 2021
- Full Text
- View/download PDF
14. Effectiveness of Lateral Auditory Collision Warnings: Should Warnings Be Toward Danger or Toward Safety?
- Author
- Cody Parker, Motonori Yamaguchi, Edin Sabic, Scott Mishler, and Jing Chen
- Subjects
- Automobile Driving, Human Factors and Ergonomics, Behavioral Neuroscience, Reaction Time, Humans, Attention, Applied Psychology, Pedestrians, Accidents, Traffic, Collision, Stimulus–response compatibility
- Abstract
Objective: The present study investigated the design of spatially oriented auditory collision-warning signals to facilitate drivers’ responses to potential collisions.
Background: Prior studies on collision warnings have mostly focused on manual driving. It is necessary to examine the design of collision warnings for safe takeover actions in semi-autonomous driving.
Method: In a video-based semi-autonomous driving scenario, participants responded to pedestrians walking across the road, with a warning tone presented in either the avoidance direction or the collision direction. The time interval between the warning tone and the potential collision was also manipulated. In Experiment 1, pedestrians always started walking from one side of the road to the other side. In Experiment 2, pedestrians appeared in the middle of the road and walked toward either side of the road.
Results: In Experiment 1, drivers reacted to the pedestrian faster with collision-direction warnings than with avoidance-direction warnings. In Experiment 2, the difference between the two warning directions became nonsignificant. In both experiments, shorter time intervals to potential collisions resulted in faster reactions but did not influence the effect of warning direction.
Conclusion: The collision-direction warnings were advantageous over the avoidance-direction warnings only when they occurred at the same lateral location as the pedestrian, indicating that this advantage was due to the capture of attention by the auditory warning signals.
Application: The present results indicate that drivers would benefit most when warnings occur at the side of potential collision objects rather than the direction of a desirable action during semi-autonomous driving.
- Published
- 2020
15. Effect of Response Method on Driver Responses to Auditory Warnings in Simulated Semi-autonomous Driving
- Author
- Jing Chen and Scott Mishler
- Subjects
- Response method, Warning system, Computer science, Direct response, Automation, Simulation
- Abstract
We examined how drivers’ responses to automation warnings could improve driver performance by testing the traditional direct-response method against a new indirect-response method. A direct response, for which drivers manually take over control of the car after hearing the warning and seeing the scenario, was compared to an indirect response, for which drivers press a “yes” or “no” button to assist the automation in making the correct choice. Results showed no reaction time (RT) difference between the response methods, but accuracy was better for the direct response. Subtracting the action-execution time from RT showed that the indirect response took longer to process mentally, explaining why the indirect method was not faster and pointing to a potential source of increased errors. Button presses in the indirect method could eventually be faster, but future research is needed on better ways to convey the warning to the user and to improve the human-machine interface.
- Published
- 2018
- Full Text
- View/download PDF
16. The Rise, Fall, and Repair of Trust for Automated Driving Systems
- Author
- Jing Chen and Scott Mishler
- Subjects
- Computer science
- Published
- 2020
- Full Text
- View/download PDF
17. Flowers and spiders in spatial stimulus-response compatibility: does affective valence influence selection of task-sets or selection of responses?
- Author
- Scott Mishler, Robert W. Proctor, Motonori Yamaguchi, and Jing Chen
- Subjects
- Adult, Male, Cognitive systems, Adolescent, Experimental and Cognitive Psychology, Flowers, Stimulus (physiology), Young Adult, Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Reaction Time, Animals, Humans, Valence (psychology), Communication, Simon effect, Spiders, Affective valence, Affect, Female, Psychology, Stimulus–response compatibility, Photic Stimulation, Psychomotor Performance, Cognitive psychology
- Abstract
The present study examined the effect of stimulus valence on two levels of selection in the cognitive system, selection of a task-set and selection of a response. In the first experiment, participants performed a spatial compatibility task (pressing left and right keys according to the locations of stimuli) in which stimulus-response mappings were determined by stimulus valence. There was a standard spatial stimulus-response compatibility (SRC) effect for positive stimuli (flowers) and a reversed SRC effect for negative stimuli (spiders), but the same data could be interpreted as showing faster responses when positive and negative stimuli were assigned to compatible and incompatible mappings, respectively, than when the assignment was opposite. Experiment 2 disentangled these interpretations, showing that valence did not influence a spatial SRC effect (Simon effect) when task-set retrieval was unnecessary. Experiments 3 and 4 replaced keypress responses with joystick deflections that afforded approach/avoidance action coding. Stimulus valence modulated the Simon effect (but did not reverse it) when the valence was task-relevant (Experiment 3) as well as when it was task-irrelevant (Experiment 4). Therefore, stimulus valence influences task-set selection and response selection, but the influence on the latter is limited to conditions where responses afford approach/avoidance action coding.
- Published
- 2017
18. Conveying Automation Reliability and Automation Error Type: An Empirical Study in the Cyber Domain
- Author
- Scott Mishler, Jing Chen, and Bin Hu
- Subjects
- Computer science, Computer security, Phishing, Automation, Information sensitivity, Trustworthiness, Empirical research, Reliability (statistics)
- Abstract
Background
Emails have become an integral part of our daily life and work. Phishing emails are often disguised as trustworthy ones and attempt to obtain sensitive information for malicious reasons (Egelman, Cranor, & Hong, 2008). Anti-phishing tools have been designed to help users detect phishing emails or websites (Egelman et al., 2008; Yang, Xiong, Chen, Proctor, & Li, 2017). However, like any other type of automation aid, these tools are not perfect. An anti-phishing system can make errors, such as labeling a legitimate email as phishing (i.e., a false alarm) or labeling a phishing email as legitimate (i.e., a miss). Human trust in automation has been widely studied as it affects how the human operator interacts with the automation system, which consequently influences the overall system performance (Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003; Lee & Moray, 1992; Muir, 1994; Sheridan & Parasuraman, 2006). When interacting with an automation system, the human operator should calibrate his or her trust level to trust a system that is capable but distrust a system that is incapable (i.e., trust calibration; Lee & Moray, 1994; Lee & See, 2004; McGuirl & Sarter, 2006). Among the various system capabilities, automation reliability is one of the most important factors that affect trust, and it is widely accepted that higher reliability levels lead to higher trust levels (Desai et al., 2013; Hoff & Bashir, 2015). How well these capabilities are conveyed to the operator is essential (Lee & See, 2004). There are two general ways of conveying the system capabilities: through an explicit description of the capabilities (i.e., description) or through experiencing the system (i.e., experience). These two ways of conveying information have been studied widely in the human decision-making literature (Wulff, Mergenthaler-Canseco, & Hertwig, 2018). Yet, there has been no systematic investigation of these different methods of conveying information in the applied area of human-automation interaction (but see Chen, Mishler, Hu, Li, & Proctor, in press; Mishler et al., 2017). Furthermore, trust in and reliance on automation are affected not only by the reliability of the automation but also by the error type, false alarms versus misses (Chancey, Bliss, Yamani, & Handley, 2017; Dixon & Wickens, 2006). False alarms and misses affect human performance in qualitatively different ways, with more serious damage being caused by false-alarm-prone automation than by miss-prone automation (Dixon, Wickens, & Chang, 2004). In addition, false-alarm-prone automation reduces compliance (i.e., the operator's reaction when the automation presents a warning), and miss-prone automation reduces reliance (i.e., the operator's inaction when the automation remains silent; Chancey et al., 2017).
Current Study
The goal of the current study was to examine how the methods of conveying system reliability and automation error type affect human decision making and trust in automation. The automation system was a phishing-detection system, which provided recommendations to users as to whether an email was legitimate or phishing. The automation reliability was defined as the percentage of correct recommendations (60% vs. 90%). For each reliability level, there was a false-alarm condition, in which all automation errors were false alarms, and a miss condition, in which all errors were misses. The system reliability was conveyed through description (with an exact percentage described to the user) or experience (with immediate feedback to help the user learn; Barron & Erev, 2003). A total of 510 participants were recruited and completed the experiment online through Amazon Mechanical Turk. The experimental task consisted of classifying 20 emails as phishing or legitimate, with the phishing-detection system providing recommendations. At the end of the experiment, participants rated their trust in this automated aid system. The measures included a performance measure (the decision accuracy of the participants), as well as two trust measures (participants' agreement rate with the phishing-detection system, and their self-reported trust in the system). Our results showed that higher system reliability and feedback increased accuracy significantly, but description or error type alone did not affect accuracy. In terms of the trust measures, false alarms led to lower agreement rates than did misses. With a less reliable system, though, the misses caused a problem of inappropriately high agreement rates; this problem was reduced when feedback was provided for the unreliable system, indicating a trust-calibration role of feedback. Self-reported trust showed similar result patterns to agreement rates. Performance was improved with higher system reliability, feedback, and explicit description. Design implications of the results include that (1) both feedback and a description of the system reliability should be presented in the interface of an automation aid whenever possible, provided that the aid is reliable, and (2) for systems that are unreliable, false alarms are more desirable than misses, if one has to choose between the two.
- Published
- 2018
- Full Text
- View/download PDF
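Entries 7 and 18 above define the aid's reliability as the percentage of correct recommendations (60% vs. 90%) and confine its errors to either false alarms or misses. As a hedged illustration of that design only (not the authors' materials; the 50/50 phishing base rate and all helper names are assumptions), the sketch below generates aid recommendations for a block of 20 emails in each condition and reports what a user's accuracy would be if they always complied with the aid.

```python
# Illustrative sketch only: simulate a phishing-detection aid whose reliability
# is the percentage of correct recommendations, with all of its errors being
# either false alarms (legitimate email flagged as phishing) or misses
# (phishing email labeled legitimate). Base rate and names are assumed.
import random

def aid_recommendations(truth, reliability, error_type, rng):
    """Return one recommendation per email; errors are confined to error_type."""
    n_errors = round(len(truth) * (1 - reliability))
    # Errors can only occur on emails of the relevant class.
    error_class = "legitimate" if error_type == "false_alarm" else "phishing"
    candidates = [i for i, t in enumerate(truth) if t == error_class]
    error_idx = set(rng.sample(candidates, n_errors))
    recs = []
    for i, t in enumerate(truth):
        if i in error_idx:
            recs.append("phishing" if t == "legitimate" else "legitimate")
        else:
            recs.append(t)
    return recs

rng = random.Random(42)
truth = ["phishing"] * 10 + ["legitimate"] * 10   # assumed 50/50 mix of 20 emails
rng.shuffle(truth)

for reliability in (0.60, 0.90):
    for error_type in ("false_alarm", "miss"):
        recs = aid_recommendations(truth, reliability, error_type, rng)
        # Accuracy of a hypothetical user who always complies with the aid.
        accuracy = sum(r == t for r, t in zip(recs, truth)) / len(truth)
        fas = sum(r == "phishing" and t == "legitimate" for r, t in zip(recs, truth))
        misses = sum(r == "legitimate" and t == "phishing" for r, t in zip(recs, truth))
        print(f"reliability={reliability:.0%}  condition={error_type:<11} "
              f"false alarms={fas}  misses={misses}  "
              f"accuracy if always following aid = {accuracy:.0%}")
```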
19. Description-Experience Gap: The Role of Feedback and Description in Human Trust in Automation
- Author
- Ninghui Li, Robert W. Proctor, Jing Chen, Edin Sabic, Bin Hu, and Scott Mishler
- Subjects
- Knowledge management, Warning system, Computer science, Applied psychology, Automation, Phishing, Description-experience gap
- Abstract
Human trust in automation is widely studied because the level of trust influences the effectiveness of the system (Muir, 1994). It is vital to examine the role that people play and how they interact with the system (Hoff & Bashir, 2015). In the decision-making literature, an interesting phenomenon is the description-experience gap, with a typical finding that experience-based choices underweight small probabilities, whereas description-based choices overweight small probabilities (Hertwig, Barron, Weber, & Erev, 2004; Hertwig & Erev, 2009; Jessup, Bishara, & Busemeyer, 2008). We applied this description-experience gap concept to the study of human-automation interaction and had Amazon Mechanical Turk workers evaluate emails as legitimate or phishing. An anti-phishing warning system provided recommendations to the user with a reliability level of 60%, 70%, 80%, or 90%. Additionally, the way in which reliability information was conveyed was manipulated with two factors: (1) whether the reliability level of the system was stated explicitly (i.e., description); (2) whether feedback was provided after the user made each decision (i.e., experience). Our results showed that as the reliability of the warning system increased, so did decision accuracy, agreement rate, self-reported trust, and perceived system reliability, consistent with prior research (Lee & See, 2004; Rice, 2009; Sanchez, Fisk, & Rogers, 2004). The increase in performance and trust with the increase in reliability indicates that participants were paying attention to and using the automation to make decisions. Feedback was also highly influential on performance and on establishing trust, but description only affected self-reported trust. The effect of feedback strengthened at the higher levels of reliability, showing that individuals benefited the most from feedback when the automated warning system was more reliable. Additionally, unlike prior studies that manipulated description and experience/feedback separately (Hertwig, 2012), we varied description and feedback conditions systematically and discovered an interaction between the two factors. Our results show that feedback is more helpful in situations that do not provide an explicit description of the system reliability than in those that do. An implication of the current results for system design is that feedback should be provided whenever possible. This recommendation is based on the finding that providing feedback benefited both users' performance and their trust in the system, and on the assumption that the systems in use are mostly of high reliability (e.g., > .80). A note for researchers in the field of human trust in automation is that, if only subjective measures of trust are used in a study, providing a description of the system reliability will likely inflate the trust measures.
- Published
- 2017
- Full Text
- View/download PDF