16 results for "Garrett G. Sadler"
Search Results
2. Trust in Sensing Technologies and Human Wingmen: Analogies for Human-Machine Teams.
- Author
- Joseph B. Lyons, Nhut Tan Ho, Lauren C. Hoffmann, Garrett G. Sadler, Anna Lee Van Abel, and Mark Wilkins
- Published
- 2018
- Full Text
- View/download PDF
3. A Cognitive Walkthrough of Multiple Drone Delivery Operations
- Author
- Casey L Smith, Garrett G Sadler, Terence L Tyson, Summer L Brandt, R Conrad Rorie, Jillian N Keeler, Kevin J Monk, Igor Dolgov, and Jesus Viramontes
- Subjects
- Aeronautics (General)
- Abstract
Advances in early twenty-first-century aviation and transportation technologies create opportunities for new aerial applications, and the integration of unmanned aircraft systems (UAS) into the National Airspace System (NAS) has applications across a wide range of operations. In such operations, remote operators have learned to manage several UAS simultaneously in a variety of operational environments. The present work details a component of an ongoing body of research into multi-UAS operations. Since early 2020, NASA has collaborated with Uber Technologies to design and develop concepts of operations, roles and responsibilities, and ground control station (GCS) concepts to enable food delivery operations via multiple small UAS (sUAS). A cognitive walkthrough was chosen as the method for data collection. This allowed information to be gathered from UAS subject matter experts (SMEs) that could further mature designs for future human-in-the-loop (HITL) simulations; it also allowed information to be collected remotely under the stringent restrictions of the COVID-19 pandemic. Consequently, the cognitive walkthrough used remote data collection protocols mediated through presentation and teleconferencing software. Scenarios, complete with airspace, contingencies, and remedial actions, were designed for presentation to the SMEs. Information was collected using a combination of rating scales and open-ended questions. The SMEs' responses revealed expected hazards, workloads, and information concerns inherent in the contingency scenarios. SMEs also provided insight into the design of GCS tools and displays as well as the duties and relationships of human operators (i.e., monitors) and automation (i.e., informers and flight managers). Implications of these findings are discussed.
- Published
- 2021
4. Exploring Trust Barriers to Future Autonomy: A Qualitative Look.
- Author
- Joseph B. Lyons, Nhut Tan Ho, Anna Lee Van Abel, Lauren C. Hoffmann, W. Eric Fergueson, Garrett G. Sadler, Michelle A. Grigsby, and Amy C. Burns
- Published
- 2017
- Full Text
- View/download PDF
5. UAS Integration in the NAS Flight Test 6: Full Mission Results
- Author
- Michael J Vincent, R Conrad Rorie, Kevin J Monk, Jillian N Keeler, Casey L Smith, and Garrett G Sadler
- Subjects
- Air Transportation And Safety
- Abstract
Recent standards development efforts for the integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS), such as those in RTCA Inc. Special Committee 228 (SC-228), have focused on relatively large UAS transitioning to and from Class A airspace. To expand the range of vehicle classes that can access the NAS, the NASA UAS Integration in the NAS project has investigated Low Size, Weight, and Power (Low SWaP) technologies that would allow smaller UAS to detect-and-avoid (DAA) traffic. Through batch and human-in-the-loop (HITL) simulation studies, the UAS Integration in the NAS DAA subproject has identified candidate performance standards that would help enable extended Low SWaP UAS operations below 10,000 feet. These candidate performance standards include minimum field of regard (FOR) values for Low SWaP air surveillance sensors as well as a DAA well-clear (DWC) definition that can be applied to non-cooperative traffic to reduce the required maneuver initiation range (a schematic sketch of such a well-clear check follows this entry). To test the assumptions of the project's simulation studies and validate the candidate performance standards, a live flight research event was executed at NASA Armstrong Flight Research Center. The UAS Integration in the NAS Project Flight Test 6 Full Mission sought to characterize UAS pilot responses to traffic conflicts using a representative Low SWaP DAA system in an operational NAS environment. To achieve this, live, virtual, and constructive distributed environment (LVC-DE) elements were combined to simulate a sector of Oakland Center airspace and induce encounters with a live, manned aircraft. A Navmar Applied Sciences Tigershark XP served as the UAS ownship and was integrated into the test architecture so that it could be controlled from a Vigilant Spirit Control Station (VSCS) research ground control station. Qualified UAS pilots were recruited to act as subject pilots under test (SPUT), controlling the Tigershark XP in a simulated mission while coordinating with a participating air traffic controller in simulated airspace. Intruder speed, intruder equipage, and encounter geometry were varied across six scripted encounters per SPUT. Metrics collected included pilot reaction time from the onset of a DAA alert, ATC coordination rate, probability and severity of losses of DAA well clear, and subjective ratings of system acceptability. The implications of these results for the development of standards for Low SWaP DAA systems are discussed.
- Published
- 2021
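The DAA well-clear (DWC) definition referenced in the abstract above is, at its core, a pairwise separation criterion. As a minimal illustration of what evaluating such a criterion involves, the Python sketch below tests whether an ownship/intruder pair is simultaneously inside a horizontal and a vertical threshold; the threshold values, state representation, and function names are hypothetical placeholders, not the candidate standards validated in Flight Test 6.

```python
# Minimal sketch of a DAA well-clear (DWC) check for a single
# ownship/intruder pair. All thresholds are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class AircraftState:
    x_ft: float    # east position, feet (flat-earth approximation)
    y_ft: float    # north position, feet
    alt_ft: float  # altitude, feet


HMD_FT = 2200.0      # hypothetical horizontal miss distance threshold
VERT_SEP_FT = 450.0  # hypothetical vertical separation threshold


def loss_of_dwc(ownship: AircraftState, intruder: AircraftState) -> bool:
    """Return True when the pair is inside BOTH thresholds at once,
    i.e., a loss of DAA well clear under these placeholder values."""
    horiz_ft = ((ownship.x_ft - intruder.x_ft) ** 2 +
                (ownship.y_ft - intruder.y_ft) ** 2) ** 0.5
    vert_ft = abs(ownship.alt_ft - intruder.alt_ft)
    return horiz_ft < HMD_FT and vert_ft < VERT_SEP_FT


# Example: roughly 1800 ft lateral and 200 ft vertical separation.
print(loss_of_dwc(AircraftState(0.0, 0.0, 5000.0),
                  AircraftState(1500.0, 1000.0, 5200.0)))  # True
```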
6. An Evaluation of UAS Pilot Workload and Acceptability Ratings with Four Simulated Radar Declaration Ranges
- Author
- Jillian N Keeler, R Conrad Rorie, Kevin J Monk, Garrett G Sadler, and Casey L Smith
- Subjects
- Aeronautics (General)
- Abstract
Currently, minimum operational performance standards (MOPS) are being developed for a broader range of unmanned aircraft system (UAS) platforms, including smaller UAS that will feature onboard sensors that are low in size, weight, and power, otherwise known as low SWaP. The low SWaP sensors used to detect non-cooperative traffic will have limited declaration ranges compared to those designed for medium-to-large UAS. A human-in-the-loop (HITL) study was conducted examining four possible radar declaration ranges (i.e., 1.5 NM, 2 NM, 2.5 NM, and 3 NM) for a potential low SWaP sensor paired with a detect-and-avoid (DAA) system during various non-cooperative encounters in Oakland Center airspace. Participants had lower workload, particularly workload associated with temporal demand and effort, in scenarios that featured larger declaration ranges (a schematic tabulation of this kind of comparison follows this entry). Furthermore, participants reported better ability to remain DAA well clear in the larger declaration range conditions, specifically the 2.5 NM and 3 NM conditions.
- Published
- 2020
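Since the study above compares subjective workload across four declaration-range conditions, the sketch below shows one schematic way such per-condition ratings might be summarized. Every number is a fabricated placeholder included only to make the snippet runnable; none of it is data from the study.

```python
# Schematic per-condition summary of subjective workload ratings.
# The ratings below are fabricated placeholders, not study data.
from statistics import mean

ratings_by_range = {
    "1.5 NM": [6, 5, 6, 5],  # hypothetical per-trial ratings (1-7 scale)
    "2.0 NM": [5, 5, 4, 5],
    "2.5 NM": [4, 4, 3, 4],
    "3.0 NM": [3, 4, 3, 3],
}

for condition, scores in ratings_by_range.items():
    print(f"{condition}: mean workload = {mean(scores):.2f}")
```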
7. An Evaluation of UAS Pilot Workload and Acceptability Ratings with Four Simulated Radar Declaration Ranges
- Author
- Jillian N. Keeler, Robert C. Rorie, Kevin J. Monk, Garrett G. Sadler, and Casey Smith
- Subjects
- Aeronautics (General)
- Abstract
Currently, minimum operational performance standards (MOPS) are being developed for a broader range of UAS types, including smaller UAS that will feature onboard sensors that are low in size, weight, and power (Low SWaP). These sensors, used to detect non-cooperative aircraft, will have limited declaration ranges compared to those typically found on medium-to-large UAS. A human-in-the-loop (HITL) study was conducted examining four possible radar declaration ranges (i.e., 1.5 NM, 2 NM, 2.5 NM, and 3 NM) for a potential Low SWaP sensor with a DAA system. Participants had lower workload, particularly workload associated with temporal demand and effort, in the larger declaration range conditions. Furthermore, participants reported better ability to remain DAA well clear within the larger declaration range conditions, such as the 2.5 NM and 3 NM conditions.
- Published
- 2020
8. UAS Pilot Assessments of Display and Alerting for the Airborne Collision Avoidance System XU
- Author
- Casey L. Smith, Robert C. Rorie, Kevin J. Monk, Jillian N. Keeler, and Garrett G. Sadler
- Subjects
- Air Transportation And Safety, Aircraft Design, Testing And Performance
- Abstract
Unmanned aircraft systems (UAS) must comply with specific standards to operate in the National Airspace System (NAS). Among the requirements are detect-and-avoid (DAA) capabilities, which include display, alerting, and guidance specifications. Previous studies have queried pilots for subjective feedback on these display elements in earlier systems; the present study sought pilot evaluations of an initial iteration of the unmanned variant of the next-generation Airborne Collision Avoidance System (ACAS Xu). Sixteen participants piloted simulated aircraft with both standalone and integrated DAA displays. Their opinions were gathered using post-block and post-simulation questionnaires as well as guided debriefs. The data showed pilots had better understanding of, and comfort with, the system when using an integrated display. Pilots also rated ACAS Xu alerting and guidance as generally acceptable and effective. Implications for further development of ACAS Xu and DAA displays are discussed.
- Published
- 2020
9. Streamlining Tactical Operator Handoffs During Multi-Vehicle Applications
- Author
- Meghan Chandarana, Garrett G. Sadler, Jillian N. Keeler, Casey L. Smith, R. Conrad Rorie, Dominic G. Wong, Scott Scheff, Cindy Pham, and Igor Dolgov
- Subjects
- Control and Systems Engineering
- Published
- 2022
- Full Text
- View/download PDF
10. Assisting the Improvement of a Military Safety System: An Application of Rapid Assessment Procedures to the Automatic Ground Collision Avoidance System
- Author
- Nhut Ho, Garrett G. Sadler, Joseph B. Lyons, Mark Wilkins, Kevin Zemlicka, and Lauren C. Hoffmann
- Subjects
- Arts and Humanities (miscellaneous), Computer science, Anthropology, Human machine interaction, Systems engineering, General Social Sciences, Collision avoidance system, Rapid assessment
- Abstract
This article describes an iterative application of Rapid Assessment Procedures (RAP) to study human-machine interaction issues with a recently implemented, highly automatic system on the F-16 fighter aircraft known as the Automatic Ground Collision Avoidance System (Auto-GCAS). Auto-GCAS is a sophisticated technology that can detect an imminent ground collision threat and automatically execute a last-moment maneuver to avoid a crash. We employed RAP methods at multiple United States Air Force and Air National Guard sites in the United States and abroad. Over a three-year period, we conducted semi-structured interviews with 402 F-16 pilots experienced with the system. The method we employed, termed here iterative RAP, is reviewed in detail and evaluated against Utarini, Winkvist, and Pelto's (2001) eleven criteria for quality RAP studies. Outcomes of this study include helping to correct system misunderstandings and anomalies, improving information flow about Auto-GCAS, and contributing to the perception of military safety systems as valuable. The article concludes by (1) discussing the positive effects that iterative RAP can have for the defense community and (2) arguing for the method's utility in studying complex bureaucracies and multi-sited research, particularly following the introduction of new technology or policy.
- Published
- 2019
- Full Text
- View/download PDF
11. Comparing Trust in Auto-GCAS Between Experienced and Novice Air Force Pilots
- Author
- Michelle A. Grigsby, Garrett G. Sadler, Mark Wilkins, Lauren C. Hoffmann, William E. Fergueson, Joseph B. Lyons, Nhut Ho, and Anna Lee Van Abel
- Subjects
- General Engineering, Human Factors and Ergonomics, Aeronautics, Spatial disorientation, Perception, Controlled flight into terrain, Collision avoidance system, Psychology
- Abstract
We examined F-16 pilots' trust of the Automatic Ground Collision Avoidance System (Auto-GCAS), an automated system fielded on the F-16 to reduce the occurrence of controlled flight into terrain. We looked at the impact of experience (i.e., number of flight hours) as a predictor of trust perceptions and complacency potential among pilots. We expected that novice pilots would report higher trust and greater potential for complacency in relation to Auto-GCAS, and this was shown to be partly true: novice pilots reported trust perceptions equivalent to those of experienced pilots, but they also reported greater complacency potential.
- Published
- 2017
- Full Text
- View/download PDF
12. A Longitudinal Field Study of Auto-GCAS Acceptance and Trust: First-Year Results and Implications
- Author
- William E. Fergueson, Casey Richardson, Artemio Cacanindin, Joseph B. Lyons, Garrett G. Sadler, Kevin Zemlicka, Lauren C. Hoffmann, Mark Wilkins, Nhut Ho, and Samantha D. Cals
- Subjects
- Computer science, Human Factors and Ergonomics, Transparency (behavior), Computer Science Applications, Collision avoidance system, Engineering (miscellaneous), Applied Psychology
- Abstract
In this paper we describe results from the first year of a field study examining U.S. Air Force (USAF) F-16 pilots' trust of the Automatic Ground Collision Avoidance System (Auto-GCAS). Using semi-structured interviews focusing on opinion development and evolution, system transparency and understanding, the pilot-vehicle interface, stories and reputation, usability, and the impact on behavior, we identified factors positively and negatively influencing trust, using data analysis methods based in grounded theory. Overall, Auto-GCAS is an effective life- and aircraft-saving technology, is generally well received, and is trusted appropriately, with trust evolving based on factors including a healthy skepticism of the system, attribution of system faults to hardware problems, and reliable performance (e.g., lives saved). Unanticipated findings included pilots reporting that system activations did not negatively affect their reputations, and that an interface anticipation cue had the potential to change operational flight behavior. We discuss emergent research avenues in the areas of transparency and culture, and the value of conducting trust research with operators of real-world systems that have high levels of autonomy.
- Published
- 2017
- Full Text
- View/download PDF
13. Application of human-autonomy teaming to an advanced ground station for reduced crew operations
- Author
- Bao Nguyen, Kenny Wakeland, Garrett G. Sadler, Walter W. Johnson, Nathan Wilson, Karanvir Panesar, Joel Lachter, Nhut Ho, and Summer L. Brandt
- Subjects
- Teamwork, Engineering, Operations research, Situation awareness, Aviation, Ground control station, Automation, Cockpit, Systems engineering
- Abstract
Within human factors there is burgeoning interest in the "human-autonomy teaming" (HAT) concept as a way to address the challenges of interacting with complex, increasingly autonomous systems. The HAT concept comes out of an aspiration to interact with increasingly autonomous systems as a team member, rather than simply to use automation as a tool. The authors, and others, have proposed core tenets for HAT that include bi-directional communication, automation and system transparency, and advanced coordination between human and automated teammates via predefined, dynamic task sequences known as "plays." It is believed that, with proper implementation, HAT should foster appropriate teamwork, thus increasing trust in and reliance on the system, which in turn will reduce workload, increase situation awareness, and improve performance. To this end, HAT has been demonstrated and/or studied in multiple applications including search and rescue operations, healthcare and medicine, autonomous vehicles, photography, and aviation. The current paper presents one such effort to apply HAT. It details the design of a HAT agent, developed by Human Automation Teaming Solutions, Inc., to facilitate teamwork between the automation and the human operator of an advanced ground dispatch station. This dispatch station was developed to support a NASA project investigating a concept called Reduced Crew Operations (RCO); consequently, we have named the agent R-HATS. Part of the RCO concept involves a ground operator providing enhanced support to a large number of aircraft, each with a single pilot on the flight deck. When assisted by R-HATS, operators can monitor and support or manage a large number of aircraft and use plays to respond in real time to complicated, workload-intensive events (e.g., an airport closure). A play is a plan that encapsulates goals, tasks, and a task allocation strategy appropriate for a particular situation (a minimal sketch of such a structure follows this entry). In the current implementation, when a play is initiated by a user, R-HATS determines what tasks need to be completed and can autonomously execute them (e.g., determining diversion options and uplinking new routes to aircraft) when it is safe and appropriate to do so. R-HATS has been designed to support both end users and researchers in RCO and HAT. Additionally, R-HATS and its underlying architecture were developed with generalizability in mind, as modular software applicable outside of RCO and aviation domains. The paper also discusses further development and testing of R-HATS.
- Published
- 2017
- Full Text
- View/download PDF
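The abstract above describes a "play" as a plan bundling goals, tasks, and a task-allocation strategy that the agent can execute once triggered. The sketch below is a minimal, hypothetical rendering of that data structure in Python; the class and field names are illustrative assumptions, not the R-HATS implementation.

```python
# Minimal, hypothetical sketch of a "play": goals, tasks, and a
# task-allocation strategy. Not the actual R-HATS implementation.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Task:
    name: str
    execute: Callable[[], None]       # action run when the task fires
    allocated_to: str = "automation"  # "automation" or "operator"


@dataclass
class Play:
    name: str
    goals: List[str]
    tasks: List[Task] = field(default_factory=list)

    def run(self) -> None:
        """Run automation-allocated tasks; flag the rest for the human."""
        for task in self.tasks:
            if task.allocated_to == "automation":
                task.execute()
            else:
                print(f"[operator] '{task.name}' awaits human action")


# Hypothetical airport-closure play, echoing the example in the abstract.
closure = Play(
    name="airport_closure",
    goals=["divert all affected aircraft safely"],
    tasks=[
        Task("determine diversion options", lambda: print("ranking alternates")),
        Task("uplink new routes", lambda: print("uplinking routes")),
        Task("approve diversions", lambda: None, allocated_to="operator"),
    ],
)
closure.run()
```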
14. Effects of transparency on pilot trust and agreement in the autonomous constrained flight planner
- Author
- Walter W. Johnson, Joseph B. Lyons, Robert J. Shively, Garrett G. Sadler, Nhut Ho, David E. Smith, Lauren C. Hoffmann, and Henri Battiste
- Subjects
- Engineering, Operations research, Automatic control, Debriefing, Crew, Automation, Transparency (behavior), System integration, Runway
- Abstract
We performed a human-in-the-loop study to explore the role of transparency in engendering trust and reliance within highly automated systems. Specifically, we examined how transparency impacts trust in and reliance upon the Autonomous Constrained Flight Planner (ACFP), a critical automated system being developed as part of NASA's Reduced Crew Operations (RCO) concept. The ACFP is designed to provide an enhanced ground operator, termed a super dispatcher, with recommended diversions for aircraft when their primary destinations are unavailable. In the current study, 12 commercial transport-rated pilots playing the role of super dispatchers were given six time-pressured "all land" scenarios in which they needed to use the ACFP to determine diversions for multiple aircraft. Two factors were manipulated. The primary factor was level of transparency. In low-transparency scenarios, pilots were given a recommended airport and runway, plus basic information about the weather conditions, the aircraft types, and the airport and runway characteristics at that and other airports. In moderate-transparency scenarios, pilots were also given a risk evaluation for the recommended airport, and for the other airports if they requested it. In high-transparency scenarios, additional information, including the reasoning behind the risk evaluations, was made available (a schematic sketch of these levels follows this entry). The secondary factor was level of risk, either high or low. For high-risk aircraft, all potential diversions were rated as highly risky, with the ACFP giving the best option for a bad situation. For low-risk aircraft, the ACFP found only low-risk options for the pilot. Both subjective and objective measures were collected, including rated trust, whether the pilots checked the validity of the automation's recommendation, and whether the pilots eventually flew to the recommended diversion airport. Key results show that: (1) pilots' trust increased with higher levels of transparency; (2) pilots were more likely to verify the ACFP's recommendations at low levels of transparency and when risk was high; (3) pilots were more likely to explore other options from the ACFP in low-transparency conditions and when risk was high; and (4) pilots' acceptance of the ACFP's recommendations increased as a function of the transparency of the explanation. The finding that higher levels of transparency were coupled with higher levels of trust, a lower need to verify other options, and higher levels of agreement with ACFP recommendations confirms the importance of transparency in supporting reliance on automated recommendations. Additional analyses of qualitative data gathered from subjects through surveys and debriefing interviews also provided the basis for new design recommendations for the ACFP.
- Published
- 2016
- Full Text
- View/download PDF
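The three transparency levels described in the abstract above differ in what information accompanies a diversion recommendation. The sketch below renders that progression schematically; the field contents, display wording, and function name are hypothetical illustrations, not the ACFP interface.

```python
# Schematic rendering of low / moderate / high transparency levels for a
# diversion recommendation. All display text is a hypothetical illustration.
from enum import IntEnum


class Transparency(IntEnum):
    LOW = 1       # recommendation + basic weather/aircraft/airport info
    MODERATE = 2  # adds a risk evaluation for the recommended airport
    HIGH = 3      # adds the reasoning behind the risk evaluation


def render_recommendation(level: Transparency) -> str:
    lines = [
        "Recommend: KSFO runway 28R",
        "Weather: 2 SM visibility, wind 280 at 12 kt",
    ]
    if level >= Transparency.MODERATE:
        lines.append("Risk evaluation: MEDIUM")
    if level >= Transparency.HIGH:
        lines.append("Reasoning: gusty crosswind; long, grooved runway offsets risk")
    return "\n".join(lines)


print(render_recommendation(Transparency.HIGH))
```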
15. Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines
- Author
- Garrett G. Sadler, Robert J. Shively, David E. Smith, Joseph B. Lyons, Lauren C. Hoffmann, Walter W. Johnson, Kolina Koltai, Henri Battiste, and Nhut Ho
- Subjects
- Engineering, Decision support system, Process management, Knowledge management, Interface (computing), Automation, Transparency
- Abstract
The current research discusses transparency as a means to enable trust in automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The automated aid provided decision support during a complex task in which pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots' trust in the tool: baseline (the existing tool interface), value (the tool provided a numeric value for the likely success of a particular airport for that aircraft), and logic (the tool provided the rationale for its recommendation). Trust was highest in the logic condition, which is consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale for automated recommendations.
- Published
- 2016
- Full Text
- View/download PDF
16. Application of Human-Autonomy Teaming (HAT) Patterns to Reduced Crew Operations (RCO)
- Author
- Michael Matessa, R. Jay Shively, Summer L. Brandt, Henri Battiste, Garrett G. Sadler, and Joel Lachter
- Subjects
- Engineering, Air traffic management, Crew, Flight management system, Air traffic control, Automation, Cockpit, Aeronautics, Software design pattern
- Abstract
Unmanned aerial systems, advanced cockpits, and air traffic management are all seeing dramatic increases in automation. However, while automation may take on some tasks previously performed by humans, humans will still be required to remain in the system for the foreseeable future. The collaboration between humans and these increasingly autonomous systems will begin to resemble cooperation between teammates rather than simple task allocation. It is critical to understand this human-autonomy teaming (HAT) in order to optimize these systems in the future. One methodology for understanding HAT is to identify recurring patterns of HAT that have similar characteristics and solutions. This paper applies a methodology for identifying HAT patterns to an advanced cockpit project.
- Published
- 2016
- Full Text
- View/download PDF