Avoiding adverse autonomous agent actions.
- Source :
- Human-Computer Interaction. 2022, Vol. 37 Issue 3, p211-236. 26p.
- Publication Year :
- 2022
Abstract
- The potential threats of autonomy: One of the obvious threats of autonomy, which lies in its form as, prospectively, one of the most powerful expressions of technology, is its influence upon evolution, both human and technical. Any real-world challenge that presents the need for an exact and deterministic expression of intent will already be an obvious candidate for precise automation, and so probably not a candidate for exploratory forms of autonomy (see Hancock, [41], [42]). The "isles of autonomy" metaphor initially casts humans in the littoral role of the beaches and riparian shorelines that surround such emerging islands (i.e., the outer boundary layer of emerging autonomies as they "rise" above the ocean of extant automation). Challenges to formal methods assessments: It would be both desirable and rather gratifying, then, if, in counter to each of these prospective weaknesses, we could specify a variety of provably effective formal methods that would test and indemnify us against any untoward outcomes of a singular or interactive group of autonomous systems. The potential promises of autonomy: The vista of autonomy's promise is circumscribed only by the limits of the advocates' imagination that such envisaged autonomous systems can underwrite (Arkin, [4]); see Figure 3. [Extracted from the article]
- Subjects :
- *INDUSTRIAL safety
*ZIPF'S law
*AUTONOMOUS robots
*WATSON (Computer)
*MIXED reality
Details
- Language :
- English
- ISSN :
- 0737-0024
- Volume :
- 37
- Issue :
- 3
- Database :
- Academic Search Index
- Journal :
- Human-Computer Interaction
- Publication Type :
- Academic Journal
- Accession number :
- 156359841
- Full Text :
- https://doi.org/10.1080/07370024.2021.1970556