1,075 results for "Behavior-based robotics"
Search Results
2. Jam Mitigation for Autonomous Convoys via Behavior-Based Robotics.
- Author
-
Cheung, Calvin, Rawashdeh, Samir, and Mohammadi, Alireza
- Subjects
DENIAL of service attacks, VECTOR fields, RADAR interference, ROBOTICS, WIRELESS communications, AUTONOMOUS robots, AUTONOMOUS vehicles - Abstract
Autonomous ground vehicle convoys heavily rely on wireless communications to perform leader-follower operations, which make them particularly vulnerable to denial-of-service attacks such as jamming. To mitigate the effects of jamming on autonomous convoys, this paper proposes a behavior-based architecture, called the Behavior Manager, that utilizes layered costmaps and vector field histogram motion planning to implement motor schema behaviors. Using our proposed Behavior Manager, multiple behaviors can be created to form a convoy controller assemblage capable of continuing convoy operations while under a jamming attack. To measure the performance of our proposed solution to jammed autonomous convoying, simulated convoy runs are performed on multiple path plans under different types of jamming attacks, using both the assemblage and a basic delayed follower convoy controller. Extensive simulation results demonstrated that our proposed solution, the Behavior Manager, can be leveraged to dramatically improve the robustness of autonomous convoys when faced with jamming attacks and can be further extended due to its modular nature to combat other types of attacks through the development of additional behaviors and assemblages. When comparing the performance of the Behavior Manager convoy to that of the basic convoy controller, improvements were seen across all jammer types and path plans, ranging from 13.33% to 86.61% reductions in path error. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Jam Mitigation for Autonomous Convoys via Behavior-Based Robotics
- Author
-
Calvin Cheung, Samir Rawashdeh, and Alireza Mohammadi
- Subjects
autonomous convoy, behavior-based robotics, jamming, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999 - Abstract
Autonomous ground vehicle convoys heavily rely on wireless communications to perform leader-follower operations, which make them particularly vulnerable to denial-of-service attacks such as jamming. To mitigate the effects of jamming on autonomous convoys, this paper proposes a behavior-based architecture, called the Behavior Manager, that utilizes layered costmaps and vector field histogram motion planning to implement motor schema behaviors. Using our proposed Behavior Manager, multiple behaviors can be created to form a convoy controller assemblage capable of continuing convoy operations while under a jamming attack. To measure the performance of our proposed solution to jammed autonomous convoying, simulated convoy runs are performed on multiple path plans under different types of jamming attacks, using both the assemblage and a basic delayed follower convoy controller. Extensive simulation results demonstrated that our proposed solution, the Behavior Manager, can be leveraged to dramatically improve the robustness of autonomous convoys when faced with jamming attacks and can be further extended due to its modular nature to combat other types of attacks through the development of additional behaviors and assemblages. When comparing the performance of the Behavior Manager convoy to that of the basic convoy controller, improvements were seen across all jammer types and path plans, ranging from 13.33% to 86.61% reductions in path error.
- Published
- 2022
- Full Text
- View/download PDF
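The two records above describe the same Behavior Manager architecture: motor-schema behaviors realized through layered costmaps and vector field histogram (VFH) motion planning. The abstract does not give implementation details, so the sketch below only illustrates the general motor-schema/VFH fusion pattern it builds on; the behavior set, weights, and sector resolution are hypothetical and are not taken from the paper.

```python
# Minimal sketch of motor-schema behavior fusion with a VFH-style polar
# histogram. Behavior names, weights, and sensor inputs are illustrative,
# not the paper's Behavior Manager.
import numpy as np

N_SECTORS = 72  # 5-degree angular resolution (angle wrap-around ignored for brevity)

def follow_leader(leader_bearing: float) -> np.ndarray:
    """Low cost in sectors pointing toward the (last known) leader bearing."""
    sectors = np.linspace(-np.pi, np.pi, N_SECTORS, endpoint=False)
    return 1.0 - np.exp(-0.5 * ((sectors - leader_bearing) / 0.3) ** 2)

def avoid_obstacles(obstacle_bearings: list) -> np.ndarray:
    """High cost in sectors that contain detected obstacles."""
    sectors = np.linspace(-np.pi, np.pi, N_SECTORS, endpoint=False)
    cost = np.zeros(N_SECTORS)
    for b in obstacle_bearings:
        cost += np.exp(-0.5 * ((sectors - b) / 0.2) ** 2)
    return cost

def select_heading(leader_bearing: float, obstacles: list,
                   w_follow: float = 1.0, w_avoid: float = 2.0) -> float:
    """Fuse the behaviors' cost layers (layered-costmap analogy) and pick
    the lowest-cost sector, as a VFH-style planner would."""
    total = w_follow * follow_leader(leader_bearing) + w_avoid * avoid_obstacles(obstacles)
    sectors = np.linspace(-np.pi, np.pi, N_SECTORS, endpoint=False)
    return float(sectors[np.argmin(total)])

print(select_heading(leader_bearing=0.4, obstacles=[0.35, 1.2]))
```

In a fuller system each behavior would populate its own costmap layer from sensor data, and the assemblage weights would change when a jamming attack is detected.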
4. Online Learning and Teaching of Emergent Behaviors in Multi-Robot Teams
- Author
-
Luis Feliphe S. Costa, Tiago P. Do Nascimento, and Luiz Marcos G. Goncalves
- Subjects
Multirobot learning, behavior-based robotics, knowledge transference, emergent behavior, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
In this manuscript, we propose an approach that allows a team of robots to create new (emergent) behaviors at execution time. We improve the N-Learning approach, used for self-programming of robots in a team, by modifying and extending its functional structure. The basic capability of behavior sharing is extended by capturing emergent behaviors at run time. With this, all robots are able not only to share existing knowledge, here represented by blocks of code containing desired behaviors, but also to create new behaviors. Experiments with real robots are presented to validate our approach. The experiments demonstrate that, after human-robot interaction with one robot using Program by Demonstration, this robot generates a new behavior at run time and teaches a second robot, which performs the same learned behavior through this improved version of the N-Learning system.
- Published
- 2019
- Full Text
- View/download PDF
5. An Analysis of Displays for Probabilistic Robotic Mission Verification Results
- Author
-
O'Brien, Matthew, Arkin, Ronald, Kacprzyk, Janusz, Series editor, Savage-Knepshield, Pamela, editor, and Chen, Jessie, editor
- Published
- 2017
- Full Text
- View/download PDF
6. Adapting to environmental dynamics with an artificial circadian system.
- Author
-
Manoonpong, Poramate, Xiong, Xiaofeng, Larsen, Jørgen Christian, O'Brien, Matthew J, and Arkin, Ronald C
- Subjects
- *
ENERGY management, *TIME series analysis, *PERFORMANCE management, *FORECASTING, *CIRCADIAN rhythms, *SOLAR cycle - Abstract
One of the core challenges of long-term autonomy is the environmental dynamics that agents must interact with. Some of these dynamics are driven by reliable cyclic processes, and thus are predictable. The most dominant of these is the daily solar cycle, which drives natural phenomena like weather as well as the activity of animals and humans. Circadian clocks are a widespread solution in nature to help organisms adapt to these dynamics, and demonstrate that many organisms benefit from maintaining simple models of their environments and how they change. Drawing inspiration from circadian systems, this work models relevant environmental states as time series, allowing forecasts of the state to be generated without any knowledge of the underlying physics. These forecasts are treated as special percepts in a behavior-based architecture, providing estimates of the future state rather than measurements of the current state. They are incorporated into an ethologically based action-selection mechanism, where they influence the activation levels of behaviors. The approach was validated on a simulated agricultural task: a solar-powered agent monitoring pest populations. By using the artificial circadian system to leverage the forecasted state, the agent was able to improve performance and energy management. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
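The circadian-system record above treats forecasts of cyclic environmental states as special percepts that modulate behavior activation. As a rough illustration of that idea only, the sketch below uses a seasonal-naive forecast of solar input to scale the activation of a hypothetical recharge behavior; the period, horizon, and activation rule are assumptions, not the paper's ethologically based action-selection mechanism.

```python
# Illustrative sketch: a seasonal-naive forecast of solar input modulating
# the activation of a "recharge" behavior. Period, behavior name, and
# thresholds are hypothetical.
import numpy as np

PERIOD = 24  # hourly samples, one solar day

def seasonal_naive_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Forecast each future hour with the value observed one period earlier."""
    return np.array([history[-PERIOD + (h % PERIOD)] for h in range(horizon)])

def recharge_activation(forecast: np.ndarray, reserve: float) -> float:
    """Raise the recharge behavior's activation when little solar input is
    expected soon and the battery reserve is low."""
    expected_input = forecast[:6].mean()          # next six hours
    return float(np.clip((0.5 - reserve) + (0.2 - expected_input), 0.0, 1.0))

rng = np.random.default_rng(0)
history = np.clip(np.sin(np.linspace(0, 4 * np.pi, 2 * PERIOD)), 0, None) \
          + 0.05 * rng.random(2 * PERIOD)
print(recharge_activation(seasonal_naive_forecast(history, 12), reserve=0.3))
```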
7. Dynamic Leader Allocation in Multi-robot Systems Based on Nonlinear Model Predictive Control.
- Author
-
Tavares, Augusto de Holanda B. M., Madruga, Sarah Pontes, Brito, Alisson V., and Nascimento, Tiago P.
- Abstract
This paper presents an approach to the dynamic leader selection problem in autonomous non-holonomic mobile robot formations when the current leader enters a failure state. Our method is based on a tree structure coupled with a modified version of the Nonlinear Model Predictive Control (NMPC) that allows for behavior change at the controller level. An explanation of the control algorithm, behavior selection, and leader selection structure is given, after which the results of both simulations and experiments using a three robot formation are shown and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. Behavior-based swarm model using fuzzy controller for route planning and E-waste collection
- Author
-
Batoo, Khalid Mujasam, Pandiaraj, Saravanan, Muthuramamoorthy, Muthumareeswaran, Raslan, Emad H., and Krishnamoorthy, Sujatha
- Published
- 2022
- Full Text
- View/download PDF
9. Toward Verifying the User of Motion-Controlled Robotic Arm Systems via the Robot Behavior
- Author
-
Zhen Meng, Long Huang, Chen Wang, Liying Li, Zeyu Deng, and Guodong Zhao
- Subjects
Spoofing attack, Computer Networks and Communications, Computer science, Login, Motion capture, Computer Science Applications, Hardware and Architecture, Human–computer interaction, Control theory, Signal Processing, Trajectory, Robot, Behavior-based robotics, Robotic arm, Information Systems - Abstract
Motion-controlled robotic arms allow a user to interact with a remote real world without physically reaching it. By connecting cyberspace to the physical world, such interactive teleoperations are promising for improving remote education, virtual social interactions, and online participatory activities. In this work, we build a motion-controlled robotic arm framework comprising a robotic arm end and a user end, which are connected via a network and responsible for manipulator control and motion capture, respectively. To protect system access, we propose to verify who is controlling the robotic arm by examining the robotic arm's behavior, which adds a second security layer in addition to the system login credentials. We show that a robotic arm's motion inherits its human controller's behavioral biometric in interactive control scenarios. By extracting the angle readings of all the robotic arm's joints, the proposed user authentication approach reconstructs the robotic arm's end-effector movement trajectory that follows the user's hand. Furthermore, we derive unique robotic motion features to capture the user's behavioral biometric embedded in the robot motions and develop learning-based algorithms to verify the robotic arm user as one of the enrolled users or a nonuser. Extensive experiments show that our system achieves 94% accuracy in distinguishing users while preventing user identity spoofing attacks with 95% accuracy.
- Published
- 2022
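The record above verifies the user of a motion-controlled robotic arm by reconstructing the end-effector trajectory from joint-angle readings and learning user-specific motion features. The pipeline below is a simplified sketch of that general idea with a hypothetical planar 2-DoF arm, made-up link lengths, a small hand-crafted feature set, and a generic classifier; none of these choices come from the paper.

```python
# Sketch of the general pipeline: joint angles -> end-effector path ->
# motion features -> per-user classifier. Link lengths, features, and the
# synthetic training traces are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

L1, L2 = 0.4, 0.3  # hypothetical link lengths (m)

def end_effector_path(joint_angles: np.ndarray) -> np.ndarray:
    """Planar forward kinematics: (T, 2) joint angles -> (T, 2) xy positions."""
    q1, q2 = joint_angles[:, 0], joint_angles[:, 1]
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

def motion_features(path: np.ndarray) -> np.ndarray:
    """Coarse behavioral features: statistics of speed and turning."""
    vel = np.diff(path, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    turn = np.abs(np.diff(np.arctan2(vel[:, 1], vel[:, 0])))
    return np.array([speed.mean(), speed.std(), turn.mean(), turn.std()])

# Train on labeled sessions (one user id per recorded joint-angle trace).
rng = np.random.default_rng(1)
traces = [rng.normal(scale=0.5, size=(200, 2)).cumsum(axis=0) * 0.01 for _ in range(40)]
labels = [i % 4 for i in range(40)]  # four enrolled users
X = np.array([motion_features(end_effector_path(t)) for t in traces])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))
```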
10. Methods for Robot Behavior Adaptation for Cognitive Neurorehabilitation
- Author
-
Alyssa Kubota and Laurel D. Riek
- Subjects
medicine.medical_specialty ,Cognition ,medicine.disease ,Human-Computer Interaction ,Physical medicine and rehabilitation ,Artificial Intelligence ,Control and Systems Engineering ,medicine ,Dementia ,Cognitive decline ,Behavior-based robotics ,Adaptation (computer science) ,Psychology ,Engineering (miscellaneous) ,Stroke ,Neurorehabilitation - Abstract
An estimated 11% of adults report experiencing some form of cognitive decline, which may be associated with conditions such as stroke or dementia and can impact their memory, cognition, behavior, and physical abilities. While there are no known pharmacological treatments for many of these conditions, behavioral treatments such as cognitive training can prolong the independence of people with cognitive impairments. These treatments teach metacognitive strategies to compensate for memory difficulties in their everyday lives. Personalizing these treatments to suit the preferences and goals of an individual is critical to improving their engagement and sustainment, as well as maximizing the treatment's effectiveness. Robots have great potential to facilitate these training regimens and support people with cognitive impairments, their caregivers, and clinicians. This article examines how robots can adapt their behavior to be personalized to an individual in the context of cognitive neurorehabilitation. We provide an overview of existing robots being used to support neurorehabilitation and identify key principles for working in this space. We then examine state-of-the-art technical approaches for enabling longitudinal behavioral adaptation. To conclude, we discuss our recent work on enabling social robots to automatically adapt their behavior and explore open challenges for longitudinal behavior adaptation. This work will help guide the robotics community as it continues to provide more engaging, effective, and personalized interactions between people and robots.
- Published
- 2022
11. Design of a Sensing Limited Autonomous Robotic System
- Author
-
Benjamen, Lim Han Yang, Ang, Marcelo H., Jr., Lee, Sukhan, editor, Cho, Hyungsuck, editor, Yoon, Kwang-Joon, editor, and Lee, Jangmyung, editor
- Published
- 2013
- Full Text
- View/download PDF
12. Robotics for Occupational Therapy: Learning Upper-Limb Exercises From Demonstrations
- Author
-
Katherine J. Kuchenbecker, Rochelle Mendonca, Siyao Hu, and Michelle J. Johnson
- Subjects
Robot kinematics ,Control and Optimization ,Computer science ,business.industry ,Mechanical Engineering ,Biomedical Engineering ,Robotics ,Computer Science Applications ,Task (project management) ,Human-Computer Interaction ,Artificial Intelligence ,Control and Systems Engineering ,Human–computer interaction ,Trajectory ,Task analysis ,Robot ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Behavior-based robotics ,business ,Humanoid robot - Abstract
We describe a learning-from-demonstration technique that enables a general-purpose humanoid robot to lead a user through object-mediated upper-limb exercises. It needs only tens of seconds of training data from a therapist teleoperating the robot to do the task with the user. We model the robot behavior as a regression problem, inferring the desired robot effort using the end-effector's state (position and velocity). Compared to the conventional approach of learning time-based trajectories, our strategy produces customized robot behavior and eliminates the need to tune gains to adapt to the user's motor ability. In our study, one occupational therapist and six people with stroke trained a Willow Garage PR2 on three example tasks (periodic 1D and 2D motions plus episodic pick-and-place). They then repeatedly did the tasks with the robot and blindly compared the state- and time-based controllers learned from the training data. Our results show that working models were reliably obtained to allow the robot to do the exercise with the user; that our state-based approach enabled users to be more actively involved, allowed larger excursion, and generated power outputs more similar to the therapist demonstrations; and that the therapist found our strategy more agreeable than the traditional time-based approach.
- Published
- 2021
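The record above learns exercise assistance as a regression from the end-effector state (position and velocity) to the effort demonstrated by the therapist, instead of replaying a time-indexed trajectory. The snippet below sketches that state-based idea on synthetic data with a simple nearest-neighbor regressor; the regressor choice and the circular demonstration are illustrative assumptions, not the paper's method.

```python
# State-based learning-from-demonstration sketch: map (position, velocity)
# to the demonstrated effort, then query the model online. Synthetic
# circular demonstration and k-NN regressor are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstration: circular guidance motion with a tangential assist force.
t = np.linspace(0, 2 * np.pi, 300)
pos = np.stack([np.cos(t), np.sin(t)], axis=1)
vel = np.gradient(pos, t, axis=0)
effort = 0.8 * vel / (np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9)

state = np.hstack([pos, vel])                      # (T, 4) state features
model = KNeighborsRegressor(n_neighbors=5).fit(state, effort)

# Online: given the user's current state, command the learned effort.
query = np.array([[0.9, 0.1, -0.1, 0.9]])
print(model.predict(query))                        # -> desired 2-D effort
```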
13. The Need for Verbal Robot Explanations and How People Would Like a Robot to Explain Itself
- Author
-
Elizabeth Phillips, Zhao Han, and Holly A. Yanco
- Subjects
Human-Computer Interaction ,Point (typography) ,Artificial Intelligence ,Movement (music) ,Head (linguistics) ,Similarity (psychology) ,Eye tracking ,Robot ,Psychology ,Behavior-based robotics ,Automatic summarization ,Cognitive psychology - Abstract
Although non-verbal cues such as arm movement and eye gaze can convey robot intention, they alone may not provide enough information for a human to fully understand a robot's behavior. To better understand how to convey robot intention, we conducted an experiment (N = 366) investigating the need for robots to explain, and the content and properties of a desired explanation such as timing, engagement importance, similarity to human explanations, and summarization. Participants watched a video in which the robot was commanded to hand over an almost-reachable cup and showed one of six reactions intended to convey the unreachability: doing nothing (No Cue), turning its head to the cup (Look), or turning its head to the cup with the addition of repeated arm movement pointed towards the cup (Look & Point), each of these with or without a Headshake. The results indicated that participants agreed robot behavior should be explained across all conditions, in situ, in a manner similar to how humans explain, and that robots should provide concise summaries and respond to only a few follow-up questions. Additionally, we replicated the study with N = 366 participants after a 15-month span, and all major conclusions still held.
- Published
- 2021
14. Development of Humanoid Robot HUMIC and Reinforcement Learning-based Robot Behavior Intelligence using Gazebo Simulator
- Author
-
Young-Gi Kim and Ji-Hyeong Han
- Subjects
Computer science ,Reinforcement learning ,General Medicine ,Behavior-based robotics ,Simulation ,Humanoid robot - Published
- 2021
15. Learning to Rapidly Re-Contact the Lost Plume in Chemical Plume Tracing
- Author
-
Meng-Li Cao, Qing-Hao Meng, Jia-Ying Wang, Bing Luo, Ya-Qi Jing, and Shu-Gen Ma
- Subjects
chemical plume tracing, reinforcement learning, collaborative learning, behavior-based robotics, Chemical technology, TP1-1185 - Abstract
Maintaining contact between the robot and plume is significant in chemical plume tracing (CPT). In the time immediately following the loss of chemical detection during the process of CPT, Track-Out activities bias the robot heading relative to the upwind direction, expecting to rapidly re-contact the plume. To determine the bias angle used in the Track-Out activity, we propose an online instance-based reinforcement learning method, namely virtual trail following (VTF). In VTF, action-value is generalized from recently stored instances of successful Track-Out activities. We also propose a collaborative VTF (cVTF) method, in which multiple robots store their own instances, and learn from the stored instances, in the same database. The proposed VTF and cVTF methods are compared with biased upwind surge (BUS) method, in which all Track-Out activities utilize an offline optimized universal bias angle, in an indoor environment with three different airflow fields. With respect to our experimental conditions, VTF and cVTF show stronger adaptability to different airflow environments than BUS, and furthermore, cVTF yields higher success rates and time-efficiencies than VTF.
- Published
- 2015
- Full Text
- View/download PDF
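The record above generalizes the Track-Out bias angle from recently stored instances of successful plume re-contacts. The sketch below shows one way such instance-based generalization could look, weighting stored instances by wind-direction similarity and re-contact time; the similarity kernel and weighting are assumptions and not the published VTF formulation.

```python
# Instance-based sketch: store successful Track-Out instances and
# generalize a new bias angle from them. Similarity kernel and weighting
# are illustrative assumptions.
import numpy as np

class TrackOutMemory:
    def __init__(self):
        self.instances = []  # (wind_direction, bias_angle, time_to_recontact)

    def store(self, wind_dir: float, bias: float, recontact_time: float):
        self.instances.append((wind_dir, bias, recontact_time))

    def suggest_bias(self, wind_dir: float, default: float = 0.3) -> float:
        """Weight stored successful instances by wind-direction similarity
        and by how quickly they re-contacted the plume."""
        if not self.instances:
            return default
        weights, biases = [], []
        for wd, bias, t in self.instances:
            sim = np.exp(-((wind_dir - wd) ** 2) / 0.5)
            weights.append(sim / (1.0 + t))
            biases.append(bias)
        return float(np.average(biases, weights=weights))

mem = TrackOutMemory()
mem.store(wind_dir=0.1, bias=0.4, recontact_time=3.0)
mem.store(wind_dir=0.3, bias=0.6, recontact_time=1.5)
print(mem.suggest_bias(wind_dir=0.2))
```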
16. Behavior-based swarm robotic search and rescue using fuzzy controller.
- Author
-
Din, Ahmad, Jabeen, Meh, Zia, Kashif, Khalid, Abbas, and Saini, Dinesh Kumar
- Subjects
- *
SWARM intelligence, *FUZZY control systems, *SEARCH & rescue operations, *VIRTUAL reality, *IMAGING systems, *FUZZY logic, *EMERGENCY management - Abstract
Highlights: • A self-contained dynamic goal-seeking mechanism using a behavior-based approach has been implemented for searching targets in an unknown environment. • Bio-inspired dynamic virtual leaders are used, which steer and drive the swarm from the current goal to the reference goal. • An avoid-past behavior has been designed to make goal-seeking more efficient. Abstract: Search and rescue (SAR) is one of the foremost issues in disaster management. A robust SAR mechanism can significantly reduce the number of casualties. This paper presents a behavior-based model for a swarm of small robots to perform an efficient SAR operation in an unknown environment. The swarm is guided by a dynamically selected virtual leader (VL). A self-contained dynamic goal-seeking mechanism, using a behavior-based approach, is designed to search for targets (victims). Under the leadership of the VL, the proposed model retains the integrity of the swarm while driving it from its current position to referenced goals. Fuzzy logic has been used to design the constituent behavioral modules, namely obstacle avoidance, alignment, and inter-robot cohesion. The model has been simulated to validate its efficiency, and the findings reveal that robots moving as a swarm are more effective in the SAR process than multiple single robots. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
17. Insights on obstacle avoidance for small unmanned aerial systems from a study of flying animal behavior.
- Author
-
Sarmiento, Traci A. and Murphy, Robin R.
- Subjects
- *
OBSTACLE avoidance (Robotics), *DRONE aircraft, *AUTONOMOUS robots, *FLIGHT control systems, *TASK performance - Abstract
Thirty-five papers from the ethological literature were surveyed, with the perception and reaction of flying animals during autonomous navigation tasks organized and analyzed using a schema-theoretic framework. Flying animals are an existence proof of autonomous collision-free flight in unknown and disordered environments. Because they successfully avoid obstacles, self-orient, evade predators, and capture prey to survive, the collected information could inform the design of biologically inspired behaviors for control of a small unmanned aerial system (SUAS) to improve the current state of the art in autonomous obstacle avoidance. Five observations were derived from the surveyed papers: sensing is done by vision in lighted scenarios and sonar in darkness, one sensor is always dominant, adaptive sensing is beneficial, no preference was identified for lateral versus vertical avoidance maneuvers, and reducing speed is consistently seen across species in response to objects in the flight path. Additionally, the questions of how to define clutter and how much to reduce speed, left unanswered by the literature, are discussed. Finally, three rules for control of a SUAS in an unknown environment that restricts maneuverability were identified. These are the distance at which to begin maneuvers to avoid an obstacle in the flight path, the direction in which to adjust the flight path, and the role of centered flight in determining the adjustment. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Building the Foundation of Robot Explanation Generation Using Behavior Trees
- Author
-
Jordan Allspaw, Zhao Han, Daniel Giger, Henny Admoni, Holly A. Yanco, and Michael S. Lee
- Subjects
Structure (mathematical logic) ,0209 industrial biotechnology ,Computer science ,computer.internet_protocol ,Framing (World Wide Web) ,05 social sciences ,02 engineering and technology ,050105 experimental psychology ,Task (project management) ,Human-Computer Interaction ,020901 industrial engineering & automation ,Artificial Intelligence ,Human–computer interaction ,Code (cryptography) ,Robot ,0501 psychology and cognitive sciences ,Set (psychology) ,Behavior-based robotics ,computer ,XML - Abstract
As autonomous robots continue to be deployed near people, robots need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of their own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), which are a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions} and subsequently build explanation generation algorithms that answer questions seeking causal information about robot behavior. We make BTs less static with an algorithm that inserts a subgoal that satisfies all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
- Published
- 2021
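The record above structures behavior trees as semantic sets {goal, subgoals, steps, actions} so that causal questions about robot behavior can be answered from the tree itself. The sketch below illustrates only that structuring idea with a toy kitting example: a "why" question is answered by walking up the labeled ancestors. The node names and answer template are hypothetical, not the authors' implementation.

```python
# Toy sketch of semantically labeled behavior-tree nodes and a "why"
# explanation generated by walking up the ancestor chain. Names and the
# answer template are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    role: str                      # "goal", "subgoal", "step", or "action"
    children: list = field(default_factory=list)
    parent: "Node | None" = None

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def explain_why(node: Node) -> str:
    """Answer 'why are you doing X?' with the chain of labeled ancestors."""
    chain = []
    cur = node.parent
    while cur is not None:
        chain.append(f"{cur.role} '{cur.name}'")
        cur = cur.parent
    return f"I am doing '{node.name}' in order to achieve " + ", which serves ".join(chain)

goal = Node("assemble gearbox kit", "goal")
subgoal = goal.add(Node("collect gears", "subgoal"))
step = subgoal.add(Node("pick large gear", "step"))
action = step.add(Node("close gripper", "action"))
print(explain_why(action))
```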
19. Fifty Years of AI: From Symbols to Embodiment - and Back
- Author
-
Steels, Luc, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Lungarella, Max, editor, Iida, Fumiya, editor, Bongard, Josh, editor, and Pfeifer, Rolf, editor
- Published
- 2007
- Full Text
- View/download PDF
20. Should my robot know what's best for me? Human–robot interaction between user experience and ethical design
- Author
-
Kathrin Pollmann, Nora Fronemann, and Wulf Loh
- Subjects
Social robot ,Computer science ,business.industry ,media_common.quotation_subject ,05 social sciences ,Usability ,06 humanities and the arts ,Interaction design ,0603 philosophy, ethics and religion ,050105 experimental psychology ,Human–robot interaction ,Human-Computer Interaction ,Philosophy ,User experience design ,Artificial Intelligence ,Human–computer interaction ,Robot ,0501 psychology and cognitive sciences ,060301 applied ethics ,Behavior-based robotics ,business ,Autonomy ,media_common - Abstract
To integrate social robots in real-life contexts, it is crucial that they are accepted by the users. Acceptance is not only related to the functionality of the robot but also strongly depends on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction, to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical challenges, especially regarding the user’s privacy and autonomy. Based on an example scenario of human–robot interaction in elder care, this paper discusses how established design principles can be used in social robotic design. It then juxtaposes these with ethical considerations such as privacy and user autonomy. Combining user experience and ethical perspectives, we propose adjustments to the original design principles and canvass our own design recommendations for a positive and ethically acceptable social human–robot interaction design. In doing so, we show that positive user experience and ethical design may be sometimes at odds, but can be reconciled in many cases, if designers are willing to adjust and amend time-tested design principles.
- Published
- 2021
21. Interactive Robot for Automated Question and Answer System
- Author
-
Vijay A. Kotkar
- Subjects
Interactive robot ,Notice ,SIMPLE (military communications protocol) ,Microphone ,Computer science ,business.industry ,General Mathematics ,Automation ,Education ,Entertainment ,Computational Mathematics ,Computational Theory and Mathematics ,Human–computer interaction ,Robot ,business ,Behavior-based robotics - Abstract
This paper aims at developing an intelligent, multi-function interactive robot that provides entertainment and companionship. To obtain information accurately, we use speech recognition to perform the operations. The speech recognition results are applied to robot behavior, planning, interactions with the voice assistant, and interactions with the user. The robot has a simple design; in this study, a microphone is used for speech recognition. In addition, the intelligent interaction between human and robot supports room automation and notice display via voice messages.
- Published
- 2021
22. Combining CNN and LSTM for activity of daily living recognition with a 3D matrix skeleton representation
- Author
-
Silvia Rossi, Giovanni Ercolano, Ercolano, Giovanni, and Rossi, Silvia
- Subjects
Computer science ,business.industry ,Mechanical Engineering ,Deep learning ,Feature extraction ,Computational Mechanics ,Process (computing) ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Grid ,Activity recognition ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Selection (linguistics) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Behavior-based robotics ,Representation (mathematics) ,business ,Engineering (miscellaneous) - Abstract
In socially assistive robotics, human activity recognition plays a central role when the robot's behavior must be adapted to the human's. In this paper, we present an activity recognition approach for activities of daily living based on deep learning and skeleton data. In the literature, ad hoc feature extraction/selection algorithms with supervised classification methods have been deployed, reaching excellent classification performance. Here, we propose a deep learning approach, combining CNN and LSTM, that exploits both the learning of spatial dependencies correlating the limbs in a 3D skeleton grid representation and the learning of temporal dependencies from instances with a periodic pattern; it works on raw data and therefore does not require an explicit feature extraction process. These models are proposed for real-time activity recognition and are tested on the CAD-60 dataset. Results show that the proposed model behaves better than an LSTM model thanks to the automatic extraction of features capturing the limbs' correlations. "New Person" results show that the CNN-LSTM model achieves 95.4% precision and 94.4% recall, while the "Have Seen" results are 96.1% precision and 94.7% recall.
- Published
- 2021
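The record above combines a CNN over a grid layout of skeleton joints with an LSTM over time. The model below is a generic CNN + LSTM sketch of that kind, not the paper's architecture: the grid size, channel counts, and the use of a per-frame 2D CNN followed by an LSTM are illustrative assumptions.

```python
# Generic CNN + LSTM sketch for skeleton-grid activity recognition.
# A small CNN encodes each frame's grid of joint coordinates; an LSTM
# models the temporal sequence. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_classes: int = 12, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 3, grid, grid) -- xyz coordinates laid out on a grid
        b, t = x.shape[:2]
        frame_feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        seq_out, _ = self.lstm(frame_feats)
        return self.head(seq_out[:, -1])         # classify from the last step

model = CnnLstmClassifier()
dummy = torch.randn(2, 30, 3, 8, 8)              # 2 clips, 30 frames each
print(model(dummy).shape)                         # -> torch.Size([2, 12])
```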
23. Behavioral Design of Guiding Agents to Encourage their Use by Visitors in Public Spaces
- Author
-
Megumi Aizawa and Hiroyuki Umemuro
- Subjects
0209 industrial biotechnology ,Agent behavior ,Computer science ,05 social sciences ,Questionnaire ,02 engineering and technology ,Robot behavior ,Guiding agent ,Field experiment ,Public space ,020901 industrial engineering & automation ,Human–computer interaction ,Design approach ,Robot ,0501 psychology and cognitive sciences ,Behavior-based robotics ,050107 human factors - Abstract
To encourage visitors to use guiding agents in public spaces, this study adopted a design approach and focused on identifying behavioral factors that would encourage interaction. Six factors of agent behavior were hypothesized, and an experiment was performed in a public space with real people. One or two communication robots were installed near the entrance of a university library. The reactions of library users passing by the robot were observed and recorded under different robot behavior conditions. The results showed that the robots were able to attract attention by uttering guidance information and looking in various directions while waiting for people. When the robots spoke directly to nearby people, the people tended to interact with them. The results of a questionnaire survey suggested that the voice, speech content, and appearance of the robots are also important factors.
- Published
- 2021
24. Synthesis of Provably Correct Autonomy Protocols for Shared Control
- Author
-
Ufuk Topcu, Murat Cubuktepe, Nils Jansen, and Mohammed Alshiekh
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Computer science ,Markov process ,Machine Learning (cs.LG) ,Computer Science - Robotics ,symbols.namesake ,Software Science ,FOS: Mathematics ,Temporal logic ,Electrical and Electronic Engineering ,Mathematics - Optimization and Control ,Protocol (object-oriented programming) ,business.industry ,Probabilistic logic ,Mobile robot ,Computer Science Applications ,Artificial Intelligence (cs.AI) ,Control and Systems Engineering ,Optimization and Control (math.OC) ,Task analysis ,symbols ,Artificial intelligence ,Markov decision process ,Behavior-based robotics ,business ,Robotics (cs.RO) - Abstract
We synthesize shared control protocols subject to probabilistic temporal logic specifications. More specifically, we develop a framework in which a human and an autonomy protocol can issue commands to carry out a certain task. We blend these commands into a joint input to a robot. We model the interaction between the human and the robot as a Markov decision process (MDP) that represents the shared control scenario. Using inverse reinforcement learning, we obtain an abstraction of the human's behavior and decisions. We use randomized strategies to account for randomness in the human's decisions, caused by factors such as complexity of the task specifications or imperfect interfaces. We design the autonomy protocol to ensure that the resulting robot behavior satisfies given safety and performance specifications in probabilistic temporal logic. Additionally, the resulting strategies generate behavior as similar to the behavior induced by the human's commands as possible. We solve the underlying problem efficiently using quasiconvex programming. Case studies involving autonomous wheelchair navigation and unmanned aerial vehicle mission planning showcase the applicability of our approach. (Comment: Submitted to the IEEE Transactions on Automatic Control.)
- Published
- 2021
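The record above blends commands from a human and a synthesized autonomy protocol into a joint input to the robot. Setting aside the formal synthesis and temporal-logic guarantees, the fragment below sketches only the blending step over a discrete action set; the action distributions and the fixed blending weight are hypothetical.

```python
# Sketch of blending a human action distribution with an autonomy
# protocol's distribution into one joint command. Distributions and the
# blending weight are illustrative assumptions.
import numpy as np

ACTIONS = ["up", "down", "left", "right"]

def blend(human_dist: np.ndarray, autonomy_dist: np.ndarray, alpha: float = 0.6) -> str:
    """Mix the two distributions and sample the joint command. alpha
    weights the autonomy protocol (e.g., larger near unsafe states)."""
    joint = alpha * autonomy_dist + (1.0 - alpha) * human_dist
    joint = joint / joint.sum()
    return ACTIONS[int(np.random.default_rng(0).choice(len(ACTIONS), p=joint))]

human = np.array([0.7, 0.1, 0.1, 0.1])       # user mostly wants to go up
autonomy = np.array([0.2, 0.1, 0.1, 0.6])     # safety spec prefers "right"
print(blend(human, autonomy))
```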
25. Congestion-aware policy synthesis for multirobot systems
- Author
-
Manuel Mühlig, Charlie Street, Bruno Lacerda, Sebastian Putz, and Nick Hawes
- Subjects
Iterative and incremental development ,Control and Systems Engineering ,Computer science ,Distributed computing ,Reservation ,Probabilistic logic ,Table (database) ,Robot ,Mobile robot ,Electrical and Electronic Engineering ,Behavior-based robotics ,Computer Science Applications ,Automaton - Abstract
Multirobot systems must be able to maintain performance when robots get delayed during execution. For mobile robots, one source of delays is congestion. Congestion occurs when robots deployed in shared physical spaces interact, as robots present in the same area simultaneously must maneuver to avoid each other. Congestion can adversely affect navigation performance and increase the duration of navigation actions. In this article, we present a multirobot planning framework that utilizes learnt probabilistic models of how congestion affects navigation duration. Central to our framework is a probabilistic reservation table, which summarizes robot plans, capturing the effects of congestion. To plan, we solve a sequence of single-robot time-varying Markov automata, where transition probabilities and rates are obtained from the probabilistic reservation table. We also present an iterative model refinement procedure for accurately predicting execution-time robot performance. We evaluate our framework with extensive experiments on synthetic data and simulated robot behavior.
- Published
- 2022
26. A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life.
- Author
-
Young, Rupert
- Subjects
- *
ROBOTICS, *FEEDBACK control systems, *PERCEPTUAL control theory, *ARTIFICIAL life, *MATHEMATICAL models - Abstract
Departing from the conventional view of the reasons for the behavior of living systems, this research presents a radical and unique view of that behavior, as the observed side effects of a hierarchical set of simple, continuous, and dynamic negative feedback control systems, by way of an experimental model implemented on a real-world autonomous robotic rover. Rather than generating specific output from input, the systems control their perceptual inputs by varying output. The variables controlled do not exist in the environment, but are entirely internal perceptions constructed as a result of the layout and connections of the neural architecture. As the underlying processes are independent of the domain, the architecture is universal and thus has significant implications not only for understanding natural living systems, but also for the development of robotics systems. The central process of perceptual control has the potential to unify the behavioral sciences and is proposed as the missing behavioral principle of Artificial Life. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
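The record above explains behavior as the side effect of a hierarchy of simple negative feedback loops, each controlling a perception by setting the reference of the loop below it. The toy two-level loop below illustrates that perceptual-control pattern; the gains, plant model, and choice of controlled perceptions are assumptions for illustration only.

```python
# Toy hierarchical perceptual-control sketch: a higher level controls
# perceived distance-to-target by setting a speed reference; a lower
# level controls perceived speed by setting a motor output. Gains and the
# crude plant model are illustrative assumptions.
def control_step(reference: float, perception: float, gain: float) -> float:
    """One negative-feedback unit: output proportional to the error
    between the reference and the current perception."""
    return gain * (reference - perception)

position, speed, target = 0.0, 0.0, 1.0
for _ in range(50):
    speed_ref = control_step(target, position, gain=0.8)   # higher level
    motor_out = control_step(speed_ref, speed, gain=0.5)    # lower level
    speed += 0.1 * motor_out                                 # crude plant
    position += 0.1 * speed
print(round(position, 3))  # approaches the target without explicit planning
```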
27. Mitigating the Effects of Jamming on Autonomous Vehicle Convoys with Behavior-Based Robotics
- Author
-
Cheung, Calvin
- Subjects
- Autonomous convoy, Behavior-based robotics, Jam mitigation, Platooning, Military doctrine, Convoy performance metrics
- Abstract
Autonomous ground vehicle convoys are heavily reliant on radio communications when performing leader-follower operations. The lead vehicle sets the path and utilizes radio communications to send information such as path points, vehicle pose, vehicle speed, and other sensor data. Follower vehicles utilize this information to track the leader's path and mobilize to the proper positions. This reliance on radio communications makes autonomous ground vehicle convoys particularly vulnerable to network denial-of-service attacks, such as radio jamming. Jamming is a type of denial-of-service attack that attempts to disrupt or block wireless communications, which interferes with a radio's ability to transmit or receive data. The contribution of this dissertation is to improve the performance of autonomous ground vehicle convoys when facing radio jamming attacks by utilizing a controls-oriented approach. To mitigate the effects of jamming attacks on autonomous convoys, we propose a behavior-based architecture named the Behavior Manager. The Behavior Manager utilizes layered costmaps and vector field histogram motion planning to implement motor schema behaviors. By utilizing the Behavior Manager, multiple behaviors can be created and combined to form a convoy controller capable of persisting with convoy operations while under a jamming attack. Based on a thorough review of relevant literature, this is the first time that techniques from behavioral robotics are being utilized to mitigate the effects of jamming attacks in any capacity. In addition, we propose a framework for comparative performance, named the Performance Metrics Framework, to gauge the performance of convoy systems. To develop the framework, we examined manned convoy requirements found in Army doctrine, along with common autonomous convoying research metrics. By using the framework, we can categorize performance requirements into different priority areas and find relevant key metrics to use for performance comparison. We conducted experiments to measure the performance of our Behavior Manager convoy controller in the face of radio jamming and utilized the Performance Metrics Framework in performing comparative analysis. In the experiments, simulated convoy runs were performed on multiple path plans under different types of jamming attacks. The experimental results showed that the Behavior Manager was able to improve the performance of autonomous convoys when faced with jamming attacks across all jammer types and path plans, ranging from 13.33% to 86.61% reductions in path error. These results show that a behavior-based robotics architecture approach can be used to provide a controls-oriented layer of protection against radio jamming. When combined with common anti-jamming techniques, the Behavior Manager provides a robust, multifaceted defense against radio jamming.
- Published
- 2023
28. Robot Errors in Proximate HRI
- Author
-
Akanimoh Adeleye, Thomas An, Laurel D. Riek, and Auriel Washburn
- Subjects
Computer science ,Mobile manipulator ,media_common.quotation_subject ,Affect (psychology) ,Task (project management) ,Human-Computer Interaction ,Artificial Intelligence ,Human–computer interaction ,Framing (construction) ,Perception ,Robot ,Behavior-based robotics ,Reliability (statistics) ,media_common - Abstract
Advancements within human–robot interaction generate increasing opportunities for proximate, goal-directed joint action (GDJA). However, robot errors are common and researchers must determine how to mitigate them. In this article, we examine how expectations for robot functionality affect people’s perceptions of robot reliability and trust for a robot that makes errors. Here 35 participants ( n = 35) performed a collaborative banner-hanging task with an autonomous mobile manipulator (Toyota HSR). Each participant received either a low- or high-functionality framing for the robot. We then measured how participants perceived the robot’s reliability and trust prior to, during, and after interaction. Functionality framing changed how robot errors affected participant experiences of robot behavior. People with low expectations experienced positive changes in reliability and trust after interacting with the robot, while those with high expectations experienced a negative change in reliability and no change in trust. The low-expectation group also showed greater trust recovery following the robot’s first error compared to the high group. Our findings inform human–robot teaming through: (1) identifying robot presentation factors that can be employed to facilitate trust calibration and (2) establishing the effects of framing, functionality, and the interactions between them to improve dynamic models of human–robot teaming.
- Published
- 2020
29. Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning
- Author
-
Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, and Subbarao Kambhampati
- Subjects
FOS: Computer and information sciences ,Formalism (philosophy of mathematics) ,Artificial Intelligence (cs.AI) ,Computer Science - Artificial Intelligence ,Computer science ,business.industry ,Leverage (statistics) ,General Medicine ,Artificial intelligence ,Behavior-based robotics ,business - Abstract
In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents to not only leverage existing strategies for handling model differences but can also exhibit novel behaviors that are generated through the combination of these different strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how we can leverage classical planning compilations for epistemic planning to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to decision-making in the presence of diverging user expectations that is amenable to a classical planning compilation while successfully combining previous works on explanation and explicability. We empirically show how our approach provides a computational advantage over existing approximate approaches that unnecessarily try to search in the space of models while also failing to facilitate the full gamut of behaviors enabled by our framework.
- Published
- 2020
30. Group Split and Merge Prediction With 3D Convolutional Networks
- Author
-
Aaron Steinfeld and Allan Wang
- Subjects
Control and Optimization ,Computer science ,business.industry ,Mechanical Engineering ,0211 other engineering and technologies ,Biomedical Engineering ,Mobile robot ,02 engineering and technology ,Machine learning ,computer.software_genre ,Computer Science Applications ,Human-Computer Interaction ,Social group ,Crowds ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Behavior-based robotics ,business ,Merge (version control) ,computer ,021101 geological & geomatics engineering - Abstract
Mobile robots in crowds often have limited navigation capability due to insufficient evaluation of pedestrian behavior. We strengthen this capability by predicting splits and merges in multi-person groups. Successful predictions should lead to more efficient planning while also increasing human acceptance of robot behavior. We take a novel approach by formulating this as a video prediction problem, where group splits or merges are predicted given a history of geometric social group shape transformations. We take inspiration from the success of 3D convolution models for video-related tasks. By treating the temporal dimension as a spatial dimension, a modified C3D model successfully captures the temporal features required to perform the prediction task. We demonstrate performance on several datasets and analyze transfer ability to other settings. While current approaches for tracking human motion are not explicitly designed for this task, our approach performs significantly better at predicting the occurrence of splits and merges. We also draw human interpretations from the model's learned features.
- Published
- 2020
31. Optimum design of the reconfiguration system for a 6-degree-of-freedom parallel manipulator via motion/force transmission analysis
- Author
-
Raymundo Ramos Alvarado and Eduardo Castillo Castañeda
- Subjects
Optimal design ,0209 industrial biotechnology ,Computer science ,Mechanical Engineering ,Parallel manipulator ,Control reconfiguration ,Stiffness ,02 engineering and technology ,Power (physics) ,Computer Science::Robotics ,Computer Science::Hardware Architecture ,020303 mechanical engineering & transports ,020901 industrial engineering & automation ,0203 mechanical engineering ,Transmission (telecommunications) ,Mechanics of Materials ,Control theory ,medicine ,medicine.symptom ,Actuator ,Behavior-based robotics - Abstract
This research introduces a 6-degree-of-freedom parallel manipulator with a reconfigurable fixed base; the manipulator's performance has been evaluated to select the most suitable reconfiguration system. The optimal design of the parallel manipulator refers to the enhancement of robot behavior achieved by geometric variation of the fixed platform. Motion/force transmission, with the aid of the principle of power conservation, defines the manipulator's capacity to transmit force from the actuated joints to the end-effector, and vice versa, for different reconfiguration approaches, taking the stiffness analysis into account. The results indicate that reconfiguration of the fixed base enhances the performance of this parallel manipulator. The proposed reconfiguration system yields a reconfigurable parallel robot with the minimum number of actuators.
- Published
- 2020
32. Behavior of Finite Automata in Mazes [Поведение конечных автоматов в лабиринтах]
- Subjects
Theoretical computer science ,Finite-state machine ,010504 meteorology & atmospheric sciences ,Computer science ,General Mathematics ,010502 geochemistry & geophysics ,01 natural sciences ,Automaton ,Trap (computing) ,Tree traversal ,Range (mathematics) ,State (computer science) ,Behavior-based robotics ,Auxiliary memory ,0105 earth and related environmental sciences - Abstract
The paper is devoted to the study of problems concerning the behavior of finite automata in mazes. For any n, a maze is constructed that can be traversed with 2n stones but cannot be traversed with n stones. The range of such problems is extensive and touches upon key aspects of theoretical computer science. Of course, solving such problems does not automatically solve hard problems of complexity theory, but considering these questions can improve our understanding of the essence of theoretical computer science. It is hoped that the behavior of automata in mazes is a good model for non-trivial information-theoretic problems, and that the development of methods and approaches to the study of robot behavior will yield stronger results in the future. Problems related to automaton analysis of geometric media have a rather rich history. The first work that gave rise to this kind of problem is that of Shannon [24], which deals with a model of a mouse, in the form of an automaton, that must find a specific target in a maze. Another early work touching on these problems is that of Fisher [9] on computing systems with external memory in the form of a discrete plane. A serious impetus to the study of the behavior of automata in mazes was the work of Depp [7, 8], which proposed the following model: there is a certain configuration of cells of ℤ² (a chess maze), in which finite automata, surveying some neighborhood of the cell they occupy, can move to an adjacent cell in one of four directions. The main question posed in this model is whether there is an automaton that traverses all such mazes. In [20], Muller constructed a planar trap for a given automaton (a maze that it does not completely traverse) in the form of a 3-graph. Budach [5] constructed a chess trap for any given finite automaton; note that Budach's solution was quite complex (the first versions ran to 175 pages). More intuitive solutions to this question are presented in [29, 31, 33, 34]. Antelman [2] estimated the complexity of such a trap by the number of cells, and in [1] Antelman, Budach, and Rollick constructed a finite trap for any finite system of automata. In the setting of a chess maze and one automaton, there are a number of results on the traversability of labyrinths with different numbers of holes, on stratifications of labyrinths by the number of states of the automaton, and on other questions; an overview of such problems can be found, for example, in [35]. The impossibility of traversing all planar chess labyrinths with one automaton raised the question of strengthening the automaton model so that the traversal problem can be solved. The main way of strengthening is to consider a collective of interacting automata instead of a single automaton. A special and widely used case is a system of one full-fledged automaton and a certain number of stone automata, which have no internal state and can move only together with the main automaton. Interaction between the machines is the key feature of this strengthening: the collective (or the single machine with stones) is allowed to have external memory, which significantly diversifies its behavior. Without interaction between the automata, the resulting independent system is little better than a single machine. We then discuss the known results on collective automata.
- Published
- 2020
33. Non-verbal behavior of the robot companion: a contribution to the likeability
- Author
-
A. D. Kotov, Liudmila Zaidelman, Anna Zinina, and Nikita Arinkin
- Subjects
genetic structures ,Computer science ,technology, industry, and agriculture ,Eye movement ,020206 networking & telecommunications ,02 engineering and technology ,Animation ,body regions ,Nonverbal communication ,surgical procedures, operative ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,General Earth and Planetary Sciences ,Robot ,020201 artificial intelligence & image processing ,Behavior-based robotics ,human activities ,General Environmental Science ,Gesture - Abstract
Modern emotional robots can support multimodal communicative interaction with humans through speech, hand gestures, head movements, and eye and mouth animations. In this work, we investigate the contribution of the robot's individual active organs to the overall positive impression the robot creates in the user. The F-2 robot was used as the experimental platform. The experimental study revealed that users are significantly more likely to prefer a robot that uses gestures, head movements, and eye and mouth animation in its behavior over a robot in which the corresponding body part is stationary. The impact of the robot's eye movements on the user is less clear: subjects significantly more often prefer a robot that uses eye movements in its behavior, but they relatively rarely notice the difference between the two patterns of robot behavior, that is, between the robot with fixed eyes and the one with moving eyes.
- Published
- 2020
34. Analyzing the robotic behavior in a smart city with deep enforcement and imitation learning using IoRT
- Author
-
Yangting Chen, Yanjun Li, Wei Zhang, Yan Liu, and Shuwen Pan
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Smart objects ,Deep learning ,Information technology ,020206 networking & telecommunications ,Robotics ,02 engineering and technology ,Human–computer interaction ,Smart city ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,Robot ,020201 artificial intelligence & image processing ,The Internet ,Smart environment ,Artificial intelligence ,business ,Behavior-based robotics ,Internet of Things - Abstract
A smart city aims to develop an environment in which different things go to different people. The smart city provides a core infrastructure, improving the living conditions of its inhabitants by applying smart solutions to create a smart environment. The Internet of Things (IoT) is a revolutionary concept that finds its way into many applications, such as business, industry, healthcare, transportation, modern information technology, and more. IoT combined with Artificial Intelligence (AI) can be applied to many day-to-day applications in systems such as transportation, robotics, industrial, and automation systems. This research focuses on developing robotic behavior control for a smart city using the Internet of Robotic Things (IoRT) and deep learning. The IoRT is a promising standard that brings together autonomous robotic systems within the IoT vision of connected sensors and smart objects pervasively embedded in everyday environments. Robotic behavioral control equips the robot with the essential features to react to its immediate environment via sensory-motor links. Robotic behavioral control has a direct interconnection between the sensors and actuators and controls the functions necessary to move around the environment and carry out the required tasks. The deep learning solution applied to robotic behavioral control uses two main paradigms: Deep Reinforcement Learning (DRL) and Imitation Learning (IL). DRL merges deep learning architectures based on neural networks with reinforcement learning algorithms to identify the behavior of the robots; IL focuses on imitating human learning or expert demonstrations to control the robot behavior. The behavior of the robot is monitored in a diner, and its performance is estimated to be 92% in real time.
- Published
- 2020
35. Data Driven Models for Human Motion Prediction in Human-Robot Collaboration
- Author
-
Qinghua Li, Mu Yaqi, Zhao Zhang, Chao Feng, and You Yue
- Subjects
0209 industrial biotechnology ,General Computer Science ,representative trajectory computation ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,Human–robot interaction ,Motion (physics) ,Data-driven ,020901 industrial engineering & automation ,Kriging ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Hidden Markov model ,Human-robot collaboration ,business.industry ,General Engineering ,human motion prediction ,human action recognition ,Trajectory ,Robot ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Behavior-based robotics ,business ,computer ,lcsh:TK1-9971 ,Gaussian process regression - Abstract
To improve the safety and effectiveness of human-robot collaboration (HRC), the robot must plan a safe trajectory before the human movement is finished. Therefore, it is necessary to enable proactive robot behavior by making accurate intention-prediction decisions early in a human motion. Furthermore, it is desirable not only to provide the long-term trajectory prediction of human motion but also to characterize the uncertainty around it. In this paper, we present a human motion prediction framework to predict the motion trajectory of the human arm in a reaching task. The proposed framework combines partial trajectory classification and human motion regression. By leveraging the partial trajectory classification, our framework makes it possible to recognize the human action and to provide a trajectory prediction before the human movement is finished. The human motion regression can compensate for the low accuracy of the representative trajectory through the fusion strategy. The proposed framework consists of two phases: an online phase and an offline phase. The offline phase aims to learn a regression model with optimized hyperparameters and a fusion strategy combining different prediction algorithms. In the online phase, based on the partial motion classification, the future reaching trajectory in a given time step is predicted by using multi-step Gaussian process regression and a representative trajectory. Experimental results show that our proposed framework achieved significant performance.
- Published
- 2020
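The record above fuses partial-trajectory classification with multi-step Gaussian process regression to predict the rest of a reaching motion together with its uncertainty. The fragment below sketches only the regression half on a synthetic 1-D hand coordinate using scikit-learn; the kernel, horizon, and data are assumptions, not the paper's setup.

```python
# Sketch of the regression part only: fit a Gaussian process on the
# observed partial trajectory and extrapolate a few steps with an
# uncertainty estimate. Kernel and synthetic data are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t_obs = np.linspace(0.0, 0.6, 30).reshape(-1, 1)            # observed 0.6 s
hand_x = 0.4 * np.sin(2.0 * t_obs).ravel() \
         + 0.01 * np.random.default_rng(0).normal(size=30)

kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, hand_x)

t_future = np.linspace(0.6, 1.0, 10).reshape(-1, 1)          # predict 0.4 s ahead
mean, std = gp.predict(t_future, return_std=True)
print(mean[-1], std[-1])   # predicted endpoint and its uncertainty
```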
36. Multiplexing of Information about Self and Others in Hippocampal Ensembles
- Author
-
Paul F. M. J. Verschure, Cyriel M. A. Pennartz, Pietro Marchesi, Jadin C. Jackson, Martin Vinck, Jeroen J. Bos, Amos Keestra, Laura A. Van Mourik-Donga, Cognitive and Systems Neuroscience (SILS, FNWI), and Faculty of Science
- Subjects
Male ,0301 basic medicine ,Computer science ,Movement ,Action Potentials ,Spatial Behavior ,Hippocampal formation ,ENCODE ,Multiplexing ,General Biochemistry, Genetics and Molecular Biology ,03 medical and health sciences ,0302 clinical medicine ,Interneurons ,Orientation ,Conditioning, Psychological ,Animals ,Social information ,CA1 Region, Hippocampal ,lcsh:QH301-705.5 ,Behavior, Animal ,Behavioral pattern ,Robotics ,Observer (special relativity) ,Rats ,030104 developmental biology ,nervous system ,lcsh:Biology (General) ,Space Perception ,Robot ,Behavior-based robotics ,Neuroscience ,030217 neurology & neurosurgery - Abstract
Summary: In addition to coding a subject's location in space, the hippocampus has been suggested to code social information, including the spatial position of conspecifics. "Social place cells" have been reported for tasks in which an observer mimics the behavior of a demonstrator. We examine whether rat hippocampal neurons may encode the behavior of a minirobot, but without requiring the animal to mimic it. Rather than finding social place cells, we observe that robot behavioral patterns modulate place fields coding animal position. This modulation may be confounded by correlations between robot movement and changes in the animal's position. Although rat position indeed significantly predicts robot behavior, we find that hippocampal ensembles code additional information about robot movement patterns. Fast-spiking interneurons are particularly informative about robot position and global behavior. In conclusion, when the animal's own behavior is conditional on external agents, the hippocampus multiplexes information about self and others. Bos et al. study hippocampal coding of an external agent's location using a minirobot and find no evidence of mirror-like neurons coding location. They discover that CA1 firing patterns (especially interneurons) carry information about robot behavior and highlight the importance of controlling for confounds due to changes in animal position. Keywords: CA1, tetrode, place cells, social behavior, information theory, decoding, robot, place field, mutual information, interneuron
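As a rough illustration of the kind of information-theoretic analysis mentioned (mutual information between neural activity and robot behavior), the sketch below estimates MI between one cell's binned spike counts and a discrete robot-behavior label using a plug-in histogram estimator on simulated data. The study's actual decoding and bias-correction procedures are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: robot behavior label (0..2) and one neuron's spike count per time bin.
behavior = rng.integers(0, 3, size=5000)
rates = rng.poisson(lam=2 + 2 * behavior)        # firing weakly depends on behavior

def mutual_information(x, y, n_bins=8):
    """Plug-in MI estimate (in bits) from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=[n_bins, np.arange(y.max() + 2)])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print("MI(rate; behavior) in bits:", round(mutual_information(rates, behavior), 3))
```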
- Published
- 2019
37. ARTIFICIAL INTELLIGENCE SYSTEM FOR IDENTIFYING ROBOT BEHAVIOR ON A WEB RESOURCE
- Author
-
Oleg Mikhailov, Ruslan Shaporin, Alexander Lysenko, and Vladimir Shaporin
- Subjects
Artificial Intelligence System ,Computer science ,Human–computer interaction ,Web resource ,Behavior-based robotics - Published
- 2019
38. Quick Setup of Force-Controlled Industrial Gluing Tasks Using Learning From Demonstration
- Author
-
Aljaz Kramberger, Christoffer Sloth, and Inigo Iturrate
- Subjects
Robotics and AI ,Adaptive control ,Computer science ,Process (computing) ,learning from demonstration ,Control engineering ,QA75.5-76.95 ,force control ,adaptive control ,Contact force ,Computer Science Applications ,Task (computing) ,Constant (computer programming) ,Control theory ,Artificial Intelligence ,Electronic computers. Computer science ,TJ1-1570 ,gluing ,Robot ,Mechanical engineering and machinery ,parameter estimation ,Behavior-based robotics ,Original Research - Abstract
This paper presents a framework for programming in-contact tasks using learning from demonstration. The framework is demonstrated on an industrial gluing task, showing that a high-quality robot behavior can be programmed from a single demonstration. A unified controller structure is proposed for the demonstration and execution of in-contact tasks that eases the transition from admittance control during demonstration to parallel force/position control during execution. The proposed controller is adapted according to the geometry of the task constraints, which is estimated online during the demonstration. In addition, the controller gains are adapted to the human behavior during demonstration to improve the quality of the demonstration. The considered gluing task requires the robot to alternate between free motion and in-contact motion; hence, an approach for minimizing contact forces when switching between the two situations is presented. We evaluate our proposed system in a series of experiments, where we show that we are able to estimate the geometry of a curved surface, that our adaptive controller for demonstration allows users to achieve higher accuracy in a shorter demonstration time compared to an off-the-shelf teaching controller implemented on a collaborative robot, and that our execution controller is able to reduce impact forces and apply a constant process force while adapting to the surface geometry.
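To make the demonstration-side controller concrete, the sketch below implements a basic one-dimensional discrete-time admittance law of the kind typically used for hand guiding (a virtual mass-damper driven by the measured contact force). The gains and force profile are placeholder values; the paper's unified controller, online gain adaptation, and parallel force/position execution mode are not reproduced here.

```python
# Virtual admittance parameters (hypothetical values).
M, D = 2.0, 25.0          # virtual mass [kg] and damping [N*s/m]
DT = 0.002                # control period [s]

def admittance_step(v, f_ext):
    """One step of M*dv/dt + D*v = f_ext, integrated with explicit Euler.
    Returns the new reference velocity commanded to the robot."""
    dv = (f_ext - D * v) / M
    return v + dv * DT

# Simulate a human pushing with 10 N for 0.5 s, then releasing.
v, x = 0.0, 0.0
for k in range(1000):                    # 2 s of simulated hand guiding
    f_ext = 10.0 if k * DT < 0.5 else 0.0
    v = admittance_step(v, f_ext)
    x += v * DT

print(f"displacement after 2 s: {x:.3f} m")
```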
- Published
- 2021
- Full Text
- View/download PDF
39. Towards a Quantum Modeling Approach to Reactive Agents
- Author
-
Abdeljalil Abbas-Turki, Yassine Ruichek, Vincent Hilaire, and Abder Koukam
- Subjects
Theoretical computer science ,Computer science ,business.industry ,Subsumption architecture ,Control unit ,Robotics ,Computer Science::Robotics ,Robot ,Quantum algorithm ,Artificial intelligence ,W state ,Behavior-based robotics ,business ,Quantum computer - Abstract
Quantum computing offers a new approach to problem modeling and solving. This paper deals with the quantum modeling of reactive agents. It also proposes a quantum algorithm to implement the subsumption architecture, widely used by reactive agents, particularly in robotics. This work shows the contribution of the formalism of quantum mechanics to the modeling and the proof of certain properties of agent behavior. After defining the reactive agent state model, the paper suggests a two-step behavior modeling approach for the subsumption architecture. The first step models the preset behavior that links each action to the perception states. The second step determines, among several candidate actions, the one that the robot must execute. The subsumption architecture raises the challenge of modeling the hierarchical priority of actions. To this end, a multipartite entanglement is used in the second step. More precisely, the paper proposes and generalizes a W-state circuit to model hierarchical priority among actions and control the robot accordingly. The result of both steps is a formal model that links the robot's perception (input) to its actions (output), with respect to the subsumption architecture. The proposed agent model is simulated on an IBM quantum computer. The simulation shows that the model can either serve as a control unit (CU) of the robot to obtain the suitable action or be used to simulate the robot behavior.
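For readers unfamiliar with the architecture being modeled, the sketch below is a purely classical illustration of subsumption-style priority arbitration, in which higher layers suppress lower ones; the paper's contribution is to encode this priority scheme with a generalized W-state circuit on a quantum computer, which is not shown here. The behaviors and percept fields are invented examples.

```python
# Classical sketch of subsumption arbitration: behaviors are ordered by priority,
# and the highest-priority behavior that fires suppresses all lower ones.

def avoid_obstacle(percept):
    return "turn_away" if percept["obstacle_close"] else None

def seek_goal(percept):
    return "move_to_goal" if percept["goal_visible"] else None

def wander(percept):
    return "wander"          # default lowest-priority behavior, always applicable

BEHAVIORS = [avoid_obstacle, seek_goal, wander]   # highest priority first

def arbitrate(percept):
    for behavior in BEHAVIORS:
        action = behavior(percept)
        if action is not None:
            return action    # subsumes (suppresses) everything below

print(arbitrate({"obstacle_close": True, "goal_visible": True}))   # -> turn_away
print(arbitrate({"obstacle_close": False, "goal_visible": True}))  # -> move_to_goal
```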
- Published
- 2021
40. Repairing Human Trust by Promptly Correcting Robot Mistakes with An Attention Transfer Model
- Author
-
Boyi Song, Chao Huang, Ruijiao Luo, Rui Liu, and Yuntao Peng
- Subjects
FOS: Computer and information sciences ,Computer science ,Mistake ,Computer security ,computer.software_genre ,Maintenance engineering ,Task (project management) ,Computer Science - Robotics ,Action (philosophy) ,Robot ,Human operator ,Transfer model ,Behavior-based robotics ,Robotics (cs.RO) ,computer - Abstract
In human-robot collaboration (HRC), human trust in the robot is the human expectation that the robot executes tasks with the desired performance. A higher level of trust increases the willingness of a human operator to assign tasks, share plans, and reduce interruptions during robot executions, thereby facilitating human-robot integration both physically and mentally. However, due to real-world disturbances, robots inevitably make mistakes, decreasing human trust and further influencing collaboration. Trust is fragile, and trust loss is triggered easily when robots show an incapability of task execution, making trust maintenance challenging. To maintain human trust, in this research, a trust repair framework is developed based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale of this framework is that a prompt mistake correction restores human trust. With H2R-AT, a robot localizes human verbal concerns and makes prompt mistake corrections to avoid task failures at an early stage and thereby improve human trust. The user trust study measures trust status before and after the behavior corrections to quantify the trust loss. Robot experiments covering four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in robot behavior corrections; a user trust study with 252 participants was conducted, and the changes in trust levels before and after corrections were evaluated. The effectiveness of the human trust repair was evaluated by the mistake correction accuracy and the trust improvement.
- Published
- 2021
41. On the Design of Social Robots Using Sheaf Theory and Smart Contracts
- Author
-
Renita Murimi
- Subjects
blockchain ,Blockchain ,Computer science ,Big data ,smart contracts ,sheaf theory ,Artificial Intelligence ,Human–computer interaction ,Hypothesis and Theory ,TJ1-1570 ,Mechanical engineering and machinery ,Robotics and AI ,robotics ,Social robot ,business.industry ,Perfect information ,Robotics ,QA75.5-76.95 ,imperfect information ,Computer Science Applications ,Electronic computers. Computer science ,Key (cryptography) ,Robot ,Artificial intelligence ,business ,Behavior-based robotics ,irrationality - Abstract
The incorporation of robots in the social fabric of our society has taken giant leaps, enabled by advances in artificial intelligence and big data. As these robots become increasingly adept at parsing through enormous datasets and making decisions where humans fall short, a significant challenge lies in the analysis of robot behavior. Capturing interactions between robots, humans, and IoT devices in traditional structures such as graphs poses challenges for the storage and analysis of the large, dense graphs generated by frequent activities. This paper proposes a framework that uses the blockchain for the storage of robotic interactions and sheaf theory for their analysis. Further, the use of smart contracts on the blockchain is proposed, with two parameters embedded in the smart contracts, imperfect information and irrationality, to enable robots to operate deftly in various human environments that expect different levels of dominance from the robot. This work shows the application of such a framework across the spectrum of human-robot interaction and identifies key challenges that arise from using the blockchain for robotic applications.
- Published
- 2021
42. Estimating Robot Body Torque for Two-Handed Cooperative Physical Human-Robot Interaction
- Author
-
Martin F. Stoelen, Erik Kyrkjebø, and Johannes Mogster
- Subjects
Control theory ,law ,Computer science ,Limit (music) ,Torque ,Robot ,Wrench ,Behavior-based robotics ,Representation (mathematics) ,Robot end effector ,Human–robot interaction ,law.invention - Abstract
Cooperative physical Human-Robot Interaction (pHRI) aims to combine the best of human problem-solving skills with the strength, speed, and accuracy of a robot. When humans and robots physically interact, there will be interaction Forces and Torques (F/Ts) that must remain within safe limits to avoid threats to human life and unacceptable damage to equipment. When measured, these F/Ts can be limited by safety-rated emergency stops, and one can design a compliant robot behavior to reduce interaction F/Ts and avoid unnecessary emergency stops. Several recent collaborative robots offer measurements of interaction F/Ts by utilizing torque sensors in the joints or observers for joint torque, and the classical end-effector F/T sensor can provide measurements of interaction at the working end of a robot. The end-effector wrench can be calculated from joint torques if and only if there is no interaction on the robot body. Typically, safety limits are evaluated around a single point of contact, either on the end-effector or elsewhere on the robot body. This approach fails when a human uses both hands to interact with a robot, e.g., when hand guiding or otherwise cooperating with the robot by placing one hand on the robot end-effector and the other hand on the robot elbow. Having two points of contact that are evaluated as one limits the allowed F/Ts of the sum of the contacts rather than of each contact individually. In this paper, we introduce the body torque as the interaction on the body that is not the result of interactions on the end-effector. We then use this body torque, which is a more accurate representation of the forces applied to the robot body, to limit the body interaction F/Ts and ensure safe human-robot interaction. Furthermore, the body torque can be used to design null-space compliance for a redundant robot. Distinguishing body torque is a step towards safe cooperative pHRI, where body torque, unknown end-effector loads, and end-effector interaction F/Ts are all important measurements for safety, control, and compliance.
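The central relation can be sketched numerically: with joint torques measured by the robot and an end-effector wrench measured by a wrist sensor, the joint-space contribution of the end-effector contact is J(q)^T F, and the residual tau_body = tau_measured - J(q)^T F is attributed to interaction on the robot body. The Jacobian and sensor readings below are placeholders, not the paper's robot model.

```python
import numpy as np

# Placeholder geometric Jacobian for a 6-DOF arm at the current configuration q
# (rows: [vx, vy, vz, wx, wy, wz], columns: joints). Real values come from the
# robot's kinematic model.
J = np.random.default_rng(2).normal(size=(6, 6))

tau_measured = np.array([1.2, -0.4, 2.1, 0.3, -0.1, 0.05])   # joint torque sensors [Nm]
F_ee = np.array([5.0, 0.0, -3.0, 0.0, 0.2, 0.0])             # wrist F/T sensor wrench

# Joint torques explained by the end-effector contact alone.
tau_from_ee = J.T @ F_ee

# Residual: interaction on the robot body (e.g., a second hand on the elbow).
tau_body = tau_measured - tau_from_ee
print(np.round(tau_body, 3))
```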
- Published
- 2021
43. The Director Task: a Psychology-Inspired Task to Assess Cognitive and Interactive Robot Architectures
- Author
-
Amandine Mayima, Guillaume Sarthou, Aurélie Clodic, Guilhem Buisan, Kathleen Belhassein, Équipe Robotique et InteractionS (LAAS-RIS), Laboratoire d'analyse et d'architecture des systèmes (LAAS), Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT)-Université de Toulouse (UT)-Institut National des Sciences Appliquées - Toulouse (INSA Toulouse), Institut National des Sciences Appliquées (INSA)-Université de Toulouse (UT)-Institut National des Sciences Appliquées (INSA)-Université Toulouse - Jean Jaurès (UT2J), Université de Toulouse (UT)-Université Toulouse III - Paul Sabatier (UT3), Université de Toulouse (UT)-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université de Toulouse (UT)-Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT), Cognition, Langues, Langage, Ergonomie (CLLE), École Pratique des Hautes Études (EPHE), Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-Université Toulouse - Jean Jaurès (UT2J), Université de Toulouse (UT)-Université de Toulouse (UT)-Centre National de la Recherche Scientifique (CNRS)-Toulouse Mind & Brain Institut (TMBI), Université Toulouse - Jean Jaurès (UT2J), Université de Toulouse (UT)-Université de Toulouse (UT)-Université Toulouse III - Paul Sabatier (UT3), ANR-16-CE33-0017,JointAction4HRI,Action Jointe pour l'Interaction Humain Robot(2016), ANR-19-P3IA-0004,ANITI,Artificial and Natural Intelligence Toulouse Institute(2019), Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Institut National des Sciences Appliquées - Toulouse (INSA Toulouse), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées, and Centre National de la Recherche Scientifique (CNRS)-École pratique des hautes études (EPHE)
- Subjects
Knowledge representation and reasoning ,Human–computer interaction ,Task analysis ,Robot ,Cognition ,Cognitive robotics ,Psychology ,Behavior-based robotics ,Human–robot interaction ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,Task (project management) - Abstract
Assessing a robotic architecture for Human-Robot Interaction can be challenging due to the number of capabilities a robot must possess to perform an acceptable interaction. While everyday-inspired tasks are interesting because they reflect a realistic use of such robots, they often contain many unknown and uncontrolled conditions, and specific robot behaviors can be hard to test. In this paper, we propose a new psychology-inspired task that brings together perspective-taking, planning, knowledge representation with theory of mind, manipulation, and communication. Along with a precise description of the task allowing its replication, we present a cognitive robot architecture able to perform it in its nominal cases. We finally suggest some challenges and evaluations for the Human-Robot Interaction research community, all derived from this easy-to-replicate task.
- Published
- 2021
44. Null Space Based Efficient Reinforcement Learning with Hierarchical Safety Constraints
- Author
-
Johannes A. Stork, Quantao Yang, and Todor Stoyanov
- Subjects
business.industry ,Computer science ,Physical system ,Robotics ,Space (commercial competition) ,Task (project management) ,Robotteknik och automation ,Obstacle avoidance ,Leverage (statistics) ,Robot ,Reinforcement learning ,Artificial intelligence ,business ,Behavior-based robotics - Abstract
Reinforcement learning is inherently unsafe for use in physical systems, as learning by trial and error can cause harm to the environment or the robot itself. One way to avoid unpredictable exploration is to add constraints in the action space to restrict the robot behavior. In this paper, we propose a null-space-based framework for integrating reinforcement learning methods in constrained continuous action spaces. We leverage a hierarchical control framework to decompose target robotic skills into higher-ranked tasks (e.g., joint limits and obstacle avoidance) and a lower-ranked reinforcement learning task. Safe exploration is guaranteed by only learning policies in the null space of the higher-prioritized constraints, while multiple constraint phases for different operational spaces are constructed to guide the robot exploration. We also add a penalty loss for violating higher-ranked constraints to accelerate the learning procedure. We have evaluated our method on different redundant robotic tasks in simulation and show that our null-space-based reinforcement learning method can explore and learn safely and efficiently. Funding agency: Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP).
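The key mechanism can be illustrated with a few lines of linear algebra: given the stacked Jacobian of the higher-priority constraint tasks, an RL action is projected into its null space so that exploration cannot perturb those tasks. The dimensions and Jacobian below are hypothetical, and the paper's multi-phase hierarchical scheme and penalty loss are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

n_joints = 7
# Hypothetical Jacobian of the higher-priority tasks (e.g., keep end-effector height),
# here 2 constraint rows for a 7-DOF redundant arm.
J_high = rng.normal(size=(2, n_joints))

# Null-space projector of the higher-priority tasks.
N = np.eye(n_joints) - np.linalg.pinv(J_high) @ J_high

# Raw exploratory joint-velocity action proposed by the RL policy.
a_rl = rng.normal(size=n_joints)

# Safe action: only the component that does not disturb the higher-priority tasks.
a_safe = N @ a_rl

print(np.round(J_high @ a_safe, 8))   # approximately zero: constraints untouched by exploration
```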
- Published
- 2021
45. Health State Monitoring of 4-mecanum Wheeled Mobile Robot Actuators and its Impact on the Robot Behavior Analysis
- Author
-
Guillaume Graton, El Mostafa El Adel, Samia Mellah, Mustapha Ouladsine, Alain Planchais, Laboratoire d'Informatique et Systèmes (LIS), Aix Marseille Université (AMU)-Université de Toulon (UTLN)-Centre National de la Recherche Scientifique (CNRS), and STMicroelectronics [Rousset] (ST-ROUSSET)
- Subjects
021110 strategic, defence & security studies ,0209 industrial biotechnology ,Computer science ,Mechanical Engineering ,0211 other engineering and technologies ,Impact study ,Control engineering ,Mobile robot ,02 engineering and technology ,Fault (power engineering) ,Industrial and Manufacturing Engineering ,law.invention ,[SPI.AUTO]Engineering Sciences [physics]/Automatic ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,law ,Mecanum wheel ,Robot ,State (computer science) ,Electrical and Electronic Engineering ,Behavior-based robotics ,Actuator ,Software ,ComputingMilieux_MISCELLANEOUS - Abstract
This paper focuses on a 4-mecanum-wheeled mobile robot used for transportation tasks in the manufacturing industry. It aims to supervise the health state of the robot actuators after an initial fault detection and to study the impact of the faults on the robot behavior. The faults considered are losses of actuator efficiency due to wear linked to the robot's heavy use. The main contribution is, on the one hand, the diagnosis of simultaneous actuator faults and, on the other hand, the study of the impact of these faults on the robot behavior to determine degradation limits that guarantee the robot remains close to the desired performance. The objective is to provide the user with decision-making support regarding whether or not the robot is still able to continue functioning with respect to the security instructions imposed by its environment. Two main cases are dealt with: 1) all four actuators lose from 0% (no fault) to 95% of their efficiency; 2) one actuator is completely defective (it loses 100% of its efficiency), while the other three lose from 0% to 95% of their efficiency. Different scenarios are tested to analyze the impact of the robot's missions and working conditions on its behavior under degradation.
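As a simple illustration of how actuator efficiency loss distorts the platform motion, the sketch below uses a standard kinematic model of a 4-mecanum-wheel base: wheel speeds are computed from a desired body twist, scaled by per-wheel efficiency factors, and mapped back through the forward kinematics to the twist actually achieved. The geometry, efficiencies, and wheel ordering are hypothetical, and this is not the paper's diagnosis scheme.

```python
import numpy as np

r, lx, ly = 0.05, 0.20, 0.15          # wheel radius and half wheelbase/track [m]
k = lx + ly

# Inverse kinematics: body twist [vx, vy, wz] -> wheel speeds [w_fl, w_fr, w_rl, w_rr]
IK = (1.0 / r) * np.array([
    [1.0, -1.0, -k],
    [1.0,  1.0,  k],
    [1.0,  1.0, -k],
    [1.0, -1.0,  k],
])
# Forward kinematics: wheel speeds -> body twist actually produced
FK = (r / 4.0) * np.array([
    [ 1.0,      1.0,      1.0,      1.0],
    [-1.0,      1.0,      1.0,     -1.0],
    [-1.0 / k,  1.0 / k, -1.0 / k,  1.0 / k],
])

twist_cmd = np.array([0.5, 0.0, 0.0])          # drive straight ahead at 0.5 m/s
eta = np.array([1.0, 1.0, 0.6, 1.0])           # rear-left actuator lost 40% efficiency

w_cmd = IK @ twist_cmd
w_actual = eta * w_cmd                         # degraded wheel speeds
twist_actual = FK @ w_actual

print(np.round(twist_actual, 3))               # robot drifts and yaws instead of going straight
```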
- Published
- 2021
46. Robot Behavior-Based User Authentication for Motion-Controlled Robotic Systems
- Author
-
Liying Li, Zhen Meng, Long Huang, Chen Wang, Guodong Zhao, and Zeyu Deng
- Subjects
Password ,Authentication ,law ,Computer science ,Human–computer interaction ,Robot ,Fingerprint recognition ,Robot end effector ,Behavior-based robotics ,Robotic arm ,law.invention ,Robot control - Abstract
Motion-controlled robotic systems are likely to become more and more popular since they allow humans to easily control robots to carry out various tasks. However, current authentication methods rely on static credentials, such as passwords, fingerprints, and faces, which are independent of the robot control. Thus, they cannot guarantee that a robot is always under the control of its enrolled user. In this paper, we build a motion-controlled robotic arm system and show that a robotic arm's motion inherits much of its user's behavioral information in interactive control scenarios. Based on that, we propose a novel user authentication approach to verify the robotic arm user. In particular, we log the angle readings of the robotic arm's joints to reconstruct the 3D movement trajectory of its end effector. We then develop a learning-based algorithm to identify the user. Extensive experiments show that our system achieves 95% accuracy in verifying users while preventing various impersonation attacks.
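The pipeline can be sketched in a few steps: reconstruct the end-effector path from logged joint angles via forward kinematics, summarize it with simple behavioral features, and train a classifier to verify the user. The planar 3-link arm, the feature set, the simulated user data, and the SVM below are illustrative stand-ins, not the system described in the paper.

```python
import numpy as np
from sklearn.svm import SVC

L = np.array([0.3, 0.25, 0.15])        # link lengths of a toy planar 3-link arm [m]

def end_effector_xy(q):
    """Planar forward kinematics from joint angles (rad) to end-effector position."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def features(joint_log):
    """Summarize a logged motion (T x 3 joint angles) into behavioral features."""
    traj = np.array([end_effector_xy(q) for q in joint_log])
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    return np.array([np.ptp(traj, axis=0).sum(), speed.mean(), speed.std(), speed.max()])

rng = np.random.default_rng(4)

def simulate_user(bias, n=40):
    """Toy data: each 'user' has a characteristic amplitude/smoothness bias."""
    return [np.cumsum(rng.normal(bias, 0.02, size=(50, 3)), axis=0) for _ in range(n)]

logs = simulate_user(0.01) + simulate_user(0.03)
X = np.array([features(log) for log in logs])
y = np.array([0] * 40 + [1] * 40)

clf = SVC().fit(X[::2], y[::2])                    # train on half, test on the rest
print("accuracy:", (clf.predict(X[1::2]) == y[1::2]).mean())
```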
- Published
- 2021
47. Learning a Robot's Social Obligations from Comparisons of Observed Behavior
- Author
-
Houssam Abbas and Colin Shea-Blymyer
- Subjects
education.field_of_study ,Learning automata ,Computer science ,business.industry ,Deontic logic ,Population ,Automaton ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Robot ,Obligation ,Artificial intelligence ,education ,Behavior-based robotics ,Set (psychology) ,business - Abstract
We study the problem of learning a formal representation of a robot's social obligations from a human population's preferences. Rigorous system design requires a logical formalization of a robot's desired behavior, including the social obligations that constrain its actions. The preferences of the society hosting these robots are a natural source of these obligations. Thus we ask: how can we turn a population's preferences concerning robot behavior into a logico-mathematical specification that we can use to design the robot's controllers? We use non-deterministic weighted automata to model a robot's behavioral algorithms, and the deontic logic of Dominance Act Utilitarianism (DAU) to model the robot's social and ethical obligations. Given a set of automaton executions and pairwise comparisons between them, we develop simple algorithms to infer the automaton's weights and compare them to existing methods; these weights are then turned into logical obligation formulas in DAU. We bound the sensitivity of the inferred weights to changes in the comparisons. We evaluate empirically the degree to which the obligations inferred from these various methods differ from each other.
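A much-simplified version of the inference step can be sketched as a Bradley-Terry-style fit: each execution is described by how often it uses each action, and weights are adjusted by gradient ascent so that the weighted score of the preferred execution in each comparison tends to exceed that of the other. This is a generic illustration on invented data, not the authors' algorithms or their DAU formalization.

```python
import numpy as np

rng = np.random.default_rng(5)

n_actions = 5
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])    # hidden "ground truth" weights

# Each execution is summarized by its action-usage counts (a hypothetical feature choice).
executions = rng.integers(0, 4, size=(60, n_actions)).astype(float)

# Pairwise comparisons: (i, j) means execution i was preferred to execution j.
def preferred(i, j):
    return (executions[i] - executions[j]) @ true_w > 0

pairs = [(i, j) if preferred(i, j) else (j, i)
         for i, j in rng.integers(0, 60, size=(300, 2)) if i != j]

# Bradley-Terry / logistic fit by gradient ascent on the comparison log-likelihood.
w = np.zeros(n_actions)
for _ in range(500):
    grad = np.zeros(n_actions)
    for i, j in pairs:
        d = executions[i] - executions[j]
        grad += d * (1.0 - 1.0 / (1.0 + np.exp(-w @ d)))   # d(log sigmoid(w.d))/dw
    w += 0.05 * grad / len(pairs)

print(np.round(w / np.abs(w).max(), 2))   # roughly recovers the relative ordering of true_w
```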
- Published
- 2021
48. Exploring Non-Expert Robot Programming Through Crowdsourcing
- Author
-
Iolanda Leite, Elizabeth J. Carter, Oscar Örnberg, and Sanne van Waveren
- Subjects
Computer science ,Interface (Java) ,02 engineering and technology ,Crowdsourcing ,Human–robot interaction ,human-robot interaction ,non-expert robot programming ,Artificial Intelligence ,Human–computer interaction ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,TJ1-1570 ,Mechanical engineering and machinery ,block-based programming ,Visual programming language ,Original Research ,Statement (computer science) ,Robotics and AI ,business.industry ,QA75.5-76.95 ,Computer Science Applications ,Electronic computers. Computer science ,Robot ,robots ,While loop ,crowdsourcing ,business ,Behavior-based robotics - Abstract
A longstanding barrier to deploying robots in the real world is the ongoing need to author robot behavior. Remote data collection, particularly crowdsourcing, is receiving increasing interest. In this paper, we make the argument for scaling robot programming to the crowd and present an initial investigation of the feasibility of this proposed method. Using an off-the-shelf visual programming interface, non-experts created simple robot programs for two typical robot tasks (navigation and pick-and-place). Each task comprised four subtasks requiring an increasing number of programming constructs (if statements, while loops, variables) for successful completion of the programs. Initial findings of an online study (N = 279) indicate that non-experts, after minimal instruction, were able to create simple programs using an off-the-shelf visual programming interface. We discuss our findings and identify future avenues for this line of research.
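The flavor of the programs participants were asked to author can be conveyed with a short example mixing a while loop, an if statement, and a variable. It is written here against an invented robot API (ToyRobot, at_goal, obstacle_ahead, turn_left, move_forward are all hypothetical names); the study itself used a block-based visual interface rather than textual code.

```python
class ToyRobot:
    """Minimal stand-in so the program can run outside any robot simulator."""
    def __init__(self, distance=5):
        self.remaining = distance
    def at_goal(self):
        return self.remaining <= 0
    def obstacle_ahead(self):
        return False                      # no obstacles in this toy world
    def turn_left(self):
        pass
    def move_forward(self):
        self.remaining -= 1

def run_navigation(robot):
    steps_taken = 0                       # variable subtask
    while not robot.at_goal():            # while-loop subtask
        if robot.obstacle_ahead():        # if-statement subtask
            robot.turn_left()
        else:
            robot.move_forward()
            steps_taken += 1
    return steps_taken

print(run_navigation(ToyRobot()))         # -> 5
```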
- Published
- 2021
49. Constrained by Design: Influence of Genetic Encodings on Evolved Traits of Robots
- Author
-
Karine Miras
- Subjects
bias ,evolvable morphologies ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,03 medical and health sciences ,locality ,Artificial Intelligence ,Encoding (memory) ,behavioral traits ,0202 electrical engineering, electronic engineering, information engineering ,TJ1-1570 ,Mechanical engineering and machinery ,Control (linguistics) ,evolvable robots ,Original Research ,030304 developmental biology ,Robotics and AI ,0303 health sciences ,business.industry ,Locality ,Principal (computer security) ,QA75.5-76.95 ,encoding ,Computer Science Applications ,Bounded function ,Electronic computers. Computer science ,Robot ,020201 artificial intelligence & image processing ,phenotypic traits ,Artificial intelligence ,Behavior-based robotics ,business ,computer ,Generative grammar - Abstract
Genetic encodings and their particular properties are known to have a strong influence on the success of evolutionary systems. However, the literature has widely focused on studying the effects that encodings have on performance, i.e., fitness-oriented studies. Notably, this anchoring of the literature to performance is limiting, considering that performance provides bounded information about the behavior of a robot system. In this paper, we investigate how genetic encodings constrain the space of robot phenotypes and robot behavior. In summary, we demonstrate how two generative encodings of different nature lead to very different robots and discuss these differences. Our principal contributions are creating awareness about robot encoding biases, demonstrating how such biases affect evolved morphological, control, and behavioral traits, and finally scrutinizing the trade-offs among different biases.
- Published
- 2021
50. An Integrated Kinematic Modeling and Experimental Approach for an Active Endoscope
- Author
-
Nicola Bailey, Ioannis Georgilas, and Andrew Isbister
- Subjects
closed-loop control ,0209 industrial biotechnology ,Computer science ,Complex system ,02 engineering and technology ,020901 industrial engineering & automation ,Artificial Intelligence ,experimental validation ,TJ1-1570 ,Mechanical engineering and machinery ,Simulation ,Original Research ,Flexibility (engineering) ,Robotics and AI ,Ideal (set theory) ,Continuum (topology) ,Cosserat theory ,QA75.5-76.95 ,021001 nanoscience & nanotechnology ,Computer Science Applications ,Range (mathematics) ,Workflow ,Electronic computers. Computer science ,actuation ,Robot ,endoscopic robots ,0210 nano-technology ,Behavior-based robotics - Abstract
Continuum robots are a type of robotic device characterized by their flexibility and dexterity, making them ideal for an active endoscope. Instead of articulated joints they have flexible backbones that can be manipulated remotely, usually through tendons secured onto structures attached to the backbone. This structure makes them lightweight and ideal for miniaturization in endoscopic applications. However, their flexibility poses technical challenges in the modeling and control of these devices, especially when closed-loop control is needed, as is the case in medical applications. There are two main approaches to the modeling of continuum robots: the first is to theoretically model the behavior of the backbone and its interaction with the tendons, while the second is to collect experimental observations and retrospectively fit a model that approximates the apparent behavior. Both approaches are affected by the complexity of continuum robots, either through model accuracy and computational time (theoretical method) or by missing complex system interactions and lacking expandability (experimental method). In this work, theoretical and experimental descriptions of an endoscopic continuum robot are merged. A simplified yet representative mathematical model of a continuum robot is developed, in which the backbone model is based on Cosserat rod theory and is coupled to the tendon tensions. A robust numerical technique with low computational cost is formulated. A bespoke experimental facility with precise automated motion of the backbone, via precise control of tendon tension, provides a robust and detailed description of the system behavior through a contactless sensor. The resulting facility achieves a real-world mean positioning error of 3.95% of the backbone length for the examined range of tendon tensions, which compares favourably with existing approaches. Moreover, it captures hysteresis behavior that could not be predicted by the theoretical modeling alone, reinforcing the benefits of the hybrid approach. The proposed workflow is theoretically grounded and experimentally validated, allowing precise prediction of the continuum robot behavior in line with realistic observations. This accurate estimation, together with the fact that the model is geometrically agnostic, enables the proposed model to be scaled for various robotic endoscopes.
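While the paper couples a full Cosserat rod model to the tendon tensions, the basic geometry can be illustrated with a far simpler constant-curvature approximation of a single planar tendon-driven segment: a tendon tension bends the backbone into a circular arc whose tip position follows from the arc formulas. The linear tension-to-curvature map and all parameter values below are made-up stand-ins, not the paper's model.

```python
import numpy as np

L_SEG = 0.12          # backbone segment length [m]
D_OFF = 0.004         # tendon offset from the backbone centerline [m]
C_LIN = 0.9           # hypothetical linear compliance: curvature per newton [1/(m*N)]

def tip_position(tension):
    """Planar constant-curvature kinematics of one tendon-driven segment."""
    kappa = C_LIN * tension                   # assumed linear tension-to-curvature map
    if abs(kappa) < 1e-9:                     # straight backbone
        return np.array([0.0, L_SEG])
    theta = kappa * L_SEG                     # total bending angle of the arc
    x = (1.0 - np.cos(theta)) / kappa         # lateral deflection
    y = np.sin(theta) / kappa                 # height along the original axis
    return np.array([x, y])

for tension in [0.0, 1.0, 3.0, 6.0]:          # tendon tensions in newtons
    kappa = C_LIN * tension
    dl = D_OFF * kappa * L_SEG                # tendon shortening implied by this curvature
    print(tension, np.round(tip_position(tension), 4), round(dl, 5))
```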
- Published
- 2021