Animation Techniques in Human-Robot Interaction User Studies: A Systematic Literature Review
Understanding older adults' attitudes toward robots has become increasingly important as robots are introduced into various settings, such as retirement homes. We investigated whether there are age differences in both implicit and explicit attitudes toward robots after interacting with an assistive robot. Twenty-four younger and 24 older adults were recruited. Explicit attitudes were measured with self-report questionnaires both before and after interacting with the robot. State curiosity toward robots was also measured as a momentary form of explicit attitude. Implicit attitudes were measured via an implicit association test. Our results showed that 1) both older and younger adults had more positive explicit attitudes toward robots after the interaction, 2) older adults had lower state curiosity than younger adults, but their state curiosity rose to the level of younger adults' when they perceived the robot as having higher personal association, and 3) the implicit association between robots and negative words was stronger for older adults than for younger adults, suggesting that older adults held more negative implicit attitudes toward robots. The results suggest that, despite older adults' relatively more negative implicit attitudes toward robots, personally relevant positive experiences could help improve their explicit attitudes toward robots.
Robots are expected to possess capabilities similar to those humans exhibit during close-proximity dyadic interaction. Humans adapt to each other easily in a multitude of scenarios, ensuring safe and natural interaction. Although there have been attempts to mimic human motions for robot control, the motion patterns that emerge during dyadic interaction have been largely neglected. In this work, we analyze close-proximity human-human interaction and derive an ontology that describes a broad range of possible interaction scenarios by abstracting tasks and drawing on insights from attention theory. This ontology enables us to group interaction behaviors into separate cases, each of which can be represented by a graph. Using inverse reinforcement learning, we find a unique interaction model, represented as a combination of cost functions, for each case. The ontology offers a unified, generic approach to categorically analyzing and learning close-proximity interaction behaviors that can enhance natural human-robot collaboration.
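The learned models are not given in this abstract, but the idea of representing an interaction case as a weighted combination of cost functions, and selecting the behavior that minimizes the combined cost, can be sketched as follows. All cost terms, weights, and the 1-D position representation here are illustrative assumptions, not taken from the paper:

```python
# Sketch: score candidate robot trajectories with a weighted sum of
# hand-crafted cost functions, as an IRL-learned interaction model would.
# Trajectories are simplified to lists of 1-D positions; the cost terms
# and weights below are hypothetical.

def proximity_cost(traj, partner_pos):
    # Penalize coming closer than a 0.5 m comfort distance to the partner.
    return sum(max(0.0, 0.5 - abs(p - partner_pos)) for p in traj)

def effort_cost(traj):
    # Penalize large step-to-step movement (total path length).
    return sum(abs(b - a) for a, b in zip(traj, traj[1:]))

def total_cost(traj, partner_pos, weights):
    # Weighted combination of cost functions defines one interaction case.
    return (weights["proximity"] * proximity_cost(traj, partner_pos)
            + weights["effort"] * effort_cost(traj))

def best_trajectory(candidates, partner_pos, weights):
    # Choose the candidate behavior with the lowest combined cost.
    return min(candidates, key=lambda t: total_cost(t, partner_pos, weights))
```

With weights recovered per case (e.g. by IRL), the same scoring machinery can rank candidate behaviors for any scenario in that case.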
This study proposes, develops and evaluates methods for modeling the eye-gaze direction and head orientation of a person in multi-party open-world dialogues, as a function of low-level cues exhibited by the participants. These signals include speech activity, eye-gaze direction and head orientation, all of which may be estimated in real-time during the interaction. By utilizing these signals and novel data representations suitable for the task and context, the developed methods can generate plausible candidate gaze targets in real-time. One of the models is based on a fully connected neural network (NN), while the other is based on a Long Short-Term Memory (LSTM) network. The trained models are compared against a rule-based baseline model. We performed an extensive evaluation of the proposed methods by investigating the contribution of the different predictors to the accurate generation of candidate gaze targets. The results show that the methods can accurately generate candidate gaze targets when the person being modeled is in a listening state. However, when the person being modeled is in a speaking state, the methods yield significantly lower performance. This result stems from the fact that the methods do not consider the goals of the person being modeled.
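The rule-based baseline is not specified in this abstract; a minimal sketch of such a heuristic, which gazes at the active speaker while listening and otherwise falls back on recent head-orientation targets, might look like this. The signal names and the fallback rule are assumptions for illustration:

```python
def candidate_gaze_target(is_speaking, speech_activity, head_targets):
    """Hypothetical rule-based baseline for candidate gaze-target generation.
    `speech_activity` maps participant id -> True if currently speaking;
    `head_targets` is the recent history of the modeled person's own
    head-orientation targets (most recent last)."""
    if not is_speaking:
        # While listening: attend to an active speaker, if any.
        speakers = [p for p, active in speech_activity.items() if active]
        if speakers:
            return speakers[0]
    # While speaking (or in silence): keep the most frequent recent target.
    return max(set(head_targets), key=head_targets.count)
```

This mirrors the paper's finding qualitatively: the listening-state rule is well grounded in observable cues, whereas the speaking-state fallback has no access to the speaker's goals.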
High-stress environments, such as first response or a NASA control room, require optimal task performance, as a single mistake may cause monetary loss or even the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm in order to augment the human's abilities and increase task performance. Such teaming paradigms require the robot to interact appropriately with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. Assessing the human's workload state may also allow for dynamic task (re-)allocation, as a robot can predict whether a task would overload the human and, if so, allocate it elsewhere. A diagnostic workload assessment algorithm that accurately estimates workload using results from two evaluations, one peer-based and one supervisory-based, is presented. This algorithm is an initial step toward adaptive robots that can adapt their interactions and intelligently (re-)allocate tasks in dynamic domains.
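The assessment algorithm itself is not detailed in this abstract. A toy sketch of the overall idea, estimating a scalar workload from normalized metrics and using a threshold to decide task allocation, could look like the following; every feature name, weight, and threshold here is hypothetical:

```python
def estimate_workload(features, weights):
    """Weighted-sum workload estimate, clamped to [0, 1].
    `features` and `weights` map metric name -> normalized value;
    the metric set is illustrative, not from the paper."""
    score = sum(weights[k] * features[k] for k in weights)
    return min(1.0, max(0.0, score))

def allocate_task(task_load, current_workload, threshold=0.8):
    # If adding the task would push the human past the overload
    # threshold, allocate it elsewhere (e.g. to the robot).
    if current_workload + task_load > threshold:
        return "robot"
    return "human"
```

The point of the sketch is the coupling: a continuous workload estimate feeds a simple decision rule, which is what makes dynamic (re-)allocation possible.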
Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot
What do older adults and clinicians think about traditional mobility aids and exoskeleton technology?
In recent years, researchers have explored gesture-based interfaces to control robots in non-traditional ways. These interfaces require the ability to track the body movements of the user in 3D. Deploying mo-cap systems for tracking tends to be costly, intrusive, and requires a clear line of sight, making them ill-adapted for applications that need fast deployment, such as artistic performance and emergency response. In this paper, we use consumer-grade armbands, capturing orientation information and muscle activity, to interact with a robotic system through a state machine controlled by a body motion classifier. To compensate for the low quality of the information from these sensors, and to allow a wider range of dynamic control, our approach relies on machine learning. We train our classifier directly on the user to recognize (within minutes) his or her physiological states, or moods. We demonstrate that, in addition to guaranteeing faster field deployment, our algorithm performs better than a memoryless neural network. We then demonstrate the ease of use and robustness of our system with a study on 27 participants, each creating a hybrid dance performance with a swarm of desk robots from their own body dynamics.
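The per-user training workflow described above can be illustrated with a stand-in classifier: fit on a few minutes of labeled armband feature vectors from one user, then map new readings to a mood label. The nearest-centroid learner below is an assumption for illustration, not the paper's actual model:

```python
from collections import defaultdict

class MoodClassifier:
    """Nearest-centroid classifier trained per user on armband feature
    vectors (e.g. orientation + muscle-activity readings). A hypothetical
    stand-in: the paper's learner is not specified in this abstract."""

    def __init__(self):
        self.centroids = {}

    def fit(self, samples, labels):
        # Average the feature vectors of each mood label into a centroid.
        sums = defaultdict(list)
        counts = defaultdict(int)
        for x, y in zip(samples, labels):
            if not sums[y]:
                sums[y] = list(x)
            else:
                sums[y] = [a + b for a, b in zip(sums[y], x)]
            counts[y] += 1
        self.centroids = {y: [v / counts[y] for v in s]
                          for y, s in sums.items()}

    def predict(self, x):
        # Return the mood whose centroid is closest (squared distance).
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda y: sq_dist(self.centroids[y]))
```

The predicted label would then drive the state machine that selects the robot swarm's behavior; because training data comes from the current user only, calibration stays within minutes.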