
ACM Transactions on Human-Robot Interaction (THRI)

Latest Articles

Blossom: A Handcrafted Open-Source Robot

Blossom is an open-source social robotics platform responding to three challenges in human-robot interaction (HRI) research: (1) Designing, manufacturing, and programming social robots requires a high level of technical knowledge; (2) social robot designs are fixed in appearance and movement capabilities, making them hard to adapt to a specific...

Empathic Robot for Group Learning: A Field Study

This work explores a group learning scenario with an autonomous empathic robot. We address two research questions: (1) Can an autonomous robot designed with empathic competencies foster collaborative learning in a group context? (2) Can an empathic robot sustain positive educational outcomes in long-term collaborative learning interactions with...

Communicating Dominance in a Nonanthropomorphic Robot Using Locomotion

Dominance is a key aspect of interpersonal relationships. To what extent do nonverbal indicators related to dominance status translate to a...

Reflecting on the Presence of Science Fiction Robots in Computing Literature

Depictions of robots and AIs in popular science fiction movies and shows have the potential to showcase visions of Human-Robot Interaction (HRI) to...

NEWS

Welcome to ACM Associate Editors

An ACM video welcoming Associate Editors is now available.

Special issue on Representation Learning for Human and Robot Cognition

Intelligent robots are rapidly moving to the center of human environments; they collaborate with human users in applications that require high-level cognitive functions, allowing them to understand and learn from human behavior within different Human-Robot Interaction (HRI) contexts.

Forthcoming Articles

Animation Techniques in Human-Robot Interaction User Studies: a Systematic Literature Review

Age difference in perceived ease of use, curiosity, and implicit negative attitude toward robots

Understanding older adults' attitudes toward robots has become increasingly important as robots have been introduced in various settings, such as retirement homes. We investigated whether there are age differences in both implicit and explicit attitudes toward robots after interacting with an assistive robot. Twenty-four younger and 24 older adults were recruited. Explicit attitudes were measured by self-reported questionnaires both before and after interacting with the robot. State curiosity toward robots was also measured as a momentary form of explicit attitude. Implicit attitude was measured via an implicit association test. Our results showed that (1) both older and younger adults had more positive explicit attitudes toward robots after interaction; (2) older adults had lower state curiosity than younger adults, although their state curiosity rose to the level of younger adults when they perceived the robot as more personally relevant; and (3) the implicit association between robots and negative words was stronger for older adults than for younger adults, suggesting that older adults had more implicit negative attitudes toward robots. The results suggest that, despite older adults' relatively more negative implicit attitudes toward robots, personally relevant positive experiences could help improve their explicit attitudes toward robots.

An Ontology for Human-Human Interactions and Learning Interaction Behavior Policies

Robots are expected to possess capabilities similar to those humans exhibit during close-proximity dyadic interaction. Humans can easily adapt to each other in a multitude of scenarios, ensuring safe and natural interaction. Even though there have been attempts to mimic human motions for robot control, understanding the motion patterns that emerge during dyadic interaction has been neglected. In this work, we analyze close-proximity human-human interaction and derive an ontology that describes a broad range of possible interaction scenarios by abstracting tasks and using insights from attention theory. This ontology enables us to group interaction behaviors into separate cases, each of which can be represented by a graph. Using inverse reinforcement learning, we find unique interaction models, represented as combinations of cost functions, for each case. The ontology offers a unified and generic approach to categorically analyzing and learning close-proximity interaction behaviors that can enhance natural human-robot collaboration.
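As a rough sketch of the idea of per-case cost models, the snippet below gives each interaction case from an ontology its own weighted combination of cost features and selects the candidate motion that minimizes the learned cost. The case names, feature functions, and weights are all illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: each interaction "case" gets a learned cost model,
# expressed as a weighted combination of hand-designed cost features.
# Feature definitions and weights below are invented for illustration.

def proximity_cost(motion):
    # Prefer roughly 0.8 m of separation (assumed comfortable distance).
    return abs(motion["distance"] - 0.8)

def effort_cost(motion):
    # Penalize fast motion quadratically.
    return motion["speed"] ** 2

COST_WEIGHTS = {
    "handover": {"proximity": 2.0, "effort": 0.5},  # cases from the ontology
    "passing":  {"proximity": 0.5, "effort": 1.5},
}

def total_cost(case, motion):
    w = COST_WEIGHTS[case]
    return w["proximity"] * proximity_cost(motion) + w["effort"] * effort_cost(motion)

def best_motion(case, candidates):
    # Choose the candidate motion with minimal learned cost for this case.
    return min(candidates, key=lambda m: total_cost(case, m))

candidates = [
    {"distance": 0.8, "speed": 0.2},
    {"distance": 1.5, "speed": 0.1},
]
print(best_motion("handover", candidates))  # prefers the ~0.8 m candidate
```

In the paper the weights would come from inverse reinforcement learning over observed human-human trajectories; here they are fixed by hand only to show the selection mechanism.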

Modeling of Human Visual Attention in Multiparty Open-World Dialogues

This study proposes, develops and evaluates methods for modeling the eye-gaze direction and head orientation of a person in multi-party open-world dialogues, as a function of low-level cues exhibited by the participants. These signals include speech activity, eye-gaze direction and head orientation, all of which may be estimated in real-time during the interaction. By utilizing these signals and novel data representations suitable for the task and context, the developed methods can generate plausible candidate gaze targets in real-time. One of the models is based on a fully-connected Neural Network (NN) while the other is based on a Long Short-Term Memory network (LSTM). The trained models are compared against a rule-based baseline model. We performed an extensive evaluation of the proposed methods by investigating the contribution of the different predictors to the accurate generation of candidate gaze targets. The results show that the methods can accurately generate candidate gaze targets when the person being modeled is in a listening state. However, when the person being modeled is in a speaking state the methods yield significantly lower performance. This result stems from the fact that the methods do not consider the goals of the person being modeled.
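As a rough illustration of the prediction task (not the authors' trained models), the sketch below scores candidate gaze targets from a per-frame cue vector using a single fully-connected layer; the feature layout, dimensions, and random weights are all assumptions made for the example.

```python
# Assumption-laden sketch: map low-level cues (speech activity, gaze
# direction, head orientation of each participant) to a distribution over
# candidate gaze targets. Weights are random stand-ins for a trained model.
import numpy as np

rng = np.random.default_rng(0)

N_TARGETS = 4    # candidate gaze targets (e.g. 3 participants + a shared object)
N_FEATURES = 9   # per-frame cue vector (layout is an assumption)

# One fully-connected layer standing in for the trained network.
W = rng.normal(size=(N_TARGETS, N_FEATURES))
b = np.zeros(N_TARGETS)

def candidate_gaze_target(cues):
    """Return the highest-scoring target index and the score distribution."""
    scores = W @ cues + b
    # Softmax turns raw scores into a confidence distribution.
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

cues = rng.normal(size=N_FEATURES)  # a single frame of low-level signals
target, probs = candidate_gaze_target(cues)
```

The paper's LSTM variant would additionally carry hidden state across frames, which is what lets it exploit the temporal structure of the cue signals.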

A Diagnostic Human Workload Assessment Algorithm for Collaborative and Supervisory Human-Robot Teams

High-stress environments, such as first-response or a NASA control room, require optimal task performance, as a single mistake may cause monetary loss or even the loss of human life. Robots can partner with humans in a collaborative or supervisory paradigm in order to augment the human's abilities and increase task performance. Such teaming paradigms require the robot to appropriately interact with the human without decreasing either's task performance. Workload is directly correlated with task performance; thus, a robot may use a human's workload state to modify its interactions with the human. Assessing the human's workload state may also allow for dynamic task (re-)allocation, as a robot can predict if a task may overload the human and if so, allocate it elsewhere. A diagnostic workload assessment algorithm that accurately estimates workload using results from two evaluations, one peer-based and one supervisory-based, is presented. This algorithm is an initial step towards adaptive robots that can adapt their interactions and intelligently (re-)allocate tasks in dynamic domains.
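To make the (re-)allocation idea concrete, here is a minimal sketch under invented assumptions: workload is a weighted sum of normalized metrics, and a task is allocated to the robot when the predicted post-task workload would exceed a threshold. The metric names, weights, and threshold are all hypothetical, not the paper's algorithm.

```python
# Illustrative only: combine behavioral/physiological metrics into a single
# workload estimate and use it to decide task allocation. All constants here
# are assumptions for the sake of the example.

WEIGHTS = {"heart_rate": 0.4, "response_time": 0.35, "error_rate": 0.25}
OVERLOAD_THRESHOLD = 0.7  # above this, offload new tasks to the robot

def estimate_workload(metrics):
    """Weighted sum of normalized (0-1) workload metrics."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def allocate_task(current_metrics, task_demand):
    """Predict post-task workload; allocate away from the human if overloaded."""
    predicted = estimate_workload(current_metrics) + task_demand
    return "robot" if predicted > OVERLOAD_THRESHOLD else "human"

metrics = {"heart_rate": 0.5, "response_time": 0.4, "error_rate": 0.2}
print(allocate_task(metrics, 0.3))  # light task stays with the human
print(allocate_task(metrics, 0.4))  # heavier task goes to the robot
```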

Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot

What do older adults and clinicians think about traditional mobility aids and exoskeleton technology?

Engaging with Robotic Swarms: Commands from Expressive Motion

In recent years, researchers have explored gesture-based interfaces to control robots in non-traditional ways. These interfaces require the ability to track the body movements of the user in 3D. Deploying mo-cap systems for tracking tends to be costly, intrusive, and requires a clear line of sight, making them ill-adapted for applications that need fast deployment, such as artistic performance and emergency response. In this paper, we use consumer-grade armbands, capturing orientation information and muscle activity, to interact with a robotic system through a state machine controlled by a body-motion classifier. To compensate for the low quality of information from these sensors, and to allow a wider range of dynamic control, our approach relies on machine learning. We train our classifier directly on the user to recognize (within minutes) his or her physiological states, or moods. We demonstrate that, on top of guaranteeing faster field deployment, our algorithm performs better than a memoryless neural network. We then demonstrate the ease of use and robustness of our system with a study of 27 participants, each creating a hybrid dance performance with a swarm of desk robots from their own body dynamics.
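A hedged sketch of the pipeline described above: a lightweight classifier, trained on a few user-provided samples, maps armband features (orientation plus muscle activity) to a "mood" label, which then drives a state machine issuing swarm commands. A nearest-centroid classifier stands in for the paper's learned model; the feature layout, labels, and command mapping are invented.

```python
# Hypothetical sketch, not the authors' implementation: per-user training on
# a handful of samples, then classification by nearest centroid.
import math

def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labeled_samples):
    """labeled_samples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in labeled_samples.items()}

def classify(model, features):
    # Assign the label whose centroid is closest in feature space.
    return min(model, key=lambda label: math.dist(features, model[label]))

# Hypothetical mood-to-command mapping for the swarm's state machine.
SWARM_COMMANDS = {"calm": "disperse", "energetic": "swirl"}

model = train({
    "calm":      [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]],
    "energetic": [[0.9, 0.8, 0.9], [0.8, 0.9, 0.8]],
})
mood = classify(model, [0.85, 0.9, 0.8])
print(SWARM_COMMANDS[mood])  # prints: swirl
```

Training on the user directly, as the abstract describes, is what makes minutes-scale deployment plausible: only a few labeled samples per mood are needed before the state machine can start issuing commands.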

