ACM Transactions on Human-Robot Interaction (THRI)

Latest Articles

When Exceptions Are the Norm: Exploring the Role of Consent in HRI

HRI researchers have made major strides in developing robotic architectures that are capable of reading a limited set of social cues and producing behaviors that enhance their likeability and feelings of comfort among humans. However, the cues in these models are fairly direct and the interactions largely dyadic. To capture the normative qualities...

Curiosity Did Not Kill the Robot: A Curiosity-based Learning System for a Shopkeeper Robot

Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive...

An Ontology for Human-Human Interactions and Learning Interaction Behavior Policies

Robots are expected to possess capabilities similar to those humans exhibit during close-proximity dyadic interaction. Humans can easily adapt to each...

Bridging Multilevel Time Scales in HRI: An Analysis Framework

In this article, we present a multi-level time scales framework for the analysis of human-robot interaction (HRI). Such a framework allows HRI scientists to model the inter-relation between measures and factors of an experiment. Our final goal with the introduction of this framework is to unify scientific practice in the HRI community for better...

Beyond R2D2: Designing Multimodal Interaction Behavior for Robot-specific Morphology

Robots are expected to enter the everyday lives of people to entertain, educate, or support them. It is therefore important that people can intuitively understand the behavior of robots. Oftentimes, the behavior of people is used as a model because of its familiarity. However, it is as yet unclear what the best approach is to design interaction...

NEWS

Welcome to ACM Associate Editors

The ACM video welcome to Associate Editors is now available.

Special issue on Representation Learning for Human and Robot Cognition

Intelligent robots are rapidly moving to the center of human environments; they collaborate with human users in applications that require high-level cognitive functions, allowing them to understand and learn from human behavior within different Human-Robot Interaction (HRI) contexts.

Forthcoming Articles
Using event representations to generate robot semantics

The key idea of this article is to use events as the fundamental structures for the semantic representations of a robot. Events are modeled in terms of conceptual spaces and mappings between spaces. It is shown how the semantics of major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation. An event is represented by two vectors: a force vector representing an action and a result vector representing the effect of the action. The two-vector model is then extended with thematic roles so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be put together into semantic structures that represent the meanings of sentences.
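
To make the two-vector model concrete, the following is a minimal Python sketch of an event built from an agent, an action (force vector), a patient, and a result vector. All names and example values are illustrative assumptions; the article does not prescribe an implementation.

```python
# Minimal sketch of the two-vector event model; names and values are
# illustrative only, not taken from the article.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    agent: str            # entity exerting the force (e.g., "robot")
    patient: str          # entity acted upon (e.g., "door")
    force: np.ndarray     # force vector representing the action
    result: np.ndarray    # result vector representing the effect on the patient

    def describe(self) -> str:
        # A sentence meaning is assembled from the event's components.
        return (f"{self.agent} acts on {self.patient}: "
                f"force={self.force}, result={self.result}")

# Example: "the robot pushes the door, and the door moves"
push = Event(agent="robot", patient="door",
             force=np.array([1.0, 0.0]),   # direction and magnitude of the push
             result=np.array([0.9, 0.0]))  # resulting displacement of the door
print(push.describe())
```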

A Survey of Robot-Assisted Language Learning (RALL)

Robot-assisted language learning (RALL) is becoming a more commonly studied area of human-robot interaction (HRI). This research draws on theories and methods from many different fields, with researchers utilizing different instructional methods, robots, and populations to evaluate the effectiveness of RALL. This survey details the characteristics of the robots used (form, voice, immediacy, non-verbal cues, and personalization) along with study implementations, discussing research findings. It also analyzes robot effectiveness. While research clearly shows that robots can support native and foreign language acquisition, it has been unclear what benefits robots provide over computer-assisted language learning. This survey examines the results of relevant studies from 2004 (RALL's inception) to 2017. Results suggest that robots may be uniquely suited to aid in language production, with apparent benefits in comparison to other technology and as a supplement to human instruction. Research also consistently indicates that robots provide unique advantages in increasing learning motivation and in-task engagement and in decreasing anxiety, though long-term benefits are uncertain. Throughout this survey, future areas of exploration are suggested, with the hope that answers to these questions will allow for more robust design and implementation guidelines in RALL.

Neural-network-based memory for a social robot: Learning a memory model of human behavior from data

Many recent studies have shown that behaviors and interaction logic for social robots can be learned automatically from natural examples of human-human interaction by machine learning algorithms, with minimal input from human designers. Oftentimes in human-human interactions, a person's action depends on actions that occurred much earlier in the interaction. However, in previous approaches to data-driven social robot behavior learning, the robot decides its next action based only on a narrow window of the interaction history. In this work, an analysis of the types of memory-setting and memory-dependent actions that occur in a camera shop interaction scenario is presented. Furthermore, a robot behavior system based on a recurrent neural network with gated recurrent units is proposed to tackle the problem of learning relevant memories, enabling a shopkeeper robot to perform appropriate memory-dependent actions. In an offline evaluation, the proposed system performed appropriate memory-dependent actions at a significantly higher rate than a baseline system without memory. Finally, an analysis of how the recurrent neural network with gated recurrent units learned to represent memories is presented.
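
As a rough illustration of this kind of architecture, the PyTorch sketch below feeds a sequence of interaction features through gated recurrent units and scores the robot's possible next actions from the final hidden state. Layer sizes, feature dimensions, and all names are assumptions, not details from the paper.

```python
# Illustrative GRU-based action predictor in the spirit of the system above;
# sizes and names are assumed, not taken from the paper.
import torch
import torch.nn as nn

class MemoryActionModel(nn.Module):
    def __init__(self, n_features: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Gated recurrent units carry information from earlier in the
        # interaction, so the next action need not depend only on a narrow
        # window of recent inputs.
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time, n_features) sequence of interaction features
        _, h_last = self.gru(history)        # final hidden state summarizes memory
        return self.head(h_last.squeeze(0))  # scores over possible robot actions

model = MemoryActionModel(n_features=32, n_actions=10)
scores = model(torch.randn(1, 50, 32))  # 50 time steps of interaction history
print(scores.shape)  # torch.Size([1, 10])
```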

Now we're talking: Learning by explaining your reasoning to a social robot

This paper presents a study in which we explored the effect of a social robot on the explanatory behavior of children (aged 6-10) while working on an inquiry learning task. In a comparative experiment, we offered children either a baseline Computer Aided Learning (CAL) system or the same CAL system supplemented with a social robot to which they could verbally explain their thoughts. Results indicate that when children made observations in an inquiry learning context, the robot was better able to trigger elaborate explanatory behavior. First, this is shown by a longer duration of explanatory utterances by children who worked with the robot compared to the baseline CAL system. Second, a content analysis of the explanations indicated that when children talked longer, they included more relevant utterances about the task in their explanations. Third, the content analysis shows that children made more logical associations between relevant facets in their explanations when explaining to a robot compared to a baseline CAL system. These results show that social robots used as extensions to CAL systems may be beneficial for triggering explanatory behavior in children, which is associated with deeper learning.

The Impact of Different Levels of Autonomy and Training on Operators' Drone Control Strategies

Robot expressive motions: a survey of generation and evaluation methods

Speech Melody Matters -- How Robots Profit from Speaking like Steve Jobs

In this paper, we address to what extent the proverb "the sound makes the music" also applies to human-robot interaction, and whether robots could profit from using speech characteristics similar to those used by charismatic speakers like Steve Jobs. In three empirical studies, we investigate the effects of using Steve Jobs's and Mark Zuckerberg's speech characteristics during the generation of robot speech on the robot's persuasiveness and its impressionistic evaluation. The three studies address different human-robot interaction situations, ranging from online questionnaires to real-time interactions with a large service robot, yet all involve both behavioral measures and users' assessments. The results clearly show that robots can profit from speaking like Steve Jobs.

Towards Scalable Measures of Quality of Interaction: Motor Interference

Motor resonance, the activation of an observer's motor control system by another actor's movements, has been claimed to be an indicator of the quality of interaction. Motor interference, a consequence of the presence of resonance, can be detected by analyzing an actor's spatial movement and has been used as an indicator of motor resonance. Unfortunately, the established paradigm in which it has been measured is highly artificial in a number of ways. In the present experiment, we tested whether the experimental constraints can be relaxed towards more naturalistic behavior. Analytically, we found a multitude of researcher degrees of freedom in the literature in terms of how spatial variation is quantified. Upon implementing many of the documented analytical variations for the present study, we found them to be non-equivalent. We also tested forth-back transitive movements for the occurrence of motor interference and found the effect to be more robust than in left-right movements. The direction of the interference, however, was opposite to the one described in the literature. Because all relevant effect sizes are small, we believe that motor interference, when measured via spatial variation, is an unlikely candidate for covert embedding in naturalistic interaction scenarios.
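
One common way spatial variation is quantified in the motor-interference literature is the variance of movement orthogonal to the principal movement axis; the Python sketch below implements that single variant. As the abstract stresses, many non-equivalent analytical choices exist, so this is only one illustrative option, not the paper's definitive measure.

```python
# One illustrative measure of spatial variation for motor interference:
# variance of hand positions orthogonal to the main movement axis.
import numpy as np

def orthogonal_variance(trajectory: np.ndarray) -> float:
    """trajectory: (n_samples, 3) hand positions for one movement."""
    centered = trajectory - trajectory.mean(axis=0)
    # Principal axis of the movement via SVD (first right singular vector).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    main_axis = vt[0]
    # Remove the component along the main axis; the variance of the residual
    # is the "spatial variation" attributed to interference.
    residual = centered - np.outer(centered @ main_axis, main_axis)
    return float(np.mean(np.sum(residual**2, axis=1)))

# Example: synthetic left-right movement with small orthogonal jitter.
t = np.linspace(0, 1, 200)
traj = np.c_[np.sin(2 * np.pi * t), 0.05 * np.random.randn(200), np.zeros(200)]
print(orthogonal_variance(traj))
```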

Learning the Correct Robot Trajectory in Real-Time from Physical Human Interactions

We present a learning and control strategy that enables robots to harness physical human interventions to update their trajectory for the rest of an autonomous task. Within the state of the art, the robot typically reacts to human forces and torques by modifying a local segment of its trajectory, or by searching for the global trajectory offline, using either replanning or previous demonstrations. Instead, we explore a one-shot approach: here the robot updates its entire trajectory for the rest of the task in real time, without relying on multiple iterations, offline demonstrations, or replanning. Our solution is grounded in optimal control and gradient descent, and extends linear-quadratic regulator (LQR) controllers to generalize across methods that locally or globally modify the robot's underlying trajectory. In the best case, this LQR + Learning approach matches the optimal offline response to physical interactions; in the worst case, our strategy is robust to noisy and unexpected human corrections. We compare the proposed solution against other real-time strategies in a user study and demonstrate its efficacy in terms of both objective and subjective measures.
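
For intuition, here is a simplified Python sketch of a one-shot update in this spirit: a human force applied at one waypoint deforms the remainder of the trajectory with a smoothly decaying profile, with no replanning or offline demonstrations. The gain and decay profile are assumptions that stand in for the paper's LQR-based derivation.

```python
# Simplified one-shot trajectory update from a physical human correction;
# the gain and linear decay profile are illustrative assumptions.
import numpy as np

def update_trajectory(traj: np.ndarray, k: int, human_force: np.ndarray,
                      gain: float = 0.1) -> np.ndarray:
    """traj: (T, dof) planned waypoints; k: index where the human pushed."""
    new_traj = traj.copy()
    T = len(traj)
    # The deformation decays smoothly from the contact point to the goal, so
    # the entire rest of the task is updated in one shot (no replanning).
    decay = np.linspace(1.0, 0.0, T - k)
    new_traj[k:] += gain * decay[:, None] * human_force
    return new_traj

# Example: a straight-line plan, corrected by a sideways push at waypoint 5.
traj = np.stack([np.linspace(0, 1, 20), np.zeros(20)], axis=1)
corrected = update_trajectory(traj, k=5, human_force=np.array([0.0, 2.0]))
print(corrected[5], corrected[-1])  # goal waypoint is preserved (decay is 0)
```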

