ACM Transactions on Human-Robot Interaction (THRI)

Latest Articles

Editorial

When Exceptions Are the Norm: Exploring the Role of Consent in HRI

HRI researchers have made major strides in developing robotic architectures that can read a limited set of social cues and produce behaviors that enhance their likeability and the sense of comfort humans feel around them. However, the cues in these models are fairly direct and the interactions largely dyadic. To capture the normative qualities... (more)

Curiosity Did Not Kill the Robot: A Curiosity-based Learning System for a Shopkeeper Robot

Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive... (more)

An Ontology for Human-Human Interactions and Learning Interaction Behavior Policies

Robots are expected to possess capabilities similar to those humans exhibit during close-proximity dyadic interaction. Humans can easily adapt to each... (more)

Bridging Multilevel Time Scales in HRI: An Analysis Framework

In this article, we present a multilevel time-scale framework for the analysis of human-robot interaction (HRI). Such a framework allows HRI scientists to model the interrelation between the measures and factors of an experiment. Our ultimate goal in introducing this framework is to unify scientific practice in the HRI community for better... (more)

Beyond R2D2: Designing Multimodal Interaction Behavior for Robot-specific Morphology

Robots are expected to enter the everyday lives of people to entertain, educate, or support them. It is therefore important that people can intuitively understand the behavior of robots. Oftentimes, the behavior of people is used as a model because of its familiarity. However, it is as yet unclear what the best approach is to design interaction... (more)

NEWS

Welcome to ACM Associate Editors

An ACM video welcoming the Associate Editors is now available.


Call for Papers: ACM THRI Special Issue on Explainable Robotic Systems

Robotic systems will inevitably become increasingly complex, yet at the same time increasingly ubiquitous. With this will come the need for them to be transparent and trustworthy: people will need to understand enough about how robots work to anticipate when they can be trusted.
Read more here.

Forthcoming Articles
Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally in order to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
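
As a rough illustration of the core mechanism the abstract describes, the following is a minimal Python sketch of maintaining a Bayesian belief over a latent, discretized trust variable and updating it after each robot action. The trust levels, transition model, and observation model are illustrative assumptions, not the authors' learned parameters.

import numpy as np

TRUST_LEVELS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # hypothetical discretization

def update_trust_belief(belief, human_intervened, trans, p_intervene):
    """One recursive Bayes step over latent trust (a sketch, not the paper's code).

    belief           -- current distribution over TRUST_LEVELS
    human_intervened -- observed human response (True if the human took over)
    trans            -- trust transition matrix, shape (T, T)
    p_intervene      -- assumed P(intervention | trust level), shape (T,)
    """
    predicted = trans.T @ belief                         # predict: trust may drift
    likelihood = p_intervene if human_intervened else 1.0 - p_intervene
    posterior = predicted * likelihood                   # correct with the observation
    return posterior / posterior.sum()

# Example: uniform prior, sticky trust dynamics, interventions rarer at high trust.
n = len(TRUST_LEVELS)
trans = 0.9 * np.eye(n) + 0.1 / n
belief = update_trust_belief(np.full(n, 1.0 / n), False,
                             trans, 0.8 * (1 - TRUST_LEVELS))

The robot would then choose actions by planning over this belief; only the filtering step is sketched here.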

Using event representations to generate robot semantics

The key idea of this article is to use events as the fundamental structures for the semantic representations of a robot. Events are modeled in terms of conceptual spaces and mappings between spaces. It is shown how the semantics of major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation. An event is represented by two vectors: a force vector representing an action and a result vector representing the effect of the action. The two-vector model is then extended with thematic roles, so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be put together into semantic structures that represent the meanings of sentences.
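
To make the two-vector model concrete, here is a minimal Python sketch of an event as a force vector paired with a result vector, extended with the thematic roles the abstract mentions. The vector dimensions and the example encoding are illustrative assumptions.

from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    agent: str           # thematic role: who acts
    patient: str         # thematic role: who or what is acted upon
    force: np.ndarray    # the action, as a vector in a conceptual space
    result: np.ndarray   # the effect of the action on the patient

# "Anna pushes the door open": the force vector encodes the pushing action,
# the result vector the door's change of state; both encodings are made up here.
push = Event(agent="Anna", patient="door",
             force=np.array([1.0, 0.0]),
             result=np.array([0.0, 1.0]))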

A Survey of Robot-Assisted Language Learning (RALL)

Robot-assisted language learning (RALL) is becoming a more commonly studied area of human-robot interaction (HRI). This research draws on theories and methods from many different fields, with researchers utilizing different instructional methods, robots, and populations to evaluate the effectiveness of RALL. This survey details the characteristics of the robots used (form, voice, immediacy, non-verbal cues, and personalization) along with study implementations, and discusses the research findings. It also analyzes robot effectiveness. While research clearly shows that robots can support native and foreign language acquisition, it has been unclear what benefits robots provide over computer-assisted language learning. This survey examines the results of relevant studies from 2004 (RALL's inception) to 2017. The results suggest that robots may be uniquely suited to aid language production, with apparent benefits in comparison to other technology and as a supplement to human instruction. In addition, research consistently indicates that robots provide unique advantages in increasing learning motivation and in-task engagement and in decreasing anxiety, though long-term benefits are uncertain. Throughout this survey, future areas of exploration are suggested, with the hope that answers to these questions will allow for more robust design and implementation guidelines in RALL.

Adapt, Explain, Engage: A Study on How Social Robots Can Scaffold Second-Language Learning of Children

The use of social robots to support children in learning is receiving increasing attention. However, many questions about how a robot can assist in fostering learning are still open. One technique used by teachers is scaffolding, which refers to temporary assistance that enables the learner to move towards new skills or levels of understanding they would not reach on their own. In this paper, we ask if and how a social robot can be utilized to scaffold second-language learning in preschool children (4-7 years). More specifically, we study an adapt-and-explain scaffolding strategy in which the robot acts as a peer-like tutor that adapts the tasks and its behavior to the cognitive and affective state of the child, with or without verbal explanations of these adaptations. The results of our evaluation study with 49 preschoolers show that, while all learners benefit from the learning adaptation, slower learners in particular benefit significantly from the additional explanations. Furthermore, the robot succeeded in 're-engaging' children who had started to disengage from the learning interaction in 76% of all cases. Our findings demonstrate that a social robot equipped with suitable scaffolding mechanisms can increase engagement and learning, especially when it adapts to the individual differences of child learners.

Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics

Effective human-robot collaboration in shared autonomy requires reasoning about the intentions of the human partner. To provide meaningful assistance, the autonomy has to first correctly predict, or infer, the intended goal of the human collaborator. In this work, we present a mathematical formulation for intent inference during assistive teleoperation under shared autonomy. Our recursive Bayesian filtering approach models and fuses multiple non-verbal observations to probabilistically reason about the intended goal of the user without explicit communication. In addition to contextual observations, we model and incorporate the human agent's behavior as goal-directed actions with adjustable rationality to inform the underlying intent recognition. We validate our approach with a human-subjects study that evaluates intent-inference performance by novice subjects during shared-control operation under a variety of goal scenarios and tasks. Importantly, these studies are performed using multiple control interfaces that are typically available to users in assistive domains, which differ in the continuity and dimensionality of the issued control signals. The implications of control interface limitations on intent inference are analyzed. The results of our study show that the underlying intent inference approach directly affects shared autonomy performance, as do control interface limitations. Results further show that our approach outperforms existing solutions for intent inference in assistive teleoperation.
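
A minimal sketch of the recursive Bayesian idea, under assumed models: the belief over candidate goals is updated with a likelihood that scores how rational the user's control input is for each goal. The form of goal_likelihood and the rationality parameter beta are illustrative, not the paper's.

import numpy as np

def goal_likelihood(u, x, goal, beta=2.0):
    """Assumed model: control inputs aimed toward a goal are exponentially
    more probable; beta is an adjustable rationality parameter."""
    direction = goal - x
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    return np.exp(beta * float(u @ direction))   # higher when u points at the goal

def update_goal_belief(belief, u, x, goals, beta=2.0):
    """One recursive Bayes step over the candidate goal set."""
    lik = np.array([goal_likelihood(u, x, g, beta) for g in goals])
    posterior = belief * lik
    return posterior / posterior.sum()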

Neural-network-based memory for a social robot: Learning a memory model of human behavior from data

Many recent studies have shown that behaviors and interaction logic for social robots can be learned automatically from natural examples of human-human interaction by machine learning algorithms, with minimal input from human designers. Oftentimes in human-human interactions, a person's action depends on actions that occurred much earlier in the interaction. However, in previous approaches to data-driven social robot behavior learning, the robot decides its next action based only on a narrow window of the interaction history. In this work, an analysis of the types of memory-setting and memory-dependent actions that occur in a camera shop interaction scenario is presented. Furthermore, a robot behavior system with a recurrent neural network architecture with gated recurrent units is proposed to tackle the problem of learning relevant memories, enabling a shopkeeper robot to perform appropriate memory-dependent actions. In an offline evaluation, the proposed system performed appropriate memory-dependent actions at a significantly higher rate than a baseline system without memory. Finally, an analysis of how the recurrent neural network with gated recurrent units learned to represent memories is presented.
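
The general shape of such an architecture could look roughly like the following PyTorch sketch: a GRU consumes the encoded interaction history and a linear head predicts the robot's next action. All dimensions and the encoding of actions are assumptions for illustration.

import torch
import torch.nn as nn

class MemoryBehaviorModel(nn.Module):
    def __init__(self, input_dim=64, hidden_dim=128, num_actions=50):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, history):
        # history: (batch, timesteps, input_dim) encoded customer/robot actions
        _, h_n = self.gru(history)     # the hidden state carries the learned "memory"
        return self.head(h_n[-1])      # logits over the robot's next action

model = MemoryBehaviorModel()
logits = model(torch.randn(1, 20, 64))  # a 20-step interaction history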

Now we're talking: Learning by explaining your reasoning to a social robot

This paper presents a study in which we explored the effect of a social robot on the explanatory behavior of children (aged 6-10) while working on an inquiry learning task. In a comparative experiment, we offered children either a baseline Computer-Aided Learning (CAL) system or the same CAL system supplemented with a social robot to which they could verbally explain their thoughts. Results indicate that when children made observations in an inquiry learning context, the robot was better able to trigger elaborate explanatory behavior. First, this is shown by a longer duration of explanatory utterances by children who worked with the robot compared to the baseline CAL system. Second, a content analysis of the explanations indicated that when children talked longer, they included more relevant utterances about the task in their explanations. Third, the content analysis shows that children made more logical associations between relevant facets in their explanations when they explained to a robot than with a baseline CAL system. These results show that social robots used as extensions to CAL systems may be beneficial for triggering explanatory behavior in children, which is associated with deeper learning.

Transparency about a Robot's Lack of Human Psychological Capacities

The increasing sophistication of social robots has intensified calls for transparency about robots' machine nature. Initial research has suggested that providing children with information about robots' mechanical status does not alter children's humanlike perception of, and relationship formation with, social robots. Against this background, our study experimentally investigated the effects of transparency about a robot's lack of human psychological capacities (intelligence, self-consciousness, emotionality, identity construction, social cognition) on children's perceptions of a robot and their relationship to it. Our sample consisted of 144 children aged 8 to 9 years who interacted with the Nao robot in either a transparent or a control condition. Transparency decreased children's humanlike perception of the robot in terms of animacy, anthropomorphism, social presence, and perceived similarity. Transparency reduced child-robot relationship formation in terms of decreased trust, while children's feelings of closeness toward the robot were not affected.

The Impact of Different Levels of Autonomy and Training on Operators' Drone Control Strategies

Unmanned Aerial Vehicles (UAVs), also known as drones, have extensive applications in civilian rescue and military surveillance realms. A common drone control scheme in such applications is human supervisory control, in which human operators remotely navigate drones and direct them to conduct high-level tasks. However, different levels of autonomy in the control system and different operator training processes may affect operators' performance in terms of task success rate and efficiency. An experiment was designed and conducted to investigate such potential impacts. The results showed that a dedicated supervisory drone control interface tended toward higher rates of successful task completion than an enhanced teleoperation control interface, although this difference was not statistically significant. In addition, using Hidden Markov Models, operator behavior models were developed to further study operators' drone control strategies as a function of differing levels of autonomy. These models revealed that people trained in both supervisory and enhanced teleoperation control were not able to determine the right control action at the right time to the same degree as people trained only in the supervisory control mode. Future work is needed to determine how trust plays a role in such settings.
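
For concreteness, one standard way to use such HMMs is to score an operator's action sequence under each strategy model via the forward algorithm. The plain-NumPy sketch below assumes discrete actions and hypothetical model matrices, not the paper's fitted models.

import numpy as np

def hmm_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(action sequence | HMM).
    obs: discrete operator actions; start: (S,); trans: (S, S); emit: (S, A)."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    alpha /= scale
    log_lik = np.log(scale)
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # propagate, then weight by emission
        scale = alpha.sum()
        alpha /= scale                        # rescale to avoid underflow
        log_lik += np.log(scale)
    return log_lik

Comparing this score across models trained under different conditions is one way to attribute an observed control strategy to a training condition.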

Robot expressive motions: a survey of generation and evaluation methods

Robots of different forms and capabilities are used in a wide variety of situations; however, one point common to all robots interacting with humans is the ability to communicate with them. In addition to verbal communication or purely communicative movements, robots can also use their embodiment to generate expressive movements while achieving a task, conveying additional information to their human partners. This paper surveys the state of the art in techniques that generate whole-body expressive movements in robots and robot avatars. We consider different embodiments, such as wheeled, legged, or flying systems, and the different metrics used to evaluate the generated movements. Finally, we discuss future areas of improvement and the difficulties that must be overcome to develop truly expressive motion in artificial agents.

Robots Learning to Say 'No': Prohibition and Rejective Mechanisms in Acquisition of Linguistic Negation

'No' belongs to the first ten words used by children and embodies the first form of linguistic negation. Despite its early occurrence, the details of its acquisition remain largely unknown. The circumstance that 'no' cannot be construed as a label for perceptible objects or events puts it outside the scope of most modern accounts of language acquisition. Moreover, most symbol grounding architectures will struggle to ground the word due to its non-referential character. The presented work extends symbol grounding to encompass affect and motivation. In a study involving the child-like robot iCub, we attempt to illuminate the acquisition process of negation words. The robot is deployed in speech-wise unconstrained interaction with participants acting as its language teachers. The results corroborate the hypothesis that affect or volition plays a pivotal role in the acquisition process. Negation words are prosodically salient within prohibitive utterances and negative intent interpretations, such that they can be easily isolated from the teacher's speech signal. These words may subsequently be grounded in negative affective states. However, observations of the nature of prohibition and the temporal relationships between its linguistic and extra-linguistic components raise questions over the suitability of Hebbian-type algorithms for language grounding.
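
The Hebbian-type grounding the abstract questions can be sketched as a co-activation rule: hearing a prosodically salient word while in a negative affective state strengthens a word-to-affect association. The rule and names below are illustrative assumptions, not the study's architecture.

weights = {}  # word -> strength of association with negative affect

def hebbian_update(word, affect, lr=0.1):
    """Hebbian co-activation sketch: affect in [-1, 1]; the link grows only
    when the word co-occurs with negative affect (illustrative rule)."""
    post = max(0.0, -affect)                    # negative-affect unit activation
    weights[word] = weights.get(word, 0.0) + lr * post
    return weights[word]

hebbian_update("no", affect=-0.8)  # prohibition: word heard under negative affect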

Speech Melody Matters: How Robots Profit from Using Charismatic Speech

In this paper, we address to what extent the proverb 'the sound makes the music' also applies to human-robot interaction, and whether robots could profit from using speech characteristics similar to those used by charismatic speakers like Steve Jobs. In three empirical studies, we investigate the effects of generating robot speech with Steve Jobs's and Mark Zuckerberg's speech characteristics on the robot's persuasiveness and its impressionistic evaluation. The three studies address different human-robot interaction situations, ranging from online questionnaires to real-time interactions with a large service robot, yet all involve both behavioral measures and users' assessments. The results clearly show that robots can profit from speaking like Steve Jobs.

Towards Scalable Measures of Quality of Interaction: Motor Interference

Motor resonance, the activation of an observer's motor control system by another actor's movements, has been claimed to be an indicator of the quality of interaction. Motor interference, a consequence of the presence of resonance, can be detected by analyzing an actor's spatial movement and has been used as an indicator of motor resonance. Unfortunately, the established paradigm in which it has been measured is highly artificial in a number of ways. In the present experiment, we tested whether the experimental constraints can be relaxed towards more naturalistic behavior. Analytically, we found a multitude of researcher degrees of freedom in the literature in terms of how spatial variation is quantified. Upon implementing many of the documented analytical variations for the present study, we found them to be non-equivalent. We also tested forward-backward transitive movements for the occurrence of motor interference and found the effect to be more robust than within left-right movements. The direction of interference, however, was opposite to the one described in the literature. Because all relevant effect sizes are small, we believe that motor interference, when measured via spatial variation, is an unlikely candidate for covert embedding in naturalistic interaction scenarios.
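
One of the analytical variants such studies compare can be sketched as follows: quantify spatial variation as the standard deviation of hand positions orthogonal to the instructed movement axis. Array shapes and the choice of metric here are assumptions for illustration.

import numpy as np

def orthogonal_variation(positions, movement_axis=(1.0, 0.0, 0.0)):
    """positions: (samples, 3) hand trajectory of one trial.
    Returns the SD of displacement perpendicular to the movement axis."""
    u = np.asarray(movement_axis, dtype=float)
    u /= np.linalg.norm(u)
    centered = positions - positions.mean(axis=0)
    along = centered @ u                      # motion along the instructed axis
    ortho = centered - np.outer(along, u)     # residual, off-axis motion
    return np.linalg.norm(ortho, axis=1).std()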

Learning the Correct Robot Trajectory in Real-Time from Physical Human Interactions

We present a learning and control strategy that enables robots to harness physical human interventions to update their trajectory for the rest of an autonomous task. Within the state of the art, the robot typically reacts to human forces and torques by modifying a local segment of its trajectory, or by searching for the global trajectory offline, using either replanning or previous demonstrations. Instead, we explore a one-shot approach: here the robot updates its entire trajectory for the rest of the task in real time, without relying on multiple iterations, offline demonstrations, or replanning. Our solution is grounded in optimal control and gradient descent, and extends linear-quadratic regulator (LQR) controllers to generalize across methods that locally or globally modify the robot's underlying trajectory. In the best case, this LQR + Learning approach matches the optimal offline response to physical interactions; in the worst case, our strategy is robust to noisy and unexpected human corrections. We compare the proposed solution against other real-time strategies in a user study, and demonstrate its efficacy in terms of both objective and subjective measures.
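
As a toy illustration of the one-shot idea (not the paper's LQR derivation), a sensed human force at the current waypoint can be propagated to all remaining waypoints in a single gradient-style step. The 1-D waypoints, decay profile, and gains below are simplifying assumptions.

import numpy as np

def deform_remaining_trajectory(traj, t, human_force, alpha=0.1, smooth=5.0):
    """Update waypoints t..end from one physical correction (a toy sketch).
    traj: (T,) 1-D waypoints; human_force: sensed force at waypoint t;
    alpha: step size; smooth: how far the correction propagates downstream."""
    traj = traj.copy()
    future = np.arange(t, len(traj))
    weights = np.exp(-(future - t) / smooth)  # the correction fades downstream
    traj[future] += alpha * human_force * weights
    return traj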

A Robotic Framework to Facilitate Sensory Experiences for Children with Autism Spectrum Disorder

The diagnosis of autism spectrum disorder (ASD) in children is commonly accompanied by a diagnosis of sensory processing disorders. This paper discusses a novel framework designed to enable children with ASD to experience sensory stimuli in a setting that mediates sensory overload experiences. The setup consists of various sensory stations, together with robotic agents that navigate the stations and respond to the stimuli as they are presented. These stimuli are designed to resemble real-world scenarios that form a common part of one's daily experiences. Given the strong interest of children with ASD in technology in general and robots in particular, we attempt to utilize these robotic platforms to demonstrate socially acceptable responses to the stimuli in an interactive setting that encourages the child's social, motor, and vocal skills, while providing a diverse sensory experience. A small-scale pilot study was conducted with three children with ASD (3 males) and eight neurotypical children (5 males, 3 females), with ages ranging between 4 and 15 years. We describe the user study and analyze the results. We also discuss future directions for this work and describe improvements that could enhance the reliability of the obtained results.

ACM THRI Editorial Introduction: Representation Learning in HRI

