ACM Transactions on Human-Robot Interaction (THRI)

Latest Articles

Robot Expressive Motions: A Survey of Generation and Evaluation Methods

Robots of different forms and capabilities are used in a wide variety of situations; however, one point common to all robots interacting with humans is their ability to communicate. In addition to verbal communication or purely communicative movements, robots can also use their embodiment to generate expressive movements while... (more)

Using Event Representations to Generate Robot Semantics

Most semantic models employed in human-robot interactions concern how a robot can understand commands, but in this article the aim is to present a... (more)

The Impact of Different Levels of Autonomy and Training on Operators’ Drone Control Strategies

Unmanned Aerial Vehicles (UAVs), also known as drones, have extensive applications in civilian... (more)

Robots Learning to Say “No”: Prohibition and Rejective Mechanisms in Acquisition of Linguistic Negation

“No” is one of the first ten words used by children and embodies the first form of linguistic negation. Despite its early occurrence, the details of its acquisition remain largely unknown. The circumstance that “no” cannot be construed as a label for perceptible objects or events puts it outside the scope of most... (more)

Neural-network-based Memory for a Social Robot: Learning a Memory Model of Human Behavior from Data

Many recent studies have shown that behaviors and interaction logic for social robots can be learned automatically from natural examples of human-human interaction by machine learning algorithms, with minimal input from human designers [1--4]. In this work, we exceed the capabilities of the previous approaches by giving the robot memory. In earlier... (more)

NEWS

Welcome to ACM Associate Editors

An ACM video welcoming Associate Editors is now available.


Call for Papers: ACM THRI Special Issue on Explainable Robotic Systems

Robotic systems will inevitably become increasingly complex, yet at the same time increasingly ubiquitous. With this will come the need for them to be transparent and trustworthy: the need for people to understand enough about how robots work to anticipate when they can be trusted.
Read more here.

Forthcoming Articles
Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning

Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human trust, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). In our studies, the robot builds human trust by manipulating low-risk objects first. Interestingly, the robot sometimes fails intentionally in order to modulate human trust and achieve the best team performance. These results show that the trust-POMDP calibrates trust to improve human-robot team performance over the long term. Further, they highlight that maximizing trust alone does not always lead to the best performance.
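
As an illustration, here is a minimal sketch of the Bayesian belief update at the core of such a trust-POMDP, assuming a discretized trust variable and hypothetical transition/observation models (the paper learns its models from data):

```python
import numpy as np

# Hypothetical discretization of the latent trust variable; the paper's
# actual models are learned from interaction data.
TRUST_LEVELS = np.linspace(0.0, 1.0, 7)

def belief_update(belief, action, observation, trans_model, obs_model):
    """One Bayesian filtering step over latent human trust.

    belief      : array over TRUST_LEVELS, current P(trust)
    trans_model : trans_model[action][i, j] = P(trust_j | trust_i, action)
    obs_model   : obs_model[action][j, o]   = P(observation o | trust_j, action)
    """
    predicted = belief @ trans_model[action]                   # predict trust after the robot acts
    posterior = predicted * obs_model[action][:, observation]  # weight by the human's response
    return posterior / posterior.sum()                         # renormalize to a distribution
```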

A Survey of Robot-Assisted Language Learning (RALL)

Robot-assisted language learning (RALL) is becoming a more commonly studied area of human-robot interaction (HRI). This research draws on theories and methods from many different fields, with researchers utilizing different instructional methods, robots, and populations to evaluate the effectiveness of RALL. This survey details the characteristics of the robots used (form, voice, immediacy, non-verbal cues, and personalization) along with study implementations, discussing research findings. It also analyses robot effectiveness. While research clearly shows that robots can support native and foreign language acquisition, it has been unclear what benefits robots provide over computer-assisted language learning. This survey examines the results of relevant studies from 2004 (RALL's inception) to 2017. Results suggest that robots may be uniquely suited to aid in language production, with apparent benefits in comparison to other technology and as a supplement to human instruction. In addition, research consistently indicates that robots provide unique advantages in increasing learning motivation and in-task engagement, and in decreasing anxiety, though long-term benefits are uncertain. Throughout this survey, future areas of exploration are suggested, with the hope that answers to these questions will allow for more robust design and implementation guidelines in RALL.

Multimodal Physiological Signals for Workload Prediction in Robot-Assisted Surgery

Monitoring surgeons' workload levels is critical in robot-assisted surgery to control their work burden, adapt the system interface, and evaluate the robotic system's ease of use. Current practice for measuring cognitive load relies on interruptive questionnaires that are subjective and disrupt workflow. To address this limitation, a computational framework to assess workload was implemented in the domain of telerobotic surgery. The framework leverages multiple wireless sensors for real-time surgeon cognitive-state assessment and predictive models for automatic workload estimation. Twelve surgeons' physiological signals (e.g., heart rate variability, electrodermal, and electroencephalogram activity) were recorded while performing surgical skills tasks on the da Vinci Skills Simulator. The selected surgical skills tasks covered a wide range of difficulty levels and required mental and visual processing as well as human motor control. These multimodal signals were fused using independent component analysis, and the predicted results were compared to the ground-truth workload level based on NASA Task Load Index (NASA-TLX) scores. It was found that our multisensor approach outperformed individual signals and could correctly predict the cognitive workload level 83.2% of the time during basic and complex surgical skills tasks.
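
A rough sketch of such a fusion-and-prediction pipeline, assuming pre-extracted per-trial features and a median split on NASA-TLX scores as labels; the feature shapes and classifier choice here are illustrative, not the paper's exact setup:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_trials, n_features) stacked HRV, electrodermal, and EEG features;
# y: binary workload labels, e.g., a median split of NASA-TLX scores.
X = np.random.randn(120, 40)                 # placeholder data for illustration only
y = (np.random.rand(120) > 0.5).astype(int)

workload_clf = make_pipeline(
    StandardScaler(),
    FastICA(n_components=10, random_state=0),  # fuse multimodal features into independent components
    LogisticRegression(),
)
print(cross_val_score(workload_clf, X, y, cv=5).mean())  # accuracy vs. the TLX-based ground truth
```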

Adapt, Explain, Engage - A Study on How Social Robots Can Scaffold Second-Language Learning of Children

The use of social robots to support children in learning is receiving increasing attention. However, many questions about how a robot can assist in fostering learning are still open. One technique used by teachers is scaffolding, which refers to temporary assistance enabling the learner to move towards new skills or levels of understanding that they would not reach on their own. In this paper, we ask whether and how a social robot can be utilized to scaffold second-language learning in preschool children (4-7 years). More specifically, we study an adapt-and-explain scaffolding strategy in which the robot acts as a peer-like tutor who adapts the tasks and its behavior to the cognitive and affective state of the child, with or without verbal explanations of these adaptations. The results of our evaluation study with 49 preschoolers show that, while all learners benefit from the learning adaptation, slower learners in particular benefit significantly from the additional explanations. Furthermore, the robot succeeded in 're-engaging' children who had started to disengage from the learning interaction in 76% of all cases. Our findings demonstrate that a social robot equipped with suitable scaffolding mechanisms can increase engagement and learning, especially when it adapts to the individual differences of child learners.

Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics

Effective human-robot collaboration in shared autonomy requires reasoning about the intentions of the human partner. To provide meaningful assistance, the autonomy has to first correctly predict, or infer, the intended goal of the human collaborator. In this work, we present a mathematical formulation for intent inference during assistive teleoperation under shared autonomy. Our recursive Bayesian filtering approach models and fuses multiple non-verbal observations to probabilistically reason about the intended goal of the user without explicit communication. In addition to contextual observations, we model and incorporate the human agent's behavior as goal-directed actions with adjustable rationality to inform the underlying intent recognition. We validate our approach with a human-subjects study that evaluates intent inference performance during shared-control operation by novice subjects under a variety of goal scenarios and tasks. Importantly, these studies are performed using multiple control interfaces that are typically available to users in assistive domains, which differ in the continuity and dimensionality of the issued control signals. The implications of control interface limitations on intent inference are analyzed. The results of our study show that the underlying intent inference approach directly affects shared autonomy performance, as do control interface limitations. Results further show that our approach outperforms existing solutions for intent inference in assistive teleoperation.
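
A minimal sketch of this kind of recursive Bayesian goal inference, assuming a discrete action set and a Boltzmann-rational user model with rationality coefficient beta (the names and cost function are hypothetical):

```python
import numpy as np

def action_likelihood(state, action, goal, actions, cost, beta):
    """Boltzmann-rational user model: P(action | state, goal) proportional to
    exp(-beta * cost), normalized over the discrete action set; beta tunes
    how rational the user is assumed to be."""
    weights = np.exp(-beta * np.array([cost(state, a, goal) for a in actions]))
    return np.exp(-beta * cost(state, action, goal)) / weights.sum()

def infer_goal(belief, state, user_action, goals, actions, cost, beta=5.0):
    """One recursive filtering step: fold a single observed control input
    into the posterior over candidate goals, without explicit communication."""
    posterior = belief * np.array([
        action_likelihood(state, user_action, g, actions, cost, beta) for g in goals
    ])
    return posterior / posterior.sum()
```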

Results of Field Trials with a Mobile Service Robot for Older Adults in 16 Private Households

In this article we present results obtained from field trials with the Hobbit robotic platform, an assistive, social service robot aimed at enabling prolonged independent living of older adults in their own homes. Our main contribution lies in the detailed results on perceived safety, usability, and acceptance from 371 days of using an autonomous robot in real apartments of older users. In these field trials we studied how 18 older adults (75 plus) lived with the autonomously interacting service robot over multiple weeks. This is the first time a multi-functional, low-cost service robot equipped with a manipulator was studied and evaluated for several weeks under real-world conditions. Our results show that all users interacted with Hobbit on a daily basis, rated most functions as working well, and reported that they believe Hobbit will be part of future elderly care. We show that Hobbit's adaptive approach towards the user increasingly eased the interaction between the users and the robot. Our trials show the necessity of moving out into actual users' homes, as only there do we encounter real-world challenges and observe issues such as the misinterpretation of actions during non-scripted human-robot interaction.

Now we're talking: Learning by explaining your reasoning to a social robot

This paper presents a study in which we explored the effect of a social robot on the explanatory behavior of children (aged 6-10) while working on an inquiry learning task. In a comparative experiment we offered children either a baseline Computer Aided Learning (CAL) system or the same CAL system supplemented with a social robot to which they could verbally explain their thoughts. Results indicate that when children made observations in an inquiry learning context, the robot was better able to trigger elaborate explanatory behavior. First, this is shown by a longer duration of explanatory utterances by children who worked with the robot compared to the baseline CAL system. Second, a content analysis of the explanations indicated that when children talked longer, they included more relevant utterances about the task in their explanation. Third, the content analysis shows that children made more logical associations between relevant facets in their explanations when they explained to a robot compared to a baseline CAL system. These results show that social robots used as extensions to CAL systems may be beneficial for triggering explanatory behavior in children, which is associated with deeper learning.

Transparency about a Robot's Lack of Human Psychological Capacities

The increasing sophistication of social robots has intensified calls for transparency about robots' machine nature. Initial research has suggested that providing children with information about robots' mechanical status does not alter children's humanlike perception of, and relationship formation with, social robots. Against this background, our study experimentally investigated the effects of transparency about a robot's lack of human psychological capacities (intelligence, self-consciousness, emotionality, identity construction, social cognition) on children's perceptions of a robot and their relationship to it. Our sample consisted of 144 children aged 8 to 9 years old who interacted with the Nao robot in either a transparent or a control condition. Transparency decreased children's humanlike perception of the robot in terms of animacy, anthropomorphism, social presence, and perceived similarity. Transparency reduced child-robot relationship formation in terms of decreased trust, while children's feelings of closeness toward the robot were not affected.

Speech Melody Matters -- How Robots Profit from Using Charismatic Speech

In this paper, we address to what extent the proverb 'the sound makes the music' also applies to human-robot interaction, and whether robots could profit from using speech characteristics similar to those used by charismatic speakers like Steve Jobs. In three empirical studies, we investigate the effects of using Steve Jobs' and Mark Zuckerberg's speech characteristics during the generation of robot speech on the robot's persuasiveness and its impressionistic evaluation. The three studies address different human-robot interaction situations, ranging from online questionnaires to real-time interactions with a large service robot, yet all involve both behavioral measures and users' assessments. The results clearly show that robots can profit from speaking like Steve Jobs.

Towards Scalable Measures of Quality of Interaction: Motor Interference

Motor resonance, the activation of an observer's motor control system by another actor's movements, has been claimed to be an indicator of the quality of interaction. Motor interference, a consequence of the presence of resonance, can be detected by analyzing an actor's spatial movement and has been used as an indicator of motor resonance. Unfortunately, the established paradigm in which it has been measured is highly artificial in a number of ways. In the present experiment, we tested whether the experimental constraints can be relaxed towards more naturalistic behavior. Analytically, we found a multitude of researcher degrees of freedom in the literature in terms of how spatial variation is quantified. Upon implementing many of the documented analytical variations for the present study, we found these to be non-equivalent. We also tested forward-backward transitive movements for the occurrence of motor interference and found the effect to be more robust than within left-right movements. The direction of interference, however, was opposite to the one described in the literature. Because all relevant effect sizes are small, we believe that motor interference, when measured via spatial variation, is an unlikely candidate for covert embedding in naturalistic interaction scenarios.
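
For concreteness, one common way to quantify spatial variation (one of the many analytical variants the study compares) is the standard deviation of positions orthogonal to the instructed movement axis; a sketch, with the axis convention as an assumption:

```python
import numpy as np

def orthogonal_variation(positions):
    """Spatial-variation measure for one movement trial.

    positions : (n_samples, 2) hand trajectory in the movement plane.
    Returns the standard deviation of deviations orthogonal to the
    straight line from the first to the last sample.
    """
    start, end = positions[0], positions[-1]
    axis = (end - start) / np.linalg.norm(end - start)  # instructed movement direction
    normal = np.array([-axis[1], axis[0]])              # orthogonal direction
    return ((positions - start) @ normal).std()         # spread of signed orthogonal deviations
```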

Objective Assessment of Human Workload in Physical Human-Robot Cooperation using Brain Monitoring

The notions of safe and compliant interaction are not sufficient to ensure effective physical human-robot cooperation. To obtain an optimal compliant behavior (e.g., variable impedance/admittance control), assessment techniques are required to measure the effectiveness of the interaction in terms of the comfort perceived by users. This study investigates electroencephalography (EEG) monitoring as an objective measure for classifying operator workload in cooperative manipulation with compliance. For this purpose, an experimental study is conducted including two types of manipulation (gross and fine) and two admittance levels (low- and high-damping). Performance analysis and subjective measures indicate that the admittance level that enhances perceived comfort is task-dependent. This information is used to form a binary classification problem (low- and high-workload). Spectral power density and coherence features are extracted from the EEG data. Using SVM classifiers, a high accuracy rate of over 90% is achieved, which indicates the reliability of the proposed approach for assessing human workload in interaction with varying compliance across both gross and fine manipulation. Furthermore, a proof-of-concept experiment with a variable admittance scheme is carried out to demonstrate the implications of the proposed method for evaluating the effectiveness of adaptive controllers with regard to operator workload.
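
A compact sketch of the spectral-feature extraction and SVM classification step, with the sampling rate, band definitions, and window handling as assumptions rather than the paper's exact parameters:

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Mean spectral power density per band for one EEG epoch (n_channels, n_samples)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    return np.concatenate([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in BANDS.values()
    ])

# epochs: iterable of EEG windows; labels: 0 = low workload, 1 = high workload
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(np.stack([band_powers(e) for e in epochs]), labels)
```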

Learning the Correct Robot Trajectory in Real-Time from Physical Human Interactions

We present a learning and control strategy that enables robots to harness physical human interventions to update their trajectory for the rest of an autonomous task. In state-of-the-art approaches, the robot typically reacts to human forces and torques by modifying a local segment of its trajectory, or by searching for the global trajectory offline, using either replanning or previous demonstrations. Instead, we explore a one-shot approach: here the robot updates its entire trajectory for the rest of the task in real time, without relying on multiple iterations, offline demonstrations, or replanning. Our solution is grounded in optimal control and gradient descent, and extends linear-quadratic regulator (LQR) controllers to generalize across methods that locally or globally modify the robot's underlying trajectory. In the best case, this LQR + Learning approach matches the optimal offline response to physical interactions, and, in the worst case, our strategy is robust to noisy and unexpected human corrections. We compare the proposed solution against other real-time strategies in a user study, and demonstrate its efficacy in terms of both objective and subjective measures.
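
To give a flavor of a one-shot, whole-trajectory update (a simplified stand-in, not the paper's LQR derivation): propagate a sensed human force at the current waypoint to all remaining waypoints with a decaying weight, so a single push reshapes the rest of the motion rather than one local segment.

```python
import numpy as np

def one_shot_update(traj, t, human_force, alpha=0.1):
    """Update the remainder of a trajectory from one physical correction.

    traj        : (n_waypoints, dof) planned waypoints
    t           : index of the waypoint where the human intervened
    human_force : (dof,) sensed interaction force/torque
    alpha       : hypothetical learning rate
    """
    traj = traj.copy()
    horizon = len(traj) - t
    decay = np.exp(-np.arange(horizon) / max(1, 0.25 * horizon))  # assumed decay profile
    traj[t:] += alpha * decay[:, None] * human_force              # shift all future waypoints
    return traj
```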

Enhancement and Application of a UAV Control Interface Evaluation Technique: Modified GEDIS-UAV

UAV supervisory control interfaces are important for safe operations and mission performance. We reviewed existing UAV interface design and evaluation tools and identified their limitations. To address issues with existing methods, we developed an enhanced evaluation tool, the M-GEDIS-UAV. The tool includes detailed criteria for all aspects of UAV control interface design to support operator performance. It also supports quantitative and objective assessment of an interface. We prototyped three UAV information displays, including a digital control display, an analog control display, and a massive data display, as part of a simulated supervisory control interface. Six analysts, including three human factors experts and three novices, evaluated the interfaces using the M-GEDIS-UAV. Inter-rater reliability was high for the human factors experts, suggesting training in usability analysis is necessary for tool application. Results also revealed that the massive data display produced significantly lower evaluation scores than the other displays. We concluded that the M-GEDIS-UAV was sensitive to interface manipulations and was most effectively used by human factors experts. Using the M-GEDIS-UAV tool can reveal the majority of design deviations from guidelines early in the design process, increasing the effectiveness of control interfaces.

A Robotic Framework to Facilitate Sensory Experiences for Children with Autism Spectrum Disorder

The diagnosis of autism spectrum disorder (ASD) in children is commonly accompanied by a diagnosis of sensory processing disorders. This paper discusses a novel framework designed to enable children with ASD to experience sensory stimuli in a setting that mediates sensory overload experiences. The setup consists of various sensory stations, together with robotic agents that navigate the stations and respond to the stimuli as they are presented. These stimuli are designed to resemble real-world scenarios that form a common part of one's daily experiences. Given the strong interest of children with ASD in technology in general and robots in particular, we attempt to utilize these robotic platforms to demonstrate socially acceptable responses to the stimuli in an interactive setting that encourages the child's social, motor, and vocal skills, while providing a diverse sensory experience. A small-scale pilot study involving three children with ASD (3 males) and eight neurotypical children (5 males, 3 females), aged 4 to 15 years, is conducted. We describe the user study and analyze the results. We also discuss future directions for this work and describe improvements that can be made to enhance the reliability of the obtained results.

ACM THRI Editorial Introduction: Representation Learning in HRI
