Call for Papers: ACM THRI Special Issue on Explainable Robotic Systems
Maartje de Graaf, Utrecht University, NL
Bertram Malle, Brown University, USA
Anca Dragan, UC Berkeley, USA
Tom Ziemke, Linköping University, Sweden
Aim and Scope
Robotic systems will inevitably become increasingly complex and, at the same time, increasingly ubiquitous. With this comes the need for them to be transparent and trustworthy: people need to understand enough about how robots work in order to anticipate when they can be trusted. When people interact with a robotic system, they inevitably construct mental models to understand and predict its actions. However, people's mental models of robotic systems stem from their interactions with living beings, so they easily run the risk of establishing incorrect or inadequate models. Such mistaken models have significant consequences for trust: people either under-trust or over-trust the system, which can result in self-deception or even harm. To avoid under- and over-trust, we need to understand the inferences that people make about robots from their behavior, and leverage this understanding to design and implement robot behaviors in such a way that people are more likely to form correct mental models of them. When people can better predict the intentions of these systems, more accurately estimate their capabilities, better understand their actions, and potentially correct their errors, this will not only increase the acceptance of robotic systems but also enable people to calibrate their trust in them. This special issue addresses transparency and explainability, with a particular focus on robotic systems, from computer science, artificial intelligence, cognitive psychology, and philosophical perspectives (amongst others). The implementation and use of explainable robotic systems may prevent potentially frightening confusion over why a robot acted in a particular manner.
Moreover, explainable robotic systems may allow people to better calibrate their expectations of the robot’s capabilities and be less prone to treating robots as almost-humans.
We invite contributions from researchers and practitioners across a wide variety of fields on any topic that could contribute to the discussion of people's interpretation of robot actions, as well as the implementation of transparent, predictable, and explainable behaviors in robotic systems. Topics of interest include (but are not limited to):
Paper submission deadline:
November 15, 2019
ACM Transactions on Human-Robot Interaction is a peer-reviewed, interdisciplinary, open-access journal using an online submission and manuscript tracking system. To submit your paper, please:
1. Go to https://mc.manuscriptcentral.com/thri and login or follow the "Create an account" link to register.
2. After logging in, click the "Author" tab.
3. Follow the instructions to "Start New Submission".
4. Choose the submission category "SI on Explainable Robot Behavior".
If you have any questions, please feel free to contact us at email@example.com