- When: 7 February 2019, 16:00-17:00
- Location: room 021 at Janskerkhof 2-3.
- Speaker: Maartje van der Graaf
About the talk:
While interacting with robots, people inevitably construct mental models to understand a robot’s actions. However, people build these mental models based on their experience interacting with other living beings. This leads to ambiguous perceptions of robot actions, wrongful accusations of errors, miscalibrated trust in robots, and ineffective human-robot collaborations. The growing presence of robots in society, combined with their increasingly complex but unintelligible algorithms, requires us to build robots that can explain their actions to often puzzled human users. Yet, an understanding of how to develop effective explanations of robot behavior is currently lacking.
I will present a framework, grounded in folk psychology, within which explanations from robotic systems should be phrased in order to actually create the meaning, understanding, and trust that people seek during interactions. My research shows that people readily apply the conceptual and linguistic tools of folk psychology when explaining robot behavior, which suggests that people will be comfortable when robots explain their own behavior within that same framework. Explainability of robots can succeed only if we know which forms of explanation people actually find meaningful. Making robot behavior more transparent and intuitive through robots’ own explanations improves the accuracy of people’s mental models of robots. This helps people better understand a robot’s actions, enables accurate error correction, supports blame verification, fosters appropriate trust assessment, and sustains effective human-robot collaboration.