When Transparent does not Mean Explainable

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Based on findings from interactional linguistics, I argue that transparency is not desirable in all cases, especially not in social human-robot interaction. Three reasons for limited use of transparency are discussed in more detail: 1) that social human-robot interaction always relies on some kind of illusion, which may be destroyed if people understand more about the robot's real capabilities; 2) that human interaction partners make use of inference-rich categories in order to inform each other about their capabilities, whereas these inferences are not applicable to robots; and 3) that in human interaction, people display only information about their highest capabilities, so that if robots display low-level capabilities, people will understand them as very basic. I therefore suggest not to aim for transparency or explainability, but to focus on the signaling of affordances instead.
Title of host publication: Papers of Workshop on Explainable Robotic Systems
Number of pages: 3
Publication date: 5 Mar 2018
Publication status: Published - 5 Mar 2018
Event: Workshop on Explainable Robotic Systems - Chicago, United States
Duration: 5 Mar 2018 - 8 Mar 2018

