Kahn, Peter H., Jr.; Kanda, Takayuki; Ishiguro, Hiroshi; Ruckert, Jolina H.; Gary, Heather E.; Shen, Solace; Maier, Rose

2013-05-14; 2013-05-14; 2013-05

http://hdl.handle.net/1773/22715

Robots will increasingly take on roles in our social lives where they can cause humans harm. When this happens, will people hold robots morally accountable for the harms they cause? Toward addressing this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. At the end of the interaction, Robovie incorrectly assessed the participant's performance in a game and denied the participant a $20 prize. Following the interaction, each participant was interviewed for 50 minutes to ascertain their judgments of Robovie's sociality, mental-emotional states, and level of moral accountability. Results indicated that all participants engaged socially with Robovie (e.g., exchanged an initial introduction), and many of the participants conceptualized Robovie as having social attributes (e.g., the ability to be a friend) as well as mental-emotional states (e.g., the ability to think or to feel happy). Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human but more accountable than they would a vending machine. This technical report provides the coding manual used in the systematic assessment of participants' behavioral interactions with and reasoning about Robovie. By a coding manual we mean an empirically and conceptually grounded means of coding qualitative social-cognitive data. The purpose of presenting this manual is to make it available to others interested in investigating people's social and moral relationships with robots, so that it can be utilized and modified as part of an ongoing iterative scientific process.

Coding Manual for the Study: "Do People Hold a Humanoid Robot Morally Accountable for the Harm It Causes?"