same robot). Repliee Q has degrees of freedom and can make face, head, and upper-body movements (Ishiguro, ). The robot's movements are mechanical or "robotic," and do not match the dynamics of biological motion. Exactly the same movements were videotaped in two appearance conditions. For the Robot condition, Repliee Q's surface elements were removed to reveal its wiring, metal arms, joints, and so forth. The silicone "skin" on the hands and face and some of the fine hair around the face could not be removed, but were covered. The movement kinematics for the Android and Robot conditions were identical, since these conditions comprised the same robot performing the very same movements. For the Human condition, the female adult whose face was used in constructing Repliee Q was videotaped performing the same actions. All agents were videotaped in the same room with the same background. Video recordings were digitized, converted to grayscale, and cropped to pixels. Videos were clipped such that the motion of the agent started in the first frame of each s video. In summary, we had three agents and varied the form and motion of the observed agent: a Human with biological appearance and motion, an Android with biological appearance and mechanical motion, and a Robot with mechanical appearance and motion. Due to considerable technical difficulty in creating these stimuli and limitations inherent to the robot systems we worked with, we did not have a fourth condition (i.e., an agent with a well-matched mechanical appearance and biological motion) that would make our experimental design (motion) × (appearance).

MATERIALS AND METHODS

PARTICIPANTS

Twelve right-handed adults (three females; mean age , SD ) from the student community at the University of California, San Diego participated in the study. Participants had normal or corrected-to-normal vision and no history of neurological disorders.
We recruited only participants who had no prior experience working with robots, in order to minimize possible effects of familiarity or expertise on our results (MacDorman et al., ). Informed consent was obtained in accordance with the UCSD Human Research Protections Program. Participants were paid per hour or received course credit.

STIMULI

Stimuli were video clips of actions performed by the humanoid robot Repliee Q (in robotic and humanlike appearance, Figure ).

PROCEDURE

Before starting EEG recordings, participants were presented with the action stimuli and were informed as to whether each agent was human or robot. Since prior knowledge can induce cognitive biases against artificial agents (Saygin and Cicekli, ), every participant was given exactly the same introduction to the study. Participants went through a short practice session before the experiment. EEG was recorded as participants watched video clips of the three agents performing five different upper-body actions (drinking from a cup, picking up and looking at an object, hand waving, introducing self, nudging). The experiment consisted of blocks of trials with an equal number of videos of each agent and action (four repetitions of each video in each block). Stimuli were presented in a pseudorandomized order, ensuring that a video was not repeated on two consecutive trials. Each participant experienced a different pseudorandomized sequence of trials. Stimuli were displayed on a Samsung LCD monitor at Hz using Python-based Vizard (WorldViz, Inc.) software. We displayed a gray screen with a fixation cross before the start of the video clip on each trial. Participants
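The trial-ordering constraints described above (an equal number of each agent–action video per block, no video repeated on two consecutive trials, and a different sequence for each participant) can be sketched in Python. The agent and action labels and the rejection-sampling approach below are illustrative assumptions for exposition, not the actual experiment code.

```python
import random

# Hypothetical stimulus set mirroring the design described above:
# 3 agents x 5 actions, 4 repetitions of each video per block.
AGENTS = ["human", "android", "robot"]
ACTIONS = ["drink", "pick_up", "wave", "introduce", "nudge"]
REPETITIONS = 4

def make_block(rng: random.Random) -> list[tuple[str, str]]:
    """Return one block of trials in which no video appears on two
    consecutive trials (simple rejection sampling on full shuffles)."""
    videos = [(agent, action) for agent in AGENTS for action in ACTIONS]
    trials = videos * REPETITIONS
    while True:
        rng.shuffle(trials)
        if all(trials[i] != trials[i + 1] for i in range(len(trials) - 1)):
            return list(trials)

# Seeding each participant's generator differently yields a
# different pseudorandomized sequence per participant.
block = make_block(random.Random(1))
```

Rejection sampling is wasteful in general but adequate here: with 15 distinct videos spread over 60 trials, a valid shuffle is found after only a handful of attempts.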