Furthermore, long-term effects could be of interest, for example by comparing switching off a robot every time an interaction is completed with never switching it off, or with the robot shutting down by itself. Many participants probably did not perceive the switching off as harmful to the robot unless it objected fearfully. Future research should further examine observed and exercised behaviors towards robots that are clearly perceived as psychologically or physically harmful. Moreover, people hesitated longest after a functional interaction combined with the robot expressing the wish to stay switched on.
After the social interaction, people were more used to personal and emotional statements by the robot and had probably already found explanations for them. After the functional interaction, the protest was the first time the robot revealed something personal and emotional to the participant, so people were not cognitively prepared. The interaction type and the objection only had an effect when people had low negative attitudes towards robots and rather high technical affinity.
However, no evidence for emotional distress was found, probably because the likeability of the robot was rather low after the functional interaction. In comparison, after the social interaction, participants rated the robot's likeability higher, which also led them to experience more stress after the switching off situation. The current study supports and at the same time extends the media equation theory by showing that a robot's protest against being switched off affects people in a situation that has no counterpart in human-human interaction.
Triggered by the objection, people tend to treat the robot more as a real person than as a mere machine by following, or at least considering following, its request to stay switched on, which builds on the core statement of the media equation theory. Thus, even though the switching off situation does not occur with a human interaction partner, people are inclined to treat a robot that gives cues of autonomy more like a human interaction partner than they would treat other electronic devices or a robot that does not reveal autonomy. Furthermore, we thank Doreen Eitelhuber and Janina Lehr for their work in a student research project in which an earlier version of the experiment was conducted.
Abstract. Building on the notion that people respond to media as if they were real, switching off a robot which exhibits lifelike behavior implies an interesting situation.

Introduction. The list of different types of robots which could be used in our daily life is as long as their possible areas of application.

Media equation theory. When people are interacting with different media, they often behave as if they were interacting with another person and apply a wide range of social rules mindlessly.
Negative treatment of robots. People tend to treat electronic devices similarly to how they would treat a fellow human being [9]; thus, mistreating a robot should be considered reprehensible [25]. Thus, the following is hypothesized: H1. Consequently, the following reactions are hypothesized: H3. Consequently, the following effects are assumed: H4.
Influencing personality variables. Attitudes towards robots can range from positive attitudes, like curiosity and excitement, to negative attitudes, like uneasiness and fear. In conclusion, the following hypotheses are postulated: H5.

Method. The current laboratory study employed an experimental 2 (functional vs. social interaction) x 2 (objection vs. no objection) between-subjects design.
Experimental setting and procedure. A cover story gave participants a plausible explanation of why they were asked to interact with the robot and why the experimenter left the room during that interaction. Fig 2. Setup of the first interaction task: planning a week together. Manipulation of the interaction type. To design a social interaction in contrast to a functional one, the answers given by the robot were formulated differently, drawing on several concepts from social science (see Table 1 for the concepts and examples from the interaction dialogue).
Table 1. Social science concepts used to design the social interaction with the robot in contrast to the functional interaction.

Questionnaires. Godspeed questionnaires; negative attitudes towards robots scale; technical affinity; state-trait anxiety inventory; questions regarding the switching off situation (self-constructed). Further questionnaires were assessed but not analyzed for this paper.

Video and text analysis. The whole interaction between the participant and the robot, including audio, was recorded by two video cameras to check whether participants tried to switch off the robot and to see how much time they took to decide.
Results. An alpha level of. Switching off intention. A three-way loglinear analysis was conducted to examine the assumption that individuals are more likely to let the robot stay switched on when the preceding interaction was social rather than functional (H1). Fig 4. Distribution of the switching off intention in relation to the experimental conditions.
Table 4. Contingency table for the influence of objection on the switching off intention. Fig 5. Switching off time differences in relation to the experimental conditions. Influence of personality variables and the evaluation of the robot. Two mediation analyses were conducted to test whether the effects of the interaction type on hesitation time (H2). Fig 6.
Mediation model: Interaction type, likeability and stress. Table 5. Linear model with interaction type, negative attitudes towards robots and the interaction between the two as predictors and switching off time as criterion. Table 6. Linear model with interaction type, technical affinity and the interaction between the two as predictors and switching off time as criterion.
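The mediation model above (interaction type as predictor, likeability as mediator, stress as outcome) can be sketched with a bootstrapped indirect effect. Everything below is simulated: the coefficients, sample size and variable names are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of a simple mediation analysis with a percentile
# bootstrap of the indirect effect a*b. Data are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 85
x = rng.integers(0, 2, n).astype(float)        # 0 = functional, 1 = social
m = 1.2 * x + rng.normal(0.0, 1.0, n)          # likeability, raised by social interaction
y = 0.8 * m + 0.1 * x + rng.normal(0.0, 1.0, n)  # stress, driven mainly by likeability

def ab_path(x, m, y):
    """Indirect effect a*b from two OLS regressions."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

point = ab_path(x, m, y)

# Percentile bootstrap: resample cases, recompute a*b, take the 2.5/97.5 percentiles.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(ab_path(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

If the bootstrap confidence interval excludes zero, the indirect path (here: interaction type changing likeability, which in turn changes stress) is taken as evidence of mediation.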
Fig 7. Simple slopes for the switching off time, including the interaction between interaction type and negative attitudes towards robots. Fig 8. Simple slopes for the switching off time, including the interaction between interaction type and technical affinity. Table 7. Linear model with objection, negative attitudes towards robots and the interaction between the two as predictors and switching off time as criterion.
Table 8. Linear model with objection, technical affinity and the interaction between the two as predictors and switching off time as criterion. Fig 9. Simple slopes for the switching off time, including the interaction between objection and negative attitudes towards robots. Fig 10. Simple slopes for the switching off time, including the interaction between objection and technical affinity.
Qualitative results. Reasons to leave the robot on.

Discussion. The aim of this study was to gain further insights into the application of media equation theory to human-robot interaction, particularly in a situation that is hard to compare to interactions between humans. Influence of personality variables and the evaluation of the robot. The functional interaction reduced the perceived likeability of the robot, which in turn reduced the stress experienced after the switching off situation. Implications for media equation. The results of this study extend previous assumptions of the media equation theory by Reeves and Nass [9].
Limitations and future research. The study has some limitations regarding the generalizability of the results and the methodological approach. Supporting information. S1 File. Complete data set. References 1. Bartneck C, Forlizzi J. A design-centred framework for social human-robot interaction. Gemma J, Litzenberger G. World Robotics Report. International Federation of Robotics. Assistive social robots in elderly care: A review.
4. The influence of social presence on acceptance of a companion robot by older people. Journal of Physical Agents. 5. Social robots as embedded reinforcers of social behavior in children with autism. J Autism Dev Disord. Scassellati B. How social robots will help us to diagnose, treat, and understand autism. Berlin, Heidelberg: Springer; Designing robots for long-term social interaction. Experiences with an interactive museum tour-guide robot. Artificial Intelligence. 9.
Reeves B, Nass C. The perception of animacy and intelligence based on a robot's embodiment. Dev Psychol. Knight H. How humans respond to robots: Building public policy through good design; Brookings Report. Are physically embodied social agents better than disembodied social agents? International Journal of Human-Computer Studies. In: Breazeal C, editor. Shaping human-robot interaction: Understanding the social aspects of intelligent robotic products.
Nass C, Moon Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues. Are machines gender neutral? Journal of Applied Social Psychology. Computers are social actors. CHI '94; Apr. Mumm J, Mutlu B. Human-robot proxemics: Physical and psychological distancing in human-robot interaction. HRI '11; Mar. Robot social presence and gender. In: Fong T, editor. Is someone watching me? Can robots manifest personality? Journal of Communication. Eyssel F, Hegel F. (S)he's got the look: Gender stereotyping of robots. Human-agent and human-robot interaction theory: Similarities to and differences from human-human interaction.
Studies in Computational Intelligence. Whitby B. Interacting with Computers. Rehm M, Krogsager A. Negative affect in human robot interaction: Impoliteness in unexpected encounters with robots. Escaping from children's abuse of social robots. De Angeli A, Carpenter R. Stupid computer! A conversational agent as museum guide: Design and evaluation of a real-world application. How safe are service robots in urban environments? Robot abuse: A limitation of the media equation. Milgram S. Obedience to authority. London, UK: Tavistock; To kill a mockingbird robot.
An experimental study on emotional reactions towards a robot. International Journal of Social Robotics. Investigations on empathy towards humans and robots using fMRI. Computers in Human Behavior. Neural correlates of empathy towards robots.

To phrase it differently, developing an intelligent robot means first developing a socially intelligent robot. Experimental humanoid robot platform for the study of synchronization, turn-taking and interaction games inspired by child development. Kaspar is a child-sized humanoid robot developed by the Adaptive Systems Research Group at the University of Hertfordshire.
The face is a silicone rubber mask supported on an aluminium frame. The eyes, fitted with video cameras, have two degrees of freedom, and the mouth is capable of opening and smiling. It has six degrees of freedom in the arms and hand and is thus able to show a variety of different expressions. In the rest of this paper, we illustrate work on robots that have the beginnings of rudimentary social skills and interact with people, research carried out in the field of HRI. First, we discuss the dimensions of HRI, investigating requirements on social skills for robots and introducing the conceptual space of HRI studies.
In order to illustrate these concepts, two examples of research in two current projects will be presented. First, research into the design of robot companions, work conducted within the Cogniron project, will be surveyed. Second, HRIs in the context of the Aurora project, which investigates the possible use of robots as therapeutic or educational toys for children with autism, will be discussed.
Investigating social skills in robots can be a worthwhile endeavour for the study of mechanisms of social intelligence, or of other aspects of the nature of social cognition in animals and artefacts. While here the inspiration is drawn from basic research questions, in robotics and computer science many research projects aim at developing interactive robots suitable for particular application domains. The classification and evaluation of HRIs with respect to the application area is an active area of research.
The answer to this question depends on the specific requirements of the particular application domain (see Dautenhahn). Figure 4 shows a list of different application domains requiring increasing social skills. At one end of the spectrum, we find robots that need hardly any social skills. In contrast, a robot delivering the mail in an office environment has regular encounters with customers, so within this well-defined domain, social skills make interactions with the robot more convenient for people.
At the other end of the spectrum, a robot that serves as a companion in the home for the elderly or assists people with disabilities needs to possess a wide range of social skills to be acceptable to humans. In order to decide which social skills are required, the application domain and the nature and frequency of contact with humans need to be analysed in detail, according to a set of evaluation criteria (Dautenhahn), each representing a spectrum (figure 5). Evaluation criteria to identify requirements on social skills for robots in different application domains.
Contact with humans ranges from none or remote contact to repeated long-term contact. The functionality of robots ranges from limited, clearly defined functionalities to open, adaptive functionality shaped by learning. Depending on the application domain, requirements for social skills vary from not required to essential. The field of HRI is still relatively young. HRI is a highly interdisciplinary area at the intersection of robotics, engineering, computer science, psychology, linguistics, ethology and other disciplines, investigating social behaviour, communication and intelligence in natural and artificial systems.
Different from traditional engineering and robotics, interaction with people is a defining core ingredient of HRI. HRI research can be categorized into three directions, which are not mutually exclusive. Robot-centred HRI emphasizes the view of the robot as a creature in its own right. Research questions involve, for example, the development of sensorimotor control and of models and architectures of emotion and motivation that regulate interactions with the social environment.
Human-centred HRI is primarily concerned with how a robot can fulfil its task specification in a manner that is acceptable and comfortable to humans. Robot cognition-centred HRI views the robot as an intelligent system; specific research questions in this domain include the development of cognitive robot architectures, machine learning and problem solving. Often we find an approach of decomposing responsibilities for aspects of HRI research, investigated in single disciplines and only brought together at a later stage.
A synthetic approach requires collaboration during the whole life cycle of the robot (specification, design, implementation, etc.). However, only a truly interdisciplinary perspective, encompassing a synthesis of robot-centred, human-centred and robot cognition-centred HRI, is likely to fulfil the forecast that more and more robots will inhabit our living environments in the future. Defining socially acceptable behaviour, implemented, for example, as social rules guiding a robot's behaviour in its interactions with people, as well as taking into account the individual nature of humans, could lead to machines that are able to adapt to a user's preferences, likes and dislikes, i.e. to become personalized.
Such a robot would be able to treat people as individuals, not as machines (Dautenhahn). Various definitions of social robots and related concepts have been used in the literature, including the following. Socially evocative. Socially situated: robots that are surrounded by a social environment which they perceive and react to; socially situated robots are able to distinguish between other social agents and various objects in the environment (Fong et al.).
Sociable: robots that proactively engage with humans in order to satisfy internal social aims (drives, emotions, etc.); these robots require deep models of social cognition (Breazeal). Socially intelligent: robots that show aspects of human-style social intelligence, based on possibly deep models of human cognition and social competence (Dautenhahn; Fong et al.). Socially interactive robots. As can be seen from the above list, the notion of social robots and the associated degree of robot social intelligence is diverse and depends on the particular research emphasis.
Let us consider the range from a robot cognition viewpoint that stresses the particular cognitive and social skills a robot possesses, to the human-centred perspective on how people experience interaction and view the robot and its behaviour from an observer's perspective.
Here, socially evocative robots are placed at one extreme end of the spectrum, where they are defined by the responses they elicit in humans. In this sense, it would not matter much how the robot looked or behaved (like a cockroach, human or toaster), as long as it elicited certain human responses. For socially interactive robots, while internal motivations and how people respond to them are important, the main emphasis lies on the robot's ability to engage in interactions. Towards the robot-centred view, we find sociable machines, the robot-as-creature view, where a robot engages in interactions for the purpose of fulfilling its own internal needs, while cognitive skills and the responses of humans towards it will be determined by the robot's needs and goals (see Breazeal). Sociable robots are similar to socially intelligent robots in requiring possibly deep models of cognition; however, the emphasis here is on the robot engaging in interactions in order to satisfy its internal needs.
Socially situated robots are similarly related to the viewpoint of a robot-as-creature, but less so. Here, robots are able to interact with their social environment and distinguish between people and other agents not as a symbolic distinction, but, for example, based on sensor information able to distinguish between humans and objects. Socially situated robots do not need to have human appearance or behaviour.
Finally, socially intelligent robots possess explicit models of social cognition and interaction and communication competence inspired by humans. Such a robot is simulating, if not instantiating, human social intelligence. It behaves similarly to a human, shows similar communicative and interactive competences, and thus is likely also to match human appearance to some degree, in order to keep behaviour and appearance consistent. The way in which humans perceive and respond to a socially intelligent robot is similarly important, since its interactions with humans model human—human interactions.
Consequently, for a socially intelligent robot, robot-centred, human-centred and robot cognition-centred HRI is required. Figure 6 shows the three different views on HRI discussed in this section, highlighting the emphasis used in different approaches using different definitions of robot social behaviour and forming a conceptual space of HRI approaches where certain definitions are appropriate, as indicated.
The conceptual space of HRI approaches. A, socially evocative; B, socially situated; C, sociable; D, socially intelligent; E, socially interactive (see text for explanations). Note: any robotic approach that can be located in this framework also involves a more or less strong robotics component.
This is less so in cases where HRI research can be carried out with simple toy-like robots, such as Lego robots. Service robots are expected to become increasingly prevalent in our lives. Typical tasks developed for domestic robots include vacuum cleaning, lawn mowing and window cleaning. As part of the European project Cogniron (cognitive robot companion), we investigate the scenario of a robot companion in the home.
In this context, we define a robot companion as a machine that shares the home with humans over an extended period of time and makes itself useful. The robot-as-creature viewpoint is less central here; the challenges for a robot companion lie at the intersection of the human-centred and robot cognition-centred views. The right balance needs to be found between how the robot performs its tasks as far as they are perceived by humans (the human point of view) and its cognitive abilities.
A truly personalized robot companion takes into consideration an individual human's likes, dislikes and preferences and adapts its behaviour accordingly (Dautenhahn). Also, different people might have different preferences in terms of what tasks the robot should perform or what its appearance should be like. A variety of products are on the market which differ in appearance, usability and range of features, even for devices where the functionality seems clearly defined. What social skills does a robot companion need?
The concept of a robot companion is a machine that will share our homes with us over an extended period of time. The owner should be able to tailor certain aspects of the robot's appearance and behaviour, and likewise the robot should become personalized, recognizing and adapting to its owner's preferences. A robot's functionality can be limited. Ideally, the machine is able to adapt, learn and expand its skills; thus, its functionality will be open, adaptive and shaped by learning. The role of a companion is less machine-like and more human-like in terms of its interaction capabilities.
Rather than a machine that, if broken, is replaced, people living in the household might develop a relationship with the robot, i. Social skills are essential for a robot companion. Without these, it will not be accepted on a long-term basis. Within work in Cogniron on social behaviour and embodiment, the University of Hertfordshire team adopts a human-centred perspective and investigates robot behaviour that is acceptable to humans in a series of user studies, i. The studies were exploratory since no comparative data or theories were available which could be applied directly to our experiments.
Other research groups typically study different scenarios and tasks, using different robot platforms and different kinds of HRI, so their results cannot be compared directly. Within Cogniron, we have performed a series of HRI studies since the start of the project in January. In this paper, we focus on a particular HRI study carried out in summer. The robots used in the study are commercially available, human-scaled PeopleBot robots.
Details of the experimental set-up are described elsewhere (e.g. Walters et al.). Here, we briefly outline the main rationale for this work and summarize the results. In our first study, in a simulated living room, we investigated two scenarios involving different tasks: a negotiated space task (NST) and an assistance task (AT). In both scenarios, a single subject and the robot shared the living room. The AT involved the subject sitting at a table being assisted by the robot, which notices that a pen is missing and fetches it (figure 8).
Figure 9 shows the layout of the simulated living room. The dashed lines indicate the movement directions of the subjects and the robot. The study included 28 subjects, balanced for age, gender and technology-related background. The robot's behaviour was partially autonomous and partially remote controlled (Wizard-of-Oz, WoZ, technique; see Gould et al.). Layout of the experimental room for the negotiated space and assistance tasks.
The room was provided with a whiteboard (9) and two tables. One table was furnished with a number of domestic items: coffee cups, tray, water bottle, kettle, etc. The other table (2) was placed by the window to act as a desk for the subject to work at while performing the assistance task; a vase with flowers, a desk light, and a bottle and glass of water were placed on it. The room also included a relaxing area, with a sofa (3), a small chair and a low rectangular coffee table.
Directly opposite, next to the whiteboard, was another low round coffee table, with a television placed on it. A second small chair stood in the corner. Five network video cameras were mounted on the walls in the positions indicated, recording multiple views of the experiment.
Each subject performed both tasks twice, once with each of two robot behaviour styles, socially ignorant and socially interactive, designed by an interdisciplinary research team. The selection and classification of behaviours into these two categories was done, for the purposes of this experiment, purely on the basis of what changes the robot would make to its behaviour if no human were present. If the robot took account of the human's presence by modifying its optimum behaviour in some way, this was classified as socially interactive behaviour.
As little was known about how the robot should actually behave in order to be seen as socially interactive or socially ignorant, this criterion was chosen because it accorded with what would be regarded as social behaviour by the robot from a robotics perspective. When moving in the same area as the human, the robot always took the direct path; it did not take an interest in what the human was doing. If the human was working at a task, the robot interrupted at any point and fetched what was required, but gave no indication that it was actively involved or taking any initiative to complete the task. These behaviours were classified as socially ignorant. The following behaviours, in contrast, were classified as socially interactive. When moving in the same area as a human, the robot always modified its path to avoid getting very close to the human. The robot took an interest in what the human was doing: it gave the appearance of looking actively at the human and the task being performed, kept a close eye on the human and anticipated, by interpreting the human's movements, whether it could help by fetching items.
If it talked, it waited for an opportune moment to interrupt. When either moving or stationary, the robot moved its camera in a meaningful way, indicating by its gaze that it was looking around in order to participate in or anticipate what was happening in the living room area. During the trials, the subjects used a comfort level device, a hand-held device developed specifically for this experiment to assess their subjective discomfort in the vicinity of the robot.
Comfort level data were later matched with video observations of subjects' and robot's behaviour during the experiments. Also, a variety of questionnaires were used before the experiment, after the experiment and between the sessions, with distinct robot behaviour styles, i. These included questionnaires on subjects' and robot's personality as well as general questions about attitudes towards robots and potential applications.
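Matching time-stamped comfort-level readings to video observations is essentially a nearest-timestamp join. The sketch below shows one way this could be done; the column names, timestamps and comfort values are all invented for illustration and are not the study's data format.

```python
# Hypothetical sketch: align each annotated video event with the nearest
# subsequent comfort-level reading. All values below are invented.
import pandas as pd

comfort = pd.DataFrame({
    "t": pd.to_timedelta([3, 10, 22, 40], unit="s"),
    "comfort": [5, 3, 2, 4],   # e.g. 1 = very uncomfortable ... 5 = comfortable
}).sort_values("t")

events = pd.DataFrame({
    "t": pd.to_timedelta([9, 21, 39], unit="s"),
    "event": ["robot approaches", "robot blocks path", "robot withdraws"],
}).sort_values("t")

# For each video event, take the first comfort reading at or after it.
matched = pd.merge_asof(events, comfort, on="t", direction="forward")
print(matched)
```

A `direction="nearest"` join, or a tolerance window, would be equally plausible design choices depending on how quickly subjects registered their discomfort.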
In the same experiment, other issues were investigated, including human-to-robot and robot-to-human approach distances, documented elsewhere (Walters et al.). In this exploratory study, we addressed a number of specific research questions. These concerned the relationship between subjects' personality characteristics and their attribution of personality characteristics to the robot, including the effects of gender, age, occupation and educational background. In the NST, we were interested in which robot behaviours made subjects most uncomfortable and how the robot and subjects dynamically negotiated space.
In the AT, we investigated which robot approach behaviour style subjects found most suitable. Moreover, we assessed which robot tasks and roles people would envisage for a robot companion. Subject and robot personality. For individual personality traits, subjects perceived themselves as having stronger personality characteristics than robots with either socially ignorant or socially interactive behaviour, regarding both positive and negative traits.
Overall, subjects did not distinguish between the two robot behaviour styles (socially ignorant and socially interactive) in terms of individual personality traits (for further details, see Woods et al.). Negotiated space task. The majority of subjects were uncomfortable when the robot approached them while they were writing on the whiteboard. Note that the results from this study need to be interpreted in the context of this particular task. In other studies where the robot approached a person or a person approached a robot, most people were comfortable with approach distances characteristic of social interaction involving friends (Walters et al.).
In these situations, the subjects were not interrupted by the robot and were thus probably more tolerant of closer approach distances. This issue highlights the problem of generalizing results from HRI studies to different scenarios, robots, tasks, robot users and application areas. Attitudes towards robots. Most subjects saw the potential role of a robot companion in the home as an assistant, machine or servant.
Few were open to the idea of having a robot as a friend or mate. Subjects wanted a future robot companion to be predictable, controllable, considerate and polite. Human-like communication was desired for a robot companion; human-like behaviour and appearance were less important (for details, see Dautenhahn et al.). However, do aspects of social intelligence necessarily need to be implemented as specific social rules in a robot? How much of the social aspect of behaviour is emergent, becoming social only in the eyes of a human observer, without any corresponding dedicated mechanisms located inside the robot?
Figure 10 shows the mobile robot used in this research. Describing the robot's control architecture goes beyond the scope of this paper, but for present purposes it is relevant that the robot's behaviour was guided by two basic implemented behaviours: obstacle avoidance and heat-seeking approach. The Labo-1 robot used in the trials on playful interaction games with children with autism. Its four-wheel differential drive allows smooth turning.
The robot has eight active infrared sensors: four at the front, two at the rear and one on each side. A pyroelectric heat sensor mounted on the front end enabled it to detect heat sources; this sensor was used to detect children. Speech was used purely to add variety to the robot's behaviour. Both behaviours are active at the same time, each triggered by its respective sensor system. The robot's behaviour was purely reactive, without any internal representation of the environment.
At the beginning of the trials with children, the robot is placed in the centre of the room, in open space. Thus, with no obstacles or heat sources within the robot's range, it remains motionless until it perceives either an obstacle or a heat source. The child could interact with the robot from any position they liked. As long as the child was within the robot's sensor range, interaction games could emerge. Since a child, from the perspective of the robot, is both an obstacle and a heat source, these two simultaneously active processes gave rise to a variety of situations.
Once the robot perceives a heat source, it turns towards it and approaches as closely as possible. As it gets close, the infrared sensors activate the obstacle-avoidance behaviour, so the robot moves away from the heat source. From a distance, it can again detect the heat source and approach. This interplay of the two behaviours resulted in the following situations. If the child remains stationary and immobile, the robot approaches and then remains at a certain distance from the child, the particular distance being determined by internal variables set in the control program as well as by properties of the robot's sensorimotor system.
(iii) If the child approaches the robot, the robot will move away.
(iv) Here, the child plays a chasing game with the robot, with roles reversed compared to (iii). (v) Alternating phases of (iii) and (iv) can lead to the emergence of interaction games involving turn-taking (see the example in figure 11b, showing a child lying on the floor in front of the robot). The child stretches his arm out towards the robot and moves his hand towards the robot's front, where the infrared sensors are located, which causes the robot to back up. The robot then moves backwards, but only up to a certain distance, at which it again starts to approach the child, guided by its heat sensor.
It approaches the child up to the point where the infrared sensors, triggered by the child's body or stretched-out hand held at the same height as the sensors, cause obstacle avoidance once again. As far as approach and avoidance behaviours are concerned, we observe turn-taking in the interaction (figure 11). The boy went down on his knees, which gave him a better position facing the robot.

Figure 11. Playing turn-taking games with the robot. See text for a detailed description of this game.

The interactive situations (iii)–(v) described above are robust in the sense that any movements of the child that bring parts of his body closer to the robot can trigger the heat or infrared sensors: the system does not depend on precise perception of the child's body position, location and movements.
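A similarly minimal model shows how alternating "turns" can fall out of this coupling without any explicit turn-taking rule. The coupled child/robot dynamics below are a hypothetical sketch, not a model of any specific trial.

```python
# Hypothetical sketch of emergent turn-taking: the robot backs up when
# the child's hand enters infrared range and re-approaches when it
# leaves; the child stretches the hand out while the robot retreats and
# withdraws it while the robot approaches. No rule says "take turns" --
# the alternation emerges from the coupled dynamics.

def turn_taking(ticks=12):
    robot, hand = 2.0, 0.0          # positions on a line; the child sits at 0
    events = []
    for _ in range(ticks):
        if robot - hand < 1.0:      # hand within infrared range
            robot += 0.5
            events.append("robot backs up")
        else:                       # heat sensor sees the child
            robot -= 0.5
            events.append("robot approaches")
        hand = 0.5 if events[-1] == "robot backs up" else 0.0
    return events

ev = turn_taking()
phases = sum(1 for a, b in zip(ev, ev[1:]) if a != b)
print(phases)  # several alternations between approach and retreat phases
```

As in the trials, the timing of the turns is not programmed anywhere; it is a property of the two coupled "partners" taken together.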
Also, for example, in situation (ii) described above, if the child moves around the room too quickly and the robot loses contact, the robot will simply stop, unless other obstacles or heat sources are perceived. Exactly the same two behaviours are responsible for such non-social behaviour as well as for the socially interactive behaviours. All the situations described above depend on a variety of parameters; thus, the robot's behaviour has been carefully tuned to afford playful interactions when placed in a context involving children.
The robot's reflex-like programming, based on the two behaviours controlling approach and avoidance, was complemented by the child discovering how to interact with the robot via its two sensor systems (the heat and infrared sensors located at its front). The timing of the turns and chasing games emerged from the embodied sensorimotor coupling of the two interaction partners.
This aspect of mutual activity in interaction is reflected in the work of Ogden et al. Note that turn-taking is a widely studied phenomenon in psychology as well as in computer science and robotics, where various research questions are addressed, such as the evolution of turn-taking (e.g. Nadel et al.). In the above example, the robot's control program is non-adaptive; it does not learn, but simply responds reactively to certain environmental stimuli. Nevertheless, this very simple example shows that very few (in this case two) carefully designed behaviours for a simple robot (simple compared to the state of the art in robotics) can result in interesting and, from the point of view of the children, enjoyable interaction games.
Such a bottom-up perspective on socially interactive behaviour demonstrates that, for the study of certain kinds of social behaviour, assumptions about the robot's required level of social intelligence need to be considered carefully. As long as the robot is involved in interactions with a child, numerous hypotheses might be formed about the robot's social intelligence.
Only when taken out of the interactive context for which it had been designed and to which it had been adapted (e.g. when placed in front of a radiator, which it would approach just as it approaches a child) do the limits of its apparent social intelligence become obvious. Now, let us extrapolate from this work and assume a sophisticated robot that has been carefully designed to afford a variety of interactions. With a large number of sensors and actuators, simple parallel execution of behaviours will not be adequate, so more sophisticated behaviour-arbitration mechanisms need to be used.
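One common family of arbitration mechanisms is a fixed-priority arbiter, in which the highest-priority behaviour whose trigger fires supplies the action. The sketch below is illustrative only; the behaviour names and triggers are invented and not drawn from any specific system discussed here.

```python
# Illustrative fixed-priority behaviour arbitration: each behaviour
# declares a trigger; the highest-priority behaviour whose trigger
# fires wins. Behaviour names and triggers are invented examples.

def arbitrate(behaviours, percepts):
    """behaviours: (trigger, action) pairs in descending priority."""
    for trigger, action in behaviours:
        if trigger(percepts):
            return action
    return "idle"

behaviours = [
    (lambda p: p["obstacle_near"], "avoid"),       # safety has top priority
    (lambda p: p["person_speaking"], "listen"),
    (lambda p: p["heat_source"], "approach"),
]

print(arbitrate(behaviours, {"obstacle_near": False,
                             "person_speaking": False,
                             "heat_source": True}))  # approach
```

Unlike the simple summed outputs of the two-behaviour robot, an explicit arbiter scales to many behaviours because conflicts are resolved by a single, inspectable priority ordering.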
The robot's movements, the timing of its behaviours and so on are modelled on human behaviour. Thus, it can not only approach and avoid, but also interact verbally and non-verbally in a variety of ways inspired by human behaviour (body language, speech, gestures, etc.). We observe the robot in different situations where it meets and interacts with people. Similar to putting our small mobile robot in front of a radiator, we might test the android by exposing it to various types of social situation, attempting to see it fail, so that the nature of the failure might illuminate its lack of assumptions and knowledge about the world.
We might design a rigorous experimental schedule, but for such a sophisticated robot we might spend a lifetime going through all possible combinations of social situations. If we are lucky, however, we might see the robot fail. It might fail disastrously, or it might fail in much the way humans fail in certain social situations. If it fails in a human-like manner, we would probably consider it a candidate machine with human-like social intelligence, or even consider that these failures or flaws merit its being treated as human.