Learning Robots: 6th European Workshop, EWLR-6 Brighton, England, August 1–2, 1997 Proceedings




A. Alissandrakis, C. L. Nehaniv, K. Dautenhahn, and J. Saunders

In our approach, the matching is performed according to different metrics and at different levels of granularity.

Using as an example a captured demonstration from a human, the system produces a correspondence solution given a selection of effect metrics, starting from dissimilar initial object positions, and produces action commands that are then executed by two imitator target platforms in simulation to successfully imitate.

Imitation is a powerful learning tool that can be used by robotic agents to socially learn new skills and tasks. One of the fundamental problems in imitation is the correspondence problem: how to map between the actions, states and effects of the model and imitator agents, matching according to different metrics and granularity, when the embodiment of the agents is dissimilar (Nehaniv and Dautenhahn). In this approach, a solution to the correspondence problem can be used to generate a recipe (a loose plan) through which an imitator can map sequences of observed actions of the model agent to its own repertoire of actions, as constrained by its own embodiment and by context (Nehaniv and Dautenhahn). The following statement of the correspondence problem (Nehaniv and Dautenhahn) draws attention to the fact that the model and imitator agents may not necessarily share the same morphology or may not have the same affordances. Qualitatively different kinds of social learning result from matching different combinations of actions, states and effects at different levels of granularity (Nehaniv). The sub-goals define the granularity to match, and vice versa.

An imitating agent, possibly with a dissimilar embodiment, starting from a corresponding state, leads through corresponding actions, states and effects. The choice of metrics used is therefore very important, as it will have an impact on the quality and character of the imitation. The corresponding actions, states and effects demonstrated by the imitator can also be captured and used as a demonstration for another imitating agent.

The learning algorithms to be developed should be general and address fundamental questions of imitation learning, applied to manipulation tasks. Differently embodied and constrained target systems in various contexts need to be supported. For example, a robotic companion at home could acquire knowledge of household tasks from its human owner. Acquiring such skills socially requires matching different aspects of the effects that the human actions have on objects in the environment. Also, the context within which a skill is replicated might require its generalization to various settings and to other types and shapes of objects.

The metrics evaluate the difference between attained and desired actions, and also the difference between attained and desired states and effects (Nehaniv and Dautenhahn). The choice of metric determines, in part, what will be imitated, whereas solving the problem of how to imitate involves, in general, aspects of action, state and effect, as well as the level of granularity (what to imitate); all of these play roles in the choice of metric (Nehaniv and Dautenhahn; Alissandrakis et al.).

Focusing on object manipulation and arrangement demonstrated by a human, this paper presents a system (part of on-going work) that uses different metrics and granularity to produce appropriate action commands (see Figure 1), addressing the correspondence problem in imitation. The action commands can be targeted for various software and hardware platforms. Depending on the particular metrics and granularity used (provided by a what-to-imitate and sub-goal extraction module), the embodiment restrictions and constraints imposed by the targeted imitator platform, and a possibly different initial state of the objects in the workspace, the corresponding effects will differ (shown in an example below), making the appropriate choice of metrics and granularity depend on the task and context.

In the example shown in Figure 2, the demonstrated task consists of three block objects arranged in a 2D workspace; the colors red, green and blue indicate the three different objects. The workspace is a square grid, 50 cm by 50 cm, and the sizes of the objects are 10 cm by 8 cm (red) and 8 cm by 5 cm (green and blue). The dotted outlines indicate the initial positions (with solid thin outlines linearly scaled at intervals); the blue object has the same initial position. As the manipulations occur only in a 2D plane, only the XZ dimensions are given here and shown in the figures, omitting the Y dimension (height).

For solving the correspondence problem (see Alissandrakis et al.), the ALICE framework builds up a library of actions from the repertoire of an imitator agent that can be executed to achieve corresponding actions, states and effects to those of a model agent, according to given metrics and granularity. The framework provides a functional architecture that informs the design of robotic systems producing action commands targeted for a variety of platforms, both in software and hardware, to match different behaviour aspects and achieve various types of social learning. The choice of initially concentrating on effects for this work is guided by the assumption that the manipulation of objects will be the most important aspect of the behaviours that users demonstrate.

The system uses captured data from a human demonstrator, recorded with a motion capture system. By attaching the motion sensors on the arms, hands and torso of the human, as well as on the objects that the demonstrator is manipulating, we can capture the demonstration. In ongoing work, three or more additional sensors will be used, one attached to the human torso. Taking into account the states aspect would help the JABBERWOCKY system solve possible ambiguities when producing the corresponding actions for imitation.

We consider these aspects in a two-dimensional workspace, such as a table surface (Alissandrakis et al.). The what-to-imitate module will use the captured demonstration data to extract appropriate sub-goals (granularity) and also discover what metrics must be used to capture the appropriate aspects of the particular demonstration, for example based on the choice of hand used by the demonstrator. But if the agents are active in a different workspace, starting from a different initial configuration of objects, or if the timing and the order of the manipulations are not the same, it will be impossible to satisfy all aspects simultaneously.

Therefore, choosing to satisfy one particular aspect will result in a qualitatively different effect than if another one was chosen, while still satisfying the quantitative similarity criteria. The what-to-imitate module provides a choice of the effect metrics to be minimised and of the sub-goal granularity; in the current implementation of the JABBERWOCKY system the metrics and the sub-goal granularity are given, instead of being discovered by this module. A critical point occurs when the direction of a manipulated object's motion changes. The relative position effect metric is defined here for three objects in the workspace; it ignores displacement aspects and focuses on the overall arrangement and trajectory of the manipulated objects.
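The idea of a critical point, a sub-goal boundary where the direction of an object's motion changes, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual extraction code, and the turn threshold is an assumption:

```python
import math

def critical_points(trajectory, angle_threshold_deg=30.0):
    """Pick sub-goal way-points from a 2D object trajectory: keep the
    start, the end, and every sample where the motion direction turns
    by more than the threshold."""
    if len(trajectory) < 3:
        return list(trajectory)
    points = [trajectory[0]]
    for prev, cur, nxt in zip(trajectory, trajectory[1:], trajectory[2:]):
        h1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        # Wrap the heading difference into (-180, 180] before comparing.
        turn = abs(math.degrees(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))))
        if turn > angle_threshold_deg:
            points.append(cur)
    points.append(trajectory[-1])
    return points
```

On an L-shaped push, only the corner survives as an intermediate sub-goal, so the granularity of the demonstration is reduced to its direction changes.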

The metrics can be relative or absolute: related to the final position in the workspace, or relative to the other objects within the workspace. The imitator must move the same or a corresponding object so as to match the triangle formed by the objects of the model. To evaluate the similarity between object displacements, the relative displacement, absolute position and relative position effect metrics can be used. To evaluate the similarity between object rotations, the rotation and orientation effect metrics can be used.
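As a rough sketch (function names and signatures are illustrative, not taken from the paper), the three displacement-related effect metrics can be written as distance functions over 2D (X, Z) coordinates:

```python
import math

def absolute_position(final_pos, demo_final_pos):
    # Absolute position effect metric: distance between the object's final
    # position and the demonstrated final position, in workspace coordinates.
    return math.dist(final_pos, demo_final_pos)

def relative_displacement(start, final, demo_start, demo_final):
    # Relative displacement effect metric: compare displacement vectors
    # (how far, and in which direction, each object moved), ignoring
    # where in the workspace the motion started.
    dx = (final[0] - start[0]) - (demo_final[0] - demo_start[0])
    dz = (final[1] - start[1]) - (demo_final[1] - demo_start[1])
    return math.hypot(dx, dz)

def relative_position(finals, demo_finals):
    # Relative position effect metric: compare the arrangement of the
    # objects (the triangle they form), ignoring any common translation
    # of the whole group.
    total = 0.0
    for i in range(len(finals)):
        for j in range(i + 1, len(finals)):
            ax = finals[j][0] - finals[i][0]
            az = finals[j][1] - finals[i][1]
            bx = demo_finals[j][0] - demo_finals[i][0]
            bz = demo_finals[j][1] - demo_finals[i][1]
            total += math.hypot(ax - bx, az - bz)
    return total
```

Minimizing different functions yields qualitatively different imitations: a translated copy of the demonstrated arrangement scores zero under `relative_position` but not under `absolute_position`.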

The second row shows the way the corresponding object in a different workspace needs to be moved or rotated by an imitator to match the corresponding effects. The grey triangles are superimposed to show that, for the relative position effect metric, the relative final positions of the objects are the same. If the rotation or orientation effect metrics are not selected, the angular effect aspect will be ignored when the robots imitate. The system is addressing the correspondence problem for dissimilarly embodied imitators, so the how-to-imitate module must produce action commands suited to each target embodiment. In the simulation, as the robots move around the workspace, they leave behind a colored trail (of the same color as themselves and their corresponding objects) to help visualize the imitated trajectories.

The demonstrator and the imitator might share the same workspace or they might operate in different ones. Even in the same workspace, unless the object and agent positions are arranged back into the same initial configuration before the imitative behaviour, the context will be different, and the imitator therefore has to take that into consideration when imitating. The second imitator is embodied as a single-arm manipulator, positioned above the workspace and able to pick up, move and rotate the three objects (see Figure 6). This embodiment, although dissimilar to that of the human demonstrator, is nevertheless able to match both displacement and angular effect aspects of the demonstration.

Two targeted platforms are used in the current realization of the system, both implemented using the Webots robot simulation software. As the objects are moved and rotated around the workspace by the manipulator in the simulation, their trajectories are traced (the manipulator is shown as a vertical yellow cylinder). In the current system implementation both the metrics and the sub-goal granularity (critical points) are given. But some displacements or rotations, although minimizing the metric, might be invalid in the given context. The how-to-imitate module will then have to discover an alternative way, in the given context (including other agents, and static or dynamic obstacles), to achieve the same effects according to the metric. In this case it might be acceptable to move the object up to the right edge and then continue along it; in another context, it might be preferable not to move the object at all.

For example, consider a human opening a cupboard, removing an object, closing the cupboard and placing the object on a table. This sequence of events can be achieved by agents of varying embodiments, ignoring state aspects (e.g. how the cupboard was opened, or how the object was held or grasped) or even action aspects.


Any agent that can open the cupboard, transport the object and place it on the table can potentially imitate the effects of this particular demonstration, even if it must carry the object across the room. This contextual information should ideally be provided by the what-to-imitate module, based on observations of the currently demonstrated task, and not be pre-defined. But for this solution to be useful to an imitating robotic companion, it must be converted to action commands that take into account its embodiment and also the context (e.g. obstacles). In the current implementation, the system attempts to move or rotate the objects until they reach an obstacle (based on simple 2D object collision detection), and then stop.
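The "move until an obstacle is reached, then stop" strategy can be sketched with axis-aligned rectangles and a simple 2D overlap test. The step size, the rectangle representation and the function names are assumptions, not the paper's implementation:

```python
def overlaps(a, b):
    # Axis-aligned rectangle overlap test; each rect is (x, z, width, depth).
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def slide_until_blocked(rect, step, obstacles, workspace=(0.0, 0.0, 50.0, 50.0)):
    """Move a rectangle in fixed increments along `step` until the next
    increment would leave the 50 cm x 50 cm workspace or hit an obstacle,
    then stop and return the last valid placement."""
    x, z, w, h = rect
    while True:
        nx, nz = x + step[0], z + step[1]
        inside = (workspace[0] <= nx and nx + w <= workspace[0] + workspace[2] and
                  workspace[1] <= nz and nz + h <= workspace[1] + workspace[3])
        if not inside or any(overlaps((nx, nz, w, h), o) for o in obstacles):
            return (x, z, w, h)
        x, z = nx, nz
```

With no obstacle in the way, the object simply stops at the workspace edge, which matches the edge-following fallback described above.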

This simulation can replay the captured model data at a given granularity, displaying the trajectory and orientation of the objects as they move and rotate on the workspace, from the initial configuration to the final one, while respecting object collisions and workspace confines. For each of these way-points, the robot must use its differential-wheel embodiment to move in a straight line up to that position in the workspace and, after reaching the target position, move on to the next. Figure 7 (right) shows the resulting captured imitative behaviour.
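The way-point following just described can be sketched as a turn-then-drive scheme, under the assumption of ideal in-place turns and straight segments (the function name and command format are ours, not the paper's):

```python
import math

def drive_to_waypoints(pose, waypoints):
    """For a differential-wheel robot with pose (x, z, heading in radians),
    emit a (turn_angle, forward_distance) command per way-point: rotate in
    place toward the way-point, then drive straight to it."""
    x, z, heading = pose
    commands = []
    for wx, wz in waypoints:
        target = math.atan2(wz - z, wx - x)
        # Wrap the required turn into (-pi, pi].
        turn = math.atan2(math.sin(target - heading), math.cos(target - heading))
        dist = math.hypot(wx - x, wz - z)
        commands.append((turn, dist))
        x, z, heading = wx, wz, target
    return commands, (x, z, heading)
```

Each command pair maps directly onto the two primitive motions available to the simulated robots: an in-place rotation followed by a straight-line segment.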



Figure 7: Using the critical points shown in Figure 2, starting from the initial positions shown in Figure 3, and minimizing the absolute displacement (red object), relative displacement (green) and relative position (blue) effect metrics, each of the robots must move along the way-points shown (left). The initial (dotted outline) and final (solid outline) positions are shown as circles, indicating that the orientation of the robots is not considered (the actual robots are square, but of equivalent size). Each way-point is indicated as a dot. The robots then perform an imitative behaviour in Webots, and the captured results from the simulation are shown in the right plot.

Figure 8: Using the critical points shown in Figure 2, starting from the initial positions shown in Figure 3, and minimizing the absolute displacement (red object), relative displacement (green) and relative position (blue) effect metrics, the manipulator must follow the continuous closed path, starting and ending at the top left corner of the workspace, shown as a dotted line (left). Since the human demonstrator did not rotate the objects, no angular effect metrics were used. The line is drawn using a gray-to-black color gradient to indicate the direction of the path. The manipulator then performs an imitative behaviour in Webots, and the captured results from the simulation are shown in the right plot.

The manipulator moves along way-points above the current and future positions of the objects. When the manipulator is above an object that must be moved, it will pick the object up, then move together with the object to the target position and place the object down (while also, if required, rotating it), before continuing to the next way-point. To match the effects at each critical point, the order in which the manipulator approaches the objects is the same (red object, then green, then blue). Figure 8 (right) shows the resulting captured imitative behaviour.

The experiments shown in Figures 7 and 8 illustrate the diverse character of different successful imitative behaviours, each optimized to match particular aspects of the effects of demonstrated human manipulation of objects, and show that the system is able to map human-demonstrated manipulations to matching robotic manipulations (in simulation), generalizing to different initial object configurations. From the examples shown it becomes apparent that the state of the objects in the environment and the context during the imitation attempt matter. This wide range of possible effect metrics illustrates that even the effect aspect of the correspondence problem for human-robot interaction by itself is already quite complex. Goal extraction in terms of effect metrics and granularity may have many different solutions that might not all be appropriate according to the desired results or context. The use of repeated demonstrations (Billard et al.) and the establishing of object-object correspondence create further possibilities for systems that can be used in programming robots by demonstration.

Acknowledgements

This work was funded in part by the Future and Emerging Technologies programme, under Contract FP.

References

A. Alissandrakis. Imitation and Solving the Correspondence Problem for Dissimilar Embodiments. PhD thesis, University of Hertfordshire.

A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn. Imitating with ALICE: Learning to imitate corresponding actions across dissimilar embodiments. IEEE Transactions on Systems, Man and Cybernetics.

A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn. Of hummingbirds and helicopters: An algebraic framework for imitation. World Scientific Series in Robotics and Intelligent Systems.

A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn. Towards robot cultures? Learning to imitate in a robotic arm test-bed with dissimilarly embodied agents. Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, 5(1):3-44.

A. Billard, Y. Epars, S. Calinon, S. Schaal, and G. Cheng. Discovering optimal imitation strategies. Robotics and Autonomous Systems.

G. Butterworth. Pointing is the royal road to language for babies. Lawrence Erlbaum Associates.

S. Calinon and A. Billard. Stochastic gesture production and recognition model for a humanoid robot. In Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).

J. Call and M. Carpenter. Three sources of information in social learning. In K. Dautenhahn and C. L. Nehaniv, editors, Imitation in Animals and Artifacts. MIT Press.

K. Dautenhahn and C. L. Nehaniv. An agent-based perspective on imitation. In Imitation in Animals and Artifacts. MIT Press.

K. Dautenhahn and C. L. Nehaniv, editors. Imitation in Animals and Artifacts. MIT Press.

Y. Kuniyoshi, M. Inaba, and H. Inoue. Learning by watching: Extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation.

C. L. Nehaniv. Nine billion correspondence problems and some methods for solving them, pages 64-72.

C. L. Nehaniv and K. Dautenhahn. The correspondence problem. In Imitation in Animals and Artifacts, pages 41-61. MIT Press.

C. L. Nehaniv and K. Dautenhahn. Like me? Measures of correspondence and imitation. Cybernetics and Systems.

C. L. Nehaniv and K. Dautenhahn. Mapping between dissimilar bodies: Affordances and the algebraic foundations of imitation.

B. Scassellati. Imitation and mechanisms of joint attention. In C. L. Nehaniv, editor, Computation for Metaphors, Analogy, and Agents.

L. Berthouze (AIST)

We discuss the role of this activity in the perception of visual speech, and speculate on how it shapes the underlying neural circuitry. We argue that in the early stages, speechreading involves an active phase of selection and sequencing of motor plans corresponding to representations of visible articulators acquired during articulatory mimicry.

This sequencing activity results in activation of lateral and medial premotor areas (BA6), which we observed in our fMRI study of speechreading in naive subjects. As the repertoire of visual-motor associations expands, the automatic recognition of the visual stimulus and the retrieval of the corresponding motor plan become possible, consistent with the activation of the left inferior frontal gyrus (putative locus of the human mirror system) reported in studies of speechreading of trained stimuli.

We conclude by outlining a computational model, and reporting on simple experiments of deferred head imitation.

Speechreading is the ability to perceive speech by viewing the articulatory movements of a speaker's face. In the hearing-impaired, it is an important means of communication; the deaf infant is not required to articulate, as long as it properly comprehends speech. In the hearing, speechreading also occurs, as evidenced by the McGurk effect (Calvert and Campbell). Yet, it remains to be seen whether articulatory mimicry can be explained by mirror neurons as found in the monkey, since such neurons do not seem adapted to serve imitation of new, never seen, actions. Thus, we are left with the question of whether this early imitation involves a different circuitry or an extension of the mirror system, a reasonable assumption from an evolutionary perspective, and one supported by Meltzoff and Moore, who reported imitation of facial gestures by neonates. However, the number of such visible articulators is, in reality, very limited.

Two critical differences between facial mimicry and articulatory mimicry should be noted. First, self-produced articulations play a preponderant role in what is being mimicked: Vihman describes how the infant selects, from the ambient language, patterns that match its own articulatory routines (Studdert-Kennedy). Secondly, articulatory activity follows a developmental trajectory, from pre-linguistic mouthing to purposive phonetic act. This, in turn, involves a transition from recognizing discrete patterns (elementary gestures, or movements of speech articulators) to recognizing continuous patterns (the coordinative structure of gestures). Indeed, the utterance of words requires an accurate timing of each gesture itself and accurate phasing of gestures with respect to one another (Studdert-Kennedy).

The idea of a motor simulation process is not novel per se. In our context, however, such forward controllers are not necessarily available (at least, not in the initial stages) and a generative process is therefore necessary. Thus, we hypothesize that perception of novel visual speech involves an active phase of generation, selection and sequencing of actions, biased by already-acquired patterns. As such, our proposal has conceptual similarities with the ASL (Associative Sequence Learning) hypothesis of Heyes, in particular, the contingent imitation by the caregiver. Existing studies, however, do not show such a pattern. Since infants are observed to execute consonants, with their precise, categorical loci of constriction, more accurately than the less precise, continuously variable vowels (Studdert-Kennedy), those articulations are more likely to elicit a response.

Campbell et al. reported activation of the fusiform gyri and the posterior part of the inferior temporal gyrus. In our fMRI study of speechreading in naive subjects, further activation was evident in the superior temporal gyrus, with large clusters of activation showing peak foci in the STS bilaterally, and in the inferior frontal gyrus, more extensively in the left than the right hemisphere. The subjects participated after providing informed consent according to AIST safety and ethics guidelines; they were not exposed to the stimuli before scanning and were not informed of the nature of the stimuli. The subjects were instructed to covertly repeat the stimuli. With respect to our hypothesis, this result is significant because covert speech has been widely shown to elicit a rather exclusive left lateralization of the precentral gyrus activation (Wildgruber et al.). Thus, if this right-hemisphere activation is not accounted for by covert speech, it may then be related to our hypothesized motor sequencing activity.

Figure 1: (Left) Skin surface of the facial simulator; (Right) the underlying musculoskeletal structure. The contractions or relaxations of each muscle result in a motion field in the skin structure, including the lips. A jaw mechanism enables the mouthing actions needed, and a sphincter controls the roundness of the lips.


In fact, studies on motor sequence learning actually support that view (Rushworth et al.).

4 Outline of model and results

The system consists of three major modules modeling the critical components of speechreading.

Facial simulator: Appropriate control synergies between jaw articulation, mouth sphincter, and facial muscles can implement the specifics of speechreading. (The simulator is an extension of the facial simulator developed at Imperial College under the supervision of Y. Demiris.)

Visual apparatus: The visual apparatus consists of a distributed network of feature detectors that respond selectively to apical segments of articulations. These detectors are trained with sample views of a particular object, using Haar-like features for fast detection.

Figure 2: Haar-like features for fast object recognition, from Lienhart and Maydt.

Sequence learning module: This module consists of sequence learning networks that seamlessly combine the learning and the prediction of arbitrary sequences of patterns into a single generative process (Berthouze and Tijsseling, in review). The sequence learning neural network (see Figure 3) was constructed according to design principles derived from neuroscience and existing work on recurrent network models. It utilizes sigmoid-pulse generating spiking neurons to extract timing information from the input, together with an adaptive learning rule with synaptic noise. Combined with coincidence detection and an internal feedback mechanism, it implements a learning process that is driven by dynamic adjustments of the learning rate. This gives the network the ability to not only adjust incorrectly recalled parts of a sequence but also to reinforce and stabilize the recall of previously acquired sequences. All learning occurs in the connections from the central module to the output and the predicted context modules.

Figure 3: The input layer is a placeholder for each pattern in a presented sequence, while the context layer receives both external contextual information as well as feedback information from the predicted context module. Input and context information, as well as feedback from the output module, is propagated to the central module, which contains a variant of spiking neurons; this module is responsible for extracting the variety in timing information from the input.

The output module is lize the recall of previously acquired sequences. Hebbian learning is used to establish connections between visual and mo- already been investigated, in particular using a model tor networks so that resonant coupling can be of inter-modal matching Demiris et al. Five detectors were trained off-line to detect At this stage of the project, the integration of the three five discrete head orientations the simplified equiv- components was only tested on a simplified task: the alent of the apical segments of a visible articulator.

As a result of learning, a continuous visual-motor mapping was acquired (see Figure 5). Successful acquisition required only a relatively low number of presentations (see Figure 4), after which the sequencing activity was reduced to a minimum.

Figure 4: Learning curve for a novel stimulus as a function of the number of presentations. The horizontal line denotes the timing of the actual visual stimulus.

Figure 5: Relationship between perceived orientation (vertical axis) and actual orientation (horizontal axis). The red line denotes a fit by logistic regression. The effect of the discrete encoding of the head orientation (-90, -45, 0, 45, 90) is noticeable in the acquired representation.

Nonetheless, there is supporting evidence for the three design principles used in the model. Studies showing that cells in the superior temporal sulcus (STS) are sensitive to discrete features of biological motion provide plausibility to our thesis that infants could construct detectors for apical segments of articulations. The fact that displaying such segments during perception of time-varying speech results in McGurk effects (Calvert and Campbell) justifies our idea that articulations are trajectories in the viseme space. This, in turn, could well explain why infants proceed from prosodic to segmental imitation. Indeed, a limited articulatory behavior of the child may result in its inability to detect continuous changes in the incoming visual patterns, and thus puts the focus on the duration (rhythm) of each discrete visible pattern. As the repertoire extends, segmental imitation becomes possible, through resonant coupling between external events and internal motor-based representations.

A future focus of this research will be to investigate the origins of the differences observed in the neural responses of hearing and deaf subjects. Since we considered a single model to account for both deaf and hearing articulation mimicry, it will be interesting to see if the above differences can be explained by feedback modality, rather than by functional differences.
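The detector activities that drive this mapping are noisy in practice; a minimal persistence filter (an illustrative stand-in, not the predictive filter actually used in the experiments, and with an assumed persistence length) shows one way noise-induced detector errors can be suppressed:

```python
def filter_detections(raw, persistence=2):
    """Accept a newly detected orientation only after it has persisted
    for `persistence` consecutive frames; until then, keep reporting
    the last accepted orientation."""
    filtered, run, current = [], 0, None
    accepted = raw[0] if raw else None
    for d in raw:
        if d == current:
            run += 1
        else:
            current, run = d, 1
        if run >= persistence:
            accepted = d
        filtered.append(accepted)
    return filtered
```

A one-frame spurious detection is ignored, while a sustained change of orientation passes through after a short delay.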

During subsequent interaction, the incoming visual stimuli (head panning movements) were processed by each detector in parallel, and a predictive filter was used to filter out noise-induced errors in the set of detectors. With the system initially having no established visual-motor mappings but the ones described above, the feeding of the time-series of detector activities to the sequence learning module resulted in the system generating head movements.

Acknowledgements

This research was funded by the Advanced and Innovational Research program in Life Sciences from the Ministry of Education, Culture, Sports, Science and Technology, the Japanese Government. The author wishes to thank Torea Foissotte for his programming contribution, and Naomi Nakagawa for her help with the human experiments. The author also thanks two anonymous reviewers for their comments.


References

L. Berthouze, S. Phillips, O. Terasaki, et al. Speechreading in naive Japanese subjects. NeuroImage, 22 (suppl.).

L. Berthouze and A. Tijsseling. A neural network for the learning and prediction of temporal sequences. Neural Networks, in review.

G. Calvert and R. Campbell. Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience, 15(1):57-70.

R. Campbell, M. MacSweeney, S. Surguladze, G. Calvert, P. McGuire, J. Suckling, M. Brammer, and A. David. Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Cognitive Brain Research.

J. Demiris, S. Rougeaux, G. Hayes, L. Berthouze, and Y. Kuniyoshi. Deferred imitation of human head movements by an active stereo vision head.

J. Demiris and G. Hayes. Imitation as a dual-route process featuring predictive and learning components: a biologically plausible computational model. In Imitation in Animals and Artifacts. MIT Press.

G. Dogil, H. Ackermann, W. Grodd, H. Haider, H. Kamp, J. Mayer, A. Riecker, and D. Wildgruber. The speaking brain. Journal of Neurolinguistics, 15(1):59-90.

P. Ekman and W. Friesen. Facial Action Coding System.

V. Gallese, L. Fadiga, L. Fogassi, and G. Rizzolatti. Action recognition in the premotor cortex. Brain.

C. Heyes. Causes and consequences of imitation. Trends in Cognitive Sciences, 5(6).

C. Heyes, G. Bird, H. Johnson, and P. Haggard. Experience modulates automatic imitation. Cognitive Brain Research, in press.

A. Liberman and I. Mattingly. The motor theory of speech perception revised. Cognition, 21:1-36.

R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In Proc. IEEE Int. Conf. on Image Processing.

H. McGurk and J. MacDonald. Hearing lips and seeing voices. Nature, 264:746-748.

A. Meltzoff and M. Moore. Explaining facial imitation: A theoretical model. Early Development and Parenting.

R. C. Miall. Connecting mirror neurons and forward models. Neuroreport, 14(17).

P. Nixon, J. Lazarova, I. Hodinott-Hill, P. Gough, and R. Passingham. The inferior frontal gyrus and phonological processing: an investigation using rTMS. Journal of Cognitive Neuroscience.

M. Rushworth, P. Nixon, D. Renowden, and R. Passingham. Neuropsychologia, 36(1).

M. Studdert-Kennedy. Mirror neurons, vocal imitation, and the evolution of particulate speech. In Mirror Neurons and the Evolution of Brain and Language.

D. Wildgruber, H. Ackermann, U. Klose, B. Kardatzki, and W. Grodd. Functional lateralization of speech production at primary motor cortex: a fMRI study. Neuroreport.

Y. Yoshikawa, M. Asada, and K. Hosoda. Connection Science, 15(4).


Blanchard and Cañamero (University of Hertfordshire)

In this paper, we study different methods to improve the reactivity of agents to changes in their environment in different coordination tasks. In a robot synchronization task, we compare the differences between using only position detection and using velocity detection.

We first test an existing position detection approach, and then we compare the results with those obtained using a novel method that takes advantage of visual detection of velocity. We test and discuss the applicability of these two methods in several coordination scenarios, and conclude by seeing how to combine the advantages of both methods.

1 Introduction

Synchronization and coordination are important mechanisms involved in imitation and social interaction, as put forward by psychological studies (Hatfield et al.), notably concerning rhythm. However, achieving good coordination is a very challenging problem in robotics. A property often used as input information for imitation is the position of the object agent. Using position information, the subject agent can start to move only after the object agent is in a new position. Even if such a delay is not always a problem when following a trajectory, it usually poses a problem for synchronization tasks, and this system is applicable only in the case of precise reproduction of movements. Position information can also be used to achieve synchronization, while dancing for example. This technique is efficient and simple, as it does not need complex visual tasks such as object recognition. However, a problem with this mode is that the reactions of the agent impose severe constraints.

The problem that we have addressed in this study aims at achieving natural and fast, adapted reactions of the robot to changes detected in its environment. This was made possible by our biologically plausible, bottom-up approach, following which we have adopted a minimal architecture that we have built using a neural network. We have developed four methods, based on position detection, velocity detection, and focalization. In this study, we also discuss the limitations of using only velocity detection in other imitation tasks, and we see how we can combine position and velocity detection to improve performance. The new resulting behavior of the robot is not the same but is still interesting: now, the reaction of the subject robot depends not only on the target position, but also on its contrast and activity.

2 Velocity detection

We have implemented the velocity detection method proposed by Johnston et al. It assumes constant luminosity: the luminosity variation of an image is then due only to the movement of its contents. The target is composed of two vertical strips, or a pattern of strips, drawn on a white paper attached to an object Koala robot, as shown in Fig. 1. Considering v_x the velocity of one point in x, k a constant coefficient that essentially depends on the distance to the object, and i the light intensity, we use

    v_x = -k (∂i/∂t) / (∂i/∂x),    (1)

which follows, up to the scaling k, from the constant-luminosity assumption, since then ∂i/∂t + v_x ∂i/∂x = 0. Without contrast we cannot estimate the movement of an object; this is not surprising. To solve this problem we use a threshold for the contrast: a low value of contrast (i.e. a value below the threshold) means that the velocity estimate is discarded.
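The thresholded, gradient-based estimate described here can be sketched in one dimension. This is an illustrative reconstruction, not the paper's code: the function name, the default gains, and the threshold value are our assumptions.

```python
import numpy as np

def estimate_velocity(prev_frame, frame, k=1.0, contrast_threshold=0.01):
    """1-D gradient-based velocity estimate under constant luminosity.

    Brightness constancy gives di/dt + v_x * di/dx = 0, hence
    v_x = -k * (di/dt) / (di/dx).  Pixels whose spatial gradient
    (contrast) is below the threshold are discarded, since velocity
    cannot be estimated where there is no contrast.
    """
    di_dt = frame - prev_frame            # temporal luminosity change
    di_dx = np.gradient(frame)            # spatial contrast
    mask = np.abs(di_dx) > contrast_threshold
    if not mask.any():
        return 0.0                        # no contrast anywhere: give up
    return float(np.median(-k * di_dt[mask] / di_dx[mask]))

# A striped pattern shifted by one sample per frame moves at ~1 sample/frame.
x = np.linspace(0, 4 * np.pi, 200)
v = estimate_velocity(np.sin(x), np.sin(x - (x[1] - x[0])))
```

With a uniform (zero-contrast) image the function returns 0.0 instead of dividing by zero, which is exactly the role the contrast threshold plays in the text.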


Figure 1: Experimental setup. On the left, the Koala robot (object) moves the target observed by a Hemisson robot (subject), on the right.

In all the experiments, we have used the Hemisson robot. It is impossible to know the exact position of a Hemisson robot, as it has no odometer sensor. We can be interested either in focusing our attention on a small part of the visual field (method 3), or in taking a global overview of the visual field. We can use the position detection system to focus on the target. We first use a temporal smoothing in order to keep a small signal when the target stops moving for a short time; in the present architecture, when the target stops, the robot is inhibited. Once a position has been set by the winner-take-all (WTA) competition, the subject agent only has to follow this position (method 1). The second method uses a simpler version without the WTA. The gray part could be replaced by a large static gaussian, and the architecture then only takes care of the global velocity (method 4). On this scheme, the curves are the result of real data. With the first three methods, we use the same setup (see Fig. 1).
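A minimal sketch of the temporally smoothed, winner-take-all position readout used by methods 1 and 2. The class, the leaky-integrator form of the smoothing, and the decay constant are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class PositionDetector:
    """Winner-take-all (WTA) position readout over a 1-D activity map,
    with leaky temporal smoothing so the estimate survives short pauses
    of the target."""

    def __init__(self, width, decay=0.8):
        self.trace = np.zeros(width)  # smoothed activity map
        self.decay = decay            # how slowly old activity fades

    def update(self, activity):
        # Leaky integration: keeps a small signal when the target
        # stops moving for a short time, as described in the text.
        self.trace = self.decay * self.trace + (1 - self.decay) * np.asarray(activity)
        return int(np.argmax(self.trace))  # WTA: most active position wins

det = PositionDetector(width=64)
frame = np.zeros(64)
frame[40] = 1.0                   # target at pixel 40
for _ in range(5):
    det.update(frame)
pos = det.update(np.zeros(64))    # target disappears for one frame
# the smoothed trace still peaks at the last seen position, 40
```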

The third method uses focalization on the object agent, defined using the position detection system. The reaction is fast and proportional to the stimulus velocity, since only the area of the target is considered. This is the ideal method for synchronization. The fourth method, which uses velocity without focalization, allows us to do pure synchronization: the target position does not matter and all the visual field is considered, so the subject reacts to the object agent wherever it is. To test this last method (4), we use a very similar setup, but this time the target is a wide pattern of strips.

3 Results

The first graph shows the results of the synchronization task using position detection with the WTA (method 1), and the second one shows the results without the WTA (method 2). In the last graph, the dashed line corresponds to the velocity of the object agent and the solid line corresponds to the velocity of the subject agent. Each iteration carried on for around ms. We have been able to reproduce the fly phenomenon with our robot: we put the robot in a drum with black and white strips and, when we move the drum, the robot moves with it, thus staying at the same place relative to the drum.
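The position/velocity trade-off measured in these experiments can be illustrated with a toy tracking simulation, entirely our construction: a biased velocity estimate makes a pure velocity follower drift without bound, while adding a position-error term bounds the error. The gains k_pos and k_vel are invented for illustration.

```python
def combined_command(subject_pos, target_pos, target_vel, k_pos=0.2, k_vel=1.0):
    """Velocity term for fast reaction, plus a position-error term that
    cancels the drift accumulated by pure velocity following."""
    return k_vel * target_vel + k_pos * (target_pos - subject_pos)

dt, vel, bias = 0.1, 1.0, 0.9   # the velocity estimate is biased low
target = subject_v = subject_c = 0.0
for _ in range(200):
    target += vel * dt
    subject_v += bias * vel * dt  # velocity only: the error keeps growing
    subject_c += combined_command(subject_c, target, bias * vel) * dt

drift_velocity_only = abs(target - subject_v)  # grows without bound
drift_combined = abs(target - subject_c)       # settles near a constant
```

The combined controller converges to a small steady-state error instead of drifting, which matches the observation that the best results come from using both signals.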

4 Discussion

All the methods that we have presented here have some interesting properties, depending on the task, when we want agent interactions, notably in imitation and synchronization. We can see that we have two kinds of methods: position-based and velocity-based. The first category does not produce a drift but is not very reactive. The second category is very reactive but has a drift that does not permit a prolonged interaction, since the target is eventually lost. The first method, which uses position detection, is very useful to follow the target trajectory. Nevertheless, the delay that it produces is not very convenient for synchronization tasks, or when we have a situation that changes often. If we want to synchronize a dance, velocity detection is very useful. To drive a system it is possible to use either the position (first order), giving a stable but slow system, or the velocity (second order), giving a fast but unstable system. The best results are obtained by combining both methods.

5 Conclusion

We have presented different methods that allow us to increase the level of interaction (synchronization) between agents. These processes are simple and easy to implement. We see also that velocity detection can help the target tracking by anticipating. Studies such as Hofsten and Rosander and Richards and Holley investigate the development of gaze control and predictive tracking in young infants, coordinating the movement of the eyes and the head for best tracking; we could use this work as inspiration to reproduce this phenomenon with robots. Since we have access to the velocity and not only to the area, the robot could learn what is associated with its own movement. Further work could try to make this architecture more biologically realistic, allowing the robot to integrate the perception and coordination problem. Therefore, we will focus our work on the learning of the perception-action loop.

Acknowledgments

Arnaud Blanchard is supported by a research scholarship of the University of Hertfordshire.

References

Andry, P. Gaussier, and J. Nadel. From sensorimotor development to low-level imitation. In 2nd Intl. Conf.
Gaussier, S. Moga, J.P. Banquet, and M. Quoy. From perception-action loops to imitation processes. Applied Artificial Intelligence, 1(7).
E. Hatfield, J. Cacioppo, and R. Rapson. Emotional contagion. Cambridge University Press.
Hofsten and Rosander. The development of gaze control and predictive tracking in young infants. Vision Research, 36.
E. von Holst. Das Reafferenzprinzip. Naturwissenschaften, 37.
Johnston, C. Benton, and M. Morgan. Concurrent measurement of perceived speed and speed discrimination threshold using the method of single stimuli. Vision Research, 39.
W. Prinz. Perception and action planning. European Journal of Cognitive Psychology, 9(2).
Richards and F. Holley. Infant attention and the development of smooth pursuit tracking. Developmental Psychology, 35.
Mathematical analysis of behavior systems.

Bryson Mark A.

We suggest that this model solves the problem of discrete replicants in memetics. We also describe some very preliminary work in implementing and testing our ideas through social learning in a computer game context.

1 Introduction

Human-like intelligence requires an enormous amount of knowledge — solutions to the hard problems of survival and reproduction, which for our species have come to involve complex social and technological manipulations. Some of these solutions are passed to us genetically, and some are learned by an individual during their lifetime through trial-and-error experience. For humans, one key source of knowledge is culture. By culture here we mean any knowledge an agent has derived from conspecifics by non-genetic means. Such social learning is less time consuming (at least for the individual) than individual trial-and-error learning. While some behaviours are known explicitly (and transmitted deliberately, by teaching), there is evidence that our species may have evolved the ability to take advantage of this powerful mechanism for increasing knowledge and fitness before we were capable of such explicit mechanisms, and that indeed we still implicitly learn complex multimodal behaviours from our conspecifics. This allows us to build and transmit knowledge that our cultures have not yet developed words or theories to describe or deliberately represent. This theory of cumulative knowledge generation is called memetics.

In this paper we discuss first how such learning may be accumulated socially by a culture, and then relate this to what we know about learning in individuals. We propose a model for task learning in general, which is clearly facilitated by social information. We then briefly describe our preliminary attempts to build and exploit such a model of learning.

2 Discretion in Memetics

Dawkins proposes that knowledge and behaviour can be viewed as developing through a process of evolution, just as biological life has. Ideas or behaviours are propagated if they survive intact long enough to be reproduced. Memetics is based on the concept of a meme, which is meant to be analogous to a gene. Some theorists have claimed that this analogy is invalid, on the grounds that genes are discrete, but memes are not. This claim is itself suspect, since to this day the term gene still does not describe a well-defined entity (Dennett), but is based on the fact that the DNA molecule ultimately encodes information in terms of discrete patterns of four possible bases. The underlying representation for a meme, though still completely unknown, is suspected not to be discrete, and therefore to be open to corruption. To describe the problem, Dawkins proposes a thought experiment where a child is shown a drawing of an unfamiliar type of boat and asked to copy it; then the process is repeated with another child who sees only the new drawing. Dawkins proposes a solution to this problem, which is that one learns not gross behaviours, but instructions as to how to behave. He proposes an alternate thought experiment, whereby children learn to build a boat by origami, an art based on folding paper.

In experiments where a box requires a sequence of actions to get open, subjects are generally able to open the box if they have first observed a demonstrator, but they will not necessarily go through all the same steps in the same order as the demonstrator, using their teeth rather than their fingers, for example. We think individuals learn in terms of skills, not instructions. In other words, a behaviour is more faithfully reproduced because it becomes a relatively short sequence of relatively large-grain actions rather than a long sequence of basic motor commands.

This hypothesis has several interesting ramifications. First among them is having variations of granularity in memetic representation. For example, consider some teacher J who starts with relatively few mathematical skills, but has by a slow laborious process managed to learn a technique for writing back-propagation networks. Her representation might be a long string of relatively simple arithmetic and trigonometric operators, while another programmer M may be generating the same code at a different level of granularity. Note too that the situation could be reversed, if J only knows trigonometry.

The hypothesis described above ties in neatly to another hypothesis in learning, this one about how brains can learn from experience. One way is that we can learn very slowly, taking a large number of examples to build up a model of how the world seems to be working, or at least of what the right thing is to do in a particular context; in deterministic domains such a model can be derived by extrapolating over a set of exemplars (Poggio; Atkeson et al.). The second way is to learn very quickly. The problem with learning very quickly is that we may be overly influenced by a very improbable event, taking it to mean more than it should. But any such slow-learning system builds its knowledge from experience.
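The earlier claim that behaviour copied as a few large-grain skills is shorter, and so less error-prone, than the same behaviour copied as motor primitives can be made concrete with a toy example; the skill library below is entirely invented.

```python
# Invented skill library: each large-grain skill expands into primitives.
SKILLS = {
    "fold_in_half": ["grasp", "lift_edge", "move_edge", "press"],
    "crease":       ["press", "slide", "press"],
    "fold_corner":  ["grasp", "lift_edge", "rotate", "press"],
}

def expand(skill_sequence):
    """Expand a sequence of skills into the underlying motor commands."""
    return [cmd for skill in skill_sequence for cmd in SKILLS[skill]]

# An origami 'boat' described at two levels of granularity.
boat = ["fold_in_half", "crease", "fold_corner", "fold_corner", "crease"]
primitives = expand(boat)
# 5 skill-level steps versus 18 primitive-level steps to copy correctly.
```

An imitator copying at the skill level has only 5 opportunities to make a transmission error, rather than 18, which is one way of reading the origami argument.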

The problem is, experience happens quickly. Consequently, what is needed is a fast-learning system, which is in turn driven by a set of skills learned or formed in the slower learning system. This sort of model could explain the results of Whiten (McClelland et al.): slow learning, they say, happens in the neocortex, while fast learning happens in the hippocampus (see also Treves and Rolls).

Another problem with fast learning is that it requires learning a large number of things, particularly if the system needs to hold each learned thing around long enough to allow a slow-learning system to process it. If two different things are learned that happen to be similarly indexed by whatever category mechanism has emerged in a largely unsupervised system, they may interfere if accommodation of new information is not done systematically (which is generally seen as the purpose of a slow learning system; McClelland et al.). We already know that representations in the hippocampus are highly dynamic and vary by context (Wiener; Kobayashi et al.), and clearly learned experience is itself a form of context. Thus the hypothesis that what and how we can learn with this system changes over time and experience is not excessively radical, although it does have interesting implications for the veracity of recall.

One answer is to use relatively few changes in memory in order to represent the full event, and this seems to be what the hippocampus does (Rolls). In order for a few changes to represent a complex event, each change must be highly salient: it must represent a relatively broad chunk of semantics, a complex concept (McClelland et al.).

4 A Model of Task Learning
