13/VII/2006

Conference day four: projects day

Filed under: — Ricardo Téllez @ 01:55 pm

The whole day was dedicated to presentations of different AI projects in development around the world. Due to some unexpected problems, the schedule of the day was changed. Even so, the day was very dense...

Fumiya Iida presenting the new schedule for the day (afternoon)

GIULIO SANDINI, GIORGIO METTA AND DAVID VERNON

Their talk was entitled RobotCub - sharing a body for the advancement of AI. The three speakers took turns explaining the RobotCub project, a five-year collaborative project to construct a robot with the shape of a baby. Its goals are to study the development of cognitive manipulation skills and to construct an engineering tool. The project is carried out by a consortium of different universities and research centers, each one in charge of one part of the project.

Giulio Sandini after his talk

RobotCub's primary goal is to give the scientific community an open platform, in both software and hardware. During the open days (a kind of meeting they hold every year) they seek collaboration, joining efforts with ongoing projects of others, and directly supporting projects on cognition based on the RobotCub platform.

The RobotCub approach is based on the belief that cognition emerges through interaction. In this sense, cognition is best studied through a program of progressive development. Given that, which core abilities does the project need to address?

- First, the initial abilities the system starts with have to be determined.

- Second, they decided to concentrate on the development of postural and locomotion control, followed by other modes of exploration, reaching and manipulation, and social skills. These abilities define the physical requirements of the body:

for posture and locomotion -> full-body control and locomotion (crawling)

for exploration -> head-eye system with vestibular system

for reaching and manipulation -> arms with dexterous hands and touch

for social skills -> rich movements

At this point, Giulio handed over to Giorgio Metta to continue with the explanation of the iCub platform.

Giorgio Metta

The iCub is the humanoid baby robot being designed in the RobotCub project. It has constraints in kinematics (the geometry of the robot: size, shape, degrees of freedom, etc.) and dynamics (the forces and torques it has to support). The robot has 53 degrees of freedom (DOF): 9 for each hand, 7 for each arm, 6 for each leg, and 6 for the waist.

Each joint has sensors for position, torque/tension, and temperature. Other sensors include cameras, tactile sensors, skin, torque sensors, microphones, a speaker, impact and contact sensors, gyroscopes, and linear accelerometers. Apart from the sensors and electronics, it will have a plastic skin-shell that covers all the systems.

The iCub software architecture is a set of specifications for how the software components of the system are constructed and how they interact. Once the hardware and software architectures are done, it will be easy to move the robot and start doing things with it quickly.

Giorgio ended with a progress report. At present they are constructing the first prototype of the mechanical part, and the electronics are about to be ready for testing.

Giorgio then handed over to David Vernon, who explained the cognitive aspects of the robot.

David Vernon and the upper torso of the iCub

He defined cognition as the process by which a system achieves robust, adaptive, anticipatory and autonomous behavior; in this view, intelligence is prediction, prediction, prediction. But where does meaning come from? He pointed out that meaning emerges through shared consensual experience. So in order to interact with a robot, it has to have a body that allows it to ground experiences and to communicate with us by sharing them.

The robot needs a progressive ontogenetic acquisition of anticipatory capabilities (development). How can such a cognitive architecture be developed? They follow ideas from diverse cognitive systems, but these were presented very quickly, as was the rest of the presentation. Their idea is based not only on model fitting, but also on model creation. It will need embodiment, affect, learning, social motivations, etc.

The main idea of the project: first provide the robot with a cognitive substrate, and then construct the cognitive ontogeny on top of it. More information can be found at www.icub.org.

CLAES VON HOFSTEN

His talk was entitled The development of gaze control in human infants. Gaze stabilization is an important control problem. It is difficult because acuity is so much higher in the center of the eye than in the periphery. Both head and eye movements are involved, and it requires prediction.

Tracking objects with both head and eyes is the rule rather than the exception in human infants.

At least five visual, vestibular, and proprioceptive mechanisms have evolved to handle the problem. These are divided into two types (see the toy sketch after the list):

1- Systems designed to handle subject movements (the vestibulo-ocular, vestibulo-collic, and oculo-collic systems). They all collaborate to obtain stabilization.

2- Systems designed to handle object motion (SPEM, smooth pursuit eye movement, and the head tracking system).
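To make the division concrete, here is a toy control-loop sketch of how a subject-movement mechanism (the VOR) and an object-motion mechanism (smooth pursuit) can cooperate to stabilize gaze. The gains and update rule are my own illustrative assumptions, not a model from the talk.

    import math

    def gaze_step(eye_angle, head_angle, head_velocity, target_angle,
                  dt=0.01, vor_gain=1.0, pursuit_gain=0.7):
        """Advance the eye orientation by one time step (angles in radians)."""
        gaze = head_angle + eye_angle        # gaze direction in the world frame
        retinal_slip = target_angle - gaze   # visual error driving smooth pursuit
        # VOR: counter-rotate the eye against head motion (inertial feedback).
        # Pursuit: rotate the eye toward the target (visual feedback).
        eye_velocity = -vor_gain * head_velocity + pursuit_gain * retinal_slip / dt
        return eye_angle + eye_velocity * dt

    # Example: the head oscillates while the target stays still; the eye
    # counter-rotates so that gaze stays close to the target.
    eye, dt = 0.0, 0.01
    for i in range(300):
        t = i * dt
        head = 0.2 * math.sin(2 * math.pi * t)
        head_vel = 0.2 * 2 * math.pi * math.cos(2 * math.pi * t)
        eye = gaze_step(eye, head, head_vel, target_angle=0.0, dt=dt)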

He then presented some important facts about these mechanisms:

* Tracking objects with the head is phylogenetically older than tracking with the eyes.

* To be useful, eyes and head have to be coordinated.

* Gaze control is fully developed at about 6 months of age.

He then presented the method they have developed for studying gaze in infants. Using that method, he showed that in infants the gaze is mainly guided by vestibular systems.

von Hofsten showing the experiments done with his grandchild

Eye-head coordination: at around 3-4 months of age, infants start to use their head to a much greater extent than before. This introduces two additional problems to be solved:

1- The inability to inhibit the VOR (vestibular-controlled counter-rotation of the eyes) during passive motion.

2- The lagging head. The eyes and the head must function as a single system, which means that if the head lags, the eyes must lead. This is especially noticeable for rapid motions.

So how does the mature system know when to compensate for head movements? In this case, the problem is solved in the frequency domain, but his explanation was too quick for me to understand it.

In conclusion, he stated that the vestibular system dominates the control of gaze.

=================

At this point of the conference, the schedule started to become very tight, and speakers sped up their presentations. It was really a shame, since the remaining talks were very interesting, but it was almost impossible to follow them while taking notes.

=================

KERSTIN DAUTENHAHN

Why social intelligence matters in the design and development of intelligent robots

Her talk was about what social robots are and why they are needed. Social robots are robots that rely on the human tendency to anthropomorphize and to capitalize on feelings. They need social skills in order to interact with humans, but since those skills are costly to implement, they should only be added when required by the application.

In order to reduce the social skills required, one important step is to identify the robot's social niche, that is, where it will be required and for what (contact with humans, robot functionality, ...).

Possible human-robot relationships:

- Caretaker paradigm: interactions similar to those with a child

- Companion paradigm

Possible issues for a robot companion:

- Be considerate

- Be pro-active

After defining those options for the social robot, she briefly talked about her experiments with autistic children and their relation with social robots, and how that relation helped the children relate better to other people.

Kerstin Dautenhahn presenting her social robot

More information about her research can be found at http://homepages.feis.herts.ac.uk/~comqkd

CHRYSTOPHER L. NEHANIV

Sensorimotor experience information and development

He started by introducing the RobotCub project, then focused on sensorimotor experience and how to use it to go beyond reactive architectures and create artificial autonomous agents.

He introduced the definition of kinesics in human-robot interaction, which relates to the introduction of timing.

He advocated a central role for information in embodied intelligence. The idea is to use Shannon's information theory to characterize, control, organize and use embodied sensorimotor experience and interaction. He then discussed how to derive sensorimotor structure and laws from uninterpreted sensor data based on an information metric. His idea is the creation of sensoritopic, cortex-like maps from the information geometry of the sensors, together with sensorimotor laws.

The information-theoretic framework that he uses is based on the following points (a small sketch follows the list):

- Shannon's information theory

- Sensors and motor variables seen as information sources

- Entropy estimated from the probability distribution of the variable

- A measure of information distance, expressed in bits.
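As a concrete illustration of the last point, here is a minimal sketch of how such an information distance between two discretized sensor channels can be estimated from data. The metric d(X,Y) = H(X|Y) + H(Y|X) is standard information theory; the binning and the toy data are my assumptions, not Nehaniv's actual implementation.

    import math
    from collections import Counter

    def entropy(samples):
        """Shannon entropy, in bits, of a sequence of discrete symbols."""
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def information_distance(x, y):
        """Information metric d(X,Y) = H(X|Y) + H(Y|X), in bits.

        Computed via the identity d(X,Y) = 2*H(X,Y) - H(X) - H(Y), with all
        entropies estimated from the empirical distribution of the samples.
        """
        joint = list(zip(x, y))
        return 2 * entropy(joint) - entropy(x) - entropy(y)

    # Hypothetical sensor channels, discretized into integer bins over time.
    s1 = [0, 0, 1, 1, 2, 2, 3, 3]   # e.g. a binned camera pixel
    s2 = [0, 0, 1, 1, 2, 2, 3, 3]   # an identical channel: distance 0 bits
    s3 = [3, 1, 0, 2, 3, 0, 1, 2]   # an unrelated channel: distance 2 bits here
    print(information_distance(s1, s2), information_distance(s1, s3))

Laying the sensors out so that informationally close ones end up as neighbors (for example, by multidimensional scaling on the pairwise distances) is what yields the sensoritopic maps he described.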

He applied this measure to the Aibo robot, resulting in the construction of a map of the sensors based on it. Applying it to an Aibo in an environment with only vertical contours led to the emergence of impoverished sensoritopic maps. He showed how the visual field self-organizes based on sensory experience, generating a kind of map of sensorial experiences. These maps can be adapted, and different sensors can be integrated into them.

Then, in a very quick explanation, he pointed out possible applications of the sensory reconstruction method.

The next step in his research is to ground actions in sensorimotor perceptions, but this work is still at an early stage.

How can embodied agents develop in response to extended experience at various time scales? They define a geometry of experience that supports an information metric over experiential episodes of various temporal horizons; the agent then acts using a dynamically growing, developing space of memories.

He used this measure to distinguish different behaviors on Aibo robots, ending up with a map of the space of behaviors.

He used all of this to define an interaction history architecture, but his explanation was so quick that I couldn't really understand what it was all about. Basically, he used the architecture to predict a ball's path from the history of experience on an Aibo robot, and to develop the capability to play a social interaction game (peekaboo).

RÜDIGER DILLMANN

Emergent cognitive capabilities for humanoids: robots learning sensorimotor skills and task knowledge from multi-modal observation of humans

For him, the basic problem is to have a robot capable of developing perceptual behaviors and cognitive categories, and capable of communicating and sharing these with humans and other artificial agents. With this in mind, they developed a robot under a project called SFB588. These are the robots of the ARMAR series, humanoids for household environments.


Rüdiger Dillmann

His research lines are based on multi-modal interaction, cooperation and learning.

He described the robot's characteristics in detail: body, hands. Then he presented an external perception system for human motion that captures images of people doing different housekeeping tasks. The information obtained from it is used to generate models of human motion, which later make it possible to classify what a person is doing. All this information is also used to transfer knowledge to the robot, for example by imitation or by directly teaching it.

Programming by demonstration:

They construct a system where the user gives a demonstration, which is observed; the images are then segmented and interpreted; a hierarchical task model is generated; and a set of classes of manipulation tasks is created. Interactive object modeling is used for the objects, and at the end the robot tries to imitate the observed task.
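Schematically, that pipeline could look like the toy sketch below. All names and the toy data are illustrative placeholders, not the actual SFB588/ARMAR software.

    def segment_demonstration(observations):
        """Stages 1-2: split an observed stream into (action, object) segments."""
        segments, current = [], None
        for action, obj in observations:
            if (action, obj) != current:          # a new segment starts here
                current = (action, obj)
                segments.append(current)
        return segments

    def build_task_model(segments):
        """Stage 3: group segments into a (here: one-level) task hierarchy."""
        return {"task": "demonstrated_task", "subtasks": segments}

    def imitate(task_model):
        """Final stage: replay the task model as robot commands (printed here)."""
        for action, obj in task_model["subtasks"]:
            print(f"robot: {action} {obj}")

    # A hypothetical demonstration, already labeled frame by frame.
    obs = [("grasp", "cup"), ("grasp", "cup"), ("move", "cup"), ("place", "cup")]
    imitate(build_task_model(segment_demonstration(obs)))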

Then he moved on to another project, called PACO-PLUS, which aims at the design of a cognitive robot system capable of the previously defined goals.

The guiding principles are:

- Objects and actions are inseparably intertwined

- Categories are determined by the actions an agent can perform

- This leads to the definition of OACs (object-action complexes).

====================

POSTER SESSION 2

At this point, the second session of posters took place after lunch.

The second session of posters


It was not as crowded as on the previous day, since the number of posters was huge and people were starting to feel tired.

====================

NORMAN PACKARD

Artificial cells

It is impossible to buy a piece of engineering as complex and precise as a living cell. So he goes for the design of artificial cells. His interest is in how to create them, how to program them, and in finding their computational potential.

He showed a protocell schematic of the artificial cell, composed of the following:

- Container: molecules acting like a vesicle, a lipid bilayer made up of simple amphiphiles

- Information: PNA and DNA oligomer molecules

- Metabolism system

It has to be taken into account that this project uses neither proteins nor enzyme chemistry.

The container molecules, the amphiphiles, are divided into a head and a tail. The head is typically ionic and hydrophilic, and the tail is hydrophobic. Combined in water, these give a large universe of amphiphilic molecules.

The information polymers are DNA oligomers and PNA.

He pointed out that the problem here is integration. The component reactions need to coexist, and this is extremely difficult to engineer using a bottom-up approach. That is why he asked himself whether it is possible to compute how to go from molecular constituents to a target mesoscopic object. But he found that this problem is PSPACE-hard.

So he tried to solve the problem in another way:

- Explore the space of chemical reactions combined with a complex molecular environment

- Use an evolutionary algorithm for optimization and search of the space of solutions

The evolutionary procedure consists of defining an experiment on the gene, implementing it physically, and obtaining the fitness from the result of the experiment. Using this method, he managed to obtain a new kind of molecular assembly, even though it is still far from a living cell.
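The loop he described, with the wet-lab experiment replaced by a stub, could be sketched as follows. The gene encoding, population size, mutation scheme and the stand-in fitness function are all assumptions for illustration.

    import random

    def run_physical_experiment(gene):
        """Stand-in for the physical experiment that measures fitness.

        In the actual procedure this would be a lab measurement of the
        resulting chemistry; here it is an arbitrary toy function.
        """
        return -sum((x - 0.5) ** 2 for x in gene)

    def mutate(gene, sigma=0.05):
        """Gaussian mutation, clamped to the [0, 1] parameter range."""
        return [min(1.0, max(0.0, x + random.gauss(0.0, sigma))) for x in gene]

    def evolve(n_params=4, pop_size=20, generations=50):
        """Truncation-selection evolutionary search over experiment genes."""
        population = [[random.random() for _ in range(n_params)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=run_physical_experiment, reverse=True)
            parents = ranked[:pop_size // 2]
            children = [mutate(random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=run_physical_experiment)

    best_experiment = evolve()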

ROLAND SIEGWART

The rise of robots - machines sharing the environment with natural creatures.

He started the talk with a classification of robots: industrial robots, service and personal robots, and cyborg robotics. He decided to dedicate himself to the second type, because the market for those robots is predicted to grow strongly in the near future.

He is concerned with the relations of robots to natural systems. He showed a first experiment where small robots interact with cockroaches. The first results showed that the robots were accepted by the cockroaches, and that the robots could influence their society, for instance by making the group aggregate in one place.

He then showed the bottom-up architecture that he is designing for his future robots.

First he concentrated on localization and SLAM. Here, he uses a system for extracting features of the scene and applies it to what he calls a topological fingerprint map.

He then addressed the problem of going from features to object recognition, then from objects to regions (groups). All the relations between objects, regions and places are then put into a graph, called the object graph model, where objects are the basic elements.
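The structure he sketched (objects grouped into regions, regions attached to places, everything connected by labeled relations) could be represented as a simple labeled graph; the layout below is my own illustration, not the project's actual data structure.

    from collections import defaultdict

    class ObjectGraph:
        """A minimal labeled graph: nodes with (relation, node) edges."""

        def __init__(self):
            self.edges = defaultdict(list)

        def relate(self, a, relation, b):
            self.edges[a].append((relation, b))

        def neighbors(self, node):
            return self.edges[node]

    g = ObjectGraph()
    g.relate("cup", "on", "table")                      # object-object relation
    g.relate("cup", "part_of", "breakfast_set")         # object -> region (group)
    g.relate("breakfast_set", "located_in", "kitchen")  # region -> place
    print(g.neighbors("cup"))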

They plan to use all this outdoors, for example as a safety support system for cars.

Roland Siegwart and the object graph model

He ended his talk with a quick review of a new EU-funded project called Bayesian Approach to Cognitive Systems.

KOH HOSODA

Mobiligence project: emergence of adaptive motor function through interaction among the body, brain and environment

Koh Hosoda presenting his mobiligence project

He addresses the question of how it is possible that we have such good adaptive behavior. His hypothesis is that it emerges from the interaction among body, brain and environment, which requires actions or motions by the subject. They call this mobiligence, i.e., intelligence through mobility.

The main difference from conventional robotics is that robotics discusses intelligence for mobility, whereas mobiligence investigates intelligence emerging from mobility. Mobiligence is a collaboration between biology (physical models, clinical medicine, animal experiments) and engineering (dynamical system models, robot experiments and simulations). As an example, they applied this collaborative paradigm to explain the walking problems of patients with Parkinson's disease, and showed some simulation experiments that were able to predict the walking behavior of such real patients.

Then he described the Mobiligence project extremely quickly.

RAJA CHATILA

Cogniron

Raja Chatila is the leader of the Cogniron project, a project for the creation of a cognitive robot companion.

He started by indicating that cognitive capabilities can be divided into:

- Sensory-motor activities

- Interpretation of spatial and temporal information

- Focus of attention

- Inference, prediction, deliberation, reflection, planning

- Communication and interaction

- Learning

But all those processes must run continually and concurrently.

Raja Chatila

The Cogniron project focuses on four capabilities:

- Learning and understanding space, objects and situations

- Learning skills

- Interacting with people

- Making decisions and taking initiative

The research areas of the project are:

- Multi-modal dialogues

- Detection and understanding of human activity

- Social behavior

- Skill and task learning

- Spatial cognition

- Intentionality

Every area is measured by means of some key experiments.

The results achieved so far by the project:

* In space perception/interpretation

He showed how the robot manages to recognize objects and create 3D object models from stereo vision. Then he showed how it learns about the environment for navigation and interaction with humans at a conceptual level. As an interesting trick, it uses objects as landmarks when moving in the environment. Landmarks are created by detecting interesting points on objects.

For scene recognition, he uses what he calls object-invariant detectors.

* Communication interaction

First it concentrates on detecting people. Basically this works by face tracking, but a gesture-tracking mechanism is also implemented. He has done some work on human-aware motion, which considers the human's field of view when planning motion. They have implemented it based on some human studies. They have also studied how to hand an object to a person.
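Human-aware motion of this kind is often framed as an extra cost over candidate robot positions, penalizing poses that get too close to the person or fall outside their field of view. The sketch below is my own formulation of that idea; the weights and thresholds are assumptions, not Cogniron's actual cost model.

    import math

    def human_aware_cost(robot_xy, human_xy, human_heading,
                         fov=math.radians(120), safe_dist=1.0):
        """Cost of a candidate robot position relative to a person."""
        dx, dy = robot_xy[0] - human_xy[0], robot_xy[1] - human_xy[1]
        dist = math.hypot(dx, dy)
        # Safety: grows as the robot intrudes within the comfort distance.
        safety = max(0.0, (safe_dist - dist) / safe_dist)
        # Visibility: penalize positions outside the person's field of view.
        bearing = math.atan2(dy, dx)
        off_axis = abs((bearing - human_heading + math.pi) % (2 * math.pi) - math.pi)
        visibility = 1.0 if off_axis > fov / 2 else 0.0
        return 2.0 * safety + 1.0 * visibility   # weights are arbitrary

    # A planner would add this cost to each pose along a candidate path,
    # preferring paths that stay visible and keep a comfortable distance.
    print(human_aware_cost((0.5, 0.0), (0.0, 0.0), human_heading=0.0))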

* Skill/task Learning

Basically, the robot learns by imitation and by interaction with the human.

* Decision

This is about symbolic task planning, with joint intentions with humans.

KOH HOSODA

Asada Synergy project

He started his talk by introducing the concept of synergistic intelligence, which is based on cognitive developmental robotics. He observed that what robots are missing are the faculties of communicating with ordinary people and of intelligent behavior. He defined synergistic intelligence as the emergence of intelligence through interaction with the environment, including humans.

Their methodology for the generation of intelligence is called cognitive developmental robotics. It consists of embedding a computational model of human development in a robot, making it relate to and interact with the environment, and observing how intelligent behavior emerges. For this approach, environmental design issues are very important.

Koh Hosoda during his second intervention of the day

What they try to do is understand the fundamental developmental processes of a three-year-old child, in order to realize this cognitive developmental process in a robot.


====================

Hosoda's presentation was followed by another panel discussion, this time about funding AI research.

The final panel discussion took place indoors due to a rainy afternoon


Note: the text may contain errors. Please contact me if you find any, and I will fix them.
