11/VII/2006

Conference Day Two: Cross-fertilization at work

Filed under: — Ricardo Téllez @ 01:55 pm

Or how other disciplines can help AI.

The conference day was divided into two main blocks, entitled Cross-fertilization at work Part 1 and Part 2. At the end of the day, everybody agreed that this was one of the best conferences they had ever attended.

The first block took place before lunch and consisted of three talks.

RODNEY DOUGLAS

The first speaker was Rodney Douglas, with a talk entitled Neocortical architecture, possible computational capabilities and their implementation in hybrid VLSI electronic system.

His topic was the substrate on which AI could be implemented. His research idea is to look at the human cortex and extract a kind of (electronic?) schematic of how the cortex works, and then transfer it into VLSI circuits, thereby obtaining an electronic emulation of the human cortex.

He showed a schematic of how a VLSI system works and compared it with how the nervous system works (similarities and differences). Then he turned to the cortex, showing how neurons are interconnected and paying special attention to their huge connectivity. To simplify things a little, he studied neurons separately and how they connect to other neurons. He divided the neuron distribution into 3D (physical) layers, which considerably reduces the types of connectivity that can be found between neurons. Then, using the available data about different types of neurons together with the 3D diagram, he created a kind of electronic schematic of some areas of the cortex. The result is a set of (electronic?) schematics that define (in theory) how those areas of the brain work.

After obtaining these cortex circuits as electronic schematics, he described some of their properties, some linear and some non-linear. He claimed that these cortex circuits are digital and analog at the same time (in terms of their properties): a kind of mixed circuit (variable analog gain, analog signal restoration and multi-stability).

The next step was to implement those circuits in VLSI. After the implementation, he showed some simple results obtained by using those circuits together with an artificial retina.

Finally, as conclusions, he pointed out that the cortex has strong recurrent connections and that these have interesting computational properties that can be reproduced in VLSI circuits.

Rodney Douglas cross-fertilizing

ANDREW SCHWARTZ

His talk was about brain-machine interfaces and was entitled Useful signals from motor cortex.

He started with the history of the first researchers who mapped the motor cortex by stimulating a dog's brain with electrodes. Despite those early good results, the output physiology/anatomy of the motor cortex turned out to be more complex than that.

He then showed a kind of schematic of the brain... using boxes! The concept of modularity was introduced, and it looks like it is going to be very relevant. Every module (box) has a specific function and is connected to others. The 'size' of the boxes depends on the level at which the description is done (anatomy, inverse dynamics, coordinate systems). The problem arises when one tries to explain the blocks of one level from another, more fundamental level. It does not work.

He then went on to describe his own work, which is based on measuring the electromagnetic signals produced by single neurons when they fire. To get a good measurement, he decided to record from motor cortex cells, because they are huge cells and the field they generate is strong and easy to measure. However, there is a problem in identifying whether the signals are due to the stimulus or to the response.

To avoid this problem, he decided to study the neurons' response while motor activities are performed. From his experiments, he concluded that cosine tuning functions are ubiquitous in the motor cortex.

He then showed some experiments demonstrating a correlation, based on those cosine functions, between how the motor cortex fires and how the body moves. In this way he can reconstruct the vector that the motor cortex is encoding, as well as the vector of the real body movement, and compare the two across a series of experiments. These experiments allowed him to find the differences between what the motor cortex 'thinks' we are doing and what we actually do. Further experiments showed behavioral differences between the motor cortex and the pre-motor cortex.
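
To make the idea of reading movement out of cosine-tuned cells concrete, here is a small toy sketch of my own (not Schwartz's actual decoder; all parameters are invented): a population of simulated cells with cosine tuning curves, whose preferred directions are summed, weighted by firing rate, into a population vector that recovers the movement direction.

```python
import numpy as np

# Toy illustration of cosine tuning + population-vector decoding.
rng = np.random.default_rng(0)

n_cells = 50
preferred = rng.uniform(0, 2 * np.pi, n_cells)   # each cell's preferred direction
baseline, gain = 10.0, 8.0                        # firing-rate parameters (arbitrary)

def firing_rates(theta):
    """Cosine tuning: rate peaks when the movement aligns with the preferred direction."""
    return baseline + gain * np.cos(theta - preferred)

true_direction = np.deg2rad(135)
rates = firing_rates(true_direction) + rng.normal(0, 1.0, n_cells)  # one noisy trial

# Population vector: sum each cell's preferred-direction unit vector,
# weighted by its baseline-subtracted firing rate.
weights = rates - baseline
pop_vec = np.array([np.sum(weights * np.cos(preferred)),
                    np.sum(weights * np.sin(preferred))])
decoded = np.arctan2(pop_vec[1], pop_vec[0]) % (2 * np.pi)

print(f"true: {np.rad2deg(true_direction):.1f} deg, decoded: {np.rad2deg(decoded):.1f} deg")
```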

His experiments led him to conclude that one can tell what someone is thinking by looking at the way they move.

For the final part, he showed some implant probes (metal implants that are inserted into the brain) that can be used to record the potentials fired by neurons. He proposed using those implants for impaired people, and along these lines he showed some cruel experiments in which an implanted monkey controlled an arm prosthesis.

Andrew Schwartz at the end of his talk, after having shown his cruel experiments with monkeys

PETER FROMHERZ

The last speaker of the session was Peter Fromherz, who spoke about how to connect brains to computers. His talk was entitled Biophysical studies on brain-computer interfacing.

One of the points he emphasized was the distinction that the brain thinks but a computer computes. Furthermore, the movement of ions inside nerve cells is a lot slower than the movement of electrons in silicon.

After stating the problems he would encounter, he started talking about how to put a single cell on a silicon chip. He explained some of the problems that arise when attaching the cell to the silicon, such as the distance at which it has to be placed, the resistance, etc. The cell is connected to the silicon in such a way that it plays the role of the gate of a field-effect transistor (a kind of hybrid FET, where the gate is implemented by the living cell and the source and drain by the silicon). This connection creates a hybrid bio-electronic circuit, for which he showed the equivalent RC circuit.

Then he explained how to go from ions (in the living cell) to electrons (in the silicon) and vice versa. In the first case, the movement of sodium ions through the nerve membrane produces a current that acts on the gate of the silicon FET, thereby modulating the electric current in the silicon. In the second case, a capacitor is created in the silicon chip: when this capacitor, formed by the oxide layer, is charged, ions are driven from the inside of the nerve cell to the outside through the nerve membrane. By applying suitable voltages to the capacitor, it is possible to open and close ion channels in the membrane. He then showed some implementation details using CMOS technology.
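
To get a rough feel for the kind of equivalent RC circuit mentioned above, here is a deliberately simplified sketch of my own (not Fromherz's detailed point-contact model; all values are invented): an ionic membrane current charges a single RC node, and the resulting junction voltage is what would act on the FET gate.

```python
import numpy as np

# Very simplified cell-to-chip junction treated as one RC node (illustrative only).
C_j = 1e-12        # junction capacitance [F] (assumed)
R_seal = 1e8       # seal resistance between cell and chip [ohm] (assumed)

dt, T = 1e-6, 5e-3
V_j = 0.0
trace = []

for k in range(int(T / dt)):
    t = k * dt
    # assumed ionic current pulse through the membrane patch facing the chip
    I_ion = 50e-12 if 1e-3 < t < 2e-3 else 0.0
    # RC node: C dV/dt = I_ion - V / R_seal
    V_j += dt * (I_ion - V_j / R_seal) / C_j
    trace.append(V_j)

print(f"peak junction voltage ~ {max(trace) * 1e3:.2f} mV")  # this is what the gate 'sees'
```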

Then he talked about connecting synapses to the chip. Basically, he exploits the same FET effect (gate-source-drain) as before, with the gate controlled by means of neurotransmitters.

Next, he showed some interesting experiments where two neurons placed on the silicon were connected through an electrical circuit on the chip, implementing a kind of feedback loop that allowed the creation of a memory cell.
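
To illustrate why a feedback loop between two neurons can behave as a memory cell, here is a toy rate-model sketch of my own (invented parameters, not the actual hybrid circuit): mutual excitation makes the pair bistable, so a brief input pulse gets latched indefinitely.

```python
import numpy as np

# Two mutually exciting units as a one-bit latch (illustrative rate model).
w = 3.0                 # mutual excitation strength (assumed)
tau, dt = 10.0, 0.1

def f(u):
    """Sigmoidal activation with a threshold, so the silent state is also stable."""
    return 1.0 / (1.0 + np.exp(-6.0 * (u - 1.5)))

x = np.zeros(2)                                     # both units silent at start
write = lambda t: 4.0 if 20 < t < 60 else 0.0       # brief "write" current to unit 0

for k in range(2000):
    t = k * dt
    drive = np.array([w * f(x[1]) + write(t),       # unit 0: feedback from unit 1 + write pulse
                      w * f(x[0])])                 # unit 1: feedback from unit 0
    x += dt / tau * (-x + drive)

print("activities long after the pulse:", np.round(f(x), 3))  # both near 1: the bit is stored
```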

Then he showed how to obtain networks of neurons on the silicon: several neurons are placed at the desired positions on the chip and their growth is guided by growth-guiding mechanisms. However, some important problems arise, and even though it worked for small networks, Fromherz was skeptical about the possibility of connecting hundreds of neurons this way.

To solve this problem, he proposed a randomly connected system, which still needs to be tested.

Finally, he ended his talk with a few remarks about brain tissue on semiconductors.

Peter Fromherz during his talk

After lunch, we started the second part of Cross-fertilization at work, which consisted of four speakers and one panel discussion.

The sound of a gong indicated the start of each session

RUDOLF BANNASCH

I personally consider this presentation a masterpiece of communication. With slides based on images and no text, Bannasch managed to convey his ideas and research results in a very funny and amusing way, and left the whole audience on the edge of their seats waiting for more.

Rudolf Bannasch demonstrating the movement of the penguins

His talk was entitled Morphological intelligence in bionic applications, and the main idea was that a complex control system is not necessary in order to obtain good control, since the physical properties of the body itself provide a kind of control that simplifies the control system. Furthermore, systems can be optimized by an evolutionary process; biological systems have already been optimized this way, so we can take inspiration from them.

This led him to talk about finding optimality in any process or engineering design. He said that the movement towards optimality has to be made in small steps in order not to break the whole thing (roughly 1 good result out of 5 experiments). He explained in a very funny way how evolution works, how it finds local optima and how it stays there until a mutation shakes everything up and makes it jump to another place in the evolutionary search space.
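
The "1 good out of 5 experiments" remark sounds like the classical 1/5 success rule used in evolution strategies; here is a minimal (1+1) evolution strategy sketch of that idea, on a toy objective of my own choosing (this is my illustration, not Bannasch's actual setup).

```python
import numpy as np

# Minimal (1+1) evolution strategy with the 1/5 success rule for step-size control.
rng = np.random.default_rng(1)

def fitness(x):
    return -np.sum(x ** 2)          # toy objective: get as close to the origin as possible

x = rng.normal(0, 1, 5)             # current design
sigma = 0.5                          # mutation step size
successes = 0

for gen in range(1, 501):
    child = x + sigma * rng.normal(0, 1, x.size)   # small random variation
    if fitness(child) > fitness(x):                # keep the child only if it improves
        x, successes = child, successes + 1
    if gen % 20 == 0:                              # adapt the step size every 20 generations
        rate = successes / 20
        sigma *= 1.22 if rate > 0.2 else 0.82      # aim for roughly 1 success in 5 trials
        successes = 0

print(f"final fitness: {fitness(x):.4f}, step size: {sigma:.4f}")
```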

Then he stated that evolution leads to new knowledge. He illustrated this by reproducing an experiment that evolves bridge structures (among others). He also showed that when a system becomes too specialized it becomes unadaptive. As a conclusion, he stated that lower levels of fitness allow for better adaptability.

Then he addressed the efficiency that can be obtained by evolution, presenting the penguin example: how the shape of penguins evolved and how that shape allows them to move underwater in an optimized way. Extra tip: if you obtain a good shape, it will also look nice!

Next he showed a system that automatically adapts to the shape of anything, without any control, just by applying a force. He applied this mechanism to the design of a manta robot that swims underwater and adapts to collisions.

He then asked whether it is possible to go beyond nature. He showed an experiment aimed at reducing flight turbulence: starting from the natural design of a bird's wing, he managed to evolve an optimal wingtip for airplanes.

Having shown all those results, he concluded that a bionic approach may not be strictly necessary for designing systems, but it can improve the starting position and provide shortcuts through the evolutionary space, thus decreasing development time (seen as the time required to move through the evolutionary space).

He then showed how dolphins adapted to underwater communication (which suffers from many problems due to multipath propagation; they cope by changing frequency all the time). He used this mechanism to build a device that measures distances and transmits real sonar data at high speed under water.
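
As a rough illustration of why a continuously changing frequency helps under multipath (a toy sketch of my own with invented numbers, not the actual device), a chirped pulse correlated against the received signal still produces one sharp peak per propagation path, so the direct-path delay, and hence the distance, can be read off.

```python
import numpy as np

# Toy chirp ranging under multipath: matched filtering separates the two paths.
fs = 48_000
t = np.arange(int(0.05 * fs)) / fs                 # 50 ms pulse
chirp = np.sin(2 * np.pi * (2_000 * t + (8_000 - 2_000) / (2 * 0.05) * t ** 2))  # 2->8 kHz sweep

delay_direct, delay_echo = 0.010, 0.013            # two propagation delays [s] (assumed)
rx = np.zeros(len(chirp) + int(0.02 * fs))
for d, a in [(delay_direct, 1.0), (delay_echo, 0.6)]:
    i = int(d * fs)
    rx[i:i + len(chirp)] += a * chirp              # superpose the delayed copies
rx += 0.2 * np.random.default_rng(4).normal(size=len(rx))   # add noise

corr = np.correlate(rx, chirp, mode="valid")       # matched filter
est_delay = np.argmax(np.abs(corr)) / fs
print(f"estimated direct-path delay: {est_delay * 1e3:.2f} ms (true: {delay_direct * 1e3:.1f} ms)")
```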

The final slides were about fluidic artificial muscles and how they are constructed. He then combined them with artificial hands, creating the remotely controlled robot shown in the demo.

METIN SITTI

His talk was about nano-robotics, and was entitled Biologically inspired miniature robots.

Micro/nano (M/N) robotics is the field of programmable assembly and manipulation of micro- and nano-scale entities. The problem is that at small scales there is a great need for autonomy, because the robot has to face a lot of uncertainty. Another interesting problem is how to program a large number of robots so that they coordinate to achieve a task.

M/N robots require new physics and mechanisms because of their dimensions. Among the challenges is the need for novel micro-actuators and sensors; furthermore, power sources are the real bottleneck of these designs. To overcome those problems, they take inspiration from biology at small scales. They find that biological systems have sub-optimal solutions, but they are robust, adaptive, highly maneuverable and multi-functional.

They started by taking inspiration for an M/N robot that can climb, drawing on attachment systems found in nature (for example mechanical interlocking, vacuum suction, wet adhesion, dry adhesion, or hybrids of all of them). He focused on the use of hairs for adhesion and, looking at how different small animals do it, he observed that a higher density of smaller hair fibers gives better adhesion. After studying the characteristics of those hairs in the gecko, he created a synthetic fibrillar adhesive and showed some of the results obtained. This was applied to a mini tank-like climbing robot that mimics gecko movement. They also applied it to robotic pills that can inspect the inside of the intestine.
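
The observation that denser, finer hairs adhere better matches the standard contact-splitting estimate from JKR contact theory (my addition, not taken from the slides): splitting one contact into n smaller ones covering the same total area multiplies the pull-off force by roughly the square root of n.

```latex
% Contact-splitting estimate (JKR theory). \gamma is the work of adhesion,
% R the contact radius, n the number of fibres sharing the same total area.
F_1 = \tfrac{3}{2}\,\pi\,\gamma\,R
\qquad
F_n = n \cdot \tfrac{3}{2}\,\pi\,\gamma\,\frac{R}{\sqrt{n}} = \sqrt{n}\,F_1
```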

Metin Sitti and his swimming robot prototype


Next he switched to legged locomotion on water. For this, he also took inspiration from nature, looking at the mosquito, which can walk on water. He found that this walking mechanism could be implemented using micro-hairs, so he constructed a prototype with 16 Teflon legs.

Then he switched to swimming in water using multiple flagella. Since these couldn't beat bacteria at their own way of swimming, they decided to attach several bacteria to a surface and make them propel the robot. This raised other problems, such as how to attach the bacteria and how to sustain them (food, short lifetime, etc.). Those problems are not yet solved and they keep working on them.

For the future, he left the question of how to generate millions of self-configurable spherical robots that can mimic the shape, look and motion of macro-scale objects.

MICHAEL DICKINSON

He is a neuroethologist: he studies how nature has solved design and behavior problems in animals, and for this he uses the fruit fly.

In his talk, entitled The neural control of aerodynamics in fruit flies, he showed the neural circuit that allows fruit flies to fly. He showed the differences between a voluntary take-off and an escape take-off: the latter is a lot faster but offers less control. The amazing thing was the fly's impressive capability to recover from unpredicted situations.

The flight of a fly is very impressive, with very quick direction changes. Two thirds of its brain is dedicated to the visual system. He concluded that saccades are triggered by visual input and not by CPGs (Central Pattern Generators), and showed some experiments intended to demonstrate this.

Michael Dickinson and his fruit-fly simulator

Then he explained how the fly's flight system works: the muscles do not move the wings directly but the thorax, creating a compression and decompression that results in the wing stroke. The motor neurons set the frequency at which those muscles oscillate. Based on those results, they constructed a robot fly, still at an early stage.

To improve the robot, he studied the fly's dynamics. There followed a quite detailed analysis of the forces and details of fly flight. He found that both the eyes and the haltere system take part in flight control, but with different dynamics. He indicated that the haltere system can modify the phase of the wingbeat frequency set by the motor neurons.

Furthermore, he showed that the fly tries to avoid anything that is moving, even if it doesn't know what it is, because it treats it as a danger. Afterwards, he described the strategy the fly follows when tracking an odor: fly straight while the odor is present, and make abrupt changes of direction when it is lost.
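
As a sketch of that odor-tracking strategy (my own toy implementation with an invented plume model, not Dickinson's data or controller): surge straight upwind while the odor is detected, and sweep crosswind with abrupt turns when it is lost.

```python
import numpy as np

# Toy surge-and-cast plume tracking in a 2-D world (all parameters invented).

def odor_present(pos):
    """Assumed plume: a narrow band along the upwind (x) axis."""
    return abs(pos[1]) < 0.5

pos = np.array([10.0, 2.0])      # start downwind of the source, outside the plume
speed = 0.2

for step in range(400):
    if odor_present(pos):
        velocity = np.array([-speed, 0.0])              # odor found: surge straight upwind
    else:
        cast_dir = -1 if (step // 20) % 2 == 0 else 1   # lost it: abrupt crosswind sweeps,
        velocity = np.array([0.0, speed * cast_dir])    # reversing every 20 steps
    pos = pos + velocity
    if pos[0] <= 0.0:
        print(f"reached the source region after {step + 1} steps")
        break
```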

The final goal is to construct a controller for the robot that allows it to behave in the same way as the real fly. Even though he has achieved a lot, he is not there yet.

KEVIN O'REGAN

His talk was about consciousness. That is why his speech was entitled Consciousness.

He started by defining transitive and intransitive consciousness. Transitive is being conscious of something (I am conscious of my existence); intransitive is when you say that someone is conscious. Transitive consciousness is in turn divided into access consciousness (having cognitive access to something: judgement, decisions, rational behavior and language; this is the easy problem of consciousness) and phenomenal consciousness (experiencing, qualia or feeling; this is the hard problem).

Kevin O'Regan during his talk

He then talked about vision. The classical view is that, due to the limitations of the visual system, we need an internal representation: seeing is creating an internal representation of the external world. But this approach has recently been put into doubt. This leads to the subject of change blindness, where subjects are unable to notice progressive changes in a picture. Instead of a detailed internal representation, we may have a sparse one. The conclusion is that the richness is not in the head but in the world; it is only necessary to have algorithms that access the information residing in the outside world. This is called active vision. Transients play a large role in allowing us to see the changes.

He then concluded that seeing is visually manipulating: the experience of seeing relies on visually manipulating the scene (more on this subject can be found in his paper on the sensorimotor theory of experience, Behavioral and Brain Sciences, 2001).

He applies this idea to every sensory system we have. There is no way to make the link between physical phenomena and felt phenomena (phenomenality). Sensation is accessing knowledge. The difference between different sensory activities (hearing, seeing) lies in what we do, which differs, and in how we feel the changes in the environment. I know that I feel something on my arm because if I move my arm the feeling is gone, but if I instead move my head, the sensation on the arm stays there. In other words, red is the way red things change the light.

He mentioned some of his results, published in Neural Computation in 2003, about a robot which, using this sensorimotor definition, can discover that it is situated in a Euclidean space. Basically what it does is move randomly at first and start looking at the correlations between movements and sensor inputs.
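
In the spirit of that result, and only as a much simplified sketch of my own (not the algorithm from the paper), an agent can issue small random motor commands, record how its raw sensor readings change, and estimate from the rank of those changes how many independent spatial dimensions its movements actually span.

```python
import numpy as np

# Motor babbling + PCA on sensor changes to estimate the dimensionality of the
# reachable space. The body, sensors and mapping below are all invented.
rng = np.random.default_rng(3)

n_sensors, n_motors = 40, 6
# Unknown (to the agent) mapping: the 6 motor commands only move the body in a
# 2-D plane, and the sensors respond nonlinearly to that 2-D position.
motor_to_pose = rng.normal(0, 1, (2, n_motors))        # rank-2 bottleneck
sensor_centres = rng.uniform(-2, 2, (n_sensors, 2))

def sense(pose):
    d = np.linalg.norm(sensor_centres - pose, axis=1)
    return np.exp(-d ** 2)                             # Gaussian "receptive fields"

# Motor babbling: apply small random commands and record the sensor changes.
base_pose = np.zeros(2)
deltas = []
for _ in range(500):
    m = 0.05 * rng.normal(0, 1, n_motors)
    deltas.append(sense(base_pose + motor_to_pose @ m) - sense(base_pose))
deltas = np.array(deltas)

# PCA on the sensor changes: the number of significant components estimates
# the dimensionality of the space the agent can move in (here it should find 2).
sing_vals = np.linalg.svd(deltas - deltas.mean(axis=0), compute_uv=False)
explained = sing_vals ** 2 / np.sum(sing_vals ** 2)
print("estimated dimensionality:", int(np.sum(explained > 0.01)))
```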

Following his reasoning, the answer to the question of why we feel at all is based on two factors:

- corporality (to sense)

- alerting capacity/grabbiness

Conclusion:

If we have cognitive access to the exercise of a skill, then we can say that we are having an experience.

For more information and discussion, he has set up a wiki at http://lpp.psycho.univ-paris5.fr/tikiwiki

=======================

O'Regan's talk led to a panel discussion, where the impact of other research areas on AI was analyzed and discussed.

Panel discussion outside


The final activity of the day was the conference banquet, where a local folk group sang during dinner.

Conference banquet with local group


Note: the text may contain errors. Please contact me if you find any, so that I can fix them.
