
The essence of A.I.

// January 12th, 2013 // Artificial Intelligence

The question was posed by Menno Mafait in the LinkedIn group Applied Artificial Intelligence: what is the essence of artificial intelligence?


The answers basically discussed which A.I. technique would be most suitable for building an intelligent machine, ranging from logic-based approaches to genetic algorithms, passing through artificial neural networks, Bayesian networks, probabilistic approaches, and so on. The best technique would then be considered the essence of A.I.

My answer to the question was that the essence of artificial intelligence is understanding.

Of course, after I gave that answer, the next two logical questions were:
1. How do you define understanding?
2. How can you build a machine that understands?

I don’t know how to define understanding.
I do know, though, when somebody (or something) understands and when it does not. It is true that understanding can only be observed from outside the person or thing, but this does not imply that if I build a system that looks from the outside like it understands, then it has actually understood. Or, even worse, that real understanding is unnecessary because the system without understanding shows the same outward behavior as the system with it.


I really do believe that understanding is an evolutionary advantage for creatures with limited resources. When a system that has to behave in the world has a (very) limited amount of computational resources (like ourselves), understanding makes it more efficient and robust: it works better with less computation.
For a system with unlimited resources, understanding may not be necessary, since it can compute, observe, and store the solution for EVERY state in the Universe. But understanding releases us from needing such a huge storage system. It helps us compress reality into chunks that we can use to solve the problems of life.

And that is exactly the role of science: to help us understand the world better so that we can live better in it.


Recently, there was a long discussion between two great scientists: Noam Chomsky and Peter Norvig. Current A.I. is dominated by the statistical analysis of data in order to find patterns and build large tables that contain the answer to each possible situation in a narrow domain. Those techniques are extremely efficient and work quite well (in narrow domains, even if those domains grow every year).

Chomsky argues that that kind of A.I. is useless because it does not provide knowledge to humanity, only answers (or, as Pablo Picasso said, computers are useless, they only give answers!). Norvig, in turn, points to the long list of successes where those systems have helped humanity progress. The discussion goes on, with Chomsky arguing that such A.I. is merely an engineering tool, not a real scientific paradigm, because it brings no knowledge to the world, and Norvig replying that real knowledge has been produced because those systems can, for example, predict which word is most likely to come next in a sentence.
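To make concrete what "predicting the next word" means in this statistical style of A.I., here is a minimal sketch (the function names and the toy corpus are my own, purely illustrative): a bigram table that counts which word follows which, and answers with the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    table = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

def predict_next(table, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = table.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

table = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(table, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

This is, in miniature, the "large table" Chomsky complains about: it answers correctly often enough, without any understanding of what the words mean.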


To me, Norvig's vision is like the current status of quantum mechanics: just a list of recipes telling you what to do in each situation, while nobody has any idea why the recipes work the way they do.
That is, there is no real understanding of how quantum mechanics works. We apply the recipe and the result is what it predicts, but with no real understanding of why.

In my opinion, that is what Chomsky was trying to say: even if it works, current A.I. has no understanding at all.

Understanding is difficult because it cannot be analyzed or observed from the outside. And that is a great problem, because it allows cheaters to use all their weaponry to make us believe that there is understanding behind a system that doesn’t have it, just because it looks like it does. Of course, it is easier to cheat by making a system look like it is something than to make that something real (we all know the situation of cheating at an exam by copying a classmate's answers instead of studying and learning ourselves).
And this line of cheating has become mainstream in A.I., making us believe that there is no actual difference between cheating and not cheating, i.e. that it does not matter whether the system merely looks like it understands or really does understand (after all, nobody can define understanding).
I believe that this difference is what it is all about. It is the ESSENCE OF INTELLIGENCE. Trying to avoid it through easier paths is just avoiding the real thing.

So now the most interesting part: how is understanding built in a system?
I can provide a brief answer, without any real demonstration, based on my own theories. The theory is that understanding is built upon basic knowledge of the world and scaled up to generate more complex understandings (like maths, philosophy, empathy, etc.).

Small chunks of understanding are created at a first stage by direct interaction with the world, which provides the basic understanding units of the system. This means that the system learns what up and down mean, and not in the sense of "things above my head are up and things below my chest are down": that is just a definition for a dictionary. It is also what a typical A.I. would do: define a threshold, call everything below it down and everything above it up, and tune the threshold with real experiments.
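As an illustration of that definition-style approach (a sketch with made-up numbers, not anyone's actual method), "tuning the threshold with real experiments" could look like a brute-force sweep over labeled examples:

```python
def tune_threshold(samples):
    """Pick the height threshold that best separates labeled samples.

    `samples` is a list of (height, label) pairs, with labels "up" or "down".
    Every observed height is tried as a candidate threshold; the one that
    classifies the most samples correctly wins. Purely illustrative.
    """
    heights = sorted(h for h, _ in samples)
    best_t, best_correct = heights[0], -1
    for t in heights:
        # a sample is correct if its label agrees with the rule "h >= t means up"
        correct = sum(1 for h, label in samples if (label == "up") == (h >= t))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

samples = [(0.2, "down"), (0.5, "down"), (1.4, "up"), (1.8, "up")]
t = tune_threshold(samples)  # everything at or above t is then classified "up"
```

The point of the sketch is what it lacks: the resulting number separates the data, but nothing in it means up or down.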


Meaning is instead embedded in the sensorimotor laws of the system, that is, in how the readings of its different sensors vary with the actions it takes. Those sensorimotor laws are the basic understanding units. A unit is not a number; it is a law learned by experimenting with the world.
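A minimal, hypothetical sketch of what learning such a law might look like (my own toy example, not a faithful model): an agent records (action, sensor change) pairs from its own experiments and fits the simplest possible law, a single slope, by least squares.

```python
def learn_sensorimotor_law(experiences):
    """Fit a linear law delta_sensor ~ k * action from (action, delta) pairs.

    A deliberately minimal stand-in for a sensorimotor law: the agent
    discovers how its sensor readings co-vary with its own actions.
    """
    num = sum(a * d for a, d in experiences)
    den = sum(a * a for a, _ in experiences)
    return num / den  # least-squares slope through the origin

# hypothetical experiences: forward motor command vs. observed change
# in a distance sensor reading
experiences = [(1.0, -0.9), (2.0, -2.1), (-1.0, 1.0), (0.5, -0.5)]
k = learn_sensorimotor_law(experiences)
# k is close to -1: moving forward by one unit shrinks the sensed distance
# by about one unit. The law is learned from interaction, not defined.
```

Real sensorimotor laws would of course be far richer than one slope, but the structure is the same: a regularity between what the system does and what it then senses.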

Then, when an understanding unit is ready and well formed, it becomes suitable for generating higher levels of understanding through a metaphor engine (or an analogy engine, whichever you prefer; they are almost the same). This means that the basic law (the understanding unit) is used to reason about other things that are completely different but to which the law still applies. By doing this, new laws are generated, this time no longer sensorimotor-based (except at their roots).
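To illustrate the idea with a toy sketch (not a real metaphor engine; names and numbers are mine): once a law has been learned in one domain, its structure can be reapplied to a domain that has nothing to do with the original sensorimotor one.

```python
def make_law(threshold):
    """A basic understanding unit: classify a scalar as 'up' or 'down'."""
    def law(value):
        return "up" if value >= threshold else "down"
    return law

# a law rooted in sensorimotor experience (hypothetical height threshold)
height_law = make_law(1.4)

# metaphoric reuse: the same law structure applied to a different domain,
# e.g. temperature anomalies, where the root is no longer sensorimotor
temperature_law = make_law(0.0)

print(height_law(1.8))        # prints "up"
print(temperature_law(-0.3))  # prints "down"
```

The transfer here is trivial on purpose: the interesting (and open) part is deciding automatically which learned law applies to which new domain.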


And I am sad that I cannot provide more, because I don’t know more.
Hopefully, some of us are still working on this line of research and eventually we will have results that can be shown, for better UNDERSTANDING.

Cheers.

Related bibliography:
[1] Metaphors we live by, Lakoff and Johnson
[2] Where mathematics comes from, Lakoff and Núñez
[3] Perception of the structure of the physical world using multimodal sensors and effectors, Philipona and O’Regan
[4] Artificial Intelligence: a modern approach, Russell and Norvig
[5] Understanding Intelligence, Pfeifer and Scheier
[6] A sensorimotor account of vision and visual consciousness, Noe and O’Regan
[7] Why red doesn’t sound like a bell, O’Regan
[8] Action in perception, Noe