<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Ricardo A. Tellez</title>
	<atom:link href="https://www.ouroboros.org/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ouroboros.org</link>
	<description>About my research work and how the service robotics industry is emerging</description>
	<lastBuildDate>Thu, 26 Jun 2014 16:41:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5.2</generator>
		<item>
		<title>Reviewing a book about ROS programming</title>
		<link>https://www.ouroboros.org/2013/11/07/reviewing-a-book-about-ros-programming/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=reviewing-a-book-about-ros-programming</link>
		<comments>https://www.ouroboros.org/2013/11/07/reviewing-a-book-about-ros-programming/#comments</comments>
		<pubDate>Thu, 07 Nov 2013 19:52:58 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=841</guid>
		<description><![CDATA[In the modern robotics community, there has been a need for a book that explains the ins and outs of the Robot Operating System (ROS). ROS is the most popular robotics framework in the world. Created by Willow Garage, it provides a framework for easy communication between processes, standardized access to robotic resources, and clear [...]]]></description>
				<content:encoded><![CDATA[<p>In the modern robotics community, there has been a need for a book that explains the ins and outs of the <a href="http://www.ros.org" target="_blank">Robot Operating System</a> (ROS).</p>
<p>ROS is the most popular robotics framework in the world. Created by <a href="http://www.willowgarage.com" target="_blank">Willow Garage</a>, it provides a framework for easy communication between processes, standardized access to robotic resources, and clear visualisation of robot data. Until now, your only source of material for learning ROS was the excellent, but sometimes confusing, documentation provided by Willow Garage.</p>
<p>Now, <a href="http://www.packtpub.com/authors/profiles/enrique-fernández" target="_blank">Enrique Fernández</a> and <a href="http://www.packtpub.com/authors/profiles/aaron-romero" target="_blank">Aaron Martinez</a> have filled the gap with their book <a href="http://www.packtpub.com/learning-ros-for-robotics-programming/book" target="_blank">Learning ROS for Robotics Programming</a>.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2013/11/learningros.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2013/11/learningros-150x150.jpg" alt="learningros" width="150" height="150" class="alignleft size-thumbnail wp-image-844" /></a></p>
<p>The book covers all the basic aspects required to understand ROS, how to install it, and how to use it with your own robot.</p>
<p>The first chapter describes how to install two different versions of ROS (Electric and Fuerte), including how to set up a virtual machine to work with ROS.<br />
The second chapter explains the core concepts of ROS: topics, nodes, stacks, packages, services, etc. They are not intuitive at all, but the book provides a clear explanation.</p>
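<p>Those core concepts are easier to grasp with a concrete picture. Here is a toy Python sketch of the publish/subscribe idea behind ROS topics; this is <em>not</em> the real rospy API, just an illustration of how nodes exchange messages over named topics:</p>

```python
# Toy illustration of the ROS "topic" idea: a named channel where
# subscribers register callbacks and publishers push messages.
# This is NOT the rospy API, only a sketch of the concept.
class Topic:
    def __init__(self, name):
        self.name = name
        self.callbacks = []

    def subscribe(self, callback):
        # A "node" registers interest in this topic.
        self.callbacks.append(callback)

    def publish(self, message):
        # Every subscriber's callback receives the message.
        for cb in self.callbacks:
            cb(message)

scan = Topic("/scan")
received = []
scan.subscribe(received.append)        # one node subscribes
scan.publish({"ranges": [1.2, 0.8]})   # another node publishes
```

<p>In real ROS code this role is played by <code>rospy.Publisher</code> and <code>rospy.Subscriber</code> talking through the ROS master, but the pattern is the same.</p>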
<p>The next chapters cover debugging with ROS, how to use sensors and actuators, and how to simulate your own robot with Gazebo, the default simulator for ROS.</p>
<p>Then several chapters are dedicated to bringing some intelligence to your robot using out-of-the-box solutions included in ROS: how to make a robot navigate, and how to make it use visual information to define its behaviour.</p>
<p>The book ends with a chapter dedicated to practising everything learned on actual complex robots (even if only in simulation). The last chapter uses the freely available simulations of well-known robots to practise ROS: how ROS has been implemented in those robots, and how you can use ROS programming to make them perform some useful activity, all of it in the Gazebo simulation environment.</p>
<h3>Evaluation</h3>
<p>
The book is very well structured: it builds from the simplest topics up to the more complex ones with a smooth progression. It is full of exercises that guide you step by step through all the concepts. You can definitely use the book to learn ROS.</p>
<p>There are three key parts of ROS that are described particularly well in the book:</p>
<ul>
<li>Chapter 3 describes how to debug ROS code. This is one of the more difficult subjects when learning ROS, maybe because it is not well covered in the official documentation; the book, however, succeeds in teaching it.</li>
<li>The actuators and sensors in common use in robotics are described in great detail.</li>
<li>The creation of a simulation of your own robot, another of the confusing subjects, is treated very well.</li>
</ul>
<p>
I can point out two drawbacks:</p>
<ul>
<li>The book focuses on the ROS Fuerte version, which is a little outdated (three more versions have appeared since Fuerte). This is of low importance, though, because the core ROS concepts do not change from version to version; it mainly affects the last chapter of the book.</li>
<li>The book is not deep enough to be used as a reference book. However, I believe that was not the authors&#8217; goal: they set out to teach newcomers, and that goal is perfectly achieved.</li>
</ul>
<p>My recommendation: if you are new to the world of ROS, this book is a must. It will definitely speed up your learning of the subject.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2013/11/07/reviewing-a-book-about-ros-programming/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Presentation during Robocup 2013</title>
		<link>https://www.ouroboros.org/2013/07/27/presentation-during-robocup-2013/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=presentation-during-robocup-2013</link>
		<comments>https://www.ouroboros.org/2013/07/27/presentation-during-robocup-2013/#comments</comments>
		<pubDate>Sat, 27 Jul 2013 17:48:02 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Talks]]></category>
		<category><![CDATA[video]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=830</guid>
		<description><![CDATA[Here you can find the presentation I gave during the Robocup@Home competition in June 2013 in Eindhoven. During the competition, every team must provide a scientific poster of their robot and give a talk about what makes their robot special, which subjects the researchers are using the robot for, and what [...]]]></description>
				<content:encoded><![CDATA[<p>Here you can find the presentation I gave during the Robocup@Home competition in June 2013 in Eindhoven.</p>
<p>During the competition, every team must provide a scientific poster of their robot and give a talk about what makes their robot special, which subjects the researchers are using the robot for, and what they are going to show during the competition. It&#8217;s only a couple of minutes to show everything, so one has to go straight to the point.</p>
<p>Here is the presentation of our team, <a href="http://robocup.pal-robotics.com" target="_blank">Reem@IRI</a>. The sound is bad, but the subtitles do the job, more or less&#8230;</p>
<p><iframe width="500" height="281" src="http://www.youtube.com/embed/gUGS1h_Dg0k?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2013/07/27/presentation-during-robocup-2013/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The essence of A.I.</title>
		<link>https://www.ouroboros.org/2013/01/12/the-essence-of-a-i/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-essence-of-a-i</link>
		<comments>https://www.ouroboros.org/2013/01/12/the-essence-of-a-i/#comments</comments>
		<pubDate>Sat, 12 Jan 2013 19:40:23 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=663</guid>
		<description><![CDATA[The question was posed by Menno Mafait at the LinkedIn group Applied Artificial Intelligence: What is the essence of artificial intelligence? Answers to that question mostly addressed which A.I. technique would be the most suitable for generating an intelligent machine. The techniques discussed ranged from logic-based approaches to genetic algorithms, moving through artificial neural [...]]]></description>
				<content:encoded><![CDATA[<p>The question was posed by <a href="http://mafait.org/" target="_blank">Menno Mafait</a> at the LinkedIn group <a href="http://www.linkedin.com/groups?home=&#038;gid=127447&#038;trk=anet_ug_hm" target="_blank">Applied Artificial Intelligence</a>: <a href="http://www.linkedin.com/groups/What-is-essence-AI-127447.S.203055712?qid=0eb6c04f-d83d-4626-9ad0-3993bf3ad60b&#038;trk=group_most_popular-0-b-ttl&#038;goback=%2Egmp_127447" target="_blank">What is the essence of artificial intelligence?</a></p>
<p><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/The-Essence-of-Artificial-Intelligence-9780135717790-150x150.jpg" alt="The-Essence-of-Artificial-Intelligence-9780135717790" width="150" height="150" class="aligncenter size-thumbnail wp-image-676" /></p>
<p>Answers to that question mostly addressed which A.I. technique would be the most suitable for generating an intelligent machine. The techniques discussed ranged from logic-based approaches to genetic algorithms, moving through artificial neural networks, Bayesian nets, probabilistic approaches, etc. The best technique would then be considered <em>the essence of A.I.</em></p>
<p>My answer to the question was the following:<br />
<img src="http://www.ouroboros.org/wp-content/uploads/2013/01/answer.png" alt="answer" width="500" height="204" class="aligncenter size-full wp-image-682" /></p>
<p>Of course, after I had provided such an answer, the next two logical questions were:<br />
1. How do you define understanding?<br />
2. How can you build a machine that understands?</p>
<p>I don&#8217;t know how to define understanding.<br />
I do know, though, when somebody (or something) understands and when it does not. It is true that understanding can only be observed from outside the person/thing, but this does not imply that if I build a system that looks from the outside as if it understands, then it has actually understood. Or, even worse, that real understanding is unnecessary, since the system without understanding presents the same external behaviour as the system with it.</p>
<p><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/understanding-190x300.jpg" alt="understanding" width="190" height="300" class="aligncenter size-medium wp-image-690" /></p>
<p>I really believe that understanding is an evolutionary advantage for creatures with limited resources. When the system behaving in the world has a (very) limited amount of computational resources (like ourselves), understanding makes the system more efficient and robust; it works better with fewer computational resources.<br />
For a system with unlimited resources, understanding may not be necessary, since it can compute/observe/store the solution for EVERY state in the Universe. But understanding releases us from needing such a big storage system. It helps us compress reality into chunks that we can use to solve the problems of life.</p>
<p>And that is exactly the role of science, to make us better understand the world to be able to live better in it.</p>
<p><center><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/American-decline-Chomsky-300x180.jpg" alt="American-decline-Chomsky" width="300" height="180"  /><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/peter-2937-199x300.jpg" alt="peter  2937" width="199" height="300"  /></center></p>
<p>Recently, there has been a <a href="http://norvig.com/chomsky.html" target="_blank">long discussion between two great scientists</a>: <a href="http://www.chomsky.info" target="_blank">Noam Chomsky</a> and <a href="http://norvig.com/" target="_blank">Peter Norvig</a>. Current A.I. is dominated by statistical analysis of data in order to find patterns and build large tables that contain the answer to each possible situation in a narrow domain. These techniques are extremely efficient and work quite well (in narrow domains, even if those domains grow every year).</p>
<p>Chomsky argues that that kind of A.I. is useless because it does not provide knowledge to humanity, it just provides answers (or, as Pablo Picasso said, <em>Computers are useless, they only provide answers!</em>). Norvig, instead, points to the long list of successes where those systems have helped humanity progress. The discussion goes on, with Chomsky arguing that this A.I. is then just an engineering tool, not a real scientific paradigm, because it brings no knowledge to the world, and Norvig replying that real knowledge has been provided, because such systems are able to predict, for example, which word is most likely to come next in a sentence.</p>
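<p>Norvig&#8217;s side of the argument can be made concrete with a minimal sketch in plain Python (the mini-corpus is invented for the example): count word bigrams and predict the most frequent follower.</p>

```python
# Minimal sketch of statistical next-word prediction: count bigrams
# in a (made-up) corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the robot moves the arm and the robot sees the arm".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Most common word observed right after `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))
```

<p>This is exactly a (tiny) table of answers built from data: it predicts well within its corpus, but it encodes no understanding of what a robot or an arm is, which is Chomsky&#8217;s objection.</p>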
<p><center><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/Introduction-to-Artificial-Intelligence-Ertel-Wolfgang-9780857292988-198x300.jpg" alt="Introduction-to-Artificial-Intelligence-Ertel-Wolfgang-9780857292988" width="198" height="300" /><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/aiapproach-241x300.jpg" alt="aiapproach" width="241" height="300" /></center></p>
<p>For me, Norvig&#8217;s vision is like the current status of <a href="http://www.thekeyboard.org.uk/Quantum%20mechanics.htm" target="_blank">quantum mechanics: just a list of recipes</a> for knowing what to do in each situation, while nobody has an idea of why the recipes work the way they do.<br />
That is, there is no real understanding of how quantum mechanics works. We just apply the recipe and the result is as predicted, but with no real understanding of why.</p>
<p>In my opinion, that is what Chomsky was trying to say: even if it works, current A.I. has no understanding at all.</p>
<p>Understanding is difficult because it cannot be analyzed or observed from the outside. And that is a great problem, because it allows cheaters to use all their weaponry to make us believe there is understanding behind a system that doesn&#8217;t have any, just because <em>it looks like</em> it does. Of course, it is easier to cheat by making a system look like something than to make that something real (we all know the situation of trying to cheat on an exam by copying a colleague&#8217;s answer instead of studying and learning ourselves).<br />
This line of cheating has become mainstream in A.I., making us believe that there is no actual difference between cheating and not cheating, i.e. that it does not matter whether the system merely looks like it understands or really does understand (after all, nobody can define understanding).<br />
I believe that this difference is what it is all about. It is the <strong>ESSENCE OF INTELLIGENCE</strong>. Trying to avoid it through easier paths is just avoiding the real thing.</p>
<p>So now the most interesting part: <em>how is understanding built into a system?</em><br />
I can provide a brief answer, without any real demonstration, based on my own theories. The theory is that understanding is built upon basic knowledge of the world and scaled up to generate more complex understandings (like maths, philosophy, empathy, etc.).</p>
<p>Small chunks of understanding are created at a first stage, by direct interaction with the world. Interaction with the world provides the basic understanding units for a system. This means the system learns what up and down mean, and not as something like &#8220;the things above my head are up and the things below my chest are down&#8221;. That is just a definition for a dictionary, and it is what a typical A.I. would do: define a threshold where everything below it is down and everything above it is up, and tune the threshold with real experiments.</p>
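<p>To make the criticism concrete, here is a toy sketch of that dictionary-style approach (the threshold value and the example heights are made up): it produces correct labels without any understanding of what up or down mean.</p>

```python
# Toy "dictionary definition" of up/down: a fixed, tuned threshold
# over a height reading. It yields labels, not understanding.
HEAD_HEIGHT = 1.7  # assumed threshold in metres, "tuned by experiment"

def up_or_down(height_m):
    # Above the threshold -> "up"; at or below it -> "down".
    return "up" if height_m > HEAD_HEIGHT else "down"

print(up_or_down(2.0))  # something overhead
print(up_or_down(0.3))  # something on the floor
```

<p>The function answers correctly for its narrow domain, but the meaning of up and down lives entirely in the programmer&#8217;s choice of threshold, not in any law the system learned by acting in the world.</p>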
<p><center><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/actionin-203x300.jpg" alt="actionin" width="203" height="300"  /><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/why-red-doesnt-sound-like-a-bell1-198x300.jpg" alt="why-red-doesnt-sound-like-a-bell1" width="198" height="300" /></center></p>
<p>Meaning is embedded in the sensorimotor laws of the system, that is, in how the readings of the system&#8217;s different sensors vary with the actions the system takes. Those sensorimotor laws are the basic understanding units. A unit is not a number; it is a law learned by experimenting with the world.</p>
<p>Then, when an understanding unit is ready and well formed, it can be used to generate higher levels of understanding through a <a href="http://theliterarylink.com/metaphors.html" target="_blank">metaphor engine</a> (or analogies, whichever works better; both are almost the same). This means that the basic law (the understanding unit) is used to reason about completely different things to which the law nevertheless applies. By doing this, new laws are generated, this time no longer sensorimotor based (except in their roots).</p>
<p><center><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/metaphors-we-live-by-189x300.jpg" alt="metaphors-we-live-by" width="189" height="300"  /><img src="http://www.ouroboros.org/wp-content/uploads/2013/01/where-math-236x300.jpg" alt="where math" width="236" height="300" /></center></p>
<p>And I am sad that I cannot provide more, because I don&#8217;t know more.<br />
Hopefully, some of us are still working on this line of research and, eventually, we will have some results that can be shown, for better UNDERSTANDING.</p>
<p>Cheers.</p>
<p>Related bibliography:<br />
[1] Metaphors We Live By, Lakoff and Johnson<br />
[2] Where Mathematics Comes From, Lakoff and Núñez<br />
[3] Perception of the structure of the physical world using multimodal sensors and effectors, Philipona and O&#8217;Regan<br />
[4] Artificial Intelligence: A Modern Approach, Russell and Norvig<br />
[5] Understanding Intelligence, Pfeifer and Scheier<br />
[6] A sensorimotor account of vision and visual consciousness, Noë and O&#8217;Regan<br />
[7] Why Red Doesn&#8217;t Sound Like a Bell, O&#8217;Regan<br />
[8] Action in Perception, Noë</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2013/01/12/the-essence-of-a-i/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A 10 Minutes Introduction to Embodied Cognition</title>
		<link>https://www.ouroboros.org/2012/12/12/a-10-minutes-introduction-to-embodied-cognition/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-10-minutes-introduction-to-embodied-cognition</link>
		<comments>https://www.ouroboros.org/2012/12/12/a-10-minutes-introduction-to-embodied-cognition/#comments</comments>
		<pubDate>Wed, 12 Dec 2012 17:44:55 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=648</guid>
		<description><![CDATA[What is cognition? Basically, it is a group of mental processes. Cognition requires: perception, attention, anticipation, reasoning, learning, inner speech, imagination, memory, emotions, planning, pain and pleasure. Most cognitive scientists view cognition as something that is computational: Cognition = Computational System. By computational system they mean a system that manipulates symbols, in the sense that [...]]]></description>
				<content:encoded><![CDATA[<div id="dE_H" style="width:100%; height:100%;">
<p>What is <b>cognition</b>? Basically, it is a group of mental processes.</p>
<p>Cognition requires:</p>
<ol>
<li>Perception</li>
<li>Attention</li>
<li>Anticipation</li>
<li>Reasoning</li>
<li>Learning</li>
<li>Inner Speech</li>
<li>Imagination</li>
<li>Memory</li>
<li>Emotions</li>
<li>Planning</li>
<li>Pain and Pleasure</li>
</ol>
<p>Most cognitive scientists view cognition as something that is <b>computational</b>.</p>
<p style="text-align: center;"><b>Cognition = Computational System</b></p>
<p>By computational system they mean a system that manipulates <b>symbols</b>: symbols without meaning are manipulated by applying rules, thereby generating new symbols and conclusions.</p>
<p>How does a living system get the symbols? A sensor gathers meaningful data. This data is converted into symbols. Then the brain uses the symbols to generate a (symbolic) response. The response is translated into meaningful action data that is executed by the actuators of the living system.</p>
<p>This explanation is perfect for developers of artificially intelligent systems, because it implies that the brain doesn&#8217;t need a body. Scientists can then concentrate on generating intelligence in any physical system that allows the manipulation of symbols, and forget about the hardware.</p>
<p>There is, though, a small problem: how do the symbols acquire and release their meaning? The process by which meaningful data is translated into symbols, and symbols are translated back into meaningful data, is called <b>reification</b>, and to date nobody knows how it is performed (this is the grounding problem)&#8230; at least in a non-embodied cognitive framework.</p>
<p>Let&#8217;s then make a hypothesis:</p>
<p style="text-align: center;"><b>meaning arises from the nature of the body</b></p>
<p>What does this mean?</p>
<ul>
<li>Living things generate a basic set of meaningful concepts based on their interaction with the world.</li>
<li>More complex concepts can be generated by applying metaphors over previous concepts.</li>
</ul>
<p>This approach requires a body to generate intelligence. That is the approach of the embodied cognition paradigm, and it has the following implications:</p>
<ul>
<li><b>Conceptualization</b>: the properties of an organism&#8217;s body constrain the concepts it can acquire (this has big implications for making artificial systems understand).</li>
<li><b>Replacement</b>: interaction with the environment replaces the need for representations (this has big implications for the number of resources needed to create an A.I.).</li>
<li><b>Constitution</b>: the body is constitutive of cognitive processes rather than causal (this has big implications for how perception should be done in artificial systems).</li>
</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/12/12/a-10-minutes-introduction-to-embodied-cognition/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>More Cognition, Less CPU</title>
		<link>https://www.ouroboros.org/2012/10/29/more-cognition-less-cpu/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=more-cognition-less-cpu</link>
		<comments>https://www.ouroboros.org/2012/10/29/more-cognition-less-cpu/#comments</comments>
		<pubDate>Mon, 29 Oct 2012 21:19:34 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Talks]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=597</guid>
		<description><![CDATA[The following post is a straight transcription of the speech with the same title I gave during the Robobusiness 2012 in Pittsburgh. You can find other details of the speech in this link. You can use the text and images at your will but give credit to the author. ================================================ What is preventing us from [...]]]></description>
				<content:encoded><![CDATA[<p>The following post is a straight transcription of the speech with the same title I gave during the <a href="http://www.robobusiness.com/program-overview/speakers-list/" target="_blank">Robobusiness 2012 in Pittsburgh</a>. You can find other details of the speech <a href="http://www.robobusiness.com/program-overview/conference-sessions/tpd06/" title="Robobusiness speakers list" target="_blank">in this link</a>.<br />
You can use the text and images at your will but give credit to the author.</p>
<p>================================================</p>
<p>What is preventing us from having humanoid robots like these at home?</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/I-robot.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/I-robot-300x168.jpg" alt="" title="I robot" width="300" height="168" class="aligncenter size-medium wp-image-598" /></a></p>
<p>What is preventing us from having one at home? </p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/I-robot-2.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/I-robot-2-300x198.jpg" alt="" title="I robot-2" width="300" height="198" class="aligncenter size-medium wp-image-600" /></a></p>
<p>What is preventing us from selling those robots?</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/robots.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/robots-300x129.jpg" alt="" title="robots" width="300" height="129" class="aligncenter size-medium wp-image-601" /></a></p>
<p>We are still years away from those robots.<br />
And there are many reasons for that, one of them being that robots are not intelligent enough. Some researchers say that the reason for this lack of intelligence is just a lack of CPU power and good algorithms. They believe that big supercomputers will allow us to run all the complex algorithms required to make a robot move safely through a crowd, recognize us among other people, or understand spoken commands. For those researchers, it is just a matter of CPU, and it will be solved by the year 2030, when CPU power will reach that of the human brain (according to some predictions).</p>
<p><a href="http://www.transhumanist.com/volume1/moravec.htm"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/cpu_power-300x240.jpg" alt="" title="cpu_power" width="300" height="240" class="aligncenter size-medium wp-image-602" /></a></p>
<p>But I do not agree.</p>
<p>This is a gnat.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/gnat.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/gnat-267x300.jpg" alt="" title="gnat" width="267" height="300" class="aligncenter size-medium wp-image-604" /></a></p>
<p>A gnat is an insect that doesn&#8217;t even have a brain, just a few nerve cells distributed along its body. Its CPU power (if we can measure that for an animal) is very small. Yet the gnat can live in the wild: flying, finding food, finding mates, avoiding dangerous situations and finding the proper place for its offspring&#8230; all of this over a four-month lifespan. To date there are no robots able to do what a gnat does with the same computational resources.</p>
<p>I believe that treating artificial intelligence as just a matter of CPU is a brute-force solution to the problem of intelligence, one that discards an important part of it.</p>
<p>I call the CPU approach the <strong>easy path to A.I.</strong></p>
<p>I believe there is another approach to artificial intelligence, one where CPU has its relevance, but is not the core. One where implementing cognitive skills is the key concept.</p>
<p>I call this approach the <strong>hard approach to A.I.</strong> I think it is the approach required to build the robots we would love to have.</p>
<p>In this post I want to show you:<br />
– What I call the easy path to A.I.<br />
– Why I believe this path will eventually fail to deliver the A.I. required for service robots<br />
– What I call the hard path to A.I., and why it is needed</p>
<h2>The easy approach to A.I.</h2>
<p>So, what is the easy path to A.I.?</p>
<p>Imagine that you have a large database at your service to store all the data you compute about a subject, in the following form: if this happens, then do that; if this other thing happens, then do something else.<br />
There will be, though, some data you cannot know for sure, because the information you have contains uncertainties or requires very complex calculations.<br />
For those cases you compute probabilities: if something like this happens, then this is very likely the best option.<br />
Then you make decisions based on those tables and probabilities. If you applied this method to your daily life, you would decide whether what you are looking at is a book or an apple based on the data in the table or on the probability. If you could compute the tables and probabilities for the whole world around you, you could take the best decision at any moment (actually, the table would be taking the decision for you; you would just be following it!)</p>
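<p>As a toy illustration, the table-plus-probabilities scheme described above can be sketched in a few lines of Python. The observations and labels here are entirely hypothetical, just to make the mechanism concrete:</p>

```python
# Hypothetical exact lookup table: observation -> best decision.
TABLE = {
    "round, red, shiny": "apple",
    "rectangular, paper, bound": "book",
}

# Fallback probabilities for observations missing from the table.
PROBABILITIES = {"apple": 0.7, "book": 0.3}

def decide(observation):
    """Return the tabled decision, or the most probable option otherwise."""
    if observation in TABLE:
        return TABLE[observation]
    # No exact entry: pick the option with the highest probability.
    return max(PROBABILITIES, key=PROBABILITIES.get)

print(decide("rectangular, paper, bound"))  # exact table hit
print(decide("something never seen"))       # probabilistic fallback
```

<p>Note that the &#8220;agent&#8221; never knows what an apple or a book is; it only follows whatever the table and the probabilities say.</p>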
<p>That is exactly the approach followed in computer board games, one of the first subjects to which artificial intelligence was applied. Take tic-tac-toe, for example:</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/Tic_Tac_Toe.gif"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/Tic_Tac_Toe.gif" alt="" title="Tic_Tac_Toe" width="240" height="180" class="aligncenter size-full wp-image-605" /></a></p>
<p>You all know tic-tac-toe. You may also know that a complete solution for the game exists: a table that indicates the best move for every configuration of the board.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/Tic-tac-toe-full-game-tree-x-rational.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/Tic-tac-toe-full-game-tree-x-rational-300x200.jpg" alt="" title="Tic-tac-toe-full-game-tree-x-rational" width="300" height="200" class="aligncenter size-medium wp-image-607" /></a></p>
<p>When the table of best moves for every combination has been discovered/computed, the game is said to be <strong>solved</strong>. Tic-tac-toe has been a solved game since the very beginning of AI, because of its simplicity. Another game that has been solved is checkers: its full <em>table</em> was computed in 2007.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/checkers.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/checkers-300x225.jpg" alt="" title="checkers" width="300" height="225" class="aligncenter size-medium wp-image-608" /></a></p>
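<p>Tic-tac-toe is small enough that the solving computation itself fits in a short script. The sketch below (my own illustration, not from the talk) runs plain minimax over the full game tree and confirms the well-known result that perfect play ends in a draw:</p>

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of a position for X under perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # board full: draw
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == " "]
    # X maximizes the value, O minimizes it.
    return max(results) if player == "X" else min(results)

print(value(" " * 9, "X"))  # 0: with perfect play, tic-tac-toe is a draw
```

<p>Replacing the 3&#215;3 board with a chess board makes the same recursion astronomically expensive, which is the point of the next section.</p>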
<p>There are, however, games that have not been solved yet, such as go&#8230; </p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/Go_board.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/Go_board-300x178.jpg" alt="" title="Go_board" width="300" height="178" class="aligncenter size-medium wp-image-609" /></a></p>
<p>&#8230; or chess.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/photo.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/photo-300x206.jpg" alt="" title="photo" width="300" height="206" class="aligncenter size-medium wp-image-610" /></a></p>
<p>Those games have many more possible board configurations. So many, in fact, that they cannot be enumerated even with the best supercomputers of today. For instance, chess has roughly this many piece arrangements:</p>
<p>– 16+16 = 32 chess pieces and 64 squares, so 64!/32! ≈ 4.8·10^53 combinations</p>
<p>Because of that huge number of combinations it is impossible (to date) to build the complete table for all chess positions: there is not enough CPU power to compute it. However, complete tables already exist for boards with any combination of 6 or fewer pieces.</p>
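<p>That count is simply the number of ways to place 32 distinguishable pieces on 64 squares, i.e. 64!/32! (a crude upper bound, since it ignores captures and identical pieces). It can be checked directly:</p>

```python
import math

# Number of ways to place 32 distinguishable pieces on 64 squares:
# 64 choices for the first piece, 63 for the second, ... i.e. 64!/32!.
combinations = math.factorial(64) // math.factorial(32)

print(f"{combinations:.2e}")  # roughly 4.8e+53
```
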
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/chess2.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/chess2-300x188.png" alt="" title="chess2" width="300" height="188" class="aligncenter size-medium wp-image-624" /></a></p>
<p>So a computer playing chess can use the tables to find the best move once only 6 pieces remain on the board. What does the computer do when it is not in one of those 6-pieces-or-fewer situations? It builds another kind of table&#8230; a probabilistic one. It calculates probabilities based on cost functions. In those cases the machine doesn&#8217;t know which move is best; it only has a probability of which one is best. The probabilities are built from the knowledge of human players.</p>
<p>Using this approach it has been possible for a computer to beat the best human player in the world.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/kasparov-deep-blue.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/kasparov-deep-blue-300x202.jpg" alt="" title="kasparov-deep-blue" width="300" height="202" class="aligncenter size-medium wp-image-611" /></a></p>
<p>However, for me this approach makes no sense if what we are talking about is a system that knows what it is doing and uses that knowledge to perform better.</p>
<p>You might tell me: who cares, if in the end they do the job correctly (and even better than we do!)? And you may be right, but only for this particular example of chess.</p>
<p>The table approach is as if you handed a student a table with all the answers to every exam he will ever take. Of course he will do perfectly! Anyone would. But in the end the student knows nothing and understands less.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/test.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/test-214x300.jpg" alt="" title="test" width="214" height="300" class="aligncenter size-medium wp-image-612" /></a></p>
<h2>Why more CPU (the easy approach) may not be the solution</h2>
<p>But anyway, the CPU approach is exactly what most A.I. scientists have in mind when they talk about constructing intelligent systems.</p>
<p>Given the success obtained with board games, they started to apply the same methodology to many other A.I. problems, like speech recognition or object recognition. Constructing an AI has thus become a race for resources that allow bigger tables and more calculation, attacking the problem on two fronts: on one side, developing algorithms that can construct those tables more efficiently (<a href="http://norvig.com/chomsky.html">statistical methods</a> are winning at present); on the other, constructing more and more powerful computers (or networks of them) to provide more CPU power for those algorithms. This methodology is exactly what I call the easy AI.</p>
<p>But do not misunderstand me: even if I call it the easy approach, it is not easy at all. It is occupying some of the best minds in the world. I call it that because it is a kind of brute-force approach, and because it has a clear and well-defined goal.</p>
<p>And because it is successful, this approach is used in many real systems.<br />
The easy AI solution is, for example, the approach behind the Siri voice recognition system.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/siri.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/siri-300x175.jpg" alt="" title="siri" width="300" height="175" class="aligncenter size-medium wp-image-616" /></a></p>
<p>Or the approach of the <a href="http://www.google.com/mobile/goggles/#text">Google goggles</a> (object recognizer system).</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/goggles_wine.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/goggles_wine-300x118.jpg" alt="" title="goggles_wine" width="300" height="118" class="aligncenter size-medium wp-image-617" /></a></p>
<p>Both cases use the power of a network of computers to work out what was said or what object was shown.</p>
<p>And the idea of using networks of computers is so successful that a new trend built on it has appeared in robotics, called <a href="http://www.getrobo.com/getrobo/cloud-robotics/">cloud robotics</a>: robots are connected to the cloud and can use the CPU of the whole cloud for their calculations (among other advantages, such as information shared between robots, extensive databases, etc&#8230;)</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/cloud-robotics.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/cloud-robotics-300x200.png" alt="" title="cloud robotics" width="300" height="200" class="aligncenter size-medium wp-image-619" /></a></p>
<p>That is exactly how Google&#8217;s driverless cars can drive: by using the cloud as a computing resource.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/Google_Car.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/Google_Car-300x225.jpg" alt="" title="Google_Car" width="300" height="225" class="aligncenter size-medium wp-image-618" /></a></p>
<p>And that is why cloud robotics is being seen as a holy grail. Great expectations are placed on this CPU capacity, and it looks (again) as if A.I. were just a matter of having enough resources for complex algorithms.</p>
<p>I don&#8217;t think so.</p>
<p>I don&#8217;t think that the same approach that works for chess will work for a robot that has to handle real-life situations. The table approach worked for chess because the world of chess is very limited compared with what we face in speech recognition, face recognition or object recognition.</p>
<p>Google cars do not drive on the highway during rush hour (to my knowledge).</p>
<p>Google Goggles makes as many mistakes as correct detections when you use it in a real-life situation.</p>
<p>And Siri does not face the full problem of real-life speech recognition, because it relies on a microphone close to your mouth.</p>
<p>Last February I was at the <a href="http://www.eucognition.org/index.php?page=challenges-ii-general-information">euCognition meeting in Oxford</a>. There I attended a talk given by professor <a href="http://www.dcs.shef.ac.uk/~roger/">Roger Moore</a>, an expert in speech recognition systems.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/speech-recog.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/speech-recog-300x227.png" alt="" title="speech recog" width="300" height="227" class="aligncenter size-medium wp-image-620" /></a></p>
<p>In his talk, professor Moore suggested that in recent years, even though some improvement has come from the increase in CPU power, speech recognition seems to have reached a plateau in error rate, somewhere between 20% and 50%. That is: even as CPU power kept growing and the recognition algorithms were made more efficient, no significant improvement has been obtained, and, worst of all, some researchers are starting to think that some speech problems will never be solved this way.<br />
After all, those speech algorithms only follow their tables and statistics, and leave all the meaning out of the equation. They do not understand what they are doing, hearing or seeing!<br />
He ended his talk suggesting that a change of paradigm may be required.</p>
<p>Of course, professor Moore was talking about including more cognitive abilities in those systems. And from all cognitive abilities I suggest that the key one is <strong>understanding</strong>.</p>
<h2>Understanding as a solution, the hard approach</h2>
<p>What do I mean by understanding? That is a very difficult question that I cannot answer directly. What I can do is show what I mean with a couple of examples.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/chess-example.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/chess-example.jpg" alt="" title="chess example" width="172" height="174" class="aligncenter size-full wp-image-621" /></a></p>
<p>This is a real case that was proposed to Deep Thought, the machine that almost beat Garry Kasparov in the 90&#8242;s (from William Harston, J. Seymore &#038; D. Norwood, in New Scientist, n. 1889, 1993). In this position the computer plays white and has to move. When the position was presented to Deep Thought, it took the rook. After that, the computer lost the game because of its inferior number of pieces.</p>
<p>When the same position is presented to an average human player, he clearly recognizes the value of the pawn barrier: it is white&#8217;s only protection against black&#8217;s superior number of pieces. So the human avoids breaking it, steering the game towards a draw. The person understands its value; the computer does not.</p>
<p>Of course you can program the machine to learn to recognize the pattern of the barrier. That doesn&#8217;t mean the computer has grasped the meaning of the barrier, only that it has a new set of data in its table for which it has a better answer. The proof that the machine doesn&#8217;t understand anything comes when you present it with the next position.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/photo-1.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/photo-1-300x293.jpg" alt="" title="photo (1)" width="300" height="293" class="aligncenter size-medium wp-image-622" /></a></p>
<p>In this position, again, the computer plays white. There is no barrier, but white can create one by moving the bishop. When the position is presented to a computer chess program, it takes the rook. (From a Turing test by William Harston and David Norwood.)</p>
<p>What this position shows is that the machine has no understanding at all. It cannot grasp the meaning of a barrier unless you specify in the code all the conditions for one. But even if you manage to encode the conditions, there will still be no REAL understanding: variations of the same concept are not grasped by the machine, because it doesn&#8217;t have any concept at all.</p>
<p>The only solution the easy approach proposes for this problem is to have enough computational power to encode all the situations. Of course, those situations have to be detected beforehand by somebody in order to make the information available to the machine.</p>
<p>The problem I see with that approach when applied to real-life situations, as in a service robot, is that such comprehension may not be achievable with understanding out of the picture&#8230; mainly because you will never have enough resources to deal with reality.</p>
<p>Another example of what I mean by understanding.<br />
Imagine that we ask our artificial system to find a natural number that is not the sum of three squared numbers.</p>
<p>How would easy AI solve the problem? It would start checking all the numbers, starting from zero and going up:</p>
<p>0 = 0^2 + 0^2 + 0^2<br />
1 = 0^2 + 0^2 + 1^2<br />
2 = 0^2 + 1^2 + 1^2<br />
…<br />
7 ≠ 0^2 + 0^2 + 0^2 = 0<br />
…<br />
7 ≠ 1^2 + 1^2 + 2^2 = 6<br />
7 ≠ 1^2 + 2^2 + 2^2 = 9 ← here it is! Seven is the number!</p>
<p>For this example, the AI found a proof that there is a number that is not the sum of 3 squared numbers. Easy, and only a few resources used. As you can see, this is a brute-force approach, but it works.</p>
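<p>That brute-force search is easy to write down. A minimal Python sketch (my own, just to make the procedure concrete):</p>

```python
from itertools import count

def is_sum_of_three_squares(n):
    """True if n = a^2 + b^2 + c^2 for some non-negative integers a, b, c."""
    root = int(n ** 0.5)
    for a in range(root + 1):
        for b in range(a, root + 1):
            c2 = n - a * a - b * b
            # Is the remainder itself a perfect square?
            if c2 >= 0 and int(c2 ** 0.5) ** 2 == c2:
                return True
    return False

# Check 0, 1, 2, ... until a number with no representation turns up.
first = next(n for n in count() if not is_sum_of_three_squares(n))
print(first)  # 7
```
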
<p>Now imagine that we want the same system to find a number that is not the sum of 4 squared numbers.</p>
<p>The easy AI would follow the same approach, but now it would require more resources. After checking that the first 2 million numbers are all sums of 4 squared numbers, you may start thinking that far more resources will be needed for the demonstration. You can add faster computers and better algorithms to compute the additions and squares, but the A.I. will never find such a number, because it doesn&#8217;t exist.</p>
<p><center>There is no such number!</center></p>
<p>How do I know that it doesn&#8217;t exist?</p>
<p>Because there is a theorem by Lagrange that demonstrates just that.<br />
The human approach to the problem is different. We try to understand the problem and, based on that understanding, find a proof instead of trying every single natural number. That is what Lagrange did. And he did not require all the resources of the Universe!</p>
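<p>For contrast, here is what the brute-force route looks like for the four-square question (again my own sketch): the search verifies number after number and never finds a counterexample, exactly as Lagrange&#8217;s theorem predicts it never will.</p>

```python
def is_sum_of_four_squares(n):
    """True if n = a^2 + b^2 + c^2 + d^2 for some non-negative integers."""
    root = int(n ** 0.5)
    for a in range(root + 1):
        for b in range(a, root + 1):
            for c in range(b, root + 1):
                d2 = n - a * a - b * b - c * c
                # Is the remainder itself a perfect square?
                if d2 >= 0 and int(d2 ** 0.5) ** 2 == d2:
                    return True
    return False

# Every number tested has a representation; raising the bound only burns
# more CPU, because (by Lagrange's theorem) no counterexample exists.
print(all(is_sum_of_four_squares(n) for n in range(2000)))  # True
```

<p>The search halts only because we imposed an arbitrary bound; without one it would run forever, which is the point of the example.</p>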
<p>And that is my definition of understanding; I cannot put it into better words.</p>
<p>Now the next question is: if understanding is what is missing, how can we include it in our robots? And how can we measure that a system has understanding?</p>
<p>Given that there is no clear definition of what exactly understanding is, we know even less about how to embed it in a machine. That is why I call this the hard approach to A.I. Hence I can only offer my own ideas about it.</p>
<p>I would say that a system has understanding about a subject when it is able to make predictions about it: it can predict how the subject will change if some input parameters change.<br />
I think you understand something when you are able to make predictions about it, plus you are aware that you can predict it. I cannot tell you more.</p>
<p>How can we test whether a machine has understanding? This is also not clear. At present the standard test to evaluate an artificial machine is the Turing test, which you all know. That is: if the machine can do the job at least as well as a person, then it is OK for us.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/10/Turing-Test-Scheme.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/10/Turing-Test-Scheme-300x160.jpg" alt="" title="Turing-Test-Scheme" width="300" height="160" class="aligncenter size-medium wp-image-623" /></a></p>
<p>However, this test can be fooled in the same way the teacher was fooled by the student who got the answers from someone else.</p>
<p>And the reason is that the Turing test focuses on one single part of intelligence: <strong>the what part</strong>. From my point of view, intelligence has to be divided into two parts: <strong>the what and the how</strong>.</p>
<p><center>intelligence = what + how </center></p>
<p>The what indicates what the system is able to do: for example, playing chess, speaking a language or grasping a ball. The how indicates in which way the system performs such a thing and how many resources it uses: by looking up a table, by calculating probabilities, or by actually reasoning about meanings.</p>
<p>Examples of a system with a lot of what but little how: the chess player, or the student at the exam. An example of high how but low what: the gnat&#8217;s life.</p>
<p>The problem with current artificial intelligence is that it concentrates only on the what part. Why? Because it is easier and provides quicker results, but also because it is the part that can be measured with a Turing test.</p>
<p>But the how part is as important as the what. However, we have no clue how to measure it in a system. One idea would be to use experiments similar to those used by psychologists, but these would only allow us to measure systems up to a human level, not systems beyond or different from it (because we cannot even mentally conceive of them).</p>
<p>To conclude,<br />
I think that at some point in our quest for artificial intelligence we got confused about what intelligence is. Of course natural intelligence uses tables, and it also calculates probabilities in order to be efficient. But it also uses understanding, something that we cannot define very well and that we cannot measure.</p>
<p>Time and resources need to be dedicated to studying the problem of understanding, not just passing it by, as has happened up to now.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/10/29/more-cognition-less-cpu/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Cognitive Notes: Why red doesn&#8217;t sound like a bell</title>
		<link>https://www.ouroboros.org/2012/08/29/books-in-three-pages-why-red-doesnt-sound-like-a-bell/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=books-in-three-pages-why-red-doesnt-sound-like-a-bell</link>
		<comments>https://www.ouroboros.org/2012/08/29/books-in-three-pages-why-red-doesnt-sound-like-a-bell/#comments</comments>
		<pubDate>Wed, 29 Aug 2012 11:43:34 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Books]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=526</guid>
		<description><![CDATA[Today I am starting a new section: Cognitive Notes. This section will include summaries of Artificial Cognition books that I find interesting and related to the development of intelligent Service Robots. All the summaries will share a common feature: they must cover the main ideas of the book in just five pages. Summaries included here [...]]]></description>
				<content:encoded><![CDATA[<p>Today I am starting a new section: <em>Cognitive Notes</em>. This section will include summaries of Artificial Cognition books that I find interesting and related to the development of intelligent Service Robots. All the summaries will share a common feature: they must cover the main ideas of the book in just five pages.</p>
<p>Summaries included here are published under a Creative Commons license. You can use and distribute them at will, but please give credit to the author.</p>
<h3>Why Red Doesn&#8217;t Sound Like a Bell</h3>
<p>And starting today, with the book entitled <a href="http://nivea.psycho.univ-paris5.fr/BookWebPage/index.html" target="_blank"><em>Why red doesn&#8217;t sound like a bell</em></a> (2011), written by <a href="http://nivea.psycho.univ-paris5.fr/" target="_blank">Kevin O&#8217;Regan</a>.</p>
<p>This book describes an interesting approach to <a href="http://www.scholarpedia.org/article/Hard_problem_of_consciousness" target="_blank">the hard problem of consciousness</a>, called the <a href="http://nivea.psycho.univ-paris5.fr/CogSysZurich2010/How%20to%20build%20a%20robot%20that%20feels.htm" target="_blank">sensorimotor approach</a>. </p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/08/9780199775224_1401.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/08/9780199775224_1401.jpg" alt="" title="9780199775224_140" width="140" height="211" class="alignleft size-full wp-image-549" /></a> Basically, his theory says that feelings/sensations are not something that happens to us but rather something that we do, and that what actually defines the object/color/sound/smell/taste or feeling one is having are the laws that govern how one interacts with it. The conclusion: the brain is not the place where the feel is generated; the feel is generated in the sensorimotor interaction. The brain only enables the sensorimotor interaction that constitutes the experience of the feel.</p>
<p>Here is the Cognitive Note for the book: a chapter-by-chapter summary including the main ideas of each chapter. You can download it <a href="http://www.ouroboros.org/wp-content/uploads/2013/07/why_red_doesnt_sound_like_a_bell.pdf">here</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/08/29/books-in-three-pages-why-red-doesnt-sound-like-a-bell/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Intelligence Is Also About How We Do Things</title>
		<link>https://www.ouroboros.org/2012/08/24/intelligence-is-also-about-how-we-do-things/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=intelligence-is-also-about-how-we-do-things</link>
		<comments>https://www.ouroboros.org/2012/08/24/intelligence-is-also-about-how-we-do-things/#comments</comments>
		<pubDate>Fri, 24 Aug 2012 12:14:11 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=481</guid>
		<description><![CDATA[Since the creation of the artificial intelligence field, an AI has been judged for what it is able to do. Programs that can follow a conversation, that can predict where there is oil underground, that can drive autonomously a car&#8230; all of them are judged intelligent only based on their functional behavior. To achieve the [...]]]></description>
				<content:encoded><![CDATA[<p>Since the creation of the artificial intelligence field, an AI has been judged for <strong>what it is able to do</strong>. Programs that can follow a conversation, that can predict where there is oil underground, that can drive autonomously a car&#8230; all of them are judged intelligent only based on their functional behavior.</p>
<p>To achieve the desired functionality, all kinds of tricks have been (and are) used and accepted by the AI community (and in most cases they were not even regarded as tricks): <a href="http://spectrum.ieee.org/computing/software/cracking-go">using large data structures that cover most of the search space</a>, reducing the set of words the AI has to recognize, or <a href="http://www.pitzer.de/benjamin/_media/pdf%3Apitzer11_icra.pdf">even asking a human for the answer over the internet</a>.</p>
<p><center><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/e/e4/Turing_Test_version_3.png/220px-Turing_Test_version_3.png" title="Turing test" class="alignnone" width="220" height="282" /><br />
The Turing Test: C can see neither A nor B. A and B both claim to be human. Can C discover that A is an AI?</center></p>
<p>The intelligence of those systems could be <em>measured</em> using the <a href="http://en.wikipedia.org/wiki/Turing_test">Turing test</a> (adapted to each particular AI application): <strong>if a human cannot distinguish the machine from a person performing the same job, then the machine is said to be a successful AI</strong>. In other words: if it does the job, then it is intelligent. This kind of measure has led to the kind of AI we have today, the kind that can beat the best human chess player but cannot recognize a mug after small changes in the lighting of the room. </p>
<p>But not everybody agrees with that definition of AI. For instance, the <a href="http://en.wikipedia.org/wiki/Chinese_room">Chinese room argument exposed by Searle</a> holds that such an AI shows no real intelligence, and never will as long as it is based on that paradigm. </p>
<p><center><img src="http://www.ouroboros.org/wp-content/uploads/2012/08/Chinese-room-experiment-300x162.jpg" alt="" title="Chinese room experiment" width="300" height="162" class="aligncenter size-medium wp-image-487" /><br />
In the Chinese room experiment, a man inside a room answers questions in Chinese by following the instructions provided in a book, without understanding a word of Chinese</center></p>
<p>I agree with Searle&#8217;s argument. I think the problem is that we are missing one important component of intelligence. From my point of view, an intelligent behavior can be divided into two components: <em>what is done by the behavior</em> and <em>how the behavior is done</em>.</p>
<p><center> <em>intelligence = what + how</em> </center></p>
<p>The <em>what</em>: the intelligent behavior that one is able to do<br />
The <em>how</em>: the way this intelligent behavior is achieved.</p>
<p>The Turing test measures only one part of the equation: the <em>what</em>. Today, 99.99% of AI programs put nearly all their weight on the <em>what</em> part of the equation, on the assumption that, if enough weight is placed there, nobody will notice the difference (in terms of intelligence).</p>
<p>The reason for concentrating on the <em>what</em> part is that it is easier to implement and, furthermore, we can measure it (the Turing test is precisely a measure of the <em>what</em>). Since there is no way of measuring the <em>how</em>, and nobody has a clue about how humans actually do intelligent things, people just prefer to concentrate on the part that provides results in the short term: the <em>what</em>. After all, the equation can reach a large value by increasing either of its two constituents&#8230;</p>
<p>However, I think that a real intelligent behavior requires weight in both constituents of that equation. Otherwise we obtain unbalanced creatures that are far from what we humans can do in terms of intelligent behavior.</p>
<p>Here are two examples of unbalanced intelligences:</p>
<ul>
<li><strong>The case of an intelligent behavior that scores only on the <em>what</em>:</strong> this is the case of a guy who has to take an exam on quantum mechanics. He has not studied at all, so he knows nothing about the subject. He does, however, have a cheat sheet (provided by the teacher&#8217;s secretary) that allows him to answer all the exam questions correctly. After grading the exam, the teacher would say that he has mastered the subject. His knowledge of the subject is being judged only by <strong>what</strong> he has done (answering the exam). We would say he is very intelligent in the field of quantum mechanics, but by observing <em>how</em> he answered the questions, we can see that he has no knowledge at all. He looks intelligent, but he is not.</li>
</ul>
<p><center><a href="http://www.ouroboros.org/wp-content/uploads/2012/08/Cheat_Sheet_Exam_13161.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/08/Cheat_Sheet_Exam_13161-300x187.jpg" alt="" title="Cheat_Sheet_Exam_13161" width="300" height="187" class="aligncenter size-medium wp-image-503" /></a></center></p>
<ul>
<li><strong>The case of an intelligent behavior with most of its value in the <em>how</em>:</strong> this is the case of animals, any of them, from the smallest up to those closest to us in intelligence. Animals have a lot of <em>how</em> intelligence in the tasks they are able to do; they cannot do as many things as us because their score on the <em>what</em> is lower.</li>
</ul>
<div id="attachment_505" class="wp-caption aligncenter" style="width: 310px"><a href="http://www.ouroboros.org/wp-content/uploads/2012/08/smart_dog.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/08/smart_dog-300x200.png" alt="" title="smart_dog" width="300" height="200" class="size-medium wp-image-505" /></a><p class="wp-caption-text">Animals have low intelligence in the &#8216;what&#8217; part but quite a lot in the &#8216;how&#8217;</p></div>
<p>Now, the question is: how can we measure the <em>how</em> part?<br />
That is a difficult matter. Actually, we do not have any reliable measure for it, even for humans. I would propose that the <em>how</em> can be measured by <strong>measuring understanding</strong>. We decide how to do something based on our understanding of that thing and of everything related to it. When we understand something, we are able to use, perform, and communicate it in different situations and contexts. It is our understanding that drives <em>how</em> we do things.</p>
<p>In this sense, developmental psychologists have devised experiments with infants to figure out what they understand and to what point they understand it [1][2]. Based on those, a scale covering the different stages of infant development could be created and applied to AIs to measure their understanding. I would take human development as the metric for this scale, starting at zero for the understanding of a newborn child and going up to 10 for the understanding of an adult. The same tests could then be applied to a machine to determine its level of intelligence in the <em>how</em> part.</p>
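<p>To make the proposal concrete, here is a toy sketch in Python of how such a developmental scale could be scored. The stage names and weights below are my own illustrative assumptions, not an established benchmark.</p>

```python
# Hypothetical sketch: scoring an AI's "how" (its understanding) on a
# 0-10 developmental scale. Each entry records whether the system passes
# a test associated with a stage of infant development (Piaget-style).
# Stage names and weights are illustrative assumptions only.

STAGES = [
    ("object permanence", 1.0),
    ("means-ends reasoning", 2.0),
    ("symbolic play", 2.0),
    ("conservation of quantity", 2.5),
    ("abstract hypothetical reasoning", 2.5),
]  # weights sum to 10.0

def understanding_score(passed):
    """Return a 0-10 score from a dict mapping stage name -> bool."""
    return sum(weight for name, weight in STAGES if passed.get(name, False))

# Example: a system showing only object permanence and means-ends
# reasoning scores 3.0 out of 10.
score = understanding_score({
    "object permanence": True,
    "means-ends reasoning": True,
})
print(score)
```

<p>The point of the sketch is only that the <em>how</em> becomes measurable once it is decomposed into testable stages, exactly as the infant experiments do.</p>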
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/08/artificial-intelligence-artificialintelligence-chess-demotivational-posters-1334329954.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/08/artificial-intelligence-artificialintelligence-chess-demotivational-posters-1334329954-300x242.jpg" alt="" title="artificial-intelligence-artificialintelligence-chess-demotivational-posters-1334329954" width="300" height="242" class="aligncenter size-medium wp-image-509" /></a></p>
<p>As a conclusion, I believe that intelligence is not about looking at a table and reading off the correct answer (as in the case of chess, Go, or the guy who cheats on the exam). Intelligence involves finding solutions with a limited amount of resources across a very wide range of situations. This stresses the importance of <em>how</em> things are done in performing an intelligent behavior.</p>
<p><strong>References</strong>:<br />
[1] Jean Piaget, <em>The origins of intelligence in children</em>, International University Press, 1957<br />
[2] George Lakoff and Rafael Núñez, <em>Where mathematics comes from</em>, Basic Books, 2000</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/08/24/intelligence-is-also-about-how-we-do-things/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Talk at RoboBusiness Leadership Summit 2012</title>
		<link>https://www.ouroboros.org/2012/07/20/talk-at-robobusiness-leadership-summit-2012/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=talk-at-robobusiness-leadership-summit-2012</link>
		<comments>https://www.ouroboros.org/2012/07/20/talk-at-robobusiness-leadership-summit-2012/#comments</comments>
		<pubDate>Fri, 20 Jul 2012 18:52:39 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Talks]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=437</guid>
		<description><![CDATA[Next October 24th I will give a talk at the RoboBusiness Summit in Pittsburgh, USA, entitled More Cognition, Less CPU. In this talk, I will explain why I believe that bringing more cognition to our robots is the correct approach to mid-term Service Robots, and why focusing on providing more computer power (like [...]]]></description>
				<content:encoded><![CDATA[<p>Next October 24th I will give a talk at the <a href="http://www.robobusiness.com/">RoboBusiness Summit</a> in Pittsburgh, USA, entitled <em>More Cognition, Less CPU</em>.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/07/robobusiness2012.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/07/robobusiness2012-300x224.png" alt="" title="robobusiness2012" width="300" height="224" class="aligncenter size-medium wp-image-565" /></a></p>
<p>In this talk, I will explain why I believe that bringing more cognition to our robots is the correct approach to mid-term Service Robots, and why focusing on providing more computing power (as cloud robotics projects are doing) is not going to help us as much.</p>
<p>More details can be found <a href="http://www.robobusiness.com/program-overview/speakers-list/">here</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/07/20/talk-at-robobusiness-leadership-summit-2012/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Robot Robustness</title>
		<link>https://www.ouroboros.org/2012/06/23/robot-robustness/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=robot-robustness</link>
		<comments>https://www.ouroboros.org/2012/06/23/robot-robustness/#comments</comments>
		<pubDate>Sat, 23 Jun 2012 15:10:04 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[Work]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=414</guid>
		<description><![CDATA[Over the last three days I have been attending the Robocup competition held in Mexico. This is a competition where robots face each other in a dynamic scenario, outside the more controlled conditions of the labs. There are several leagues in which robots can compete. Specifically, I have been attending the Robocup@Home [...]]]></description>
				<content:encoded><![CDATA[<p>Over the last three days I have been attending the <a href="http://www.robocup2012.org/" title="Robocup in Mexico" target="_blank">Robocup competition held in Mexico</a>. This is a competition where robots face each other in a dynamic scenario, outside the more controlled conditions of the labs. There are several leagues in which robots can compete. Specifically, I have been attending the <a href="http://www.ai.rug.nl/robocupathome/" target="_blank">Robocup@Home league</a>, devoted to testing Service Robots in a home scenario.</p>
<p><center><a href="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171042.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171042.jpg" alt="20120629-171042.jpg" class="alignnone size-full" /></a></p>
<p><strong>The Robocup@Home arena during training periods</strong></center></p>
<p>In the Robocup@Home, robots must perform tasks in a home environment. They have to follow the orders of humans and help them with common activities of daily life. Tests include, for example, following the owner through a chaotic environment, bringing the owner something from another room, or helping him clean a room.</p>
<p>During this year&#8217;s competition, even though the tests are very simple for a human, most of the robots failed from the very beginning. They were not able to perform what they were (supposedly) trained to do.</p>
<p><center><a href="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171224.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171224.jpg" alt="20120629-171224.jpg" class="alignnone size-full" /></a></p>
<p><strong>Classification table after the first stage of the competition</strong><br />
</center></p>
<p>I am sure that most of the teams had their robots working perfectly at their labs before coming to the competition. But their performance in the competition arena was very poor. The question, then, is: what happened between the working situation at the lab and the failure during the competition?</p>
<p>When teams are asked why their robot failed, they report that the robot had a failure in one of its <em>mechanical</em>, <em>electronic</em> or <em>software</em> components. Usually they indicate that they made a last-minute change to adapt the robot to the new environment, and that change triggered the errors.</p>
<p>There are two interesting points in that answer:</p>
<ul>
<li>First, robots are not able to adapt very well to changes in the environment.</li>
<li>Second, a last-minute change makes the robot fail, which reveals a lack of robustness in the robot.</li>
</ul>
<p>In this post, I&#8217;m going to concentrate on the second point. The robots at the competition are not robust: small changes in working conditions make them fail. Those conditions may include last-minute changes to the robot&#8217;s code or hardware but also, more importantly, differences between the testing situation at the lab and the testing situation during the competition.</p>
<p><center><br />
<a href="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171508.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171508.jpg" alt="20120629-171508.jpg" class="alignnone size-full" /></a></p>
<p><strong>Cosero robot (one of the most robust) training to identify and grasp objects at the Robocup@Home</strong></center></p>
<p>At universities, people concentrate on producing <em>proofs of concept</em>. Researchers and students work to show that something is possible at least once; once this is demonstrated, they move on to another subject to show that it, too, is possible in principle. After all, they get recognition for each new discovery or demonstration of possibility, so they are not interested in robustness as long as they can keep producing more proofs of concept.</p>
<p>Companies, instead, need <strong>robust products</strong> in order to sell them. By robust products, I mean products that deliver what is expected of them all the time. In the case of robots, a robust robot must be able to follow its master 99% of the time, in different places and locations, and be able to grasp objects or understand language in almost any situation.</p>
<p>To achieve robust products, companies have developed a whole battery of quality assurance mechanisms that can be applied to all mechanical, electronic and software parts. They also dedicate the time to implement those mechanisms, which include unit testing, massive testing of hardware under limit conditions, testing under noisy conditions, testing in simulated environments, etc.<br />
However, companies do not yet find the Robocup competition attractive, so they do not enter their products and make the competition more interesting.</p>
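<p>As an illustration of this kind of quality assurance, here is a minimal sketch in Python of a unit test that exercises a routine both on clean data and under noisy sensor readings. The function and its thresholds are hypothetical examples of mine, not code from any actual team or product.</p>

```python
# Minimal sketch of robustness testing (hypothetical example): verify
# that a simple obstacle-distance filter keeps working when its laser
# readings are noisy, not only on the clean data it was developed with.
import random

def min_obstacle_distance(readings, max_range=5.0):
    """Return the closest valid reading, ignoring out-of-range values."""
    valid = [r for r in readings if 0.0 < r <= max_range]
    return min(valid) if valid else max_range

def test_clean_readings():
    assert min_obstacle_distance([2.0, 3.5, 1.2]) == 1.2

def test_noisy_readings():
    random.seed(0)
    clean = [2.0, 3.5, 1.2]
    # Corrupt with Gaussian noise plus spurious values (negative reading,
    # out-of-range spike), as a real laser would produce.
    noisy = [r + random.gauss(0, 0.05) for r in clean] + [-1.0, 999.0]
    distance = min_obstacle_distance(noisy)
    assert abs(distance - 1.2) < 0.2  # still close to the true minimum

test_clean_readings()
test_noisy_readings()
print("all tests passed")
```

<p>A proof-of-concept robot typically ships with only the first kind of test, if any; it is the second kind, run under conditions deliberately worse than the lab, that separates a demo from a product.</p>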
<p><center><iframe width="420" height="315" src="http://www.youtube.com/embed/i49rrAzRzr8" frameborder="0" allowfullscreen></iframe><br />
<strong>A robot engages in cyclic behavior during the Robocup competition, due to lack of robustness</strong></center></p>
<p>Since researchers do not have access to that whole battery of techniques (not because they don&#8217;t know them, but because they lack the time and money to implement them), the solution they have found is an intermediate one. They buy as much off-the-shelf hardware as they can (Kinect cameras, Hokuyo lasers, Pioneer mobile bases, Katana arms&#8230;), and they use as much ready-made software as they can (say, ROS and other open source libraries). However, some robot parts are still not available on the market, so participants must build them themselves. And there a possibility of failure appears&#8230;</p>
<p>I presume that when companies participate in the competition the level will rise, since most of the current failures will be avoided, and the competition will concentrate on skills development rather than on achieving robustness. The next question, then, is how we can make the competition interesting to companies&#8230;</p>
<p><center><a href="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171710.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/06/20120629-171710.jpg" alt="20120629-171710.jpg" class="alignnone size-full" /></a><br />
<strong>Reem-B, a product of Pal Robotics</strong></center></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/06/23/robot-robustness/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>To calibrate or not to calibrate&#8230;</title>
		<link>https://www.ouroboros.org/2012/03/24/to-calibrate-or-not-to-calibrate/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=to-calibrate-or-not-to-calibrate</link>
		<comments>https://www.ouroboros.org/2012/03/24/to-calibrate-or-not-to-calibrate/#comments</comments>
		<pubDate>Sat, 24 Mar 2012 13:00:43 +0000</pubDate>
		<dc:creator>ricardo</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">http://www.ouroboros.org/?p=340</guid>
		<description><![CDATA[Robot calibration. I would define it as the process by which a robot learns the actual position in its body of a given part that is important to it (usually the sensors), relative to a given frame of reference (usually the body center). For example, where exactly [...]]]></description>
				<content:encoded><![CDATA[<p>Robot calibration. I would define it as the process by which a robot learns the actual position in its body of a given part that is important to it (usually the sensors), relative to a given frame of reference (usually the body center). For example, where exactly in the robot&#8217;s body the stereo camera is located, relative to the robot&#8217;s center.</p>
<p>In a perfect world, calibration would never be necessary. The mechanical engineers would design the position of each part and piece of the robot, and everybody would have access to that information just by asking the engineers where they put each part.<br />
However, real life is more interesting than that. The designed positions of robot parts NEVER correspond to the actual positions in the real robot, due to errors made during construction, tolerances between parts, or even errors in the plans.</p>
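<p>The essence of the problem can be sketched in a few lines of Python (an illustrative toy of mine, not a real calibration routine): given repeated sightings of a landmark whose position is known exactly, the robot can estimate where its sensor really is, and that estimate will differ from the position on the drawings.</p>

```python
# Toy calibration sketch: the designed sensor offset never matches the
# built one, so we estimate the true offset from measurements of a
# landmark whose position in the robot frame is known exactly.
# For simplicity, this assumes a 2D offset and no rotation error.

DESIGNED_OFFSET = (0.30, 0.00)   # where the drawings say the sensor is (m)
LANDMARK = (2.00, 1.00)          # known landmark position, robot frame (m)

def estimate_sensor_offset(sightings):
    """Average the implied sensor offset over repeated landmark sightings.

    Each sighting is the landmark position as seen *from the sensor*;
    the implied offset is landmark_in_robot_frame - landmark_in_sensor_frame.
    """
    n = len(sightings)
    dx = sum(LANDMARK[0] - sx for sx, sy in sightings) / n
    dy = sum(LANDMARK[1] - sy for sx, sy in sightings) / n
    return (dx, dy)

# Simulated sightings: the sensor was actually mounted at (0.32, -0.01),
# so the landmark appears shifted by that amount, plus measurement noise.
sightings = [(1.681, 1.009), (1.679, 1.011), (1.680, 1.010)]
estimated = estimate_sensor_offset(sightings)
print(estimated)  # near (0.32, -0.01), not the designed (0.30, 0.00)
```

<p>Real calibration routines estimate full 3D poses (rotation included) by optimization over many measurements, but the principle is the same: measure a known target and solve for the unknown transform.</p>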
<p>To handle this uncertainty, the process of <strong>calibration</strong> was invented: EACH ROBOT has to be calibrated after construction, before it can be used.</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/03/pr2_calibration.png"><img src="http://www.ouroboros.org/wp-content/uploads/2012/03/pr2_calibration-300x145.png" alt="" title="PR2 hand-eye calibration" width="300" height="145" class="aligncenter size-medium wp-image-351" /></a><strong>The PR2 robot uses a checkerboard pattern to calibrate its camera</strong></p>
<p>All types of robots have to be calibrated, but in the case of Service Robots the situation is more complex, because the number of sensors and parts involved is larger (a different calibration system must be designed for each part).<br />
Furthermore, given that the current AI systems that control the robot rely on very precise calibration to work correctly (they do not handle noise and error very well), having a good calibration system is crucial for a Service Robot.</p>
<p>Current approaches to calibration follow more or less the same scheme: the robot is placed in a controlled, specific environment and performs some measurements with the sensors that need to be calibrated. This is the process followed in hand-eye calibration [1], odometry calibration [2], and laser calibration [3].</p>
<p>All those processes require the robot to perform specific actions within a very specific setup.<br />
The problem arises when you have many systems to calibrate (as in a Service Robot) and the robot also has to be recalibrated from time to time due to changes in its structure (robots change just by being used!).</p>
<p><a href="http://www.ouroboros.org/wp-content/uploads/2012/03/Reem-robot-at-IJCAI-conference.jpg"><img src="http://www.ouroboros.org/wp-content/uploads/2012/03/Reem-robot-at-IJCAI-conference-225x300.jpg" alt="" title="Reem robot calibrating itself" width="225" height="300" class="aligncenter size-medium wp-image-357" /></a> <strong>Reem robot performs some specific movements to calibrate its arms</strong></p>
<p>So a more general approach to calibration has to be designed, one that avoids defining a specific calibrator for each part.<br />
This should also be a lifelong calibration system that allows the robot to calibrate itself without requiring specific setups (usually available only at specific locations). In short, the robot must learn its sensorimotor space and adapt it as it changes throughout its whole life.</p>
<p>Theory toward this end has already been put in place in the work of Philipona and O&#8217;Regan [4][5].<br />
They propose an algorithm that would allow any robot to learn any sensorimotor system: the relations between its sensors and motors, and how these relate to the physical world&#8230; without any previous knowledge of its body or the space around it!</p>
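<p>A rough flavor of that idea can be given with a toy sketch in Python (heavily simplified, and not their actual algorithm): the agent issues random motor commands, records the sensor responses, and recovers the number of degrees of freedom of its sensorimotor space from the data alone, with no model of its body.</p>

```python
# Toy sketch inspired by the Philipona / O'Regan idea (a strong
# simplification, not their algorithm): without any model of its body,
# an agent estimates how many independent degrees of freedom its
# sensorimotor space has, purely from random "motor babbling".
import numpy as np

rng = np.random.default_rng(42)
n_motors, n_sensors, n_samples = 3, 10, 200

# The unknown "body": here, sensors respond linearly to motor commands.
body = rng.normal(size=(n_motors, n_sensors))

motor_commands = rng.normal(size=(n_samples, n_motors))
sensor_readings = motor_commands @ body  # all the agent ever observes

# Estimate dimensionality from the singular values of the sensor data:
# only as many directions of variation as there are true motor DOFs.
singular_values = np.linalg.svd(sensor_readings, compute_uv=False)
estimated_dof = int(np.sum(singular_values > 1e-8 * singular_values[0]))

print(estimated_dof)  # recovers 3, the number of motors, never told to the agent
```

<p>Real bodies are nonlinear and sensors are noisy, so the published work is considerably more involved, but the sketch captures why no prior body model is needed: the structure is already present in the sensorimotor data.</p>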
<p>Applying this theory to robot calibration would allow any robot to calibrate itself, regardless of where it is (not necessarily at the factory, but perhaps at the owner&#8217;s home) and of its sensorimotor configuration, and to adapt to changes throughout its whole life, without returning to the factory or requiring any specific action from the owner.</p>
<p>At present, such a calibration system is almost science fiction. I am not aware of anybody using it, but who knows if somebody is already working on it somewhere in the world&#8230; maybe at <a href="http://www.pal-robotics.com" target="_blank">Pal Robotics</a>?&#8230;</p>
<p>If you are interested just contact me.</p>
<p><em>References:</em><br />
[1] <a href="http://www.robotic.dlr.de/fileadmin/robotic/stroblk/publications/strobl_2006iros.pdf" title="Optimal hand-eye calibration" target="_blank">Optimal hand-eye calibration</a>, Klaus H. Strobl and Gerd Hirzinger, ICRA 2006<br />
[2] <a href="http://www.fieldrobotics.org/users/alonzo/pubs/papers/iros04.pdf" target="_blank">Fast and Easy Systematic and Stochastic Odometry Calibration</a>, A. Kelly, IROS 2004<br />
[3] <a href="http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=132014" target="_blank">Laser rangefinder calibration for a walking robot</a>, E. Krotkov, ICRA 1991<br />
[4] <a href="http://nivea.psycho.univ-paris5.fr/Philipona/space.pdf" target="_blank">Is there something out there? Inferring space from sensorimotor dependencies</a>, D. Philipona, J.K. O’Regan, J.-P. Nadal, 2003<br />
[5] <a href="http://books.nips.cc/papers/files/nips16/NIPS2003_CS06.pdf" target="_blank">Perception of the structure of the physical world using unknown multimodal sensors and effectors</a>  D. Philipona, J.K. O’Regan, NIPS 2003</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ouroboros.org/2012/03/24/to-calibrate-or-not-to-calibrate/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
