Archive for Research

Robot Robustness

// June 23rd, 2012 // No Comments » // Research, Work

Over the last three days I have been attending the Robocup competition held in Mexico. This is a competition where robots confront each other in a dynamic scenario, away from the more controlled conditions of the labs. There are several leagues in which robots can compete. Specifically, I have been attending the Robocup@Home league, devoted to testing Service Robots in a home scenario.

The Robocup@Home arena during training periods

In the Robocup@Home, robots must perform tasks in a home environment. They have to follow the orders of humans and help them in common situations of daily life. Tests, for example, include following the owner across a chaotic environment, bringing the owner something from another place, or helping him clean a room.

During this year's competition, even though the tests are very simple for a human, most of the robots failed from the very beginning. They were not able to perform what they were (supposedly) trained to do.

Classification table after the first stage of the competition

I am sure that most of the teams had their robots working perfectly at their labs before coming to the competition. But their performance in the competition arena was very poor. The question, then, is: what happened between the working situation at the lab and the failure during the competition?

When the teams are asked why their robot failed, they report that the robot just had a failure in one of its mechanical, electronic or software components. Usually they indicate that they performed a last-minute change in order to adapt the robot to the new environment, and that change triggered the errors.

There are two interesting points in that answer:

  • First, robots are not able to adapt very well to changes in the environment.
  • Second, a last-minute change makes the robot fail, which is a consequence of a lack of robustness in the robot.

In this post, I'm going to concentrate on the second point. The robots at the competition are not robust. This means that small changes in working conditions make the robot fail. Those conditions may include last-minute changes in the robot code or hardware, but also, and more importantly, differences between the testing situation at the lab and the testing situation during the competition.


Cosero robot (one of the most robust) training how to identify and grasp objects at the Robocup@Home

At universities, people are mostly focused on doing proofs of concept. This means that researchers and students work to show that something is possible at least once. Once this is demonstrated, they move on to another subject to try to demonstrate that it, too, is possible in principle. After all, they get recognition for each new discovery or demonstration of possibility. So they are not that interested in robustness, as long as they can keep doing more proofs of concept.

Companies, instead, need robust products in order to sell them. By robust products, I mean products that deliver what is expected of them all the time. In the case of robots, a robust robot must be able to follow its master 99% of the time, in different places and locations, and be able to grasp objects or understand language in almost any situation.

To achieve robust products, companies have developed a whole range of quality assurance mechanisms that can be applied to all mechanical, electronic and software parts. They also dedicate the time to apply those mechanisms, which include unit testing, massive testing of hardware under limit conditions, testing under noisy conditions, testing in simulated environments, etc.
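
Just to give a flavour of the simplest of those mechanisms, here is a toy unit test, with made-up component names (it is a sketch of the idea, not code from any actual team): it checks an obstacle-stopping rule while injecting sensor noise, so the behaviour is verified under conditions closer to the real world than a single clean run at the lab.

# Minimal sketch of "testing under noisy conditions"; the component below
# is hypothetical, not taken from any actual robot.
import random
import unittest


def min_obstacle_distance(scan):
    """Return the closest valid laser reading (metres), ignoring dropouts."""
    valid = [r for r in scan if 0.02 < r < 30.0]
    return min(valid) if valid else float("inf")


def should_stop(scan, safety_distance=0.5):
    """Decide whether the robot must stop to avoid a collision."""
    return min_obstacle_distance(scan) < safety_distance


class TestObstacleStopUnderNoise(unittest.TestCase):
    def test_stops_despite_sensor_noise(self):
        random.seed(0)
        clean_scan = [5.0] * 180      # free space...
        clean_scan[90] = 0.3          # ...except an obstacle 0.3 m ahead
        for _ in range(1000):
            noisy = [r + random.gauss(0.0, 0.02) for r in clean_scan]
            self.assertTrue(should_stop(noisy))

    def test_does_not_stop_in_free_space(self):
        random.seed(1)
        clean_scan = [5.0] * 180
        for _ in range(1000):
            noisy = [r + random.gauss(0.0, 0.02) for r in clean_scan]
            self.assertFalse(should_stop(noisy))


if __name__ == "__main__":
    unittest.main()
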
However, companies do not yet find the Robocup competition attractive, so they do not use their products to participate and make the competition more interesting.


A robot engages in cyclic behavior during the Robocup competition, due to lack of robustness

Since researchers do not have access to that whole battery of techniques (not because they don't know them, but because they do not have the time and money to implement them), the solution they have found is something intermediate. They buy as much off-the-shelf hardware as they can (Kinect cameras, Hokuyo lasers, Pioneer mobile bases, Katana arms…), and they use as much ready-made software as they can (say, ROS and other open source libraries). However, there are still some robot parts that are not available on the market, so participants must build them themselves. And there, a possibility of failure appears…

I presume that when companies participate in the competition the level will increase, since most of the current failures will be avoided, and the competition will concentrate on developing skills rather than on achieving robustness. The next question is then: how can we make the competition interesting to companies…

Reem-B, a product of Pal Robotics

To calibrate or not to calibrate…

// March 24th, 2012 // 1 Comment » // Artificial Intelligence, Research

Robot calibration. I would define it as the process by which a robot learns the actual position within its body of a given part that matters to it (usually the sensors), relative to a given frame of reference (usually the body center). For example, where exactly in the robot body the stereo camera is located, measured from the center of the robot.

In a perfect world, calibration would never be necessary. The mechanical engineers would design the position of each part and piece of the robot, and hence everybody would have access to that information just by asking the engineers where they put that part in the robot.
However, real life is more interesting than that. The designed positions of robot parts NEVER correspond to the actual positions in the real robot. This is due to errors made during construction, tolerances between parts, or even errors in the plans.

To handle this uncertainty, the process of calibration was invented. So EACH ROBOT has to be calibrated after construction, before it can be used.

PR2 robot uses a checkerboard pattern to calibrate its camera
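
For those curious about what that checkerboard step looks like in code, here is a minimal sketch using OpenCV; it is a generic example with placeholder board size and image paths, not the actual PR2 calibration stack.

# Minimal checkerboard camera calibration sketch with OpenCV.
# Generic example; board size and image paths are placeholders.
import glob
import cv2
import numpy as np

board_size = (8, 6)     # inner corners per row and column
square_size = 0.025     # size of one square, in metres

# 3D coordinates of the corners in the board frame (z = 0 plane).
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points, image_size = [], [], None

for fname in glob.glob("calib_images/*.png"):          # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (camera matrix, distortion) plus one extrinsic pose per view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection error:", rms)
print("camera matrix:\n", K)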

All types of robots suffer from having to be calibrated, but in the case of Service Robots the situation is more complex, because the number of sensors and parts involved is larger (a different calibration system must be designed for each part).
Furthermore, given that the current AI systems that control the robot rely on a very precise calibration to work correctly (they cannot handle noise and error very well), having a good calibration system is crucial for a Service Robot.

Current approaches to calibration follow more or less the same scheme: the robot is set up in a controlled, specific environment, and performs some measurements with some of the sensors that need to be calibrated. This is the process followed in hand-eye calibration [1], odometry calibration [2], or laser calibration [3].

All those processes require the robot to perform some specific actions with a very specific setup.
The problem arises when you have many systems to calibrate (for example in a Service Robot), and also when the robot has to be recalibrated from time to time due to changes in its structure (robots suffer changes just by being used!).

Reem robot performs some specific movements to calibrate its arms

So a more general approach to calibration has to be designed, one that avoids defining a specific calibrator for each part.
It also has to be a lifelong calibration system, one that allows the robot to calibrate itself without having to use specific setups (usually only available at specific locations). Summarizing, the robot must learn its sensorimotor space and adapt it as it changes throughout its whole life.

Theory towards this end has already been put in place in the work of Philipona and O'Regan [4][5].
In their work, Philipona and O'Regan propose an algorithm that would allow any robot to learn any sensorimotor system, the relations between its sensors and motors, and how these relate to the physical world… without any previous knowledge of its body or of the space around it!
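
To make the idea a bit more concrete, here is a toy sketch of my own; it is a gross simplification of the spirit of their work, not their actual algorithm. A simulated robot issues tiny random motor commands and, looking only at the unlabeled sensor changes, recovers the dimension of its sensorimotor space without being told anything about its body.

# Toy illustration of learning sensorimotor structure from data alone.
# A simplification inspired by Philipona & O'Regan, NOT their algorithm.
# The "robot" has 3 motors and 20 raw sensors related by a mapping it
# knows nothing about.
import numpy as np

rng = np.random.default_rng(0)

# Hidden body model (unknown to the learning code below).
W = rng.normal(size=(3, 20))

def sensors(motor):
    """Unlabeled sensor readings for a given motor configuration."""
    return np.tanh(motor @ W)

# Explore: tiny random motor perturbations around the current posture.
rest = rng.normal(size=3)
deltas = np.array([sensors(rest + 1e-6 * rng.normal(size=3)) - sensors(rest)
                   for _ in range(200)])

# The number of significant singular values of the observed sensor changes
# is the local dimension of the sensorimotor space. It comes out as 3, the
# number of motors, although that number was never given to the algorithm.
s = np.linalg.svd(deltas, compute_uv=False)
print("estimated sensorimotor dimension:", int(np.sum(s > 1e-3 * s[0])))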

Applying this theory to the calibration of a robot would allow any robot to calibrate itself, independently of where it is (not necessarily at the factory, but maybe at the owner's home) and of its sensorimotor configuration, and also to adapt itself to changes throughout its whole life, without having to return to the factory or requiring any specific action from the owner.

At present, that type of calibration system is almost science fiction. I am not aware of anybody using it, but who knows if somebody is already working on it somewhere in the world… maybe at Pal Robotics?…

If you are interested, just contact me.

References:
[1] Klaus H. Strobl and Gerd Hirzinger, "Optimal hand-eye calibration," ICRA 2006.
[2] A. Kelly, "Fast and easy systematic and stochastic odometry calibration," IROS 2004.
[3] E. Krotkov, "Laser rangefinder calibration for a walking robot," ICRA 1991.
[4] D. Philipona, J. K. O'Regan and J.-P. Nadal, "Is there something out there? Inferring space from sensorimotor dependencies," 2003.
[5] D. Philipona and J. K. O'Regan, "Perception of the structure of the physical world using unknown multimodal sensors and effectors," NIPS 2003.

Challenges for Artificial Cognitive Systems

// January 29th, 2012 // No Comments » // Artificial Intelligence, Research

Over the last weekend (from 20 to 22 January 2012), an extra workshop organized by the euCognition research network was held at Oxford: the second edition of Challenges for Artificial Cognitive Systems.

I was interested in attending this workshop in order to figure out answers to the following questions:

1. Is it necessary to include cognition in an artificial system in order to create one (a Service Robot) that can be useful in dynamic human environments?
2. Which kinds of cognitive abilities do we have to include for a particular type of robot?
3. What are cognitive abilities anyway?

The workshop was planned as a set of discussion sessions. The environment allowed discussion and debate, and the people who attended really wanted to find answers. The result was a very dynamic and interesting debate about all those questions and other related ones.

First, we discussed the kinds of cognitive skills an artificial system would need. This led to the following basic list:

1. The ability to interact with humans in a natural way (whatever that means)
2. The ability to adapt to and learn from the environment
3. The ability to achieve a goal autonomously

This list answered, in a broad sense, the third question in my mind (what are cognitive abilities?). However, I did not feel completely satisfied with those definitions and still need something more concrete…

We then discussed how we can measure progress towards those goals without allowing cheating. That is, how can we create a kind of scale that measures how far each of those points has been achieved and to which degree, while verifying at the same time that the system showing those abilities is not using something that is not cognitive at all (like, for example, a table with all the possible states of the system and the answer to provide for each one).

Related to this last point, we saw the need to identify which kinds of tasks require cognition in order to be solved, and which ones don't. This point was related to my first two questions, and can otherwise be stated as: which classes of problems require cognitive abilities in order to be solved? Of course, no satisfactory answer was provided and the issue remains open for the next meetings (and years).

Apart from what I have mentioned above, I found the following a very important observation made at the meeting: current artificial intelligent systems have reached a plateau of improvement. It looks like one way to move away from that plateau is to incorporate more cognitive abilities into our artificial intelligent systems.

I did not feel quite happy with either the definition of cognitive system or the list of abilities required for such a system. The reason is that, for me, understanding is the one basic cognitive ability upon which everything else in a robot must be built. By means of understanding, a system is able to acquire meaning through its interaction with the environment, and to use this meaning to survive, adapt, learn and generate its own goals (you can read what I mean by understanding in this blog post). I think that understanding, and only understanding, is the special characteristic that defines a system as cognitive. Unfortunately, current artificial systems have almost no understanding.

If you are interested, the official results are published at the wiki of the euCognition project.

Compliance: trending topic at the Humanoids 2011

// November 7th, 2011 // No Comments » // Artificial Intelligence, Research, Work

Compliant robot: a robot with the ability to tolerate and compensate for misaligned parts. Or, otherwise stated, a robot with the ability to gracefully absorb an external force that tries to modify its position.

At the last Humanoids conference (www.humanoids2011.org) everybody was talking about how to control a compliant arm, how to build compliant legs and how to move a compliant humanoid.
We introduced our latest Reem robot to the scientific community and, besides the typical question about how much the robot costs, the number one question was: is your robot compliant? Some people even pushed their bodies against the robot in order to check whether it had compliant arms!
No, our robot is not compliant… yet.

Of course, compliance is a very important feature for a service robot, because we must be sure that a robot that works with humans will not harm a person. Hence, if someone crashes against the robot (or vice versa), we, as builders of the robots, must ensure that nobody gets hurt.

Some other robots in the world have already shown very nice compliant characteristics. This is the case of the Meka robots. You can watch a nice video here, where the robot shows its compliance.

Another case is the omnipresent PR2, which in this video shows how compliance can be useful for cooperation.

However, at present, compliance has its dark side. Because a compliant robot must be able to absorb forces, a compliant joint cannot distinguish between a crash and carrying a heavy load. A compliant joint will react in the same way to both situations, that is, by letting the joint give way to the force. If the robot were carrying a weight, the weight would fall.
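
To see why, here is a minimal sketch of a generic single-joint impedance law; it is a textbook control scheme, not the controller of any of the robots mentioned above. A push from a person and the torque of a carried load look exactly the same to it, and produce exactly the same deflection.

# Minimal single-joint impedance (compliance) sketch: a generic textbook
# control law, not the controller of any particular robot. The joint is
# commanded to hold q_des and reacts to external torques like a
# spring-damper, so it yields gracefully when pushed.
K = 5.0      # virtual stiffness  [Nm/rad]
D = 1.0      # virtual damping    [Nm*s/rad]
I = 0.1      # joint inertia      [kg*m^2]
dt = 0.001   # integration step   [s]

def simulate(tau_external, q_des=0.0, steps=5000):
    """Integrate the joint dynamics under a constant external torque."""
    q, dq = q_des, 0.0
    for _ in range(steps):
        tau_control = K * (q_des - q) - D * dq      # impedance law
        ddq = (tau_control + tau_external) / I
        dq += ddq * dt
        q += dq * dt
    return q

# A person leaning on the arm and a 2 kg payload held at 0.25 m both show
# up as roughly the same 5 Nm external torque, so the joint deflects
# identically: the control law alone cannot tell "crash" from "carrying".
print("deflection when pushed by a person: %.2f rad" % simulate(-5.0))
print("deflection when holding a payload:  %.2f rad" % simulate(-5.0))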

This reminds me of the training of Chi Sao in Wing Chun kung fu. In this training, two opponents try to feel the force each one is applying against the arms of the other, and use it to generate a better attack. The basics of this training is learning to tell when you have to push and when you have to yield.

We went through the same kind of training when we were babies, in order to understand the difference between carrying and being pushed.

The compliant robot is still far from encoding that knowledge. The point is more delicate than just using a flag that indicates when the robot is in carrying mode and when it is in free mode to absorb collisions (that would be the GOFAI solution). It is necessary to embed into robots a more complex ability that lets them know which of the two situations they are in.

And that ability is understanding. The robot needs to understand when a force on its body is due to a crash and when it is due to an object being carried.

Understanding is the most important feature for a robot, and not only for compliance but for everything. At present, no robot in the world understands… emm… anything…

Tough work in front of us!

Organising a Humanoid Robot Navigation Workshop

// September 24th, 2011 // No Comments » // Research, Work

On the 26th of October 2011 the Humanoids conference will be held in Bled, Slovenia. This is a conference about the current status of humanoid robots.
At this conference, together with some colleagues, I am organising a workshop about the current problems that humanoid robots have when moving around in human environments. The workshop is entitled Humanoid service robot navigation in crowded and dynamic environments. There we will discuss those problems, and our robot REEM will perform a demonstration of its navigation abilities.
More information at the workshop website.

Thesis defense

// January 24th, 2011 // No Comments » // Research

I finally defended my thesis. More than five years of research summarized in less than 40 minutes… phew, nobody can ask for more!
Still, a complete description of the thesis, as well as the whole document, can be read here.

Developed system for autonomous humanoid navigation

// December 23rd, 2008 // No Comments » // Research, Work

Legged humanoid robots face a big problem when trying to move autonomously in an indoor environment. To date, most approaches have focused on using vision to perform such behavior. However, those approaches are still in their infancy and do not allow the robot to move in real environments without crashing into obstacles.

In this work, we use classical techniques based on laser and odometry to successfully achieve the complete suite of navigation abilities: mapping, localization, path planning and obstacle avoidance.

Motivation
The main motivation for this research is to deploy a humanoid service robot in a home environment. A strong navigation system will allow the robot to move around the environment and help the people inside.

Method
For the laser data, we use two small lasers, one on each foot of the robot. The odometry is obtained by an inverse kinematic computation of the walking algorithm.

The localization and mapping abilities of the robot are based on particle filters: the DP-SLAM algorithm for mapping, and a Monte Carlo particle filter for localization. Path planning uses the A* algorithm to calculate trajectories, and obstacle avoidance is based on a potential field implementation.
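
As an illustration of the path planning step, here is a minimal A* planner on an occupancy grid; it is a generic sketch over a tiny hand-made map, not the planner that runs on the robot.

# Minimal A* path planner on a 2D occupancy grid (generic sketch).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(heuristic(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:          # already expanded
            continue
        came_from[current] = parent
        if current == goal:               # rebuild the path backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    heapq.heappush(open_set, (new_g + heuristic((nr, nc)),
                                              new_g, (nr, nc), current))
    return None

# 0 = free cell, 1 = obstacle (e.g. a wall seen by the laser).
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 3)))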

Results
We applied the method to Reem-B, a human-sized legged humanoid robot. The results obtained can be seen in the following video.

Autonomous navigation skills of the Reem-B humanoid robot

Organizing a special session on cognition in embodied systems

// January 2nd, 2007 // No Comments » // Research

We are organizing a special session on cognition in embodied systems at the next International Work-Conference on Artificial Neural Networks (IWANN2007).
Papers are accepted for this session until the 4th of February. More information about the special session and instructions on how to submit can be found here.