I am walking strangely. About a week ago, I pulled something in my left ankle, which now hurts during the part of each step just before the foot leaves the ground. As a result, my other muscles are compensating to minimise the pain, and my gait has shifted into something subtly different from the norm. In similar ways, all animal brains can compensate for injuries by computing new ways of moving, often ones very different from their usual repertoire. This isn’t a conscious process and as such, we often take it for granted.
But we can get a sense of how hard it actually is by trying to program a robot to do the same thing. It’s far from straightforward. Robots have been used for years to perform structured, repetitive tasks and as engineering has advanced, their movements have become more life-like and more stable. But they still have severe limitations, not the least of which is inflexibility in the face of injury or changes to their body shape. If a robot’s leg falls off, it becomes as useful as so much scrap metal.
So for robots, adaptiveness is a desirable virtue, especially if they are to be used in the field. Modern bots can independently develop complex behaviours without any previous programming, but this usually requires trial and error and lots of time. But not always. Josh Bongard and colleagues at Cornell University have developed an adaptable bot that’s programmed to continuously assess its body structure and develop new ways of moving if anything changes.
It differs from other models in that it has no built-in redundancy plans, no strategies for dealing with anticipated problems. It’s simply programmed to examine itself and adapt accordingly. The concept of a robot that can adapt to new situations is often the precursor to nightmare scenarios in many a science-fiction film. So it is fortunate that Bongard’s robot isn’t armed or threatening, but instead looks more like a four-armed starfish.
Each arm has two joints, along with sensors that record the angles of those joints and the tilt of the arms. At first, Starfish performs some experiments to get a sense of its own body. We humans have an instinctive understanding of how our body parts connect with each other, but this sense, called kinaesthesia, must be programmed into most robots. Starfish, however, doesn’t need that – it can work out its structure by itself.
It does this by performing random actions and using an array of sensors to see what these do to its body. It then creates several ‘self-models’ – representations of how its body is joined together – in the same way that a forensic scientist pieces together a crime based on the evidence.
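The real algorithm runs on a physical quadruped, but the generate-and-score idea can be illustrated with a deliberately tiny sketch. Everything below – the one-limb robot, the `sensor_reading` stand-in for hardware, the candidate limb lengths – is my own invented toy, not Bongard’s code: random ‘babbling’ actions produce sensor evidence, and rival self-models are scored by how well they predict that evidence.

```python
import random, math

# Hypothetical toy: the whole robot is reduced to one limb whose unknown
# segment length determines how far its tip tilts. This function is a
# stand-in for the real hardware being queried.
true_length = 0.7

def sensor_reading(angle, length=true_length):
    # Tilt sensor: tip height for a limb of this length at this joint angle.
    return length * math.sin(angle)

# 1. Babble: perform random actions and record what the sensors report.
random.seed(0)
actions = [random.uniform(0, math.pi / 2) for _ in range(5)]
observations = [sensor_reading(a) for a in actions]

# 2. Keep several candidate self-models (guessed limb lengths) and score
#    each by how badly its predictions disagree with the evidence.
candidates = [0.3, 0.5, 0.7, 0.9]

def error(length):
    return sum((length * math.sin(a) - y) ** 2
               for a, y in zip(actions, observations))

best = min(candidates, key=error)
print(best)  # the candidate matching the true structure wins: 0.7
```

The point of keeping several candidate models at once, rather than a single best guess, is that the robot can then design experiments to tell the survivors apart.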
Starfish then compares these self-models and performs actions designed to distinguish between them. After several rounds of this, the robot has a fairly accurate idea of how it’s built, what sorts of things it can do, and which parts it needs to move in order to do them. If it’s given an instruction like ‘move forward’, it can plan the best way of doing that.
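The ‘actions designed to distinguish between them’ step can be sketched in the same toy setting. Again, this is hypothetical illustration, not the paper’s code: with two rival self-models left standing, the robot scans its possible actions and picks the one on which the rivals disagree most, so that a single real measurement settles the argument.

```python
import math

# Hypothetical toy: two surviving self-models, each a guessed limb length
# that predicts tilt = length * sin(angle).
models = [0.5, 0.7]

def predict(length, angle):
    return length * math.sin(angle)

# Candidate experiments: joint angles from 0 to pi/2.
candidate_actions = [i * math.pi / 20 for i in range(11)]

def disagreement(angle):
    preds = [predict(m, angle) for m in models]
    return max(preds) - min(preds)

# Most informative experiment: the angle where the models' predictions
# diverge the most, so the real sensor reading rules one of them out.
best_action = max(candidate_actions, key=disagreement)
print(best_action)  # pi/2, where the gap 0.2 * sin(angle) peaks
```

This is why the robot doesn’t just flail at random forever: each chosen action is the one expected to eliminate the most wrong self-models.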
If Starfish detects something funny that goes against its self-model, it initiates the whole process again. If its leg falls off, it notices, re-creates its picture of itself, and plans new behaviours to cope with the ‘injury’.
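The trigger for all this re-modelling can be sketched as a simple mismatch test – again a hypothetical toy, with an assumed `THRESHOLD` tolerance: when real sensor readings stray too far from what the accepted self-model predicts, the robot distrusts the model and starts the self-modelling process over.

```python
import math

# Hypothetical toy: the accepted self-model predicts tilt = 0.7 * sin(angle).
# After an 'injury' the limb effectively shortens, so real readings drift
# away from the predictions and the mismatch triggers re-modelling.
model_length = 0.7
THRESHOLD = 0.1  # assumed tolerance before the self-model is distrusted

def needs_remodel(angle, reading):
    predicted = model_length * math.sin(angle)
    return abs(predicted - reading) > THRESHOLD

angle = math.pi / 2
healthy_reading = 0.7 * math.sin(angle)  # matches the model's prediction
injured_reading = 0.3 * math.sin(angle)  # limb damaged: the model is stale

print(needs_remodel(angle, healthy_reading))  # False: model still valid
print(needs_remodel(angle, injured_reading))  # True: rebuild the self-model
```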
These abilities will be instrumental in the future of robotics. Robots will become far more useful if they can respond to new environments, or cope with the bodily changes that happen when they grasp a tool or suffer damage. They could be deployed to unstable disaster sites to help with recovery, or to the depths of space for exploration. They may even give us an insight into how the human brain develops self-awareness and adapts to new situations.
Reference: Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient Machines Through Continuous Self-Modeling. Science, 314(5802), 1118–1121. DOI: 10.1126/science.1133687