Although animals are born with muscle coordination networks located in their spinal cord, learning the precise coordination of leg muscles and tendons takes some time. At first, newborn animals rely heavily on hard-wired spinal cord reflexes that help them avoid falling during their first walking attempts. The more advanced and precise muscle control that characterizes adult animals must be practiced until the nervous system is well adapted to the leg muscles and tendons.
In order to better understand how animals learn to walk, a research team from the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart has built a four-legged, dog-sized robot that gradually learned to walk with the help of a Bayesian optimization algorithm which continuously compared sent and expected sensor information, ran reflex loops, and adapted the robot’s motor control patterns.
“As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,” said study lead author Felix Ruppert, a former doctoral student at MPI-IS. “If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.”
The learning algorithm that Dr. Ruppert and his team constructed optimizes the control parameters of a simulated Central Pattern Generator (CPG), complemented by the reflexes that characterize living beings. In animals, these CPGs are networks of neurons in the spinal cord which produce periodic muscle contractions without input from the brain and drive rhythmic tasks such as walking, digestion, or blinking.
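A CPG of this kind can be pictured as a phase oscillator that turns a steadily advancing phase into periodic joint targets, with no sensory input at all. The sketch below is a minimal, single-leg illustration of that idea, not the controller from the study; the frequency, amplitude, and joint names are illustrative assumptions.

```python
import math

def cpg_step(phase, dt, frequency=1.5, amplitude=0.4):
    """Advance a one-leg phase oscillator and return joint targets.

    frequency (Hz) and amplitude (rad) are placeholder values, not
    parameters from the MPI-IS robot.
    """
    phase = (phase + 2 * math.pi * frequency * dt) % (2 * math.pi)
    hip_angle = amplitude * math.sin(phase)             # swing back and forth
    knee_angle = amplitude * max(0.0, math.sin(phase))  # flex only during swing
    return phase, hip_angle, knee_angle

# Generate one second of rhythmic output without any sensory feedback,
# just as a spinal CPG does on flat ground.
phase = 0.0
trajectory = []
for _ in range(100):
    phase, hip, knee = cpg_step(phase, dt=0.01)
    trajectory.append((hip, knee))
```

In a real quadruped controller, four such oscillators would run with fixed phase offsets between the legs to produce a gait.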
As long as an animal walks over flat surfaces, the CPGs are generally sufficient to produce the movement signals from the spinal cord. However, if it stumbles, reflexes kick in and adjust the movement patterns to keep the animal from falling. In newborn animals, the CPGs are not yet well tuned, so they stumble frequently, but they rapidly learn how their CPGs and reflexes should control leg muscles and tendons. By imitating these biological mechanisms, the robot dog that the scientists constructed surpassed animals in how quickly it learned to walk: about one hour.
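A stumble reflex of this kind can be expressed as a simple rule layered on top of the CPG output: when the foot touches the ground while the CPG still expects the leg to be in swing, briefly flex the leg further. The function below is a toy illustration under that assumption; the signal names and the gain are invented for the example, not taken from the study.

```python
def apply_reflex(knee_target, foot_contact_expected, foot_contact_measured,
                 flexion_boost=0.2):
    """Illustrative stumble reflex (names and gain are assumptions).

    If the foot reports ground contact while the CPG still expects
    swing (a stumble), flex the knee further to lift the leg.
    Otherwise pass the CPG's target through unchanged.
    """
    if foot_contact_measured and not foot_contact_expected:
        return knee_target + flexion_boost  # lift the leg over the obstacle
    return knee_target
```

The key point is that the reflex only modulates the rhythmic pattern; it does not replace it.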
“Our robot is practically ‘born’ knowing nothing about its legs’ anatomy or how they work,” Dr. Ruppert explained. “The CPG resembles a built-in automatic walking intelligence that nature provides and that we have transferred to the robot. The computer produces signals that control the legs’ motors, and the robot initially walks and stumbles.”
“Data flows back from the sensors to the virtual spinal cord, where sensor and CPG data are compared. If the sensor data does not match the expected data, the learning algorithm changes the walking behavior until the robot walks well and without stumbling. Changing the CPG output while keeping reflexes active and monitoring whether the robot stumbles is a core part of the learning process.”
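The measure-compare-adapt cycle described above can be sketched as a loop that runs a walking trial, counts how often expectation and measurement disagree, and keeps parameter settings that stumble less. The study uses Bayesian optimization for this search; in the toy sketch below, plain random search stands in for the surrogate-guided optimizer, and the `stumbles` cost function is an invented stand-in for a real walking trial.

```python
import random

def stumbles(params, target=(1.5, 0.4)):
    """Toy stand-in for one walking trial: the further the CPG
    parameters are from a hypothetical well-tuned gait, the more
    the robot 'stumbles'. Not a physical model."""
    return sum(abs(p - t) for p, t in zip(params, target))

def tune_cpg(trials=200, seed=0):
    """Simplified learning loop: propose CPG parameters, run a trial,
    keep whatever stumbles least. Random search replaces the Bayesian
    optimizer of the study, to show only the measure-compare-adapt cycle."""
    rng = random.Random(seed)
    best_params = (rng.uniform(0.5, 3.0), rng.uniform(0.0, 1.0))
    best_cost = stumbles(best_params)
    for _ in range(trials):
        candidate = (rng.uniform(0.5, 3.0), rng.uniform(0.0, 1.0))
        cost = stumbles(candidate)   # run a trial, count stumbles
        if cost < best_cost:         # keep parameters that stumble less
            best_params, best_cost = candidate, cost
    return best_params, best_cost
```

A Bayesian optimizer would differ from this sketch in one important way: it builds a probabilistic model of the cost surface from past trials and proposes the next candidate where improvement is most likely, which is what lets the real robot converge within roughly an hour of trials.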
“We can’t easily research the spinal cord of a living animal. But we can model one in the robot,” added study co-author Alexander Badri-Spröwitz, an expert in Biomechanics at MPI-IS.
“We know that these CPGs exist in many animals. We know that reflexes are embedded; but how can we combine both so that animals learn movements with reflexes and CPGs? This is fundamental research at the intersection between robotics and biology. The robotic model gives us answers to questions that biology alone can’t answer,” he concluded.
The study is published in the journal Nature Machine Intelligence.
Image Credit: Felix Ruppert, Dynamic Locomotion Group at MPI-IS