03-29-2024

Household robots: Now with added "common sense" and ability to self-correct

We’ve all seen household robots in movies and TV shows, performing tasks ranging from simple cleanup to complex problem-solving, often equipped with a kind of “common sense” that makes them seem almost human.

But how “smart” are robots in the real world?

Researchers at the Massachusetts Institute of Technology (MIT) are working to make our household bots a little more intelligent by giving them the ability to self-correct.

After all, real life can be messy, and robots need to be prepared to take those disruptions in stride.

Household robots are great mimics

Many household robots operate using a style of learning called imitation learning. Essentially, they mimic a human who physically guides them through a task.

This works well enough for basic actions, but it leaves a major gap: what happens when the robot gets bumped or something in the environment changes?

Unless engineers have meticulously planned for every hiccup, the bot won’t handle it well.

Think of a robot programmed to put a book on a shelf; if the book is moved while the robot is mid-task, it won’t know what to do.

Robots + Language models = Common sense

MIT engineers are developing a way for robots to “understand” their environments, giving them the flexibility to adjust when the unexpected happens.

How? By combining traditional robot learning techniques with cutting-edge language models, giving them something resembling common sense.

Let’s break down the key components:

Large language models (LLMs)

These advanced computer programs are masters at processing and understanding massive amounts of text. Think of them as super-powered language learners: they can generate text, translate between languages, and even break complex problems down into steps.

Imitation learning

Imitation learning is a subset of machine learning in which a robot or agent learns to perform tasks by mimicking human actions. This approach is particularly useful in robotics, where manually programming every possible action and scenario a robot might encounter is impractical. Here’s how it works:

Direct guidance

A human operator physically guides the robot through the desired task. This could involve moving a robotic arm to pick up objects or navigating a robot through an environment.

Learning through observation

The robot observes the actions taken by humans (or other robots) and learns the sequences of actions required to achieve a specific goal.

Advantages

Imitation learning allows robots to acquire complex skills without the need for explicit programming of those skills. It’s an efficient way to transfer human knowledge to machines.

Challenges

A major challenge with imitation learning is dealing with variations or unexpected situations not covered during the training phase. Robots may struggle to adapt to new scenarios without additional guidance.
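To make this concrete, here is a minimal sketch of the most common flavor of imitation learning, behavior cloning: a policy is fit directly to (state, action) pairs recorded while a human guides the robot. The file names, network size, and robot interface are illustrative assumptions, not the researchers’ actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Demonstrations recorded while a human guides the robot: each time step
# pairs an observed state (joint angles, gripper pose, etc.) with the
# action the human chose in that state. File names are illustrative.
demo_states = np.load("demo_states.npy")    # shape: (num_steps, state_dim)
demo_actions = np.load("demo_actions.npy")  # shape: (num_steps, action_dim)

# Behavior cloning: learn a policy that maps states to actions by
# directly imitating the recorded demonstrations.
policy = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
policy.fit(demo_states, demo_actions)

def act(current_state):
    # At run time the robot simply replays what it learned, which is why a
    # bump or a moved object (a state never seen in the demos) can derail it.
    return policy.predict(current_state.reshape(1, -1))[0]
```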

Subtasks

Breaking down complex actions into subtasks is a fundamental principle in both robotics and software engineering. This approach simplifies problem-solving and task execution. Here’s how it applies:

Simplification

Complex tasks, like scooping marbles, are decomposed into simpler, manageable actions (reaching, scooping, moving, pouring). This makes it easier for robots to understand and perform the task.

Flexibility

By identifying and separating subtasks, robots can more easily adapt to disruptions. If an error occurs in one subtask, the robot can attempt to correct it without restarting the entire process.

Efficiency

Decomposing tasks into subtasks allows for more efficient planning and execution. Robots can optimize their actions for each subtask, improving overall performance and success rates.
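As a rough sketch, this is what a plain subtask decomposition might look like in code. The subtask names follow the marble-scooping example, and the robot’s execute_step and subtask_done helpers are hypothetical placeholders for whatever controller the hardware actually uses.

```python
# Decompose the marble-scooping task into the article's four subtasks.
subtasks = ["reach", "scoop", "transport", "pour"]

def run_task(robot):
    for subtask in subtasks:
        # Keep working on the current subtask until it has actually
        # succeeded, then move on -- so an error in one subtask can be
        # retried without restarting the whole task from scratch.
        while not robot.subtask_done(subtask):
            robot.execute_step(subtask)  # one small motion toward the goal
```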

Putting common sense in the robots

The language model’s job 

Given a task like “scoop marbles from one bowl and put them in another”, the language model breaks it down into those logical subtasks: “reach”, “scoop”, “transport”, and “pour”.

Mapping words to actions

Special algorithms help the robot learn which subtask it’s performing by connecting the language model’s words to its position in space or to a camera image showing its current state. It’s like the robot learns what the words for its actions actually mean!
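One way such a grounding step might look in code is sketched below: a language model proposes the subtask labels, and a simple classifier learns to predict which subtask the robot is in from its current state. The subtask list, file names, and choice of classifier are assumptions for illustration, not the paper’s exact algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Subtasks as a language model might decompose the instruction
# "scoop marbles from one bowl and put them in another".
subtasks = ["reach", "scoop", "transport", "pour"]

# States from the human demonstrations, annotated with which subtask each
# time step belonged to (file names are illustrative).
demo_states = np.load("demo_states.npy")        # (num_steps, state_dim)
subtask_labels = np.load("subtask_labels.npy")  # (num_steps,) indices into subtasks

# Grounding classifier: predict the current subtask from the robot's state
# (arm pose, or features from a camera image) -- in effect connecting the
# language model's words to physical situations.
grounding = LogisticRegression(max_iter=1000)
grounding.fit(demo_states, subtask_labels)

def current_subtask(state):
    return subtasks[int(grounding.predict(state.reshape(1, -1))[0])]
```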

Self-correction in action

Researchers physically knocked a robot mid-task. Instead of freezing or continuing blindly, the robot recognized the problem and successfully adapted, completing each subtask before moving forward.
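Below is a sketch of that recovery loop, reusing the subtask list and grounding classifier from the previous example and the same hypothetical robot interface: the robot keeps checking which subtask its state actually corresponds to, and if a disturbance has knocked it back, it resumes from that subtask instead of continuing blindly.

```python
def run_with_self_correction(robot):
    idx = 0
    while idx < len(subtasks):
        state = robot.observe()
        actual = current_subtask(state)  # grounding classifier from above
        if actual != subtasks[idx]:
            # A bump or spill pushed the robot back to an earlier stage of the
            # task: resume from the subtask its state says it is really in.
            idx = subtasks.index(actual)
        robot.execute_step(subtasks[idx])
        if robot.subtask_done(subtasks[idx]):
            idx += 1
```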

Significance of robot common sense

“Imitation learning is a mainstream approach enabling household robots. But if a robot is blindly mimicking a human’s motion trajectories, tiny errors can accumulate and eventually derail the rest of the execution,” says Yanwei Wang, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “With our method, a robot can self-correct execution errors and improve overall task success.”

Household robots hold a lot of promise, but they need to be able to operate in unpredictable environments, handling dropped objects, changed locations, or unexpected obstacles.

Currently, programming a robot to handle the vast array of possible disruptions would be tedious.

This technique gives robots a new degree of flexibility that makes them far more functional.

Smarter robots with common sense

In summary, the researchers from MIT have developed a fascinating approach that combines imitation learning with the common sense knowledge of large language models to create more adaptable and robust household robots.

By enabling robots to parse tasks into subtasks and self-correct when faced with disruptions, this method eliminates the need for extensive programming or additional human demonstrations.

The team’s algorithm has the potential to revolutionize the way household robots are trained, converting existing teleoperation data into robust robot behavior capable of handling complex tasks in the face of external perturbations.

The study is published on arXiv.

—–

Like what you read? Subscribe to our newsletter for engaging articles, exclusive content, and the latest updates. 

Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.

—–
