Robots can now visually interpret objects and identify them later
Robots have long been able to distinguish between objects, but because these machines do not truly understand an object's shape or structure, there is little they can do with one beyond recognizing it. Researchers at the Massachusetts Institute of Technology (MIT), however, have now developed a system that lets robots inspect unfamiliar objects and visually understand them well enough to accomplish specific tasks, even with objects they have never seen before.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said study co-author Lucas Manuelli. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Working in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the experts created the “Dense Object Nets” (DON) system to allow robots to gain a visual understanding of objects by focusing on a collection of points that serve as “visual roadmaps.”
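The core idea behind those "visual roadmap" points can be illustrated with a toy example. The sketch below is not MIT's actual DON code; it assumes a hypothetical trained network has already produced a descriptor vector for every pixel, and simply shows how nearest-neighbor search in descriptor space can re-find the same physical point on an object across two different views:

```python
# Illustrative sketch only (not the DON implementation): dense per-pixel
# descriptors let a system match the same physical point on an object
# across two views via nearest-neighbor search in descriptor space.
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 32, 32, 8  # image height/width, descriptor dimension

# Pretend a trained network produced a descriptor for every pixel of view A.
desc_a = rng.normal(size=(H, W, D))

# View B shows the same object shifted in the frame; its descriptors are
# (nearly) the same values at shifted pixel locations, plus a little noise.
shift = (5, 7)
desc_b = np.roll(desc_a, shift, axis=(0, 1)) + 0.01 * rng.normal(size=(H, W, D))

def match_point(query_rc, src, dst):
    """Find the pixel in dst whose descriptor is closest to src[query_rc]."""
    q = src[query_rc]  # (D,) descriptor of the query pixel
    dists = np.linalg.norm(dst.reshape(-1, dst.shape[-1]) - q, axis=1)
    idx = int(np.argmin(dists))
    return divmod(idx, dst.shape[1])  # flat index back to (row, col)

# A point picked in view A (say, on a mug handle) is re-found in view B.
query = (10, 12)
found = match_point(query, desc_a, desc_b)
expected = ((query[0] + shift[0]) % H, (query[1] + shift[1]) % W)
print(found == expected)  # the same physical point, at its new location
```

In the real system the descriptors come from a deep network trained self-supervised on the robot's own camera data, so the same matching idea works across genuinely different orientations, not just a pixel shift.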
The DON method not only allows the robots to manipulate items, but also to select a specific object out of a random group of similar objects. This is a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.
“In factories, robots often need complex part feeders to work reliably,” said Manuelli. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.”
The researchers also envision potential applications in the home, such as tidying up or putting away clean dishes. Notably, none of the training data used by the robots is labeled by humans. Instead, the system is “self-supervised”: it generates its own training signal and requires no human annotations.
The team, which also includes study lead author Pete Florence and Professor Russ Tedrake, will present the research next month at the Conference on Robot Learning in Zürich, Switzerland.
Image Credit: Tom Buehler