Researchers at MIT have developed a new optical sensor that gives robots unprecedented precision in manipulating objects.
Robots that can grasp fixed objects have been around for a long time, but research teams from the Massachusetts Institute of Technology (MIT) and Northeastern University in the United States set themselves the challenge of taking robot manipulation to a new level by enabling robots to handle items whose positions vary. Their work, based on a new type of optical sensor plus a dedicated algorithm and control system, has resulted in a robot that can plug a USB cable – which is not rigid but can twist and turn in the hand – into a socket, a task requiring millimetre precision. In their demonstration video, the researchers show the enormous improvement their technique delivers by pitting it against a robot equipped with what has hitherto been state-of-the-art technology, which proved very clumsy by comparison. This unprecedented degree of precision opens up exciting potential for harnessing robots to perform more complex physical tasks.
Robotic touch based on GelSight technology
The starting point for this ground-breaking work was the GelSight technology developed a few years ago at MIT, which enables extremely precise 3D imaging. This time around, the MIT researchers worked with a computer science team at Northeastern University to adapt the technology for use with robots, inter alia reducing the size of the sensor so that it can be embedded in the palm of a robotic hand. Whereas most tactile sensors gauge forces mechanically, for instance using barometric principles, GelSight is based on optics and computer-vision algorithms, and its visual imaging process enables far greater precision in the robot’s ability to manipulate objects. Edward Adelson, a Professor of Vision Science at MIT’s Department of Brain and Cognitive Sciences and one of the original GelSight developers, explains that as it is easier to work with visual input for this kind of activity, “the most sensible thing was to figure out how to transform a tactile mechanical signal into a visual signal.” The technique uses a gel mounted in a cubic plastic housing, with just the paint-covered face exposed. The four walls of the cube adjacent to the sensor face are translucent, and each conducts a different colour of light – red, green, blue, or white – generated by light-emitting diodes at the opposite end of the cube; a mini-camera records how the painted surface deforms when pressed against an object, and the coloured shading reveals the surface’s shape. The researchers reckon that the new GelSight sensor is one hundred times more sensitive than a human finger, which should allow robots to react much more accurately to their surroundings in future. The new sensor – whose main components are the diodes and a mini-camera – also costs less than the original GelSight sensor.
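The idea of turning shading under known coloured lights back into surface shape is the classic photometric-stereo technique. The sketch below illustrates the principle under simplifying assumptions – three hypothetical LED directions, a Lambertian reflectance model, and a synthetic image – and is not the researchers' actual algorithm:

```python
# A minimal photometric-stereo sketch of the idea behind GelSight-style
# sensing: each colour channel records shading from one light direction,
# and per-pixel surface normals are recovered by solving a 3x3 system.
# The light directions below are hypothetical, for illustration only.
import numpy as np

# Hypothetical unit directions of the red, green and blue LEDs.
L = np.array([
    [ 1.0,  0.0, 1.0],   # red light
    [ 0.0,  1.0, 1.0],   # green light
    [-1.0, -1.0, 1.0],   # blue light
])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

def normals_from_shading(rgb):
    """Recover per-pixel surface normals from an H x W x 3 shading image.

    Under a Lambertian model each channel's intensity is the dot product
    of the surface normal with that light's direction: I = L @ n, so n is
    found by solving the 3x3 linear system for every pixel at once.
    """
    h, w, _ = rgb.shape
    n = np.linalg.solve(L, rgb.reshape(-1, 3).T).T
    # Normalise to unit length (small epsilon guards against zero pixels).
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    return n.reshape(h, w, 3)

# Synthetic check: a flat patch facing the camera (normal = [0, 0, 1])
# shades each channel with that light's z-component, so the recovered
# normals should all point straight up.
flat = np.tile(L[:, 2], (4, 4, 1))
normals = normals_from_shading(flat)
```

In a real sensor the recovered normal field would then be integrated into a height map of the gel's deformation; the point of the sketch is simply how a mechanical contact becomes a visual signal that standard linear algebra can invert.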
What further applications for the new sensor?
The question now is in how many fields, robotics aside, this scientific breakthrough can usefully be applied. The relative compactness and low cost of the gel-based sensor look certain to appeal to manufacturers, and although no one has yet suggested exactly how the sensor could be used, aeronautics, medicine and any industrial process requiring precision tools are obvious candidates for the new technology. Meanwhile, engineers at Harvard have already succeeded in improving robots’ object recognition using new types of sensors, and piezoelectricity is now also being used in robotics. If the new GelSight sensor is to make its mark alongside these other ground-breaking technologies, its inventors will need to convince potential partners of its precise benefits.