Joining the growing list of connected wearables, the EyeRing takes the logic of visually recognising multi-channel information a step further.
The development of the smartphone, followed by the tablet, has demonstrated that the digital advance is based on two main requirements: mobility and intuitiveness. It is no longer enough just to provide efficient working tools: the developer needs to offer a real experience. And although touchscreen devices have almost become de rigueur, the potential for using one's fingers – a prime means of human interaction – for IT purposes still remains underexplored. Now a team of researchers from the Singapore University of Technology and Design and the MIT Media Lab in Cambridge, Massachusetts, USA, starting out from the observation that the act of pointing one's finger comes even before speech as a means of communication, has developed a smart ring. Equipped with a visible light spectrum camera, the EyeRing enables people who are visually impaired or even blind to interact with their environment.
A finger-worn visual assistant
The research team set out to develop a device that allows visually impaired people to obtain an audio description of an object at which they are pointing. Their brainchild, the EyeRing, is an information input device made up of a camera mounted on a ring, an embedded processor and a wireless connection, coupled with a speech recognition service to enable voice commands, plus both speech and screen output functionality. The camera is directly linked to a computational device, typically a smartphone. Various computer vision techniques are then applied to help the user recognise objects or locations based on one or more images taken by the ring camera. The system can in fact name an object in response to a question asked out loud, and will provide such additional services as recognising and warning the user of an obstacle in the vicinity or signalling that s/he has reached a given street address or specific destination. Moreover, the system is designed to build up an image database over time, using images and photographs from the Internet in addition to those already recorded by the camera, which means that it can provide the user with objective recognition of generic objects, supplemented by personalised item recognition. For example, it will differentiate between an item belonging to the user and a generic article of the same type.
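The recognition flow described above – a ring-camera image matched against a database that mixes generic web-sourced images with the user's own, then spoken back as a description – could be sketched roughly as follows. Everything here is an illustrative stand-in: the data structures, the toy feature vectors and the nearest-neighbour matcher are assumptions, not the team's actual computer vision pipeline.

```python
# Illustrative sketch of the EyeRing recognition flow: match a captured
# image's features against a database of labelled images, preferring the
# closest match, and phrase the answer differently for personalised items.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class LabeledImage:
    label: str                     # e.g. "mug"
    owner: Optional[str]           # "user" for personalised items, None for generic
    features: Tuple[float, ...]    # toy feature vector standing in for real image features


# A small database mixing generic (web-sourced) and personalised entries.
DATABASE = [
    LabeledImage("mug", None, (0.9, 0.1, 0.0)),
    LabeledImage("mug", "user", (0.85, 0.15, 0.05)),
    LabeledImage("key", None, (0.1, 0.8, 0.2)),
]


def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))


def recognise(features):
    """Return an audio-style description of the closest database match."""
    best = min(DATABASE, key=lambda img: squared_distance(img.features, features))
    if best.owner == "user":
        return f"This looks like your {best.label}."
    return f"This looks like a {best.label}."


print(recognise((0.86, 0.14, 0.04)))  # closest entry is the user's own mug
```

In this toy version, the personalised answer ("your mug") simply falls out of whichever database entry is nearest; the real system would rely on far richer image features and a speech synthesiser for the output.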
Rich layers of information
Although EyeRing has been developed specifically as an assistant for blind and visually impaired people, the results it has obtained in testing so far could well lead to much wider use. Within the ecosystem of connected objects we can well imagine that a somewhat smaller version of this ring could to some degree replace the smartphone for the purposes of gathering information from multiple channels. Rather than having to pull out your mobile device to scan a QR code, you might use the EyeRing to read the information in a smoother and more intuitive way. It might also provide an easier way to incorporate rich, complex information into documents, for instance 'copy-pasting' in objects from your environment without the overall content of the document becoming too heavy. The research team seems to be ahead of the game here, as they have already incorporated software into the EyeRing system that allows you to project recorded images directly onto a screen and then overlay them with additional information. For example, when you point your finger at a monument, the application would display the history of and key facts about the building on your tablet screen, or alternatively provide an audio description.
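The overlay step at the end – a recognised landmark mapped to the extra information shown on screen or read out loud – amounts to a lookup keyed on the recognition result. The sketch below is purely hypothetical: the labels, the annotation table and the fallback message are placeholders, not part of the EyeRing software.

```python
# Hypothetical sketch of the landmark-overlay step: a label produced by
# the recognition stage is mapped to the text displayed or spoken aloud.
ANNOTATIONS = {
    "clock_tower": "Completed in the 19th century; famous for its clock.",
    "cathedral": "Gothic cathedral; construction began in the Middle Ages.",
}


def annotate(recognised_label):
    """Return overlay text for a recognised landmark, with a safe fallback."""
    return ANNOTATIONS.get(recognised_label, "No information available.")


print(annotate("clock_tower"))
print(annotate("unknown_building"))
```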