Tools powerful enough to retrieve information from the Internet in response to open-ended questions are now being developed. These ‘smart’ tools interpret implicit logical connections and draw on responses to questions you have previously asked.
For a long time the Semantic Web has held out the promise of a search engine able to respond to complex questions. However, it is still difficult for systems using artificial intelligence (AI) to understand and respond to the implicit questions or cryptic remarks that crop up in people’s everyday conversations. A search engine can find the birthplace of Abraham Lincoln, and it is quite capable of coming up with a population figure for any city you name, but it cannot, for instance, answer the question: “What is the population of the city where Abraham Lincoln was born?” Whereas a search engine only knows how to obtain answers to those separate, specific questions, the idea behind the Semantic Web is to combine such data. Microsoft and Apple are now close to perfecting tools that have up to now remained largely theoretical. Microsoft’s Bing makes web searches more intuitive, while Siri, acquired by Apple a few years ago, is a ‘smart’ personal assistant that responds to voice commands. These two tools promise the same thing: a more intuitive way of sifting the mass of available information, which Google originally helped to index. Behind both technologies lie the advantages – and the constraints – of AI, which ‘learns’ from people but requires time to get into sync with a given user’s habits, needs and interests.
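The Lincoln example can be made concrete with a toy sketch. All data and function names here are invented for illustration; the point is simply that the Semantic Web idea chains two facts a keyword engine would only answer separately.

```python
# Toy illustration of combining data: answering "What is the population of
# the city where Abraham Lincoln was born?" by chaining two lookups.
# The tables and figures below are illustrative placeholders, not real data.

BIRTHPLACES = {"Abraham Lincoln": "Hodgenville"}   # person -> city of birth
POPULATIONS = {"Hodgenville": 3206}                # city -> population (illustrative)

def birthplace(person: str) -> str:
    return BIRTHPLACES[person]

def population(city: str) -> int:
    return POPULATIONS[city]

# A keyword engine can answer each lookup alone; the 'semantic' step is
# feeding the answer to the first question into the second.
def population_of_birthplace(person: str) -> int:
    return population(birthplace(person))

print(population_of_birthplace("Abraham Lincoln"))
```

The chaining step, trivial here, is exactly what current engines lack: each query normally starts from scratch with no memory of the previous answer.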
Conversations with a smart search engine
In the field of search engines, Google has made its mark largely thanks to its unique indexing system. The idea behind the Mountain View giant’s PageRank algorithm was to rank pages according to how popular they were with users. This was already considered a smart, evolving indexing system, but requests still have to be made one after the other, and each of them starts the search again from scratch. Interaction between the user and the PageRank algorithm has therefore remained very limited. Since then the computational knowledge engine Wolfram Alpha has set itself the task of going further than Google’s one-off approach to search. Wolfram Alpha offers an approach to available information that can sift the actual contents of a web page, i.e. go beyond indexing based on HTML tags and keywords. Meanwhile Bing is now at the stage of treating a request as part of a longer ‘discussion’. This means that Bing has to memorise a user’s questions in order to build up a history of web requests, allowing the search engine to track a given topic across a series of questions without it being mentioned by name each time. So, having unearthed the birthplace of Abraham Lincoln, Bing subsequently knows that you are referring to the same Lincoln when you ask: “When did he become President?”
From Siri to Viv, the future of personal assistants
The creators of Siri, the voice-controlled personal assistant installed on the iPhone since 2011, are now working on a successor of a rather different nature. By focusing on memorising past searches, Viv will be able to supply responses to the most cryptic-seeming remarks. Simply announcing “I’m drunk” will be enough for Viv to understand that your geolocation function needs to be activated, and that your preferred car service should be sent as soon as possible to the address where you are on the point of passing out. This ‘Siri 2.0’ thus combines traditional machine learning with algorithms able to isolate separate semantic chunks and deal with them one by one. So, for example, a car driver on the way to have dinner with friends could eventually ask Viv which wine would go with a given dish. Viv would then isolate each of the elements – locating the car, directing it to the nearest liquor store and meanwhile searching for advice on the right wine to buy. For the moment, intelligent personal assistants on current smartphones – from Siri to Google Now – still cannot provide this type of complex response, but Viv is being developed with a view to closing the gap between the way we currently interact with AI and our normal way of holding a conversation with other people. This work is along the same lines as Microsoft’s efforts to improve the quality of access to information through Bing.
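The chunking idea in the wine example can be sketched as follows. The handlers and their outputs are entirely hypothetical stand-ins; the sketch only shows the structure the article describes: one request decomposed into semantic chunks that are handled one by one, with the result of one chunk feeding the next.

```python
# Hypothetical sketch of intent decomposition: one utterance ("which wine
# goes with this dish?" asked from a moving car) becomes several tasks.
# All handler names and return values are invented for illustration.

def locate_liquor_store(location: str) -> str:
    return f"nearest liquor store to {location}"

def route_car(destination: str) -> str:
    return f"routing car to {destination}"

def wine_advice(dish: str) -> str:
    return f"searching for a wine that pairs with {dish}"

def handle_request(location: str, dish: str) -> list[str]:
    # Chunk 1: find a store near the car's current position.
    store = locate_liquor_store(location)
    # Chunk 2: direct the car there, using chunk 1's result.
    # Chunk 3: meanwhile, look up pairing advice for the dish.
    return [store, route_car(store), wine_advice(dish)]

for step in handle_request("downtown", "roast lamb"):
    print(step)
```

The hard part in a real assistant is the first step the sketch skips: parsing free-form speech into those chunks in the first place.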