At the EmTech Digital event hosted by MIT Technology Review in San Francisco in late March, Gary Marcus, Professor of Cognitive Science at New York University and founder of Geometric Intelligence, a startup acquired late last year by Uber, ended a decidedly downbeat presentation on the current state of Artificial Intelligence on a slightly more enthusiastic note. Referring to cancer research, Professor Marcus pointed out: “We’re still doing chemotherapy, a technique that’s fifty years old. A hundred years from now, they’ll probably look back and think our methods are as primitive as the blood-letting used a hundred years earlier.” Underlining that “we have thousands of genes floating around inside us, millions of proteins that react with each other – that level of complexity is too hard for our brains to compute,” he held out the hope that the risk factors indicating the likelihood of such a disease could be uncovered by an algorithm. And although “the machines we have today are not able to understand all the intricacies of what’s going on and really reason about it (…) if there is to be a ‘killer app’ for AI, it’s probably that – improving understanding of medical problems and helping to provide better treatment,” he suggested. And the NYU professor saw one field in particular where Artificial Intelligence is already starting to prove genuinely useful to the medical profession – medical imaging.

Kimberly Powell at the EmTech Digital event

Very much of the same opinion was Kimberly Powell, Senior Director for Business Development at Nvidia, who also spoke at the EmTech Digital event. The company, originally a specialist in graphics processing units, designs computer hardware used for ‘deep learning’ techniques. This branch of Artificial Intelligence uses neural networks, broadly inspired by the way the human brain works, to enable a software tool to improve its own performance by training on large volumes of data. Kimberly Powell works closely with medical researchers in a range of fields who are looking to harness deep learning in their work. Like Gary Marcus, she sees a rich vein of opportunity for AI in medical imaging, for better disease detection, diagnosis and choice of treatment.
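To make the idea concrete, here is a minimal sketch of that ‘learning from data’ loop – a toy neural network, written in PyTorch on synthetic data. It is an illustration of the technique, not Nvidia’s or any medical team’s actual code, and the network size and data are invented for the example:

```python
# A minimal 'learning from data' loop: a tiny neural network improves
# by repeatedly adjusting its weights to reduce its error on examples.
# Synthetic data stands in for the large training sets described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1,000 synthetic examples: 16 input features, one binary label.
X = torch.randn(1000, 16)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # a toy rule to learn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),  # hidden layer, loosely brain-inspired
    nn.Linear(32, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()  # measure how each weight contributed to the error
    opt.step()       # nudge the weights to reduce it

acc = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {acc.item():.2%}")  # improves purely from data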

From detection to diagnosis

Kimberly Powell outlined how deep learning is opening up new prospects for detecting illnesses such as cancers, pointing out that: “Screening and detection is the first stage, and we’re all familiar with the idea that the sooner you detect cancer, the better the outcome is for the patient.” Artificial Intelligence is now able to help clinicians detect anomalies on patients’ x-ray images. She described how a research team at the National Institutes of Health (NIH) had used Nvidia’s CUDA programming model and Graphics Processing Units (GPUs) to train their neural network on a large database of x-rays, underlining that “it has rapidly become able to detect disease inside the images just as good as clinicians can.” The algorithm is also capable of creating captions for each image with contextual data showing the exact location of the anomaly, its characteristics and the disease that might be indicated. As radiologists often have to examine hundreds of x-ray images every day and make a decision in just a few seconds, this kind of contextual aid, plus a second opinion provided by the machine, could help to speed up the process and considerably reduce the risk of errors.
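The NIH team’s model and data are not reproduced here, but the general recipe Ms Powell describes – a convolutional network trained on labelled images, with the heavy arithmetic pushed onto a GPU – can be sketched as follows. The tiny network and random tensors are stand-in assumptions, not the actual NIH system:

```python
# Hedged sketch of GPU-accelerated image classification: a small
# convolutional network trained on stand-in 'x-ray' tensors, running
# on a CUDA GPU when one is available, otherwise on the CPU.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in data: 64 grayscale 128x128 'x-rays' with binary disease labels.
images = torch.randn(64, 1, 128, 128, device=device)
labels = torch.randint(0, 2, (64, 1), device=device).float()

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),  # disease present / absent
).to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(50):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

print(f"device: {device}, final loss: {loss.item():.3f}")
```

A production system would add the localisation and captioning stages the article mentions on top of a backbone like this; they are omitted here for brevity.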

Deep learning to save time?

Following detection, we move to the diagnostic phase. “In screening, you use such things as x-rays and ultrasound, whereas in diagnostics you’re relying on more advanced techniques such as CT and MRI, which come with very rich datasets,” said Ms Powell, explaining: “Modern-day MRIs can create 4-D images, that is 3-D volume plus some kind of time component – for instance with the heart you’re looking at the blood flow. So these images become more and more complex for the radiologist to read and to quantify. It takes a trained radiologist upwards of 20 minutes to (…) make an accurate quantification of the problem.” Ms Powell referred to a startup called Arcturus, which is applying deep learning to automate quantification of blood flow. This will save time and “enable the clinician to spend time with the patient learning about his or her lifestyle and the world around them and determine the proper course of treatment,” underlined the Nvidia Business Development Director.
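In its simplest form, the quantification step looks like this: given a 4-D velocity dataset and a segmentation of the vessel (which, in the deep-learning systems she describes, would come from a trained network rather than be hand-drawn), flow per time frame is velocity integrated over the vessel cross-section. The array layout and numbers below are illustrative assumptions, not the startup’s method:

```python
# Toy quantification of blood flow from a 4-D (time, z, y, x) dataset.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-D MRI: 20 time frames over a 16x64x64 volume,
# values = through-plane blood velocity in cm/s.
velocity = rng.normal(30, 5, size=(20, 16, 64, 64))

vessel_mask = np.zeros((16, 64, 64), dtype=bool)
vessel_mask[8, 28:36, 28:36] = True   # stand-in segmented vessel
voxel_area_cm2 = 0.01                 # in-plane voxel area (1 mm^2)

# Flow (ml/s) per frame = sum of velocity * area over vessel voxels.
flow = (velocity * vessel_mask).sum(axis=(1, 2, 3)) * voxel_area_cm2

print(f"mean flow: {flow.mean():.1f} ml/s, peak: {flow.max():.1f} ml/s")
```

The 20-minute manual task is essentially drawing that mask on every frame; automating the segmentation is where the deep learning comes in.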

Google Research’s use of Deep Learning

Martin Stumpe, a technical research lead at Google, outlined the work being done at the company to apply Artificial Intelligence to medical diagnosis. He explained that in the field of cancer, “biopsy is the key step in the diagnosis and in determining the treatment. So you would hope that this process is very accurate… but it’s not. One out of twelve breast cancer biopsies in the United States today is misdiagnosed. For prostate cancer, the rate is one out of seven misdiagnosed.” So why is it so difficult? One reason is that “doctors look for tiny things in very large images: it’s a bit like looking for a needle in a haystack. Secondly, a tumour can often look very similar to benign tissue, so it’s very hard to distinguish.” Like the National Institutes of Health team, the Google researchers have now trained an image recognition algorithm to spot incidences of cancer, with highly promising results. The team claim that their algorithm, when applied to breast cancer, is able to detect 92.4% of all tumours, versus a 73.2% detection rate among clinicians. Martin Stumpe also presented a similar technique applied to detecting diabetic retinopathy. This condition is also frequently misdiagnosed, with tragic consequences – blindness usually follows if the condition remains undiagnosed. Here again, the Google algorithm has obtained promising results: 90% accuracy, versus a 60% rate among human ophthalmologists. The technique is currently being tested out at a number of hospitals in India, where there is a serious shortage of trained ophthalmologists.
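The headline figures above are sensitivity rates – the share of true tumours that get flagged. A hedged sketch of how such a rate is computed is below; the arrays are invented for illustration, and real studies use far larger datasets and more elaborate evaluation protocols:

```python
# Sensitivity = fraction of actual positives that were flagged.
import numpy as np

def sensitivity(truth: np.ndarray, predicted: np.ndarray) -> float:
    """Share of true positives that the reader (human or model) caught."""
    positives = truth == 1
    return (predicted[positives] == 1).mean()

truth      = np.array([1, 1, 1, 1, 0, 0, 1, 0, 1, 1])  # ground truth
model_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1])  # algorithm calls
human_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 0])  # clinician calls

print(f"model:  {sensitivity(truth, model_pred):.1%}")   # 85.7% here
print(f"humans: {sensitivity(truth, human_pred):.1%}")   # 57.1% here
```

Note that sensitivity alone says nothing about false alarms, which is why studies like Google’s also report false-positive rates alongside it.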

Deep Learning: from detection to diagnosis


A broad impact on healthcare

1 in 12 breast cancer biopsies is misdiagnosed

Kimberly Powell argues that eventually Artificial Intelligence might even enable the medical profession to do away with biopsies altogether. “When dealing with brain cancer today, once you’ve detected that it’s there and quantified it, there’s always a biopsy to determine the right treatment – whether that’s chemotherapy, radiotherapy or removal of the tumour. For that you need to know about its genetic makeup, the genomic characteristics, how aggressive it is.” She drew attention to the work of Professor Bradley Erickson at the Mayo Clinic, who is using Artificial Intelligence to extract this kind of information from an MRI image. His algorithm “has been able to detect certain texture on the tumour, which can then be correlated with specific biomarkers that will then determine the appropriate course of action.” Professor Erickson is “using Artificial Intelligence ‘to see the unseen’,” she told the audience.
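As a rough illustration of the idea – texture statistics computed inside a tumour mask, then correlated with a biomarker across patients – here is a deliberately simplified sketch. The features and synthetic data are assumptions for the example, not Professor Erickson’s pipeline, which uses far richer image descriptors:

```python
# Toy radiogenomics: texture inside a tumour mask vs. a biomarker.
import numpy as np

rng = np.random.default_rng(1)

def texture_features(image: np.ndarray, mask: np.ndarray) -> dict:
    region = image[mask]
    # Mean absolute vertical gradient inside the mask: crude 'roughness'.
    grad = np.abs(np.diff(image, axis=0))[mask[1:, :]]
    return {"mean": region.mean(), "std": region.std(),
            "roughness": grad.mean()}

# 30 hypothetical patients: MRI slice, tumour mask, biomarker status.
roughness, biomarker = [], []
for _ in range(30):
    status = rng.integers(0, 2)  # 0/1 biomarker, drives texture here
    img = rng.normal(100, 10 + 15 * status, size=(64, 64))
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 20:40] = True
    roughness.append(texture_features(img, mask)["roughness"])
    biomarker.append(status)

r = np.corrcoef(roughness, biomarker)[0, 1]
print(f"texture/biomarker correlation: r = {r:.2f}")
```

If a texture feature correlates strongly enough with a genomic marker, the marker can be inferred from the scan – which is precisely what would make the biopsy unnecessary.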

However, the potential of AI goes far beyond this. Ms Powell pointed to electronic health records, “something we all have, but this data has been locked up, inaccessible for far too long.” Now, however, researchers at Mount Sinai have created an application called Deep Patient (an approach sketched below), designed to predict the diseases patients might develop, based on the data in their electronic health records. Another promising area is genomics. This is not an area where human researchers are likely to make much headway on their own because “the data is too complex, there are so many variations they make your head explode (…) but genomics will probably be one of the keys that unlocks the dream of precision medicine,” she argued. In this field, a startup called Deep Genomics is now using machine learning to make predictions based on genetic variations. Kimberly Powell also looked forward to AI being used to make progress in ‘under-served’ countries. Her example here was Butterfly Network, which is creating a new ultrasound device designed to help democratise medical imaging in less well-off countries. Meanwhile, Gary Marcus shared his dream that one day computers will be built that are sufficiently intelligent to read masses of medical literature and come up with new ideas for treatment. “We need to build machines capable of going beyond memorised cases, that have the ability to reason and make inferences.” The NYU professor invited the EmTech Digital audience to imagine a machine “able to read the latest 20 scientific publications on, say, bladder cancer, absorb the data and figure out a new innovative technique for it.”
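For readers curious what a Deep Patient-style representation looks like in code, here is a loose sketch of the published idea – a denoising autoencoder that compresses raw health-record features into a compact patient vector, which can then feed a disease-risk model. The synthetic records and layer sizes are assumptions; this is an illustration of the technique, not Mount Sinai’s implementation:

```python
# Denoising autoencoder over stand-in electronic health records.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in EHR data: 500 patients x 100 binary features
# (diagnosis codes, medications, lab flags...).
records = (torch.rand(500, 100) < 0.1).float()

encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 100))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()],
                       lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    # Randomly zero out ~20% of inputs, then learn to reconstruct the
    # originals: the 32-dim code must capture the record's structure.
    noisy = records * (torch.rand_like(records) > 0.2).float()
    opt.zero_grad()
    loss = loss_fn(decoder(encoder(noisy)), records)
    loss.backward()
    opt.step()

# Compact 'deep patient' vectors, ready for a downstream risk predictor.
patient_vectors = encoder(records).detach()
print(patient_vectors.shape)  # torch.Size([500, 32])
```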

By Guillaume Renouard