The MIT approach is based on monitoring radio signals reflected off patients’ bodies – which encode both their breathing and heart rates – and applying a machine-learning algorithm to the data. This enables the researchers to distinguish the different stages of sleep: light sleep, deep sleep and Rapid Eye Movement (REM) sleep. The researchers presented their paper, entitled ‘Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture’, at the International Conference on Machine Learning held in Sydney from 6 to 11 August.
In hospitals today, the traditional way of examining sleep patterns is to use electroencephalography, which involves placing electrodes on the surface of the scalp. However, a patient may well find this method disconcerting, which may in turn lead to skewed results.
The new approach, developed in collaboration with Massachusetts General Hospital, is far less intrusive: it requires no physical sensors on the patient, not even a connected wristband.
The algorithms developed by MIT enable the researchers to analyse the raw data collected via the radio waves. To date, the technology has proved roughly 80% accurate at identifying sleep stages. Moreover, in addition to monitoring chronic sleep disorders – which currently affect at least 40 million Americans – it could also be used to help people suffering from depression, Alzheimer’s disease and Parkinson’s disease.
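To give a flavour of the idea, the sketch below labels 30-second sleep epochs from two vital signs of the kind a radio sensor can recover: heart rate and breathing-interval variability. It is a toy heuristic for illustration only – the thresholds and feature names are invented, and the actual MIT system uses a conditional adversarial neural network trained on raw radio-signal spectrograms, not hand-written rules.

```python
def classify_stage(heart_rate_bpm: float, breath_var_s: float) -> str:
    """Toy heuristic (NOT the MIT model): guess the sleep stage of a
    30-second epoch from heart rate (beats per minute) and the
    variability of breathing intervals (seconds). Thresholds are
    illustrative assumptions, not values from the paper.
    """
    # Deep sleep: slow heart rate and very regular breathing.
    if heart_rate_bpm < 55 and breath_var_s < 0.2:
        return "deep"
    # REM sleep: elevated heart rate and irregular breathing.
    if heart_rate_bpm > 65 and breath_var_s > 0.5:
        return "REM"
    # Everything else: light sleep.
    return "light"


# Three example epochs: (heart rate, breathing variability).
epochs = [(52, 0.1), (70, 0.7), (60, 0.3)]
print([classify_stage(hr, bv) for hr, bv in epochs])
# ['deep', 'REM', 'light']
```

In the real system, learning these decision boundaries from data – while an adversarial component strips out person-specific and environment-specific signal characteristics – is precisely what makes the approach generalise across patients.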