
Researchers are developing a machine learning model aimed at early detection of Alzheimer’s dementia. The model, potentially accessible via smartphones, focuses on speech patterns rather than content, and could enable earlier treatment to slow the disease’s progression.
The model can distinguish Alzheimer’s patients from healthy controls with 70 to 75 percent accuracy. Alzheimer’s dementia can be challenging to detect in its early stages because the symptoms often start out subtly and can be confused with the memory issues typical of advanced age. But as the researchers note, the earlier potential issues are detected, the sooner patients can begin to take action.
“Before, you’d need lab work and medical imaging to detect brain changes. This takes time, it’s expensive, and nobody gets tested this early on,” says Eleni Stroulia, a professor in the Department of Computing Science who was involved in creating the model.
“If you could use mobile phones to get an early indicator that would inform both the patient and their physician, it could potentially start treatment earlier. We could even start with simple interventions at home, also with mobile devices, to slow the progression down.”
A screening tool would not take the place of health-care professionals. However, in addition to aiding in earlier detection, it would create a convenient way to identify potential concerns via Telehealth for patients who may face geographic or linguistic barriers to accessing services in their area, explains Zehra Shah, a master’s student in the Department of Computing Science and first author of the paper.
“We can think about triaging patients using this sort of technology based entirely on speech alone,” says Shah.
While the research group previously looked at language used by Alzheimer’s dementia patients, for this project they examined language-agnostic acoustic and linguistic speech features rather than specific words.
“The original work involved listening to what the person says, understanding what they say, and its meaning. That’s an easier computational problem to solve,” says Stroulia.
“Now we’re saying, listen to the voice. There are some properties in the way people speak that transcend language.”
“It’s much more powerful than the version of the problem we were solving before,” adds Stroulia.
The researchers started with speech characteristics that doctors noted were common in patients with Alzheimer’s dementia. These patients tended to speak more slowly, with more pauses or disruptions in their speech. They typically used shorter words, and often had reduced intelligibility in their speech.
The researchers found ways to translate these characteristics into measurable speech features the model could screen for.
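To make that concrete, here is a minimal sketch of how such characteristics might be turned into numbers, assuming Python with the librosa audio library and a simple energy-based silence detector. The feature names and the top_db threshold are illustrative assumptions, not the team’s actual pipeline.

```python
# Illustrative sketch: extract language-agnostic timing features from speech.
# Assumes librosa and a mono recording; thresholds and names are hypothetical.
import numpy as np
import librosa

def acoustic_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)           # load audio as mono, 16 kHz
    total = len(y) / sr                            # recording length in seconds

    # Non-silent intervals; the gaps between them count as pauses.
    voiced = librosa.effects.split(y, top_db=30)   # (start, end) sample indices
    speech = sum((e - s) for s, e in voiced) / sr  # seconds of actual speech

    pause_time = total - speech                    # total silence in seconds
    n_pauses = max(len(voiced) - 1, 0)             # gaps between voiced runs

    return {
        "speech_rate": speech / total,               # fraction of time speaking
        "pause_ratio": pause_time / total,           # fraction of time pausing
        "pauses_per_min": 60.0 * n_pauses / total,   # pause frequency
        "mean_segment_s": speech / max(len(voiced), 1),  # avg voiced run length
    }
```

Slower speech, more pauses and more fragmented voiced runs would all show up as shifts in features like these, which is what lets a model screen recordings without understanding a word of them.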
Though the researchers focused on English and Greek speakers, “this technology has the potential to be used across different languages,” says Shah.
And though the model itself is complex, the eventual user experience for a tool that incorporates it couldn’t be simpler.
“A person talks into the tool, it does an analysis and makes a prediction: either yes, the person has Alzheimer’s, or no, they don’t,” says Russ Greiner, a co-author of the paper and professor in the Department of Computing Science. That information can then be brought to a health-care professional to determine the best course of action for the patient.
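As a rough illustration of that yes/no screening step, the sketch below trains a standard logistic regression classifier with scikit-learn on stand-in feature vectors like the ones computed above. The training data and the model choice are assumptions made for illustration; the paper’s actual model and features differ.

```python
# Illustrative sketch: a binary screen over acoustic features.
# The tiny training set here is invented, purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: speech_rate, pause_ratio, pauses_per_min, mean_segment_s
# Labels: 1 = Alzheimer's, 0 = healthy control.
X_train = np.array([[0.55, 0.45, 22.0, 1.1],
                    [0.80, 0.20, 10.0, 2.4],
                    [0.50, 0.50, 25.0, 0.9],
                    [0.78, 0.22, 12.0, 2.1]])
y_train = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)

# Screening a new recording yields a single yes/no answer plus a probability
# the patient can bring to a health-care professional.
x_new = np.array([[0.60, 0.40, 20.0, 1.3]])
print("Flagged for follow-up?", bool(clf.predict(x_new)[0]))
print("Estimated probability:", clf.predict_proba(x_new)[0, 1])
```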
Greiner and Stroulia lead the computational psychiatry research group, whose members have crafted similar AI models and tools to detect psychiatric disorders such as PTSD, schizophrenia, depression and bipolar disorder.
“Anything we can do to amplify the clinical processes, inform treatments and manage diseases sooner with less cost is great,” says Stroulia.