New York Times: Can A.I.-driven voice analysis identify mental disorders?

Medical experts have tools like thermometers and X-rays to easily gauge physical, objective data about a patient's condition. But currently, there are no comparable tools or biomarkers to accurately assess a patient's mental health.

Some researchers are studying whether artificial intelligence can fill that void by assessing mental health based on the sound of a patient's voice.

Maria Espinola, PsyD, assistant professor in the Department of Psychiatry and Behavioral Neuroscience at the University of Cincinnati College of Medicine, told The New York Times that professionals can often detect certain mental health issues by listening both to what a person is saying and how they say it.

"[Depressed patients'] speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often," Espinola said. "[Patients with anxiety] tend to speak faster. They have more difficulty breathing.”

With machine learning algorithms, researchers hope to use these types of vocal cues to predict mental health issues such as depression, anxiety, schizophrenia and post-traumatic stress disorder. Artificial intelligence has the potential to pick up on vocal features that the human ear can't detect.
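To make the idea concrete, here is a minimal, illustrative sketch of how the vocal cues Espinola describes, pitch range, volume and pauses, could be extracted from a recording in Python using the open-source librosa library. This is not the researchers' actual pipeline; the file name "patient_sample.wav", the sampling rate and the silence threshold are assumptions chosen for illustration only.

import numpy as np
import librosa

def extract_vocal_cues(path: str) -> dict:
    """Compute simple stand-ins for pitch range, volume and pause count."""
    # Load the recording (16 kHz is an assumed, common rate for speech).
    y, sr = librosa.load(path, sr=16000)

    # Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced_f0 = f0[~np.isnan(f0)]

    # Root-mean-square energy as a rough proxy for volume.
    rms = librosa.feature.rms(y=y)[0]

    # Non-silent intervals; the gaps between them approximate pauses.
    intervals = librosa.effects.split(y, top_db=30)
    num_pauses = max(len(intervals) - 1, 0)

    return {
        "pitch_range_hz": float(voiced_f0.max() - voiced_f0.min()) if len(voiced_f0) else 0.0,
        "mean_volume_rms": float(rms.mean()),
        "num_pauses": num_pauses,
    }

features = extract_vocal_cues("patient_sample.wav")  # hypothetical recording

Features like these could, in principle, be fed to a classifier; real research systems typically rely on far richer learned representations of the voice.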

As research in this area progresses, experts say that guarding against bias and protecting the privacy of voice data will need to be top of mind.

Read The New York Times article.

Lead photo of Maria Espinola/Colleen Kelley/UC Creative + Brand
