Acoustic changes in the speech of people with depression could be used to help monitor mental health via a digital app in the future.

Carol Espy-Wilson, PhD, Department of Electrical & Computer Engineering, University of Maryland

At the 168th meeting of the Acoustical Society of America (ASA), held October 27-31 in Indianapolis, researchers from the University of Maryland presented findings from their work on a system of digital speech analysis that assesses depression through changes in speech patterns.

According to the UM research team, certain vocal features change as patients’ feelings of depression worsen. The research is part of an interdisciplinary initiative at the University of Maryland to engineer patient-focused mental health monitoring systems. The aim is that, rather than relying solely on patients’ self-reports, these systems could monitor both physical and psychological symptoms of mental illness on a regular basis and provide both patients and their mental health providers with feedback about their status.

To conduct a quantitative experiment on the vocal characteristics of depression, acoustician Carol Espy-Wilson, PhD, and her colleagues repurposed a dataset collected in a 2007 study by an unaffiliated lab that was also investigating the relationship between depression and speech patterns. In the earlier study, researchers assessed patients’ depression levels each week using the Hamilton Depression Scale (a standard clinical evaluation tool for measuring the severity of depression) and then recorded them speaking freely about their day.

The University of Maryland researchers used data from six patients who, over the six-week course of the 2007 study, had registered as depressed some weeks and not depressed other weeks. They compared these patients’ Hamilton scores with their speech patterns each week, and found a correlation between depression and certain acoustic properties.
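
As a rough illustration of this kind of week-by-week comparison, the sketch below correlates hypothetical Hamilton scores with a single acoustic feature (speaking rate). The numbers and the choice of a Spearman rank correlation are assumptions made for illustration only, not the researchers’ actual data or analysis.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical weekly values for one patient (illustrative only):
# Hamilton Depression Scale scores and speaking rate in syllables/second.
hamilton_scores = np.array([22, 18, 14, 9, 16, 7])   # higher = more depressed
speaking_rate   = np.array([3.1, 3.4, 3.9, 4.3, 3.6, 4.5])

# Rank correlation between depression severity and speaking rate;
# a negative coefficient would indicate slower speech in more depressed weeks.
rho, p_value = spearmanr(hamilton_scores, speaking_rate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```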

According to the researchers, when patients’ feelings of depression worsened, their speech tended to be breathier and slower. The team also found increases in jitter and shimmer, two measures of acoustic perturbation that capture cycle-to-cycle variation in the frequency and amplitude of the voice, respectively. Speech high in jitter and shimmer tends to sound hoarse or rough.
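
For readers unfamiliar with these measures, the sketch below computes the common “local” definitions of jitter and shimmer from a sequence of pitch periods and cycle peak amplitudes. Extracting those values from audio (for example, with a pitch tracker) is outside the scope of the sketch, and the sample numbers are assumptions for illustration, not data from the study.

```python
import numpy as np

def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, normalized by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Local shimmer: mean absolute difference between consecutive
    cycle peak amplitudes, normalized by the mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Illustrative pitch periods (seconds) and peak amplitudes for a voiced segment.
periods = [0.0100, 0.0102, 0.0099, 0.0103, 0.0101]
amplitudes = [0.80, 0.78, 0.82, 0.77, 0.81]

print(f"jitter:  {local_jitter(periods):.4f}")      # relative period perturbation
print(f"shimmer: {local_shimmer(amplitudes):.4f}")  # relative amplitude perturbation
```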

Researcher Saurabh Sahu, Department of Electrical & Computer Engineering, University of Maryland

The researchers propose that a phone app could eventually be developed to use this information to analyze patients’ speech, identify acoustic signatures of depression and provide feedback and support. Dr Espy-Wilson hopes the interactive technology will appeal to teens and young adults, a group that is particularly vulnerable to mental health problems.

“Their emotions are all over the place during this time, and that’s when they’re really at risk for depression. We have to reach out and figure out a way to help kids in that stage,” she said.

Dr Espy-Wilson explained that sometimes, patients might not recognize or be willing to admit that they are depressed. By receiving regular feedback based on acoustical and other measurements, they might learn to self-monitor their mental states and recognize when they should seek help. The technology could also promote communication between therapists and patients, allowing for continuous, responsive care in addition to regular in-person appointments.

A webcast of the study presentation will be archived for one year, and readers interested in more information may contact: [email protected].

Presentation #5aSC12, “Effects of depression on speech,” by Saurabh Sahu and Carol Espy-Wilson. The abstract can be found by searching for the presentation number on the ASA 2014 meeting site.

Source: The Acoustical Society of America (ASA)