Dartmouth researchers report they have developed the first smartphone application that uses artificial intelligence paired with facial-image processing software to reliably detect the onset of depression before the user even knows something is wrong.
Called MoodCapture, the app uses a phone's front camera to capture a person's facial expressions and surroundings during regular use, then evaluates the images for clinical cues associated with depression. In a study of 177 people diagnosed with major depressive disorder, the app correctly identified early symptoms of depression with 75% accuracy.
These results suggest the technology could be publicly available within the next five years with further development, say the Dartmouth researchers.
The team published its findings on Feb. 27 in advance of presenting the work at the Association for Computing Machinery's CHI 2024 conference in May. Papers presented at CHI are peer-reviewed prior to acceptance and will be published in the conference proceedings.
“This is the first time that natural ‘in-the-wild’ images have been used to predict depression,” says Andrew Campbell, the paper's corresponding author and the Albert Bradley 1915 Third Century Professor of Computer Science. “There's been a movement for digital mental-health technology to ultimately come up with a tool that can predict mood in people diagnosed with major depression in a reliable and nonintrusive way.”
“People use facial recognition software to unlock their phones hundreds of times a day,” says Campbell, whose phone recently showed he had done so more than 800 times in one week.
“MoodCapture uses a similar pipeline of facial-recognition technology with deep learning and AI hardware, so there is terrific potential to scale up this technology without any additional input or burden on the user,” he says. “A person just unlocks their phone and MoodCapture knows their depression dynamics and can suggest they seek help.”
For the study, the application captured 125,000 images of participants over the course of 90 days. People in the study consented to having their photos taken via their phone鈥檚 front camera but did not know when it was happening.
A first group of participants was used to train MoodCapture to recognize depression. They were photographed in random bursts using the phone's front-facing camera as they responded to the statement, “I have felt down, depressed, or hopeless.” The prompt is from the eight-item Patient Health Questionnaire, or PHQ-8, which is used by clinicians to detect and monitor major depression.
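As a concrete anchor for the PHQ-8 mentioned above, here is a minimal scoring sketch. It assumes the standard instrument: eight items, each rated 0–3 for how often a symptom occurred over the past two weeks, summed to a 0–24 total, with a total of 10 or above commonly used as the cutoff for probable major depression. The function names below are ours, not from the study.

```python
def phq8_score(item_ratings):
    """Sum eight item ratings (each 0-3) into a 0-24 total."""
    if len(item_ratings) != 8:
        raise ValueError("PHQ-8 has exactly eight items")
    if any(not 0 <= r <= 3 for r in item_ratings):
        raise ValueError("each item is rated 0-3")
    return sum(item_ratings)

def flags_depression(item_ratings, cutoff=10):
    """True if the total meets the conventional cutoff of 10."""
    return phq8_score(item_ratings) >= cutoff

# Example: mostly "several days" answers, two "more than half the days".
ratings = [1, 1, 2, 1, 1, 2, 1, 1]
print(phq8_score(ratings))        # 10
print(flags_depression(ratings))  # True
```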
The researchers used image-analysis AI on these photos so that MoodCapture's predictive model could learn to correlate self-reports of feeling depressed with specific facial expressions, such as gaze, eye movement, positioning of the head, and muscle rigidity, as well as environmental features such as dominant colors, lighting, photo locations, and the number of people in the image.
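The learning step described here can be sketched in miniature. The code below is a hypothetical stand-in, not the study's deep model: each photo is assumed to be already reduced to a few numeric features (gaze, head pose, scene brightness, people count), paired with a self-report label, and fed to a simple logistic-regression classifier trained by gradient descent on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "photos", each a 4-number feature vector,
# labeled by a hidden rule plus noise (mimicking noisy self-reports).
n = 200
X = rng.normal(size=(n, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Train logistic regression by batch gradient descent on log-loss.
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n           # gradient of log-loss in w
    b -= lr * np.mean(p - y)                # gradient of log-loss in b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The real system replaces this linear model with a deep network over raw images, but the pairing of features with self-reported labels is the same idea.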
The concept is that every time a user unlocks their phone, MoodCapture analyzes a sequence of images in real time. The AI model draws connections between expressions and background details found to be important in predicting the severity of depression. Over time, MoodCapture identifies image features specific to the user. For example, if someone consistently appears with a flat expression in a dimly lit room for an extended period, the AI model might infer that person is experiencing the onset of depression.
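A hedged sketch of that per-unlock flow, with invented class and parameter names: each unlock yields a burst of per-image scores from the model, and an exponential moving average smooths them across unlocks, so a persistent pattern (e.g. flat affect in dim light, day after day) raises a flag while a single gloomy photo does not.

```python
class MoodTracker:
    """Toy per-unlock aggregator; names and thresholds are hypothetical."""

    def __init__(self, alpha=0.1, threshold=0.7):
        self.alpha = alpha          # smoothing weight for each new unlock
        self.threshold = threshold  # illustrative alert level
        self.ema = 0.0              # running depression-risk estimate

    def on_unlock(self, image_scores):
        # image_scores: model outputs in [0, 1] for this burst of photos.
        burst = sum(image_scores) / len(image_scores)
        self.ema = (1 - self.alpha) * self.ema + self.alpha * burst
        return self.ema >= self.threshold

tracker = MoodTracker()
# Thirty unlocks with consistently high per-image scores: no flag at
# first, but the smoothed estimate eventually crosses the threshold.
flagged = [tracker.on_unlock([0.9, 0.85, 0.95]) for _ in range(30)]
print(flagged[0], flagged[-1])  # False True
```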
To test the predictive model, the researchers had a separate group of participants answer the same PHQ-8 question while MoodCapture photographed them. The software analyzed these photos for indicators of depression based on the data collected from the first group. On this second, held-out group, MoodCapture correctly determined whether participants were depressed with 75% accuracy.
“This demonstrates a path toward a powerful tool for evaluating a person's mood in a passive way and using the data as a basis for therapeutic intervention,” says Campbell, noting that an accuracy of 90% would be the threshold of a viable sensor. “My feeling is that technology such as this could be available to the public within five years. We've shown that this is doable.”
MoodCapture meets major depression on the irregular timescale on which it occurs, said Nicholas Jacobson, a study co-author and assistant professor of biomedical data science and psychiatry in Dartmouth's Geisel School of Medicine.
“Many of our therapeutic interventions for depression are centered around longer stretches of time, but these folks experience ebbs and flows in their condition. Traditional assessments miss most of what depression is,” said Jacobson.
“Our goal is to capture the changes in symptoms that people with depression experience in their daily lives,” Jacobson says. “If we can use this to predict and understand the rapid changes in depression symptoms, we can ultimately head them off and treat them. The more in the moment we can be, the less profound the impact of depression will be.”
Jacobson anticipates that technologies such as MoodCapture could help close the significant gap between when people with depression need intervention and the access they actually have to mental health resources. On average, less than 1% of a person's life is spent with a clinician such as a psychiatrist, he says. “The goal of these technologies is to provide more real-time support without adding an additional pressure on the care system,” Jacobson says.
An AI application like MoodCapture would ideally suggest preventive measures such as going outside or checking in with a friend instead of explicitly informing a person they may be entering a state of depression, Jacobson says.
“Telling someone something bad is going on with them has the potential to make things worse,” he says. “We think that MoodCapture opens the door to assessment tools that would help detect depression in the moments before it gets worse. These applications should be paired with interventions that actively try to disrupt depression before it expands and evolves. A little over a decade ago, this type of work would have been unimaginable.”
The study stems from a National Institute of Mental Health grant Jacobson leads that is investigating the use of deep learning and passive data collection to detect depression symptoms in real time. It also builds on earlier research that collected passive and automatic data from the phones of participants at Dartmouth to assess their mental health.
But the advancement of smartphone cameras since then allowed the researchers to clearly capture the kind of passive photos that would be taken during normal phone usage, Campbell says. Campbell is director of emerging technologies and data analytics in the Center for Technology and Behavioral Health, where he leads the team developing mobile sensors that can track metrics such as emotional state and job performance based on passive data.
The new study shows that passive photos are key to successful mobile-based therapeutic tools, Campbell says. They capture mood more accurately and frequently than user-generated selfies and do not deter users by requiring active engagement. “These neutral photos are very much like seeing someone in-the-moment when they're not putting on a veneer, which enhanced the performance of our facial-expression predictive model,” Campbell says.
Subigya Nepal, a PhD candidate in Campbell's research group who, along with Guarini PhD student Arvind Pillai, is co-lead author of the study, says the next steps for MoodCapture include training the AI on a greater diversity of participants, improving its diagnostic ability, and reinforcing privacy measures.
The researchers envision an iteration of MoodCapture for which photos never leave a person's phone, Nepal says. Pictures would instead be processed on a user's device to extract facial expressions associated with depression and convert them into code for the AI model. “Even if the data ever does leave the device, there would be no way to put it back together into an image that identifies the user,” he says.
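The one-way, on-device processing Nepal describes can be made concrete with a toy example. The extractor below is purely illustrative (a real system would compute facial-landmark geometry, not brightness statistics); the point is that many pixels collapse to a few summary numbers, so the original image cannot be reconstructed from what would leave the device.

```python
def extract_features(image_pixels):
    """Reduce an image (nested list of grayscale rows) to summary stats.

    Illustrative only: four numbers come out, however many pixels go in,
    so the mapping is irreversible by construction.
    """
    flat = [p for row in image_pixels for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return [round(mean, 3), round(var, 3), min(flat), max(flat)]

# A 3x3 "photo": nine pixel values collapse to four summary numbers.
photo = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(extract_features(photo))  # [50.0, 666.667, 10, 90]
```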
Meanwhile, the application鈥檚 accuracy could be enhanced on the consumer end if the AI is designed to expand its knowledge based on the facial expressions of the specific person using it, Nepal says.
“You wouldn't need to start from scratch; we know the general model is 75% accurate, so a specific person's data could be used to fine-tune the model. Devices within the next few years should easily be able to handle this,” Nepal says. “We know that facial expressions are indicative of emotional state. Our study is a proof of concept that when it comes to using technology to evaluate mental health, they're one of the most important signals we can get.”
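Nepal's fine-tuning idea can be sketched as follows, with made-up weights and synthetic data: start from a general model's parameters and take a few gradient steps on the individual's own labeled examples rather than retraining from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "general model": a logistic regression with fixed weights
# that roughly, but not exactly, match this particular user.
general_w = np.array([1.0, -1.0, 0.5, 0.5])
general_b = 0.0

# A small amount of user-specific data whose true decision boundary
# differs slightly from the general population's.
user_w_true = np.array([1.5, -0.5, 0.5, 1.0])
X_user = rng.normal(size=(40, 4))
y_user = (X_user @ user_w_true > 0).astype(float)

def accuracy(w, b):
    p = 1.0 / (1.0 + np.exp(-(X_user @ w + b)))
    return np.mean((p > 0.5) == y_user)

before = accuracy(general_w, general_b)

# Fine-tune: a few gradient steps starting from the general weights.
w, b = general_w.copy(), general_b
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_user @ w + b)))
    w -= 0.1 * (X_user.T @ (p - y_user)) / len(y_user)
    b -= 0.1 * np.mean(p - y_user)

after = accuracy(w, b)
print(f"user accuracy before {before:.2f}, after fine-tuning {after:.2f}")
```

Because the starting point is already close, far fewer user examples and gradient steps are needed than training from scratch would require, which is what makes on-device personalization plausible.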