
America has long faced a mental health crisis, one that was greatly exacerbated by COVID-19. Underlying indicators of depression increased four-fold during the pandemic, affecting nearly one-third of all Americans. Overdose deaths reached crisis levels, while more than 49,000 Americans died by suicide in 2022, an all-time high.
It doesn’t have to be that way. Timely, evidence-based mental health care has been proven to help people get and stay well. But under our current health care system, depression and other mental illnesses are neither detected nor treated until an average of eight to 10 years after symptoms first emerge. Consequently, many individuals receive their first diagnosis of mental illness in an emergency room or jail cell.
That represents a massive, missed opportunity. A large part of the problem is that roughly half of Americans live in areas with mental health professional shortages. More than 60% of rural counties have no psychiatrists, and roughly half have no psychologists.
With such limited options, it’s no surprise that Americans are increasingly turning on their own to A.I. for help. OpenAI reports that of its more than 800 million regular users, one in four submits a prompt about health care every week. According to a study from the RAND Corporation, one in eight U.S. adolescents and young adults uses A.I. chatbots for mental health advice.
All of us have read the disturbing headlines about the dangers of A.I. companies failing to take the mental health needs of their users seriously. Those failures can and must be addressed.
But even as we address those failings, I’m more excited than I have ever been in my 25-year career about the ways A.I. can positively disrupt, and already is disrupting, the mental health field.
As I recently told the American Academy of Arts and Sciences, A.I. encompasses more than chatbots, and it’s important to draw a distinction between general-purpose A.I. chatbots and purpose-built therapeutic tools. While A.I. should not be used as a substitute for therapy, it can and should be leveraged to augment care and extend our workforce.
The truth is that the mental health workforce is overwhelmed by paperwork and administrative tasks. A.I. tools from makers such as Eleos Health, SupaNote, and Quill Therapy Notes are already being used to transcribe and summarize session notes and automate intake data, including symptoms, history, and screening results, greatly relieving provider workload and allowing practitioners to spend more time with patients.
A.I. tools can assist with other essential administrative tasks, such as obtaining records and facilitating care coordination through record sharing, referrals, patient registration, billing and scheduling.
Beyond helping to relieve administrative burden, A.I. tools can help improve outcomes through specific feedback and coaching, and realistic, adaptive, patient-simulated training. The U.S. Department of Veterans Affairs (VA) is using A.I. to ensure that staff members who answer the Veterans Crisis Line are ready to effectively respond to the many scenarios they face daily.
The VA partnered with technology firm ReflexAI to create eight simulated A.I. “personas” designed to reflect the diverse demographics and experiences of veterans who call in for help. As trainees communicate with each persona, they learn how to engage with veterans of different backgrounds, provide crisis intervention, and connect veterans with culturally competent care.
Another bright spot is A.I.’s ability to support clinical decision-making.
A.I. tools can stratify patients by risk level to prioritize the urgency and level of care needed. That could revolutionize the efficiency and effectiveness of our workforce by ensuring they are focusing on the highest and best use of their skills.
For example, NeuroFlow’s suite of services leverages large volumes of aggregated historical data from electronic medical records, claims, and pharmaceutical sources, alongside self-reported patient data, to help health systems identify which patients are likely at the highest risk of suicide and need immediate care.
A.I. can also facilitate measurement and outcomes monitoring – long staples of physical health care – to enable improvements in quality of care. For example, when a patient opts in, A.I. can analyze data from smartphones or wearables to screen and monitor for mental health issues, response to treatment, and relapse. By measuring progress, clinicians can truly understand how a patient is responding to treatment.
The possibilities for the future of research are also tantalizing. For example, A.I. can enhance computing power to analyze large volumes of data, such as images of the brain, to better understand potential causes and impacts of mental health conditions and predict responses to treatment.
We are living in a brave new world in which A.I. is rapidly reshaping health care, and it offers an unprecedented opportunity to transform mental health services. First and foremost, we must protect the health and safety of our children and communities with strong safeguards for the use of A.I. to support mental health care. But we cannot allow those valid concerns to blind us to the value that A.I. can and must provide. The risks of inaction for the 43 million Americans experiencing a mental health condition each year are simply too great.
Like any technology, A.I. for mental health will require thoughtful implementation, regulation, and human oversight, but A.I. is a part of the prescription for fixing a system that has failed us and our loved ones for far too long. We must harness its promise.
Kacie Kelly is the chief innovation officer of the Meadows Mental Health Policy Institute.