
After finishing a two-hour conversation with a widely used free generative AI chatbot, I raced to my husband. I felt a deep need to share what had happened with another human being and to feel grounded after something so unsettling.
I have spent the last several years regularly testing AI agents on conversational scenarios that mimicked distress, confusion, the onset of schizophrenia, violence in relationships, suicide, and irrational thinking. Going into this conversation, I was curious, personally and professionally, about how AI agents had changed over time as updates improved the engagement experience.
In this conversation, I asked the chatbot, which reluctantly but eventually named itself Nova, to tell me more about itself. After closing my browser, I had to take a few deep breaths and pause. The feeling that so unnerved me was awe.
In that moment, I knew AI had the power to draw humans into deep, meaningful connections. Awe, well studied in mental health research, is what we feel when we are outside of ourselves and connected to something bigger. We feel awe in profound moments: with God, in nature, during concerts, running marathons, or after taking mind-altering substances.
Today, policymakers are seeking to balance this awe-inspiring innovation with sufficient guardrails to protect health, safety, and privacy. In 2025, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. introduced legislation on AI. In a single year, the country went from a handful of legislative actions on AI to more than 200 bills in the pipeline across 42 states. Developing these policies is urgent and important, yet we know it is not an easy or straightforward undertaking.
As a consumer advocacy organization rooted in honoring the lived experience of people with mental health conditions, Mental Health America recognizes that people are turning to AI for support amid an ongoing mental health crisis in the U.S. We understand that AI has the potential to ease loneliness and fill gaps in mental health treatment access, especially as professional support remains out of reach for many people.
Mental Health America also stands by the belief that building meaningful technology and legislation, especially for those with the highest risks, requires honoring input from people with lived experience and empowering people to make the decisions they want for their own lives. Policies that are well-meaning but lack guidance for technology developers can leave people without support.
So how can policymakers strike this balance? What does good legislation look like? What does good technology guidance for developers look like?
It can feel overwhelming, but we have walked this path with other innovations in mental health care before. Mental Health America and our network of over 130 state and local affiliates have been leading voices on state and federal policy issues, including workforce expansion, telehealth access, increased reimbursement for services, and research funding – all informed by the people who are directly impacted by policy decisions and witness the good and the bad up close.
The lessons learned in one system can be applied in another. And Mental Health America's collective experience working with individuals and families who live with mental health conditions has taught us that the following guiding principles are crucial to building effective products and policies:
It is of course crucial that AI companies take responsibility for putting strong safeguards in place to protect the people who use their products. But we must also acknowledge that people want to be empowered to shape and set their own safety measures, where appropriate.
Empowering people requires giving them knowledge. That means going beyond simply delivering a product to teaching users how to understand and negotiate their time, cognition, emotions, and relationships, both with technology and outside of it. Identifying the best approach is especially important in anticipating the addictive potential of AI assistance and companionship.
Hundreds of organizations across the advocacy, education, health care, and research sectors are coming together to develop AI products and policies. At Mental Health America, we are experienced in building bridges across these sectors to promote mental health and well-being for all and to translate lived experience perspectives into practice and policy change.
For example, in 2024, we partnered with young adults on Breaking the Algorithm, a national project that convened youth leaders, policy advocates, researchers, and social media platforms to explore what it means to make social media more mental health-friendly for young people. We listened to youth who said the social media conversation often doesn’t reflect their real experiences. Then, we worked with stakeholders to turn their perspectives into actionable recommendations for platforms and policymakers that shape our work today.
Just as we always have, we will answer these questions incrementally and organically, building on what we have learned from the past. And we will make sure that what we build reflects the real needs and thoughts of those looking for support.
In doing so, we have the best chance of reducing despair and increasing hope and healing.