Wysa, launched in 2016, has provided mental health support to seven million people across 100 countries. That’s more patients than most therapists will see in multiple lifetimes. AI mental health tools — from therapy chatbots to automated diagnosis systems — are proliferating rapidly. According to Grand View Research, the market will reach $5.08 billion by 2030, up from $1.13 billion in 2023.
The rapid growth has sparked concerns. Researchers at Stanford University recently warned that the technology could have “dangerous consequences” for public health because of underlying “biases and failures.” The question isn’t whether AI can help with mental health — it’s where to draw the line between innovation and risk.
Coding empathy
“We built Wysa because traditional systems simply weren’t scaling, and technology can break down barriers,” said Jo Aggarwal, founder of Wysa. The company's AI-powered chatbot uses conversational prompts to help users identify mental health issues and develop coping strategies. Wysa reported a 31% reduction in depression and anxiety symptoms in a pilot study.
In India, Rocket Health is disrupting mental health care with an AI-powered journaling app. People can talk through their worries and concerns with the bot using their voice, and the app analyzes those entries to track mood and suggest coping strategies.
Describing the app’s user reception since launch as “very strong,” Rocket Health CEO and founder Abhineet Kumar said people value how “empathetic” the app is, and that they turn to it in situations where they don’t want to speak to another person or to a psychologist.
Another mental health startup, Kintsugi, has developed AI software that identifies common symptoms of depression and anxiety in patient voice clips.
Grace Chang, CEO and founder at Kintsugi, said voice analysis is a more effective diagnostic method than patient questionnaires. “Decades of scientific literature show that the voice can reveal signs of a number of health conditions — from depression to Alzheimer’s — often long before other signs become apparent,” she said.
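Kintsugi has not detailed its models publicly, but the general shape of voice-based screening, extracting acoustic features from a short clip and scoring them with a classifier, can be illustrated with a rough sketch. The feature choice (MFCCs), the librosa and scikit-learn stack, and the synthetic training data below are illustrative assumptions, not Kintsugi's actual pipeline.

```python
# Illustrative sketch only: a toy voice-screening pipeline, not Kintsugi's method.
# Assumes librosa and scikit-learn are installed; the "clips" here are synthetic noise.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(audio: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Summarize a voice clip as mean MFCCs, a common acoustic feature."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for labeled clips (1 = flagged for follow-up, 0 = not).
rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000 * 5).astype(np.float32) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([clip_features(c) for c in clips])
model = LogisticRegression(max_iter=1000).fit(X, labels)

new_clip = rng.standard_normal(16000 * 5).astype(np.float32)
score = model.predict_proba(clip_features(new_clip).reshape(1, -1))[0, 1]
print(f"screening score (not a diagnosis): {score:.2f}")
```

A score like this would at most flag someone for clinical follow-up; it is screening support, not a diagnosis.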
Kyan Health, based in Switzerland, believes AI can help prevent many causes of mental illness. Its AI-powered mental health platform uses data and analytics to help employers spot early signs of declining mental health among employees. Individual staff can also use it as a space to discuss things they may be struggling with.
“We wanted to treat mental health as health, focusing on prevention,” said Vlad Gheorghiu, CEO at Kyan Health. “AI is now the enabler that allows prevention at scale, analyzing data early, predicting risks, and guiding people to the right care path before issues escalate.”
Natural limits of artificial intelligence
But building AI systems that can safely operate at this scale presents formidable technical and ethical challenges. Aggarwal of Wysa describes it as a “technological mountain” due to the requirement for “natural language understanding” combined with the need to ensure “clinical safety” and “ethical use.” She said that “there were skeptics who didn’t believe in the hybrid model or doubted the evidence base from our AI.”
Aggarwal and her team overcame these obstacles through “persistent research.” The company has published more than 36 peer-reviewed studies. That scientific rigor and early engagement with regulators paid off. In 2022, Wysa received FDA Breakthrough Device Designation after a clinical trial found it as effective as in-person counseling for chronic pain and associated mental health conditions.
Yet Wysa is explicit about where its AI should — and shouldn't — be used. The tool is “not recommended for use in crisis situations, nor is it suitable for use by those with severe and enduring mental health problems,” according to the company.
Other developers face similar challenges in defining AI's appropriate scope. For Rocket Health, Kumar said the most pressing issue was selecting a large language model empathetic enough to provide mental health support.
Chang at Kintsugi said developing an effective AI vocal screening tool that could understand individual vocal differences amid background sounds was technically difficult. “We engineered models robust enough for real-world conditions — busy call centers, telehealth visits, or mobile apps — while maintaining strict privacy and bias safeguards,” she explained.
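Speech systems are typically hardened against that kind of noise by augmenting training data, mixing background sound into clean recordings at controlled signal-to-noise ratios. The snippet below sketches that general technique; it is an assumption about common practice, not a description of Kintsugi's engineering.

```python
# Common data-augmentation idea for noise robustness: mix background noise into a
# clean clip at a target SNR so models see call-center-like conditions in training.
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has roughly `snr_db` dB signal-to-noise ratio."""
    noise = noise[: len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    return clean + noise * np.sqrt(target_noise_power / noise_power)

rng = np.random.default_rng(1)
clean_clip = rng.standard_normal(16000 * 3)    # stand-in for a clean voice clip
office_noise = rng.standard_normal(16000 * 3)  # stand-in for background noise
noisy_clip = mix_at_snr(clean_clip, office_noise, snr_db=10.0)
```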
Tackling algorithmic bias — when AI outputs reflect sexism or other forms of prejudice — is another major challenge. Fraser Carey, a member of the British Psychological Society, said the best defense is ensuring the datasets used to train AI algorithms are “diverse” and “transparent.”
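One concrete way teams act on that advice is to audit model performance across demographic groups rather than only in aggregate. The sketch below shows the idea with scikit-learn; the groups, labels, and predictions are hypothetical.

```python
# A general bias check, not a quote from Carey: compare a screening model's recall
# across demographic groups. Large gaps flag a skewed dataset or model.
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical ground-truth labels, model predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: recall = {recall_score(y_true[mask], y_pred[mask]):.2f}")
```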
Privacy protections are equally critical. Jake Moore, global cybersecurity advisor at antivirus maker ESET, urges mental health technology companies to protect user data using robust encryption and to ensure it can only be accessed by those who genuinely need it for their roles.
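In code, those two recommendations often translate to encrypting records at rest and gating decryption behind a role check. The sketch below uses Python's cryptography library to illustrate the idea; the key handling and the role policy are simplified assumptions, not any vendor's actual setup.

```python
# Hedged sketch of Moore's advice: encrypt user records and limit who can decrypt them.
# The Fernet recipe and the role list are illustrative choices only.
from cryptography.fernet import Fernet

ALLOWED_ROLES = {"treating_clinician"}  # hypothetical least-privilege policy

key = Fernet.generate_key()             # in practice, held in a key-management service
fernet = Fernet(key)

record = "2024-05-01 journal entry: feeling anxious before work".encode("utf-8")
ciphertext = fernet.encrypt(record)     # stored in place of the plaintext

def read_record(token: bytes, role: str) -> str:
    """Decrypt a record only for roles that genuinely need it."""
    if role not in ALLOWED_ROLES:
        raise PermissionError("role is not authorized to view this record")
    return fernet.decrypt(token).decode("utf-8")

print(read_record(ciphertext, role="treating_clinician"))
```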
A new clinical partnership
In the coming years, Aggarwal expects AI mental health services to become less of a “standalone solution” focused on driving user engagement and more of a “personalized” intervention with greater consistency. For this to happen, she said clinicians will work “side-by-side” with AI systems to “deliver scalable, actionable care.”
Carey said that future innovations in this industry will center around “human-AI collaboration” and offer “better emotional responsiveness.” As this technology becomes more mainstream, Carey predicted “tighter integration with health care systems” too.
Overdependence on the technology is a concern, though. Kumar said people will use AI “more proactively” to recover from mental illness. But Carey warned that overdependence could make mental health care less, not more, personalized. “As I always say when speaking about this topic: AI is best used as a complement, it is not a solution,” Carey said.
Expressing similar thoughts, Chang said that AI “can’t and won’t replace human clinicians” in the future. Instead, she said the technology would “empower care teams to offer truly holistic care to their patients.”





