Xiaotian: Designing AI as a Mental Health Infrastructure, Not a Substitute for Care
Xiaotian was created in response to a structural crisis rather than a technical one.
In China, more than 300 million people are estimated to experience mental health challenges, yet access to affordable, professional psychological support remains limited. For adolescents and university students in particular, emotional distress often goes unaddressed—not because help is unnecessary, but because it is unavailable, unaffordable, or difficult to identify amid an uneven counseling landscape. Xiaotian emerged from this gap: a question of how technology might expand access to care without eroding its ethical foundations.
Developed between July 2021 and July 2022 through a collaboration between Westlake University’s AI Lab and Scietrain, Xiaotian explored how generative AI could function as a supporting layer within mental health systems. I led the project’s product research and real-world implementation, working closely with psychologists, AI researchers, and counselors to ensure that technical design remained grounded in clinical reality.
Rather than positioning AI as a replacement for human therapists, Xiaotian was intentionally designed as a dual system. On one side, it serves professional counselors by generating structured response drafts aligned with established therapeutic frameworks, allowing them to focus their attention on complex, high-risk cases. On the other, it provides general users with immediate emotional support—helping them articulate feelings, identify needs, and connect with appropriate professionals when necessary.
The system integrates principles from cognitive behavioral therapy, narrative therapy, and humanistic psychology, supported by long-term dialogue memory and emotion recognition. Xiaotian can sustain conversations over time, recognize recurring emotional patterns, and offer empathetic, non-intrusive responses. Crucially, it includes safety mechanisms such as sensitive-language screening and suicide risk detection, ensuring that cases requiring human intervention are escalated promptly.
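To make the escalation logic concrete, the safety layer described above can be sketched as a tiered screen that runs before any generated reply is sent: low-risk messages receive a normal response, elevated-risk messages are answered alongside professional resources, and critical cases bypass the model entirely and go to a human counselor. This is a minimal illustrative sketch, not Xiaotian's actual implementation; the phrase lists, tier names, and function names are all assumptions, and a production system would rely on a trained classifier with clinician-reviewed lexicons rather than raw keyword matching.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"            # normal empathetic response is fine
    ELEVATED = "elevated"  # respond, but surface professional resources
    CRITICAL = "critical"  # bypass the model; escalate to a counselor


# Hypothetical phrase lists for illustration only.
CRITICAL_PHRASES = {"end my life", "kill myself", "suicide plan"}
ELEVATED_PHRASES = {"hopeless", "self-harm", "can't go on"}


@dataclass
class ScreeningResult:
    level: RiskLevel
    matched: list


def screen_message(text: str) -> ScreeningResult:
    """Classify a user message into a risk tier before replying."""
    lowered = text.lower()
    critical = [p for p in CRITICAL_PHRASES if p in lowered]
    if critical:
        return ScreeningResult(RiskLevel.CRITICAL, critical)
    elevated = [p for p in ELEVATED_PHRASES if p in lowered]
    if elevated:
        return ScreeningResult(RiskLevel.ELEVATED, elevated)
    return ScreeningResult(RiskLevel.LOW, [])


def route(text: str) -> str:
    """Decide how the system responds: model reply or human escalation."""
    result = screen_message(text)
    if result.level is RiskLevel.CRITICAL:
        return "escalate_to_counselor"   # human takes over immediately
    if result.level is RiskLevel.ELEVATED:
        return "respond_with_resources"  # model reply plus referral info
    return "respond_normally"
```

The key design property, consistent with the project's stance, is that escalation is a hard routing decision made before generation, so a high-risk message can never be answered by the model alone.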
During the COVID-19 pandemic, the project's role expanded. As isolation intensified emotional distress among teenagers, Xiaotian opened its API to social institutions and offered pro bono services, providing round-the-clock emotional support during lockdowns. In total, the system supported over 15,000 students, many of whom would otherwise have had no access to psychological assistance during that period.
What made Xiaotian meaningful was not its scale alone, but its stance. Throughout the project, we treated mental health not as a problem to be “optimized,” but as a domain requiring restraint, care, and ethical clarity. AI was positioned as an early listener, a triage layer, and a connector—never as an authority. This design choice reflected a broader belief: mental health systems need infrastructure, not shortcuts.
Looking forward, Xiaotian points toward a future in which AI supports psychological well-being longitudinally—tracking patterns, enabling early intervention, and reducing stigma—while remaining accountable to human judgment. Its long-term potential lies not in automation, but in extending care to places and moments where care is currently absent.
For me, Xiaotian was not just a product, but a lesson. It demonstrated that when AI enters deeply human domains, the most important design decisions are not technical ones, but ethical and structural ones. The question is no longer whether AI can participate in mental health support, but how carefully—and how humbly—we allow it to do so.