OpenAI has announced a new funding initiative that will award up to $2 million to support independent research at the intersection of artificial intelligence and mental health, marking one of the organization's most significant external safety investments to date. The program, which is accepting applications through December 19, 2025, seeks to generate new insights, resources, and evaluation tools that can strengthen both OpenAI's internal safety work and the broader field.
The announcement comes as AI systems are used in increasingly personal and emotionally sensitive areas of people's lives. OpenAI noted that it continues to improve how its models recognize and respond to signs of emotional and mental distress, working closely with experts to refine model behavior and publicly share performance updates. Yet the company emphasized that mental health–related AI safety remains an early-stage research domain that requires broader participation beyond its internal teams.
With this program, OpenAI aims to catalyze foundational work exploring the risks, benefits, cultural nuances, clinical perspectives, and lived experiences that shape people's mental health–related interactions with AI systems. The grants prioritize interdisciplinary proposals that combine technical research with mental health expertise or the insights of individuals who have first-hand experience navigating mental health challenges.
Selected projects will be expected to produce clear deliverables, such as datasets, evaluation frameworks, rubrics, synthesized qualitative perspectives, cultural analyses of symptom expression, or studies of linguistic patterns that AI systems may overlook. The company will review proposals on a rolling basis and notify selected applicants by January 15, 2026.
OpenAI highlighted prior internal research, including studies on affective use and emotional well-being and the HealthBench evaluation suite, and said it intends to deepen its own understanding while also expanding support for independent researchers who can help advance the field. It added that continued external research is essential to improving collective knowledge around AI and mental health and, ultimately, to ensuring that AGI develops in ways that benefit everyone.
The request for proposals outlines numerous potential areas of exploration, including cultural variability in how distress is expressed, perspectives from individuals with lived experience, how clinicians use AI tools today, and the ability of AI to encourage positive behavior and reduce harm. Other suggested topics include the robustness of safeguards across languages, age-appropriate interaction guidelines for adolescents, how stigma presents in AI outputs, multimodal research on body dysmorphia and eating disorders, and approaches for AI systems to more compassionately support people experiencing grief.
Applications can be submitted online, and researchers are encouraged to propose innovative, interdisciplinary, and culturally grounded studies that can improve safety, well-being, and trust in AI systems.