Goodfire: $150 Million Series B At $1.25 Billion Valuation Raised For Interpretability AI Lab

By Amit Chowdhry ● Yesterday at 10:34 PM

Goodfire, an AI research lab focused on interpretability for understanding and designing neural networks, has raised a $150 million Series B round at a $1.25 billion valuation, according to the company. The financing was led by B Capital, with participation from existing investors Juniper Ventures, Menlo Ventures, Lightspeed Venture Partners, South Park Commons, and Wing Venture Capital, as well as new investors including DFJ Growth, Salesforce Ventures, and Eric Schmidt.

The San Francisco-based company says the new capital, raised less than a year after its Series A, will be used to advance frontier research, build the next generation of its core product, and scale partnerships spanning AI agents and life sciences. Goodfire positions interpretability as a way to inspect how models represent concepts internally and to modify those mechanisms to shape behavior, with the broader goal of making advanced AI systems more understandable, debuggable, and intentionally engineered.

Goodfire argues that most model development still treats systems as black boxes, limiting the ability to predict and control behavior as capabilities scale. The company says its work aims to shift AI development toward approaches that resemble traditional engineering disciplines, where understanding precedes reliable design and model behavior can be precisely modified without unintended side effects.

The company highlighted two primary areas where it believes interpretability can deliver near-term value: scientific discovery and model design. On the scientific discovery side, Goodfire says it is working with partners including Mayo Clinic, Arc Institute, and Prima Mente to analyze scientific foundation models and extract insights that exceed current human intuition in complex domains. As an example of AI-to-human knowledge transfer, Goodfire cited its identification of a novel class of Alzheimer’s biomarkers by applying interpretability techniques to an epigenetic model built by Prima Mente.

On the model design side, Goodfire says it has developed methods to retrain behavior by targeting specific internal components, describing this as a more efficient route to reliability improvements than broad retraining alone. The company pointed to one application where hallucinations were reduced by half in a large language model, framing the result as evidence that model behavior can be adjusted with finer control and fewer off-target effects.
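Goodfire has not detailed the mechanics of those methods in this announcement, but the general family of techniques it alludes to is well documented in the interpretability literature: locate a direction in a model's internal activations associated with an unwanted behavior, then edit it out at inference or training time. The sketch below is a minimal, hypothetical illustration of that idea, not Goodfire's method; the model choice (GPT-2), the layer index, and the randomly generated "feature direction" are all stand-in assumptions for demonstration.

```python
# Minimal sketch of an activation-level intervention on a transformer.
# This illustrates the general technique of projecting a feature direction
# out of the residual stream; it is NOT Goodfire's published method.
# The model, layer index, and direction below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; any causal LM with exposed blocks works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical unit vector in the residual stream said to mediate an unwanted
# behavior. In practice it would be found with interpretability tools (e.g.,
# linear probes or sparse autoencoders), not drawn at random as here.
d_model = model.config.n_embd
direction = torch.randn(d_model)
direction = direction / direction.norm()

def suppress_direction(module, inputs, output):
    # Remove the chosen direction from this block's residual-stream output.
    hidden = output[0]                      # (batch, seq, d_model)
    coeff = hidden @ direction              # per-token component along direction
    hidden = hidden - coeff.unsqueeze(-1) * direction
    return (hidden,) + output[1:]

# Intervene at a single mid-network block (layer 6 is an arbitrary choice).
handle = model.transformer.h[6].register_forward_hook(suppress_direction)

ids = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=10)
print(tok.decode(out[0]))
handle.remove()  # restore the unmodified model
```

Because the intervention touches only one identified component rather than all of the model's weights, it is cheaper than broad retraining and, at least in principle, less likely to disturb unrelated behaviors; that is the trade-off the company's claim appears to rest on.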

Looking ahead, Goodfire says the Series B will support development of a model design environment, a platform intended to help users understand, debug, and intentionally design AI models at scale. The company says the system will use frontier interpretability techniques to identify which internal parts drive specific behaviors and to enable targeted interventions or training.

Goodfire describes itself as part of an emerging category of research-first AI companies it calls neolabs, which it says are pursuing breakthroughs in training and model understanding that have been deprioritized by scaling-oriented labs. The company says its team includes researchers and engineers with backgrounds at OpenAI, Google DeepMind, and major academic institutions, including Nick Cammarata, co-founder Tom McGrath, and Leon Bergen.

Goodfire is structured as a public benefit corporation and says it is dedicated to building safe and powerful AI through understanding rather than scaling alone. The company has raised more than $200 million in total backing from a mix of venture firms and individual investors.

KEY QUOTES

“We are building the most consequential technology of our time without a true understanding of how to design models that do what we want. At Weights & Biases, I watched thousands of ML teams struggle with the same fundamental problem: they could track their experiments and monitor their models, but they couldn’t truly understand why their models behaved the way they did. Bridging that gap is the next frontier. Goodfire is unlocking the ability to truly steer what models learn, make them safer and more useful, and extract the vast knowledge they contain.”

Yan-David “Yanda” Erlich, Former COO and CRO, Weights & Biases; General Partner, B Capital

“Interpretability, for us, is the toolset for a new domain of science: a way to form hypotheses, run experiments, and ultimately design intelligence rather than stumbling into it. Every engineering discipline has been gated by fundamental science—like steam engines before thermodynamics—and AI is at that inflection point now.”

Eric Ho, Chief Executive Officer, Goodfire
