Gentrace – a developer platform for testing and monitoring generative AI applications – announced an $8 million Series A funding round led by Matrix Partners, with participation from Headline and K9 Ventures. The funding round, bringing Gentrace’s total funding to $14 million, comes as the company launches its groundbreaking tool, Experiments, enabling the first truly collaborative LLM product testing environment that extends beyond engineering teams.
With many industry sectors racing to integrate generative AI into their products, development teams face a critical challenge: ensuring AI applications are reliable, safe, and deliver consistent value. The generative AI engineering market is projected to reach tens of billions of dollars over the next few years.
While most generative AI testing tools remain confined to code and engineering teams, Gentrace’s platform uniquely enables product managers, subject matter experts, designers, and quality assurance teams to participate directly in AI evaluation through its comprehensive three-pillar approach.
The platform’s end-to-end solution combines a purpose-built testing environment that models real-world applications, comprehensive analytics to assess AI model performance, and Gentrace’s newly launched Experiments.
Dozens of early adopters, including Webflow, Quizlet, and a Fortune 100 retailer, report significant improvements in their ability to predict and prevent AI-related issues before they impact users. Quizlet has increased its testing volume 40-fold and now iterates and receives testing results in under a minute instead of hours.
The Series A funding will accelerate product development and expand Gentrace’s engineering, product, and go-to-market functions to meet growing enterprise demand for AI development tools. The team also plans to build on its Experiments tool to continue democratizing generative AI testing workflows, with threshold-based experimentation and auto-optimization on the roadmap.
KEY QUOTES:
“Generative AI represents a paradigm shift in software development, but the reality is there’s way too much noise and not enough signal on how to test and build these products easily or correctly. We’re not just creating another dev tool – we’re reimagining how entire organizations can collaborate and build better LLM products.”
- Doug Safreno, co-founder and CEO of Gentrace
“As generative AI reshapes the software landscape, Gentrace is addressing the crucial need for robust, systematic testing. The potential of AI is immense, but only if we can ensure outputs are reliable, safe, and actually useful. We believe Gentrace’s innovative approach will set a new standard for AI quality assurance and are proud to support their mission to make AI applications trustworthy and effective.”
- Kojo Osei, Partner at Matrix
“Gentrace was the right product for us because it allowed us to implement our own custom evaluations, which was crucial for our unique use cases. The ability to easily visualize what was going wrong and dig into the results with different types of views has been invaluable. It’s dramatically improved our ability to predict the impact of even small changes in our LLM implementations.”
- Madeline Gilbert, Staff Machine Learning Engineer at Quizlet
“Every LLM product needs evals. Gentrace makes evals a team sport at Webflow. With support for multimodal outputs and running experiments, Gentrace is an essential part of our AI engineering stack. Gentrace helps us bring product and engineering teams together for last-mile tuning so we can build AI features that delight our users.”
- Bryant Chou, co-founder and chief architect at Webflow
“We are thrilled to continue to back Gentrace as they support a fast-growing list of customers across industries like education, knowledge tools, ecommerce, health, banking, and more, including in the Fortune 100.”
- Jett Fein, Partner at Headline
“Testing LLM products for Fortune 100 companies demands a robust system and coordination across many stakeholders. Gentrace gives us the best of both worlds: it integrates seamlessly with our complex enterprise environments and provides intuitive workflows that many teams can easily adopt.”
- Tim Wee, enterprise AI engineering consultant
“Gentrace allows our ML engineers to work cohesively with other engineering teams, product managers, and education coaches. It really helps us move faster and be more confident in our deployment of models.”
- Anna X. Wang, Head of AI at Multiverse