Respan, an AI observability platform focused on improving the performance and reliability of AI agents, announced it has raised $5 million in funding. The round included backing from Gradient, Y Combinator, Hat-Trick Capital, XIAOXIAO FUND, Antigravity Capital, Alpen Capital, and several prominent angels and AI founders. The company said it will use the capital to expand hiring and scale its platform.
The company, previously known as Keywords AI, is launching what it describes as the first proactive AI observability platform designed to close the loop between evaluation and production. Its system continuously evaluates AI agent behavior in real time and converts those insights into actionable improvements such as prompt updates, regression checks, and automated alerts when performance declines.
Respan reports that it already supports more than 100 startups and enterprise teams, processing over 1 billion logs and more than 2 trillion tokens per month across 6.5 million end users. The company also noted that it achieved more than 8x year-over-year revenue growth in 2025.
The platform is designed to go beyond traditional observability and evaluation tools such as LangSmith or Braintrust, which primarily provide retrospective insights. Respan instead aims to create a continuous feedback loop that integrates observability, evaluation, decision making, and iteration, enabling teams to actively improve AI agents as they operate.
Respan’s system is built around three core components: logging every agent session in production, evaluating performance against key metrics, and automatically optimizing prompts using live production data. Its platform captures full execution traces, including messages, tool calls, routing decisions, memory, and outcomes, and uses this data to identify failures, diagnose root causes, and recommend improvements.
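As an illustration only, the trace-plus-metrics loop described above might be sketched as follows. The schema and metric here are invented for the example and are not Respan's actual API or data model:

```python
from dataclasses import dataclass

# Hypothetical trace record: field names are illustrative, not Respan's schema.
@dataclass
class AgentTrace:
    session_id: str
    messages: list        # conversation turns captured in production
    tool_calls: list      # tools the agent invoked during the session
    routing_decision: str # which workflow branch handled the request
    outcome: str          # e.g. "success" or "failure"

def failure_rate(traces):
    """Share of sessions that did not succeed -- the kind of key metric
    a continuous evaluation loop would watch and alert on."""
    if not traces:
        return 0.0
    failures = sum(1 for t in traces if t.outcome != "success")
    return failures / len(traces)

# Two captured sessions: one succeeded, one failed.
traces = [
    AgentTrace("s1", ["hi"], [], "default", "success"),
    AgentTrace("s2", ["help"], ["search"], "tools", "failure"),
]
print(failure_rate(traces))  # 0.5
```

In a real platform this metric would feed alerts and regression checks rather than a simple print, but the core idea is the same: every production session becomes a structured record that downstream evaluation can score.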
The company also introduced an automated evaluation agent that triggers assessments when meaningful changes occur, such as updates to prompts, workflows, or models. These evaluations help teams understand agent behavior over time and transition successful capabilities into regression tests while intelligently sampling production traffic for review.
One customer, Apten, reported improved debugging efficiency and faster issue resolution after adopting the platform, highlighting the system’s ability to quickly surface problems in AI agent behavior.
Respan is headquartered in San Francisco and positions itself as a next-generation solution for teams building and scaling AI agents, particularly as complexity and non-determinism increase in production environments.
KEY QUOTES:
“Our evals platform gives developers critical metrics and evaluation scores to see how well their AI agents are performing and to view where agents hallucinate so they can take immediate actions to prevent future failures. We are very excited to officially launch and grateful for the ongoing support of our investors who all bring a deep level of AI experience to the equation.”
Andy Li, Co-founder of Respan
“Respan is addressing that gap that exists between evaluations and optimization. As AI agents scale, it’s not enough to just look at scores; developers need to take action on this insight. Respan built its platform to do this including connections to its own gateway, and we are impressed with how quickly Andy and the team have scaled, working with more than 100 startups and enterprise teams in just a year.”
Denise Teng, Partner at Gradient
“We tried Respan after using another eval platform for over a year. We found Respan to be much faster and more intuitive, and their support is amazing, they listen to feedback and always respond instantly. I used to spend at least an hour trying to reproduce and fix every agent bug. Now, whenever we have a problem with an agent, the first place I check is Respan and I can instantly see what’s going on. Any company that’s building on LLMs and doesn’t want to be in the dark on what agents are doing needs Respan.”
Roshan Kumaraswamy, Co-founder and CTO of Apten