LiveKit Raises $100 Million Series C At $1 Billion Valuation To Advance Voice AI Infrastructure Push

By Amit Chowdhry ● Yesterday at 11:08 PM

LiveKit announced it has raised a $100 million Series C round led by Index Ventures, valuing the company at $1 billion, as it doubles down on building infrastructure for real-time, voice-native applications. Alongside Index Ventures, the round included participation from Salesforce Ventures, Hanabi Capital, and existing backers Altimeter and Redpoint Ventures.

In its announcement, LiveKit cited voice as the next major shift in how people interact with software, arguing that recent advances have moved voice AI beyond a single flagship use case and into a fast-expanding set of applications across industries. The company said enterprises are increasingly evaluating voice agents to automate workflows, improve customer experiences, and create new revenue streams, and that 2026 will be a breakout year for broader deployment.

LiveKit positioned the funding as support for building what it called a new kind of application stack—one designed for long-lived, stateful, and latency-sensitive conversations. The company contrasted these systems with traditional web apps built on stateless HTTP requests, saying voice agents require software to continuously listen, reason, and respond while maintaining context throughout a session, which changes how developers build, test, deploy, and monitor production systems.
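The contrast between a stateless request handler and a long-lived, context-holding voice session can be sketched in a few lines of Python. This is purely illustrative and is not LiveKit's API; the class and method names are hypothetical.

```python
class VoiceSession:
    """Illustrative stateful session: unlike a stateless HTTP handler,
    it keeps conversation context alive for the whole call."""

    def __init__(self):
        self.history = []  # context persists across turns

    def handle_turn(self, user_utterance: str) -> str:
        self.history.append(("user", user_utterance))
        # A real agent would run STT -> LLM -> TTS here; we echo for brevity.
        reply = f"You said: {user_utterance} (turn {len(self.history)})"
        self.history.append(("agent", reply))
        return reply


session = VoiceSession()
session.handle_turn("hello")
session.handle_turn("book a table for two")
```

A stateless endpoint would rebuild `history` from scratch on every request; here the session object carries it for the life of the conversation, which is the property that changes how such systems are tested, deployed, and monitored.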

To address those differences, LiveKit outlined a full development lifecycle to guide agents from design through production operations. On the build side, it highlighted client SDKs for multiple platforms and its LiveKit Agents framework, which it described as providing orchestration control, broad model integrations, and built-in support for conversational dynamics such as turn detection and interruptions. LiveKit also pointed to a newer Agent Builder product designed to help teams start from templates, tune prompts, and share workflows without starting from scratch in code.
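One conversational dynamic the framework is said to handle, interruption (or "barge-in"), can be sketched as follows. This is a toy illustration, not LiveKit's API; all names here are hypothetical.

```python
class Playback:
    """Minimal stand-in for agent speech playback."""
    def __init__(self):
        self.playing = False
    def start(self):
        self.playing = True
    def stop(self):
        self.playing = False


def on_user_speech_started(playback: Playback, pending_turns: list, audio: bytes):
    """If the user starts speaking while the agent is talking,
    cut playback and queue the new audio as the next turn."""
    if playback.playing:
        playback.stop()          # interrupt the agent mid-utterance
    pending_turns.append(audio)  # the user's speech becomes the next turn


pb = Playback()
pb.start()                       # agent is speaking
turns = []
on_user_speech_started(pb, turns, b"wait, actually...")
```

The point of a framework providing this built in is that developers do not have to wire voice-activity detection to playback control themselves for every agent.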

For testing and evaluation, LiveKit emphasized the challenge of validating non-deterministic AI behavior, describing a shift from simple assertions to statistical evaluation methods. The company said LiveKit Agents supports writing unit tests and integrating traces with OpenTelemetry, and it noted partnerships with Bluejay, Hamming, and Roark to support simulation-style testing that can run thousands of conversations across variations in prompts, languages, and voice attributes.
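The shift from simple assertions to statistical evaluation can be illustrated with a small harness. The simulation below is a stand-in (a real harness, such as those the named partners provide, would drive an actual agent); the functions and the 90% success model are assumptions for the sketch.

```python
import random


def run_simulated_conversation(seed: int) -> bool:
    """Stand-in for one simulated call. We model a non-deterministic
    agent that succeeds roughly 90% of the time."""
    rng = random.Random(seed)
    return rng.random() < 0.9


def pass_rate(n: int) -> float:
    """Run n simulated conversations and return the fraction that pass."""
    results = [run_simulated_conversation(seed) for seed in range(n)]
    return sum(results) / n


# Statistical evaluation: assert a rate over many runs,
# not a single deterministic outcome.
rate = pass_rate(1000)
```

Instead of `assert reply == expected`, the test suite asserts something like `rate > 0.85`, which tolerates individual non-deterministic failures while still catching regressions.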

In deployment and runtime operations, LiveKit highlighted the unique load and reliability challenges of live conversations, where sessions can run for unpredictable durations and demand can spike unexpectedly. It cited “serverless agents” as a step toward more turnkey deployment. It also described building a global network of data centers optimized for routing real-time voice and video traffic, and said the network handles billions of calls per year across web and mobile apps as well as phone calls.

The company also described efforts to reduce latency for phone-based voice agents by linking directly into the public switched telephone network through partnerships with telephony carriers. Beyond transport, LiveKit said end-to-end voice agent performance depends on reliably chaining multiple models—speech-to-text, turn/interruption detection, LLM inference, and text-to-speech—and that regional distance and provider outages or backlogs can create delays that degrade user experience.
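The chained-model pipeline described above, and why per-stage latency matters, can be sketched like this. The model functions are fake stand-ins (hypothetical names), but the structure mirrors the chain the article lists: speech-to-text, turn detection, LLM inference, text-to-speech.

```python
import time


def timed(stage_name, fn, *args):
    """Run one pipeline stage and record how long it took."""
    start = time.perf_counter()
    out = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return out, (stage_name, elapsed_ms)


# Stand-ins for the real models in the chain.
def stt(audio): return "turn the lights off"
def detect_turn_end(text): return True
def llm(text): return "Okay, turning the lights off."
def tts(text): return b"\x00" * 160  # fake audio bytes


def respond(audio: bytes):
    """Chain the four stages and total their latencies."""
    timings = []
    text, t = timed("stt", stt, audio); timings.append(t)
    done, t = timed("turn", detect_turn_end, text); timings.append(t)
    reply, t = timed("llm", llm, text); timings.append(t)
    speech, t = timed("tts", tts, reply); timings.append(t)
    total_ms = sum(ms for _, ms in timings)
    return speech, timings, total_ms


speech, timings, total_ms = respond(b"")
```

Because the stages run in sequence, a slow or backlogged provider at any one stage adds directly to the end-to-end response time the user hears, which is why the article treats regional distance and provider outages as first-order problems.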

To address those orchestration issues, LiveKit highlighted LiveKit Inference, describing it as an approach to routing inference traffic between model providers using the same real-time monitoring and routing concepts it applies to communications. The company said it has also begun hosting models across its own data centers so inference can be colocated with agents deployed on LiveKit Cloud.
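The routing idea reduces to steering traffic toward healthy, low-latency providers. The sketch below is an assumption-laden simplification (the provider table and field names are invented); real routing would use live health checks and latency measurements, as the article describes.

```python
# Hypothetical provider table; values would come from live monitoring.
providers = {
    "provider-a": {"healthy": True, "p50_latency_ms": 420},
    "provider-b": {"healthy": True, "p50_latency_ms": 310},
    "provider-c": {"healthy": False, "p50_latency_ms": 120},  # outage
}


def route(providers: dict) -> str:
    """Send traffic to the healthy provider with the lowest latency."""
    healthy = {k: v for k, v in providers.items() if v["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy inference providers")
    return min(healthy, key=lambda k: healthy[k]["p50_latency_ms"])
```

Note that the nominally fastest provider is skipped because it is unhealthy; the router degrades to the best available option rather than failing the session.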

For production monitoring, LiveKit pointed to its Agent Observability capabilities, arguing that voice agents need tooling that can answer operational questions such as call answer time, what the agent heard, whether a user attempted to reach a human, how latency changed across a session, and whether the agent invoked the right tools. It described using session replays, traces, time-aligned transcripts, and logs to close the loop between production behavior and iteration in development.
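Two of those operational questions, how quickly the call was answered and whether the user asked for a human, can be answered from a time-aligned event log. The event schema below is hypothetical; LiveKit's actual observability tooling uses session replays, traces, and transcripts as described above.

```python
# Hypothetical session event log with timestamps in seconds.
events = [
    {"t": 0.0, "type": "call_started"},
    {"t": 1.2, "type": "agent_answered"},
    {"t": 3.0, "type": "user_said", "text": "I want a human"},
    {"t": 3.8, "type": "agent_replied"},
]


def answer_time(events) -> float:
    """Seconds from call start to the agent answering."""
    start = next(e["t"] for e in events if e["type"] == "call_started")
    answered = next(e["t"] for e in events if e["type"] == "agent_answered")
    return answered - start


def asked_for_human(events) -> bool:
    """Did the user try to reach a human at any point in the session?"""
    return any("human" in e.get("text", "").lower()
               for e in events if e["type"] == "user_said")
```

With time-aligned events, per-session questions like these become simple queries rather than manual call reviews.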

LiveKit said its roadmap anticipates an emerging ecosystem in which voice-first experiences appear first in channels where speech is already the default interface—phone calls, cars, and smart speakers—and then spread more broadly as models improve at turn-taking, tool use, and reliability. With the Series C, the company said it intends to move faster toward its goal of making it as easy to build and scale voice AI as it is to build and scale web applications.
