Tavus announced that it has raised $40 million in Series B funding to advance the future of human computing. The round was led by CRV, with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. This new capital will accelerate the company’s development of PALs—emotionally intelligent, multimodal AI humans capable of seeing, hearing, understanding, and acting like people.
The San Francisco-based company is pioneering what it calls “human computing,” combining perception, memory, and emotion in artificial intelligence to create natural human-like interactions. With this funding, Tavus plans to scale its suite of proprietary foundational models that power PALs, enabling AI that can engage through voice, text, and face-to-face presence.
For decades, human-computer interaction has been limited to command-line and graphical interfaces. Tavus believes this paradigm is outdated and is now pursuing the creation of computers that can genuinely understand and interact with humans on an emotional and contextual level. The company aims to make conversations with AI feel as natural as talking to another person.
PALs—short for Personal Affective Links—are designed to communicate the way people do. They recognize facial expressions, gestures, and emotions in real time. They can recall prior interactions, infer intent from subtle cues, and transition smoothly between text, video, and voice interfaces. Unlike typical virtual assistants, PALs can take initiative, acting on behalf of users to send emails, manage calendars, or complete follow-up tasks autonomously.
Several in-house foundational models power each PAL. Phoenix-4 drives lifelike rendering, enabling nuanced expression and precise control over head pose. Sparrow-1 focuses on audio comprehension and conversational timing through deep emotional and semantic understanding. Raven-1 provides contextual perception, interpreting the environment, people, and emotions in real time. Together with Tavus’s orchestration and memory management systems, these technologies form the core of AI entities that can learn, adapt, and act with human-like depth.
Tavus’s leadership envisions this as a turning point in the evolution of AI—one where technology no longer forces humans to adapt to its logic, but instead meets users on more intuitive, human terms. The company’s research team includes experts in rendering, perception, and affective computing from leading universities and AI labs, among them Professor Ioannis Patras and Dr. Maja Pantic. Over 100,000 developers and enterprises already use Tavus for applications in recruitment, sales, education, and customer service.
KEY QUOTE:
“We’ve spent decades forcing humans to learn to speak the language of machines. With PALs, we’re finally teaching machines to think like humans—to see, hear, respond, and look like we do. To understand emotion, context, and all the messy, beautiful stuff that makes us who we are. It’s not about more intelligent AI, it’s about AI that actually meets you where you are.”
Hassaan Raza, CEO of Tavus

