SambaNova has introduced its SN50 AI chip, which the company says delivers up to 5x the speed of competing chips, and has also announced a planned multi-year collaboration with Intel and more than $350 million in strategic Series E financing.
Positioned as an infrastructure platform built for agentic AI, the SN50 is designed to deliver up to 3x lower total cost of ownership, enabling large-scale inference deployments and production-grade autonomous AI agents. The company said the chip will begin shipping to customers later this year.
To support the scaling and distribution of SN50, SambaNova is collaborating with Intel and has secured new capital to expand manufacturing capacity and cloud infrastructure. The funding round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital and a mix of new and existing investors.
According to SambaNova, the SN50 delivers five times more compute per accelerator and four times more network bandwidth than its prior generation. The chip links up to 256 accelerators over a multi-terabyte-per-second interconnect, reducing time-to-first-token and supporting larger batch sizes. The company said this architecture allows enterprises to deploy larger, longer-context AI models with higher throughput and lower latency while maintaining cost efficiency.
The SN50 is built on SambaNova’s Reconfigurable Data Unit architecture and includes a three-tier memory system that the company says enables support for models with more than 10 trillion parameters and context lengths exceeding 10 million tokens. Additional features include agentic caching, multi-model resident memory, and higher hardware utilization to reduce cost-per-token for large-scale deployments.
SoftBank Corp. will be the first customer to deploy SN50 within its next-generation AI data centers in Japan. The deployment is expected to power low-latency inference services for sovereign and enterprise customers across Asia-Pacific, supporting both open-source and proprietary frontier models. SoftBank already hosts SambaCloud in the region, and the SN50 deployment expands that relationship as part of broader sovereign AI initiatives.
SambaNova and Intel also outlined plans for a multi-year strategic collaboration to deliver high-performance, cost-efficient AI inference solutions for enterprises, model providers, and government organizations. As part of the arrangement, Intel plans to make a strategic investment in SambaNova to accelerate development of an Intel-powered AI cloud.
The collaboration is expected to focus on three core areas: expanding SambaNova’s vertically integrated AI cloud built on Intel Xeon-based infrastructure; integrating SambaNova systems with Intel CPUs, accelerators, and networking technologies for production-ready inference; and joint go-to-market efforts across Intel’s enterprise and cloud partner ecosystem. The companies said the partnership is designed to offer an alternative to GPU-centric AI infrastructure and target a multi-billion-dollar inference market opportunity.
SambaNova said the new capital will be used to expand SN50 production, scale SambaCloud, and deepen enterprise software integrations. The company, founded in 2017 and headquartered in San Jose, provides chips, systems, and cloud services designed to deliver high-performance AI inference for enterprises, AI labs, NeoCloud providers, and sovereign AI programs worldwide.
KEY QUOTES
“AI is no longer a contest to build the biggest model. With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
Rodrigo Liang, Co-Founder and CEO, SambaNova
“Customers are asking for more choice and more efficient ways to scale AI. By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”
Kevork Kechichian, EVP and General Manager, Data Center Group, Intel
“AI is moving from a software story to an infrastructure story. SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”
Landon Downs, Co-Founder and Managing Partner, Cambium Capital
“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game.”
Peter Rutten, Research Vice President, Performance Intensive Computing, IDC
“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank. By standardizing on SN50, we gain the ability to deliver world-class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control.”
Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division, Technology Unit, SoftBank Corp.
“We’re proud to be investing in SambaNova at such a pivotal time in the company’s growth. SN50 is engineered for agentic AI systems that orchestrate multiple models and process requests in near real-time, and more efficiently than traditional GPU-centric systems.”
Monti Saroya, Partner, Vista Equity Partners