d-Matrix, a leader in high-efficiency generative AI computing for data centers, recently announced that it has closed a $110 million Series B funding round led by Temasek. The funding will enable d-Matrix to begin commercializing Corsair, the world's first Digital In-Memory Compute (DIMC), chiplet-based inference compute platform, following the successful launches of its earlier Nighthawk, Jayhawk I, and Jayhawk II chiplets.
d-Matrix's recent Jayhawk II silicon announcement is the latest example of how the company is working to change the physics of the memory-bound compute workloads typical of generative AI and large language model (LLM) applications. With the explosion of this revolutionary technology over the past nine months, there has never been a greater need to move past the memory bottleneck and the current technology approaches that limit performance and drive up AI computing costs.
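To make the memory-bottleneck claim concrete, here is a rough back-of-envelope sketch. Every figure in it (model size, memory bandwidth, peak compute) is an illustrative assumption, not a d-Matrix or vendor number: generating each token of an LLM response requires streaming the full set of model weights from memory, so token latency is set by bandwidth long before the arithmetic units are saturated.

```python
# Back-of-envelope sketch: why LLM token generation is memory-bandwidth-bound.
# All numbers below are illustrative assumptions, not d-Matrix or vendor figures.

params = 70e9          # assumed model size: 70B parameters
bytes_per_param = 2    # fp16/bf16 weights
weight_bytes = params * bytes_per_param

mem_bw = 2.0e12        # assumed accelerator memory bandwidth: 2 TB/s
peak_flops = 300e12    # assumed peak compute throughput: 300 TFLOP/s

# Generating one token (batch size 1) reads every weight once and performs
# roughly 2 FLOPs per parameter (one multiply plus one accumulate).
time_memory = weight_bytes / mem_bw        # time to stream the weights
time_compute = (2 * params) / peak_flops   # time for the arithmetic

print(f"memory-limited time per token:  {time_memory * 1e3:.1f} ms")
print(f"compute-limited time per token: {time_compute * 1e3:.2f} ms")
```

Under these assumptions the memory-limited time per token (about 70 ms) is roughly two orders of magnitude larger than the compute-limited time (about 0.5 ms), which is exactly the gap that memory-centric architectures such as DIMC aim to close.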
The company has architected an elegant DIMC engine and chiplet-based solution that enables inference at a lower total cost of ownership (TCO) than GPU-based alternatives. This new chiplet-based DIMC platform, coming to market in 2024, will redefine the category and further position d-Matrix as the frontrunner in efficient AI inference.
Founded in 2019, d-Matrix set out to solve the memory-compute integration problem, the final frontier in AI compute efficiency. The company has invested in groundbreaking chiplet and digital in-memory computing technologies with the goal of bringing a high-performance, cost-effective inference solution to market in 2024.
Since its founding, d-Matrix has grown substantially in headcount and office space. The company is based in Santa Clara, California, with offices in Bengaluru, India, and Sydney, Australia. With this Series B funding, d-Matrix plans to invest in recruiting and in commercializing its product to satisfy the immediate customer need for lower-cost, more efficient compute infrastructure for generative AI inference.
KEY QUOTES:
“The current trajectory of AI compute is unsustainable as the TCO to run AI inference is escalating rapidly. The team at d-Matrix is changing the cost economics of deploying AI inference with a compute solution purpose-built for LLMs, and this round of funding validates our position in the industry.”
— Sid Sheth, co-founder and CEO at d-Matrix
“d-Matrix is the company that will make generative AI commercially viable. To achieve this ambitious goal, d-Matrix produced an innovative dataflow architecture, assembled into chiplets, connected with a high-speed interface, and driven by an enterprise-class scalable software stack. Playground couldn’t be more excited and proud to back Sid and the d-Matrix team as it fulfills the demand from eager customers in desperate need of improved economics.”
— Sasha Ostojic, Partner at Playground Global
“We’re entering the production phase when LLM inference TCO becomes a critical factor in how much, where, and when enterprises use advanced AI in their services and applications. d-Matrix has been following a plan that will enable industry-leading TCO for a variety of potential model service scenarios using a flexible, resilient chiplet architecture based on a memory-centric approach.”
— Michael Stewart from M12, Microsoft’s Venture Fund