Multiverse Computing: Interview With Co-Founder & CTO Sam Mugel About The AI Model Compression Company

By Amit Chowdhry

Multiverse Computing is an AI model provider and leader in quantum-inspired AI compression, significantly reducing the size and energy consumption of models for applications across various industries. Pulse 2.0 interviewed Multiverse Computing co-founder and CTO Sam Mugel to learn more.

Sam Mugel’s Background

Could you tell me more about your background? Mugel said:

“I’m the co-founder and CTO of Multiverse Computing. My background is in physics – I have a joint PhD from The Institute of Photonic Sciences and the University of Southampton, with expertise in quantum computing and quantum machine learning. Prior to Multiverse Computing, I was a computational physicist at Cortirio, helping build low-cost, portable brain imaging tools. I also previously worked as Technical Director for The Quantum Revolution Fund and founder / CTO of Groundstate Consulting. I am an Advisor at the McKinsey Tech Council, Mentor at Creative Destruction Lab, and I sit on the Board of French Tech Toronto.”

Idea For CompactifAI

How did the idea for CompactifAI emerge? Mugel shared:

“Multiverse Computing didn’t actually start out in AI. We founded the company in 2019 to build quantum computing solutions for complex problems in the finance, energy, and manufacturing sectors. But in 2023, we discovered that our quantum expertise could also solve one of the biggest challenges facing AI – its exploding energy and compute demands. Our underlying software could be used to compress LLMs without sacrificing their performance. We built a compression algorithm, CompactifAI, that weaved quantum and AI techniques together to identify and retain the most information-rich parts of an LLM while shrinking it dramatically.”

Core Products

What are the company’s core products and features? Mugel explained:

“Last year, we unveiled our Slim model series, CompactifAI-compressed versions of top open-source models. Our goal with these models is to remove the barriers that hold organizations back from adopting AI. Because of the high costs associated with implementing and running LLMs, businesses often have to choose between breaking the bank or compromising with more limited models. CompactifAI and the Slim series take this tradeoff out of the equation and open up new possibilities for deploying full-power, resource-efficient AI. Developers can instantly plug the Slim models into any application – whether it’s in the cloud, in private data centers, or even at the edge.”

“Currently, we have Slim versions of Llama, Mistral, Phi, Qwen, and DeepSeek models available through private offers and on the AWS Marketplace. These models bring improved energy efficiency, faster inference times, and reduced costs, all while outperforming the originals across benchmarks and without sacrificing accuracy. We’re actively continuing to expand the Slim series so that businesses can reap the benefits of CompactifAI tailored to the models that best fit their needs.”

“We also have our Model Zoo family of powerful nano-models, ChickenBrain and SuperFly. ChickenBrain is a compressed version of Meta’s Llama 3.1 model with reasoning capabilities that could theoretically run on hardware the size of a chicken’s brain. Tested on a MacBook Pro and a Raspberry Pi, it actually outperforms the original across benchmarks. Meanwhile, SuperFly is a compressed version of SmolLM2-135M and can run locally on any device without an internet connection. These models are available by private request and demonstrate just how small a sophisticated model can physically get – debunking the assumption that bigger models are always better.”

Industry Impact

How will this technology impact the industry? Mugel noted:

“CompactifAI lowers the barrier to entry for implementing AI. AI has incredible potential across all industries, and everyone is eager to use the technology in their organizational workflows. But for many, the energy and compute costs are simply too high. Our method of compression makes it possible for businesses to deploy AI without sacrificing their bottom line.”

“It also unlocks new possibilities for AI at the edge. Ultra-compressed models that are just as powerful as large models could be installed directly onto devices like smartphones, drones, home appliances, or cars. And beyond the benefits for businesses themselves, CompactifAI charts a path for more sustainable and resource-efficient AI that can reach its full potential without consuming immense amounts of electricity and putting stress on the power grid.”

Evolution Of The Company’s Technology

How has the company’s technology evolved since launching? Mugel noted:

“Since unveiling CompactifAI, we’ve continued to improve and expand the algorithm to better compress models for performance and efficiency. At the same time, each model that we compress is different and comes with its own set of characteristics and challenges. Through countless hours of adjustment and refinement, our team is able to tailor CompactifAI to each specific LLM, allowing us to optimize our models and deliver the best results for customers.”

Significant Milestones

What have been some of the company’s most significant milestones? Mugel cited:

“Last April, we began the rollout of our Slim series with two Llama LLMs. Since then, we’ve continued to release more compressed versions of popular open-source models. In June 2025, we partnered with AWS and launched our CompactifAI API to make our models available on AWS Bedrock. A month after that, we launched our Model Zoo, a collection of the world’s smallest high-performance models.”

“As Multiverse Computing has grown and caught the attention of companies across a wide range of sectors, we’ve expanded past Europe to North America and opened a new San Francisco office in October 2024. And on June 12, 2025, we announced a $215 million Series B funding round with support from global investors including Bullhound Capital, HP Tech Ventures, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba, Capital Riesgo de Euskadi – Grupo SPRI, and the Spanish government. With this world-class support, we’re advancing our mission to continue delivering powerful compressed LLMs and pave the way for more efficient, sustainable, and affordable AI.”

Differentiation From The Competition

What differentiates the company from its competition? Mugel affirmed:

“Traditional compression methods used by AI companies, like quantization, pruning, and knowledge distillation, lose a significant amount of precision in the process. CompactifAI, on the other hand, can shrink a model by up to 95% with only a 2-3% loss in precision. Our models not only retain the accuracy of the original models, but are 4-12x faster and yield a 50-80% reduction in inference costs.”

“Multiverse Computing takes a completely novel approach to AI compression built not just on our AI expertise, but also on our deep quantum expertise. Foundational to our CompactifAI algorithm are Tensor Networks, a framework for simplifying neural networks inspired by quantum mechanics and pioneered by my co-founder and CSO, Dr. Román Orús. Our bench of specialized quantum and AI talent provides us with a unique perspective into the AI industry that powers our innovation.”
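The tensor-network idea Mugel describes can be illustrated, in a very simplified form, with a truncated singular value decomposition of a single weight matrix: keep only the most information-rich components and store the factors instead of the full matrix. This is a minimal sketch, not Multiverse's actual CompactifAI algorithm, which relies on richer tensor-network decompositions such as matrix product operators; all sizes and the rank cutoff below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one LLM weight matrix: low-rank structure plus small noise.
W = rng.standard_normal((512, 32)) @ rng.standard_normal((32, 512))
W += 0.01 * rng.standard_normal((512, 512))

# Truncated SVD: retain only the top-`rank` singular components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 32
W_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Storage cost: full matrix vs. the two factored matrices.
full_params = W.size                                   # 512 * 512
compressed_params = rank * (W.shape[0] + W.shape[1])   # 32 * (512 + 512)

rel_error = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"parameter reduction: {1 - compressed_params / full_params:.0%}")
print(f"relative reconstruction error: {rel_error:.4f}")
```

The same trade-off Mugel cites (large size reduction, small precision loss) shows up here in miniature: the factored form stores far fewer parameters, and the reconstruction error stays small as long as the discarded components carry little information.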

Future Company Goals

What are some of the company’s future goals? Mugel concluded:

“Following our Series B funding round, we’re focusing on accelerating the rollout of affordable, energy-efficient AI models to customers. The capital is also helping us continue to develop ultra-compressed nano models that can run locally on edge devices, whether it’s smartphones or satellites. More broadly, Multiverse Computing is continuing to expand across industries and worldwide, particularly in the U.S. and Canada.”
