Fastino: AI Model Provider Company Secures $7 Million (Pre-Seed)

By Amit Chowdhry • Nov 18, 2024

Fastino, a new foundation AI model provider, launched recently to provide a family of task-optimized language models that are more accurate, faster, and safer than traditional LLMs. The company also announced its $7 million pre-seed funding round led by global software investor Insight Partners and M12, Microsoft’s Venture Fund, with participation from NEA, CRV, Valor, GitHub CEO Thomas Dohmke, and others.

Even though generative AI deployments have steadily increased year over year, early adopters continue to face significant challenges when implementing the new technology. While conventional LLMs offer significant innovation potential, technological and operational complexities hinder companies from fully realizing this value. Fastino offers a differentiated approach to help enterprises of all sizes accelerate the adoption and deployment of generative AI technology tailored to solve their business challenges.

Some of the key features of Fastino include:

1.) Fit-for-purpose architecture for consistent, accurate outputs – Fastino offers task-optimized models for critical enterprise use cases like structuring textual data, RAG systems, text summarization, task planning, and more.

2.) CPU-level inferencing for swifter results – Fastino’s novel architecture operates up to 1,000 times faster than traditional large language models. Its optimized computation enables flexible deployment on CPUs or NPUs, minimizing the reliance on high-end GPUs.

3.) Task-optimized models for safer AI systems – Fastino’s models enable new distributed AI systems, which are less vulnerable to adversarial attacks, hallucinations, and privacy risks.

KEY QUOTES:

“Fastino aims to bring the world more performant AI with task-specific capabilities. Whereas traditional LLMs often require thousands of GPUs, making them costly and resource-intensive, our unique architecture requires only CPUs or NPUs. This approach enhances accuracy and speed while lowering energy consumption compared to other LLMs.”

– Ash Lewis, CEO and co-founder of Fastino

“We’re proud to announce our initial funding round, led by Insight Partners and M12, Microsoft’s Venture Fund. This pre-seed funding allows us to continue pioneering LLM architecture, developing accurate, secure solutions that bring AI to the enterprise. Global enterprises are facing increasing difficulty in accessing computing power while achieving the precision and speed necessary to integrate AI effectively. Fastino aims to fix this with scalable, high-performance language models, optimized for enterprise tasks.”

– George Hurn-Maloney, COO and co-founder of Fastino

“Fastino’s approach to solving contemporary AI challenges presents one of the most exciting developments in the trillion-dollar enterprise AI opportunity. We see a bright future in tunable, high-performance, low-latency foundation models that empower firms to use the most accurate generative AI available while reducing their risk exposure to data leakage and inaccurate outputs.”

– George Mathew, Managing Director at Insight Partners

“Fastino’s innovative architecture enables high performance while addressing critical challenges like safety, data leakage, accuracy and efficiency. Our investment will accelerate Fastino’s development of secure and performant Foundation AI, tunable to address enterprise challenges, from the banking to the consumer electronics sectors.”

– Michael Stewart, Managing Partner at M12, Microsoft’s Venture Fund

“I’m excited to be an early investor in Fastino, a company on a mission to bring the world accurate, fast, and safe task-specific LLMs that solve organizations’ most pressing challenges. Their novel approach involves a new architecture that runs on CPUs, making AI more accessible for a future with 1B developers.”

– Thomas Dohmke, CEO of GitHub