Mirai announced it has raised $10 million in seed funding to build an on-device AI capability layer designed to make local inference accessible and efficient for developers. The round was led by Uncork Capital.
Modern mobile and personal computing devices increasingly ship with dedicated AI silicon, yet most AI applications still rely on cloud backends because running models locally demands deep systems expertise: memory management, kernel optimization and hardware-aware execution. Mirai aims to remove that complexity with a hardware-aware runtime that lets developers deploy models on device with minimal integration work and no low-level systems engineering.
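Mirai has not published developer documentation alongside the announcement, so as a purely illustrative sketch, the Swift snippet below shows what a minimal-integration on-device inference call could look like; the `LocalModel` type and its methods are hypothetical, not Mirai's actual API.

```swift
import Foundation

// Hypothetical sketch of a minimal-integration on-device API.
// `LocalModel`, its initializer, and `generate` are invented for
// illustration; they are not Mirai's SDK.
struct LocalModel {
    let name: String

    /// Load a model shipped with the app. A hardware-aware runtime would
    /// map weights into memory and select device-specific kernels here,
    /// hiding that systems work from the app developer.
    init(bundled name: String) {
        self.name = name
    }

    /// Run a completion entirely on device.
    func generate(_ prompt: String) -> String {
        // Placeholder body: a real engine would tokenize the prompt,
        // run prefill, and decode tokens on local AI silicon.
        "(on-device completion for: \(prompt))"
    }
}

// The integration surface the article describes: load, then generate,
// with no explicit memory management or kernel tuning.
let model = LocalModel(bundled: "assistant-3b")
print(model.generate("Summarize my meeting notes"))
```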
The company has built a proprietary inference engine optimized for Apple Silicon that it says delivers up to 37% faster generation and 59% faster prefill on certain model-device combinations compared with alternative runtimes. Mirai also supports hybrid routing between on-device and cloud execution, allowing developers to keep their existing cloud pipelines while benefiting from local performance and enhanced privacy.
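Mirai has not described how its hybrid routing chooses between targets, but a simple policy might weigh model availability, connectivity and privacy constraints. The Swift sketch below illustrates one such policy; every type in it (`HybridRouter`, `InferenceRequest`, `ExecutionTarget`) is hypothetical and not Mirai's implementation.

```swift
import Foundation

// Hypothetical sketch of a hybrid on-device/cloud routing policy.
// All types here are invented for illustration.
enum ExecutionTarget { case onDevice, cloud }

enum RoutingError: Error {
    case unservable   // model absent locally and no connectivity
}

struct InferenceRequest {
    let modelID: String
    let requiresPrivacy: Bool   // e.g. the prompt contains local user data
}

struct HybridRouter {
    let locallyAvailableModels: Set<String>
    let isOnline: Bool

    /// Pick an execution target. Privacy-sensitive requests never leave
    /// the device; otherwise local execution is preferred for latency,
    /// with the developer's existing cloud pipeline as the fallback.
    func route(_ request: InferenceRequest) throws -> ExecutionTarget {
        let hasLocalModel = locallyAvailableModels.contains(request.modelID)
        if request.requiresPrivacy {
            guard hasLocalModel else { throw RoutingError.unservable }
            return .onDevice
        }
        if hasLocalModel { return .onDevice }
        guard isOnline else { throw RoutingError.unservable }
        return .cloud
    }
}

// Example: a privacy-sensitive request stays on device even when online.
let router = HybridRouter(locallyAvailableModels: ["assistant-3b"], isOnline: true)
print(try router.route(InferenceRequest(modelID: "assistant-3b", requiresPrivacy: true)))
// prints: onDevice
```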
Mirai describes its approach as optimizing the interaction among model, runtime and hardware. The company argues that AI teams typically optimize only one of these variables, while its runtime layer focuses on binding models efficiently to hardware. By optimizing that layer, Mirai says, even smaller models can achieve lower latency, reduced cost and improved performance when executed directly on device.
The startup positions on-device inference as the next major shift in AI software, enabling ultra-low latency, access to local context, privacy by default and offline continuity. With the new funding, Mirai plans to expand beyond Apple Silicon and extend support across text, voice, vision and multimodal workloads, aiming to make on-device AI the default execution path for AI applications.
KEY QUOTE:
“Modern devices already ship with dedicated AI silicon. Most AI applications still default to the cloud because running models locally requires deep systems expertise across memory management, kernel optimization, and hardware-aware execution. Mirai removes that barrier.”
– Mirai team, in a company blog post