Monster API announced the launch of its platform, which gives developers access to GPU infrastructure and pre-trained AI models at far lower cost than other cloud-based options, with an emphasis on ease of use and scalability. The company’s technology uses decentralized computing to let developers build AI applications quickly and efficiently, saving up to 90% over traditional cloud options. The company also secured $1.1 million in pre-seed funding led by Carya Ventures, with participation from Rebright Partners.
Generative AI is disrupting industries ranging from content creation to code generation. But machine-learning application development is expensive, complex, and hard to scale for all but the largest and most sophisticated ML teams. The new platform gives developers access to the latest and most powerful AI models (such as Stable Diffusion) ‘out of the box’ at one-tenth the cost of traditional cloud ‘monopolies’ like AWS, GCP, and Azure. In one example, an optimized version of the Whisper speech-recognition model on Monster API cut costs by 90% compared with running it on a traditional cloud like AWS.
Using Monster API’s full stack (an optimization layer, a compute orchestrator, a massive GPU infrastructure, and ready-to-use inference APIs), a developer can create AI-powered applications in a matter of minutes. Developers can also fine-tune these large language models on custom datasets.
Monster API is the result of the personal experience of the two brothers who founded the company, Saurabh Vij and Gaurav Vij. Gaurav had faced a significant challenge at his computer vision startup when his AWS bills skyrocketed, putting immense financial strain on his bootstrapped venture. At the same time, Saurabh, formerly a particle physicist at CERN (the European Organization for Nuclear Research), recognized the potential of distributed computing from projects like LHC@home and Folding@home.
Inspired by these experiences, the brothers set out to harness the computing power of consumer devices such as PlayStations, gaming PCs, and crypto-mining rigs for training ML models. After multiple iterations, they optimized these GPUs for ML workloads, cutting Gaurav’s monthly bills by 90%.
Monster API enables developers to access, fine-tune and train ML models. For example, it enables:
— Access to the latest AI models: Integrates the latest and most powerful AI models–such as Stable Diffusion, Whisper, and StableLM–as scalable, affordable, and highly available REST APIs through a one-line command interface, without complex setup.
— Fine-tuning: Lets developers enhance LLMs with Monster API’s no-code fine-tuning solution, which reduces development to specifying hyperparameters and datasets. Developers can fine-tune open-source models such as Llama and StableLM to improve response quality on tasks like instruction answering and text classification, approaching ChatGPT-like response quality.
— Training: Monster API’s decentralized computing approach provides on-demand access to tens of thousands of powerful GPUs, such as A100s, to train breakthrough models at substantially reduced cost. (Traditional model training, by contrast, is expensive, hard to access, and limited to a few well-funded businesses. Containerized instances come pre-configured with CUDA, AI frameworks, and libraries for a seamless managed experience.)
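To make the "REST APIs through a one-line command interface" idea concrete, here is a minimal sketch of what calling such a hosted inference API could look like. The endpoint URL, field names, and auth header below are illustrative assumptions, not Monster API's documented interface.

```python
import json

# Hypothetical sketch of calling a hosted text-to-image inference API.
# The endpoint, payload fields, and header are placeholder assumptions,
# not Monster API's actual documented interface.
API_URL = "https://api.example.com/v1/generate/txt2img"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def build_request(prompt: str, steps: int = 30, guidance: float = 7.5) -> dict:
    """Assemble the JSON body for a Stable Diffusion-style generation call."""
    return {
        "prompt": prompt,
        "steps": steps,              # number of diffusion denoising steps
        "guidance_scale": guidance,  # how strongly to follow the prompt
    }

payload = build_request("a watercolor of a mountain lake")
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# The actual call would be a single POST, e.g. with the `requests` library:
# response = requests.post(API_URL, headers=headers, data=json.dumps(payload))
# result = response.json()

print(json.dumps(payload))
```

The point of a hosted interface like this is that the developer never sees the GPU provisioning, containerization, or scaling behind the endpoint: the request body above is the entire surface area.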
Monster API Benefits:
— Predictable API billing: Unlike the standard “pay by GPU time” model, Monster API bills per API call, making expenses simpler, more predictable, and easier to manage.
— Auto-scalable, reliable APIs: APIs automatically scale to handle increased demand, ensuring reliable service (from one to 100 GPUs).
— Affordable global scalability: The decentralized GPU network enables geographic scaling.
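The per-API-call billing claim above can be illustrated with a toy cost comparison. All prices here are made-up figures for illustration, not Monster API's or any cloud's actual rates.

```python
# Hypothetical comparison of per-API-call billing vs. pay-by-GPU-time billing.
# Both rates below are illustrative assumptions, not real prices.

def per_call_cost(num_calls: int, price_per_call: float) -> float:
    """Per-call billing: cost tracks usage, i.e. how many requests are made."""
    return num_calls * price_per_call

def gpu_time_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """GPU-time billing: cost tracks how long instances stay up,
    including idle hours spent waiting for traffic."""
    return gpu_hours * price_per_gpu_hour

# 10,000 inference calls in a month at an assumed $0.002 per call:
api_bill = per_call_cost(10_000, 0.002)    # 20.0

# The same workload served from an always-on GPU instance for a month
# (24 hours x 30 days) at an assumed $1.00 per GPU-hour, mostly idle:
cloud_bill = gpu_time_cost(24 * 30, 1.00)  # 720.0

print(f"per-call: ${api_bill:.2f}, GPU-time: ${cloud_bill:.2f}")
```

The asymmetry comes from idle capacity: per-call billing charges nothing when no requests arrive, while GPU-time billing charges for every hour the instance is reserved.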
KEY QUOTES:
“By 2030, AI will impact the lives of 8 billion people. With Monster API, our ultimate wish is to see developers unleash their genius and dazzle the universe by helping them bring their innovations to life in a matter of hours.”
“We eliminate the need to worry about GPU infrastructure, containerization, setting up a Kubernetes cluster, and managing scalable API deployments, while also offering lower costs. One early customer has saved over $300,000 by shifting their ML workloads from AWS to Monster API’s distributed GPU infrastructure. This is the breakthrough developers have been waiting for: a platform that’s not just highly affordable but also intuitive to use.”
“Just as the first browser opened up a portal and allowed masses to interact with the internet, we believe this innovation will bring a tsunami of AI-powered applications to the world.”
— Saurabh Vij, CEO and co-founder, Monster API
“Generative AI is one of the most powerful innovations of our time, with far-reaching impacts. It’s very important that smaller companies, academic researchers, competitive software development teams, etc. also have the ability to harness it for societal good. Monster API is providing access to the ecosystem they need to thrive in this new world.”
— Sai Supriya Sharath, Carya Venture Partners managing partner