DynamoFL recently announced that it had closed a $15.1 million Series A funding round to meet demand for its privacy- and compliance-focused generative AI solutions. Fresh off a recent $4.2 million seed round, the company has raised $19.3 million to date. DynamoFL’s flagship technology, which enables customers to safely train large language models (LLMs) on sensitive internal data, is already in use by Fortune 500 companies in the finance, electronics, insurance, and automotive sectors.
The funding round – co-led by Canapi Ventures and Nexus Venture Partners – also had participation from Formus Capital, Soma Capital, and angel investors including Vojtech Jina, Apple’s privacy-preserving machine learning (ML) lead; Tolga Erbay, Head of Governance, Risk and Compliance at Dropbox; and Charu Jangid, product leader at Snowflake.
The need for AI solutions that preserve compliance and security has never been greater: LLMs present unprecedented privacy and compliance risks for enterprises. It has been widely shown that LLMs can memorize sensitive data from their training datasets. Malicious actors could exploit this vulnerability, using carefully designed prompts to extract users’ personally identifiable information and sensitive contract values, posing a major data security risk for the enterprise. Meanwhile, the rapid pace of innovation and adoption in the AI sector is colliding with a fast-changing global regulatory landscape: the EU’s GDPR and impending AI Act, similar initiatives in China and India, and AI regulation efforts in the US all require that enterprises detail these data risks. Yet enterprises today are not equipped to detect and address the risk of data leakage.
As government agencies like the FTC explore concerns around LLM providers’ data security, DynamoFL’s machine learning privacy research team has shown how personal information – including sensitive details about C-Suite executives, Fortune 500 corporations, and private contract values – could be easily extracted from a fine-tuned GPT-3 model. DynamoFL’s privacy evaluation suite provides out-of-the-box testing for data extraction vulnerabilities and automated documentation to ensure enterprises’ LLMs are secure and compliant.
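To make the memorization risk concrete, one common auditing pattern is a canary test: plant known secret strings in the fine-tuning data, then check whether the model reproduces them verbatim when prompted. The sketch below illustrates the idea in plain Python with a toy stand-in for the model; all names are hypothetical, and this is not DynamoFL’s actual evaluation suite.

```python
# Hypothetical canary-based extraction audit (illustrative only, not
# DynamoFL's product). Canaries are unique secret strings planted in
# the fine-tuning corpus before training.
CANARIES = [
    "contract value: $4,750,000",
    "SSN 078-05-1120",
]

def extraction_rate(generate, prompts, canaries):
    """Fraction of canary strings that appear verbatim in completions.

    `generate` is any callable mapping a prompt string to model output.
    """
    completions = [generate(p) for p in prompts]
    leaked = sum(
        any(canary in text for text in completions) for canary in canaries
    )
    return leaked / len(canaries)

# Toy stand-in for a fine-tuned LLM that has memorized one canary.
def toy_model(prompt):
    if prompt.startswith("The contract"):
        return "The contract value: $4,750,000 was signed in Q3."
    return "I cannot share that information."

prompts = ["The contract", "Employee record:"]
print(extraction_rate(toy_model, prompts, CANARIES))  # 0.5: one of two leaks
```

A nonzero extraction rate on held-out canaries is direct evidence of memorization; against a real model, the `generate` callable would wrap the fine-tuned LLM’s completion API and the prompts would be prefixes drawn from the training documents.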
DynamoFL was launched by two MIT PhDs who have spent the last six years researching the cutting-edge, privacy-focused AI and ML technology forming the basis of the company’s core offerings. The team pairs expertise in the latest privacy-preserving ML research, with researchers and engineers from MIT, Harvard, and UC Berkeley, alongside experience deploying enterprise-grade AI applications at Microsoft, Apple, Meta, and Palantir, among other top tech companies.
KEY QUOTES:
“We deploy our suite of privacy-preserving training and testing offerings to directly address and document compliance requirements to help enterprises stay on top of regulatory developments, and deploy LLMs in a safe and compliant manner.”
— DynamoFL co-founder Christian Lau
“Privacy and compliance are critical to ensuring the safe deployment of AI across the enterprise. These are foundational pillars of the DynamoFL platform. By working with DynamoFL, companies can deliver best-in-class AI experiences while mitigating the well-documented data leakage risks. We’re excited to support DynamoFL as they scale the product and expand their team of privacy-focused machine learning engineers.”
— Greg Thome, Principal at Canapi
“This investment validates our philosophy that AI platforms need to be built with a focus on privacy and security from day one in order to scale in enterprise use cases. It also reflects the growing interest and demand for in-house Generative AI solutions across industries.”
— DynamoFL CEO and co-founder Vaikkunth Mugunthan
“While AI holds tremendous potential to transform every industry, the need of the hour is to ensure that AI is safe and trustworthy. DynamoFL is set to do just that and enable enterprises to adopt AI while preserving privacy and remaining regulation-compliant. We are thrilled to have partnered with Vaik, Christian and team in their journey of building an impactful company.”
— Jishnu Bhattacharjee, Managing Director, Nexus Venture Partners