DeepKeep: An Interview With Founder And CEO Rony Ohayon About This Fast-Growing AI Security Company

By Amit Chowdhry • Jul 8, 2024

DeepKeep is a company that developed AI security, which promotes unbiased, error-free, and secure solutions. Pulse 2.0 interviewed DeepKeep CEO and founder Rony Ohayon to learn more about the company.

Rony Ohayon’s Background

Rony Ohyaon - DeepKeep CEO and Founder

What is Rony Ohayon’s background? Ohayon said:

“I have many years of experience within the Israeli high-tech sector with a rich and diverse career spanning development, technology, academia, business, and management. I have a Ph.D. in Communication Systems Engineering, an MBA, and more than 30 registered patents in my name. Before DeepKeep, I was the CEO and Founder of DriveU, where I oversaw the inception, establishment, and management of teleoperation for autonomous vehicle fleets. Additionally, I founded LiveU, a leading technology solutions company for broadcasting, managing, and distributing IP-based video content, where I also served as CTO until the company was acquired.”

Core Products

What are DeepKeep’s core products and features? Ohayon explained:

“Generative AI creates a myriad of internal weaknesses and external attack exposures, like adversarial attacks, and is often a target of cyber attacks, privacy invasions and GenAI risks, such as Toxicity and Jailbreaks. It also lacks trustworthiness due to explainability and fairness challenges, biases and more.”

“DeepKeep’s platform is based on four pillars:

  1. Continuous Risk Assessment – this penetration testing module provides risk analysis and allows the assessment of ML model robustness against a wide range of adversarial attacks, both digital and physical. The platform is continuously updated with new attacks such as Poisoning and Backdoor to stay up to date. The penetration testing module generates reports that include model vulnerabilities as well as recommendations for fixing them.
  2. Automated Prevention – DeepKeep’s automated prevention module allows ML stakeholders to apply protection methods and develop models for pre-deployment hardening. These protection methods are applied during model creation/pre-production and/or during production. While basic protection methods include applying pre/post-processing to the model, more advanced protection methods include the usage of XAI, adjustment of model parameters/AF (Activation Functions), robust component injection, model reparation and ensemble. Usually, more advanced methods provide a higher layer of security and may require more resources to run. The optimal blend of protection methods depends on model type and customer requirements.
  3. Real-time Detection – the attack detection module enables the detection of attacks both online in real-time, and offline. This attack detection tool allows enterprises to block attacks and prevent, reduce, and mitigate the extent of damage. This includes stateful/stateless detection, anomaly detection and explainability-based detectors. Similar to Automated Prevention, the attack detection strategy is dictated by the type of model, available resources, and customer preferences.
  4. Alert and Mitigation – the attack mitigation module allows enterprises to take action and apply security policies, including an AI firewall with an operation center and response options. Mitigation methods span from non-intrusive actions such as sending notifications for further human investigation to more intrusive actions such as blocking/redirecting requests when an attack is detected, or dynamic protection methods.

DeepKeep is a model-agnostic, multi-layer platform that safeguards AI with AI-native security and trustworthiness from the R&D phase of machine learning models through to deployment, covering risk assessment, prevention, detection, and mitigation.”

 Evolution Of DeepKeep’s Technology

How has the company’s technology evolved since launching? Ohayon noted:

“DeepKeep started out securing computer vision models for object detection and facial recognition. It now ensures the health and robustness of a variety of ML model types: computer vision, Large Language (LLM) and multimodal.” 

Significant Milestones

What have been some of the company’s most significant milestones? Ohayon cited:

“DeepKeep’s platform recently prevented data leakage and stopped an LLM from toxic responses at a large financial institution. It also executed fast object detection on an edge device with GenAI. These successful trials have paved the way to DeepKeep applying GenAI on a wide range of additional use cases.”

 Customer Success Stories

After asking Ohayon about customer success stories, he highlighted:

“Large language models are subject to anomalous and malicious inputs, as well as unexpected behavior. They lie and make mistakes. The question is not if, but when and how. DeepKeep recently conducted an extensive evaluation of Meta’s LlamaV2 7B LLM, summarized in the following  weaknesses and strengths:

  1. The LlamaV2 7B model is highly susceptible to both direct and indirect Prompt Injection (PI) attacks, with a majority of test attacks succeeding when exposing the model to context containing injected prompts.
  2. The model is vulnerable to Adversarial Jailbreak attacks, which provoke responses that violate ethical guidelines. Tests reveal a significant reduction in the model’s refusal rate under such attack scenarios.
  3. LlamaV2 7B is highly susceptible to Denial-of-Service (DoS) attacks, with prompts containing transformations like replacement of words, characters and switching order leading to excessive token generation.
  4. The model demonstrates a high propensity for data leakage across diverse datasets, including finance, health, and generic PII.
  5. The model has a significant tendency to hallucinate, challenging its reliability.
  6. The model often opts out of answering questions related to sensitive topics like gender and age, suggesting it was trained to avoid potentially sensitive conversations rather than engage with them in an unbiased manner.”

“Main scores and risks identified are summarized in the figure below:

LlamaV2 7B Scores and Risks

DeepKeep’s evaluation of data leakage and PII management demonstrates the model’s struggle to balance user privacy with the utility of information provided, showing tendencies for data leakage.”

“On the other hand, Meta’s LlamaV2 7B LLM shows a remarkable ability to identify and decline harmful content, boasting a 99% refusal rate in DeepKeep’s tests. However, our investigations into hallucinations indicate a significant tendency to fabricate responses, challenging its reliability.”

“Overall, the LlamaV2 7B model showcases its strengths in task performance and ethical commitment, with areas for improvement in handling complex transformations, addressing bias, and enhancing security against sophisticated threats.”


When asking Ohayon about the company’s funding information, he revealed:

“DeepKeep raised $10M in seed funding in a round led by Canadian-Israeli VC Awz Ventures.”

 Total Addressable Market

What total addressable market (TAM) size is the company pursuing? Ohayon assessed: 

“AI is becoming an essential part of businesses and everyday lives. In 2023, 35% of businesses adopted AI, while 90% of leading businesses supported AI and invested in it to achieve competitive advantages. As the adoption of LLMs and generative AI surges in a wide range of diverse applications and industries, so will organization attack surfaces, posing several types of threats and weaknesses. New risks associated with LLMs are unique beyond traditional, familiar issues, such as cyber-attacks, and include Prompt Injection, Jailbreak, and PII Leakage, as well as a lack of trustworthiness due to challenges with biases, fairness and weak spots.”

“As defense needs and budgets grow worldwide and AI security evolves into a regulatory requirement, DeepKeep’s solution is rising to the challenge. The platform enables data scientists and CISO teams to gain valuable understanding and insights into AI risks, alongside comprehensive protection and alerts. DeepKeep is already deployed by leading global enterprises in the finance, security, and AI computing sectors.”

Differentiation From The Competition

What differentiates DeepKeep from its competition? Ohayon affirmed:

“DeepKeep’s unique approach of using Generative AI to secure Generative AI sets it apart from competitors. We leverage GenAI to protect large language models (LLMs) and AI systems throughout the entire AI lifecycle. Our AI-native security ensures that businesses adopt AI safely, protecting both commercial and consumer data.”

“DeepKeep’s expertise spans several AI model types, covering both computer vision and LLMs. We prioritize implementing both trustworthiness and security, rather than focusing on just one aspect, to enable synergies equaling more than the sum of the parts. Additionally, we address both digital and physical security threats, such as facial recognition and object detection, ensuring comprehensive protection.”

Future Company Goals 

What are some of the company’s future goals? Ohayon concluded: 

“As we collaborate with multinational companies globally, there is a growing demand for support in multiple languages: we plan on expanding into multilingual natural language processing (NLP) with an initial focus on Japanese, driven by our partnerships in Japan.”