In its first cybersecurity investment, OpenAI co-led a $43 million Series A funding round for Adaptive Security, a startup specializing in defending against AI-driven deepfake attacks.
With generative AI expanding hackers' capabilities, from convincing deepfakes to counterfeit documents, OpenAI is addressing the rising threat directly by backing AI-driven defenses.
Adaptive Security, established in New York, secured $43 million in a Series A round co-led by OpenAI’s startup fund and Andreessen Horowitz. OpenAI confirmed this investment marks its initial foray into the cybersecurity sector.
Adaptive Security uses simulated AI-generated attacks to train employees to identify and mitigate these advanced threats. The platform stages attacks across multiple channels, including phone calls, texts, and emails, to expose vulnerabilities and train staff.
The company focuses on social engineering tactics targeting human vulnerabilities, where employees might be tricked into compromising security. CEO and co-founder Brian Long noted the increasing ease of executing social engineering attacks with AI tools.
Launched in 2023, Adaptive Security serves over 100 customers, whose positive feedback helped attract OpenAI's investment. Axie Infinity's $600 million loss in 2022, triggered by a fake job offer, demonstrates the potential damage from such attacks.
Long’s prior ventures include TapCommerce, acquired by Twitter in 2014 for over $100 million. He also founded ad-tech firm Attentive, valued at more than $10 billion in 2021 by one of its investors.
Adaptive Security intends to allocate the new funding primarily to hiring engineers and advancing its product to counter AI-driven threats. Other cybersecurity startups are tackling the same problem, including Cyberhaven, which recently raised $100 million to prevent sensitive data from leaking into tools like ChatGPT, and Snyk, which has seen increased demand due to insecure AI-generated code.
Regarding personal security, Long advises individuals concerned about voice cloning to “delete your voicemail.”