AI Security Startup Irregular raises US$80 million in funding round

AI security startup Irregular announced that it has raised US$80 million in funding. The money will be used to build defense systems, testing tools, and security solutions to make next-generation AI models safer for use.

The funding round was led by Sequoia Capital and Redpoint Ventures. Swish Ventures and well-known angel investors, including Assaf Rappaport (CEO of Wiz) and Ofir Ehrlich (CEO of Eon), also joined the round.

Founded in 2023 as Pattern Labs Inc., Irregular calls itself the world’s first frontier AI security lab. The company focuses on protecting advanced AI systems before they can be misused.

Its mission is to test, strengthen, and defend next-generation AI models by running them through rigorous testing environments in collaboration with leading AI developers.

Irregular conducts controlled tests on advanced AI models to see how they might be exploited.

These tests probe threats such as antivirus evasion, autonomous offensive behavior, system intrusion, and other forms of misuse, evaluating both how an AI model could mount attacks and how well it can defend against counterattacks.

In addition to testing, Irregular provides defensive tools, frameworks, and scoring systems to help secure AI systems in real-world applications.


Irregular works with top AI labs and government institutions, integrating its testing into the development of major AI models.

This partnership helps the company anticipate potential threats before they happen and provide guidance on security plans, compliance, and safe deployment.

Irregular is already helping shape industry standards for AI security. Its evaluations are referenced in OpenAI’s system cards for GPT-4, o3, o4-mini, and GPT-5. The U.K. government and Anthropic use Irregular’s SOLVE framework to check for cyber risks in AI models, including Claude 4. Google DeepMind researchers also cited the company in a study on AI’s cyberattack capabilities.

The company co-wrote a whitepaper with Anthropic on using confidential computing to improve AI security and protect user data. It also partnered with RAND Corp. on a paper about AI model theft and misuse that has influenced Europe’s AI security policies and set a standard in the field.

“Irregular has taken on an ambitious mission to make sure the future of AI is as secure as it is powerful,” said Dan Lahav, co-founder and chief executive officer of Irregular. “AI capabilities are advancing at breakneck speed; we’re building the tools to test the most advanced systems way before public release and to create the mitigations that will shape how AI is deployed responsibly at scale.”

“The real AI security threats haven’t emerged yet,” said Shaun Maguire, a partner at Sequoia Capital. “What stood out about the Irregular team is how far ahead they’re thinking. They’re working with the most advanced models being built today and laying the groundwork for how we’ll need to make AI reliable in the years ahead.”

