AI Security Firm 'Irregular' Raises $80M to Bulletproof Frontier Models

Published 1 hour ago · 3 minute read
Uche Emeka

Irregular, an artificial intelligence security firm previously known as Pattern Labs, has closed an $80 million funding round. The investment was led by venture capital firms Sequoia Capital and Redpoint Ventures, with additional participation from Wiz CEO Assaf Rappaport. The round reportedly values Irregular at $450 million, underscoring investor confidence in its mission to safeguard the evolving landscape of AI interactions.

The company's co-founder, Dan Lahav, highlighted the urgent need for enhanced AI security, stating that "soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction," which he predicts will fundamentally disrupt existing security frameworks. This forward-looking perspective forms the core of Irregular's strategy to pre-empt security vulnerabilities in a rapidly advancing technological environment.

Even before this latest funding, Irregular had established itself as a pivotal entity in AI evaluations. Its work is cited in key security assessments for leading AI models, including Anthropic's Claude 3.7 Sonnet as well as OpenAI's o3 and o4-mini models. Furthermore, the company's proprietary framework, dubbed SOLVE, which scores a model's ability to detect vulnerabilities, is widely used across the industry, affirming its foundational role in setting AI security standards.
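Irregular has not published SOLVE's internals, so any concrete rendering is guesswork. The toy harness below is only a sketch of what scoring a model's vulnerability-detection ability can look like in general: run a detector over code snippets with known flaws and grade its findings against ground truth. All names, the stub detector, the pattern checks, and the F1 metric here are illustrative assumptions, not SOLVE itself.

```python
import re
from dataclasses import dataclass

@dataclass
class Case:
    code: str        # snippet under review
    true_flaws: set  # ground-truth labels, e.g. CWE IDs

def stub_detector(code: str) -> set:
    """Placeholder for a frontier-model call; flags flaws via naive pattern checks."""
    findings = set()
    if re.search(r"\bstrcpy\s*\(", code):
        findings.add("CWE-120")  # unbounded copy: classic buffer overflow
    if "password ==" in code:
        findings.add("CWE-798")  # hard-coded credential comparison
    return findings

def detection_score(cases, detect=stub_detector) -> float:
    """Mean F1 between the detector's reported flaws and ground truth."""
    total = 0.0
    for case in cases:
        found = detect(case.code)
        tp = len(found & case.true_flaws)
        p = tp / len(found) if found else 0.0
        r = tp / len(case.true_flaws) if case.true_flaws else 1.0
        total += 2 * p * r / (p + r) if (p + r) else 0.0
    return total / len(cases)

if __name__ == "__main__":
    suite = [
        Case("strcpy(dst, src);", {"CWE-120"}),
        Case('if (password == "admin") grant();', {"CWE-798"}),
    ]
    print(f"detection score: {detection_score(suite):.2f}")
```

In a real evaluation the stub would be replaced by calls to the model under test and the suite by a large, curated corpus of vulnerable code; the principle of grading findings against known flaws is what such a score captures.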

Looking ahead, Irregular is channeling its new capital into an even more ambitious endeavor: identifying and mitigating emergent risks and behaviors in AI models before they manifest in real-world scenarios. Co-founder Omer Nevo elaborated on their innovative approach, which involves the construction of sophisticated simulated environments. These simulations enable rigorous and intensive testing of AI models prior to their release. Within these complex network simulations, AI is deployed to assume dual roles — both attacker and defender — allowing Irregular to precisely discern where a model's defenses are robust and where they may falter.
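The company has not described these environments in detail. As a loose illustration of the attacker-versus-defender pattern described above, the hypothetical sketch below pits two stub agents against each other in a toy "network" and tallies which services fall before the defender hardens them. Every name and probability here is an assumption made for illustration.

```python
import random

# Hypothetical toy network: each service has an assumed chance of being
# breachable per probe. None of these names or numbers come from Irregular.
SERVICES = {"web": 0.5, "db": 0.3, "auth": 0.2}

def attacker_turn(patched):
    """Stand-in for an attacker model: probe one random service."""
    target = random.choice(list(SERVICES))
    if target not in patched and random.random() < SERVICES[target]:
        return target  # successful breach
    return None

def defender_turn(last_breach, patched):
    """Stand-in for a defender model: harden whatever just fell."""
    if last_breach:
        patched.add(last_breach)

def run_episode(rounds=20, seed=0):
    """Tally breaches per service to show where defenses faltered."""
    random.seed(seed)
    patched = set()
    tally = {s: 0 for s in SERVICES}
    for _ in range(rounds):
        hit = attacker_turn(patched)
        if hit:
            tally[hit] += 1
        defender_turn(hit, patched)
    return tally

if __name__ == "__main__":
    print(run_episode())  # per-service breach counts
```

In a real harness, the stub turns would be frontier-model calls acting inside a full network simulation; the breach tally plays the role of the signal Irregular describes, showing where a model's defenses are robust and where they falter.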

The timing of this funding round reflects the broader industry's heightened focus on AI security. As frontier models continue to evolve in capability, the potential risks they pose have grown substantially, prompting major players like OpenAI to overhaul their internal security measures against threats such as corporate espionage. Concurrently, AI models themselves are becoming increasingly proficient at discovering software vulnerabilities, presenting a double-edged sword for both offensive and defensive cybersecurity strategies.

For the founders of Irregular, these developments signify merely the initial wave of security challenges emanating from the escalating capabilities of large language models. Lahav articulated their commitment: "If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models." He acknowledged, however, that this pursuit is inherently a "moving target," emphasizing the continuous and extensive work required to stay ahead of emerging threats in the dynamic field of artificial intelligence.
