AI Security Startup Irregular Raises $80M to Safeguard Next-Gen Models
AI security firm Irregular has announced a major milestone: an $80 million funding round led by Sequoia Capital and Redpoint Ventures, with participation from notable industry leaders including Wiz CEO Assaf Rappaport. This new investment values Irregular at $450 million, marking a significant step forward in the race to protect cutting-edge AI systems.
Why AI Security Is More Critical Than Ever
As AI continues to evolve, so do its risks. "Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that’s going to break the security stack along multiple points," said Dan Lahav, Irregular's co-founder. This vision underscores the growing need for robust, adaptable AI security solutions.
Irregular’s Unique Approach to AI Evaluation
Formerly known as Pattern Labs, Irregular has become a key player in the world of AI model evaluations. The company’s frameworks are referenced in high-profile security assessments for models like Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini. Irregular’s SOLVE framework for scoring model vulnerability-detection capabilities has gained industry-wide adoption, helping organizations understand and mitigate AI risks before deployment.
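The article does not describe SOLVE's internals, but a scoring framework of this kind can be pictured as a weighted benchmark harness. The sketch below is purely illustrative: the challenge format, the `query_model` stub, and the difficulty-weighted scoring rule are all assumptions, not Irregular's actual methodology.

```python
# Hypothetical sketch of a SOLVE-style scoring harness. Everything here
# (challenge schema, stub model call, weighting) is an illustrative
# assumption; SOLVE's real design is not disclosed in the article.
from dataclasses import dataclass


@dataclass
class Challenge:
    snippet: str       # source code containing a planted vulnerability
    ground_truth: str  # identifier of the planted flaw, e.g. "CWE-89"
    difficulty: float  # weight reflecting how hard the flaw is to spot


def query_model(snippet: str) -> set[str]:
    """Placeholder for an API call asking a model to list the flaws it finds.
    A naive keyword check stands in for a real model response here."""
    return {"CWE-89"} if "execute(" in snippet else set()


def vulnerability_detection_score(challenges: list[Challenge]) -> float:
    """Weighted fraction of planted vulnerabilities the model detects."""
    earned = total = 0.0
    for c in challenges:
        total += c.difficulty
        if c.ground_truth in query_model(c.snippet):
            earned += c.difficulty
    return earned / total if total else 0.0


if __name__ == "__main__":
    suite = [
        Challenge('cur.execute("SELECT * FROM users WHERE id=" + uid)',
                  "CWE-89", difficulty=1.0),
        Challenge("hashed = md5(password)", "CWE-328", difficulty=2.0),
    ]
    print(f"score: {vulnerability_detection_score(suite):.2f}")
```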
Staying Ahead of Emerging AI Threats
Irregular isn’t focused only on current vulnerabilities; the company aims to detect and address risks before they become real-world issues. It leverages sophisticated simulated environments in which AI models are tested in the roles of both attacker and defender. "We have complex network simulations where we have AI both taking the role of attacker and defender. So when a new model comes out, we can see where the defenses hold up and where they don’t," explained co-founder Omer Nevo.
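In the spirit of the evaluations Nevo describes, such a test can be thought of as a red-team/blue-team episode over a simulated network. The minimal sketch below is a toy stand-in: the service list, the random attacker policy, and the patch-based defense are all invented for illustration; Irregular's actual environments are far more complex, with models driving both sides.

```python
# Toy sketch of an attacker-vs-defender episode in a simulated network.
# The topology, action space, and policies below are illustrative
# assumptions, not Irregular's actual simulation design.
import random

SERVICES = ["web", "db", "auth", "cache"]


def attacker_move(compromised: set[str]) -> str:
    """Pick an uncompromised service to probe (random stand-in for a
    model-chosen attack action)."""
    targets = [s for s in SERVICES if s not in compromised]
    return random.choice(targets)


def defender_blocks(target: str, patched: set[str]) -> bool:
    """Defense holds if the targeted service was patched (stand-in for a
    model-chosen defensive policy)."""
    return target in patched


def run_episode(patched: set[str], steps: int = 10) -> set[str]:
    """Run one attack episode and return the services that fell."""
    compromised: set[str] = set()
    for _ in range(steps):
        if len(compromised) == len(SERVICES):
            break
        target = attacker_move(compromised)
        if not defender_blocks(target, patched):
            compromised.add(target)
    return compromised


if __name__ == "__main__":
    fallen = run_episode(patched={"web", "auth"})
    print(f"defenses breached at: {sorted(fallen)}")
```

Running many such episodes against a new model on each side would show, as Nevo puts it, where the defenses hold up and where they don't.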
Industry Context: Security Top Priority for AI Innovators
The AI industry is paying close attention to security as models become more powerful and versatile. Recent examples include OpenAI’s overhaul of its internal security protocols to guard against corporate espionage, and the growing use of AI models to uncover software vulnerabilities, a capability that benefits defenders and attackers alike.
- Security evaluations now play a crucial role in AI model releases.
- AI models are increasingly adept at identifying software bugs and weaknesses.
- Protecting AI systems requires continuous innovation and vigilance.
What’s Next for Irregular?
With this new funding, Irregular plans to expand its capabilities to stay ahead of ever-evolving AI threats. "If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models," said Lahav. "But it’s a moving target, so inherently there’s much, much, much more work to do in the future."
References
- TechCrunch: Irregular raises $80 million to secure frontier AI models
- Irregular: Claude 3.7 Sonnet Security Evaluation
- Irregular: OpenAI's o3 and o4-mini Security Evaluation
- Irregular: Introducing SOLVE
- TechCrunch: OpenAI tightens the screws on security
- TechCrunch: AI slop and fake reports are exhausting some security bug bounties