Israeli startup Irregular has raised $80 million in funding to protect cutting-edge artificial intelligence models from misuse and cyberattacks, a notable bet on the fast-growing field of AI security.
Announced on September 17, 2025, the round was led by Sequoia Capital and joined by Redpoint Ventures and Wiz co-founder Assaf Rappaport, among others. It values the company at $450 million, signaling strong industry confidence in its mission.
The Rise of Frontier AI and Emerging Security Challenges
Frontier AI models, such as OpenAI’s ChatGPT and Anthropic’s Claude, sit at the leading edge of AI capability, performing complex reasoning and powering transformative applications across industries.
However, those same capabilities create unique risks: cybercriminals could exploit the models for malicious hacking or disinformation campaigns, a concern Irregular aims to address through rigorous security testing.
Irregular’s Unique Role in AI Safety
Founded by Dan Lahav and Omer Nevo, Irregular positions itself as the world’s first frontier AI security lab, partnering with leading AI developers to simulate potential threats and fortify model defenses.
The startup’s work is critical at a time when governments and corporations alike are grappling with the ethical and safety implications of deploying powerful AI systems at scale.
Historical Context: The Growing Need for AI Security
AI’s rapid advancement has often outpaced the development of corresponding safety measures, as seen in data breaches and misuse of machine learning tools throughout the 2010s.
Irregular’s emergence reflects a broader industry shift toward prioritizing security alongside innovation, a trend underscored by increasing regulatory scrutiny in regions like the European Union and the United States.
Impact on the AI Ecosystem and Beyond
The implications of Irregular’s work extend beyond tech labs, potentially shaping public trust in AI by ensuring these technologies are safe for widespread use in sectors like healthcare, finance, and defense.
With government clients also on board, the startup’s efforts could influence national security policies, especially as AI becomes integral to critical infrastructure.
Looking Ahead: The Future of AI Security
Irregular’s $80 million funding round is likely just the beginning, as demand for robust AI security solutions is expected to grow sharply alongside the proliferation of frontier models.
As the company scales, its ability to set industry standards for AI safety could redefine how developers and policymakers approach the balance between innovation and risk in the years to come.