Log In

Snyk Acquires Invariant Labs to Accelerate Agentic AI

Published 17 hours ago3 minute read

BOSTON, June 24, 2025 (GLOBE NEWSWIRE) -- Snyk, the leader in secure AI software development, today announced the acquisition of Invariant Labs, a globally recognized AI security research firm and early pioneer in developing safeguards against emerging AI threats.

“This acquisition is an important integration into Snyk’s recently launched AI Trust Platform that adds the ability to secure applications from emergent threats,” said . “Snyk can now offer customers a single platform to address both current application and agentic AI vulnerabilities.”

The Invariant Labs acquisition will also provide a major advancement for Snyk Labs, the company’s new research arm focused on advancing AI security delivered through its AI Trust Platform. It brings a talented team of preeminent researchers, a proven track record of industry-first intelligence on agentic attack vectors, MCP vulnerabilities, tool poisoning and runtime detection techniques that are now shaping security standards across the industry.


Snyk believes this extension of its single platform approach to software development security significantly advances the ability of its customers to secure the next generation of AI-native and agentic applications, including large language models (LLMs) and autonomous agents.

This acquisition also demonstrates Snyk’s dedication to support security teams as they face urgent and unfamiliar risks in AI-native software which is rapidly becoming the new software development default. For example, unauthorized data exfiltration to AI agents executing unintended actions, and threats like MCP vulnerabilities are already appearing in production. Invariant Labs is at the forefront of research on these evolving risks, discovering and naming attack terminology such as “tool poisoning” and “MCP rug pulls.”

“With Invariant Labs, we’re accelerating our ability to identify, prioritize, and neutralize the next generation of Agentic AI threats before they reach production,” said “This acquisition also underscores Snyk’s proactive commitment to supporting security teams navigating the urgent and unfamiliar risks of AI-native software, which is rapidly becoming the new software development default.”


Invariant Labs has built Guardrails, a transparent security layer at the LLM and agent level that allows agent builders and software engineers to augment existing AI systems with strong security safeguards. The company’s unique methods take into account contextual information, static scans of agent tools and implementations, runtime information, human annotations, and incident databases. With Invariant Labs, developers can inspect and observe agent behavior, enforce contextual security rules on agent systems, and scan MCP servers for vulnerabilities.

“We’ve spent years researching and building the frameworks necessary to secure the AI-native future,” said “We must understand that agent-based AI systems are a powerful new class of software, especially autonomous ones, and demand greater oversight and stronger security guarantees than traditional approaches. We’re excited to join the Snyk team, as this mindset is deeply aligned with their mission.”

To learn more about Snyk’s AI Trust Platform, visit here. To explore Invariant Labs’ research and tools, visit here.


Snyk, the leader in secure AI software development, empowers organizations to build fast and stay secure by unleashing developer productivity and reducing business risk. The company’s AI Trust Platform seamlessly integrates into developer and security workflows to accelerate secure software delivery in the AI Era. Snyk delivers trusted, actionable insights and automated remediation, enabling modern organizations to innovate without limits. Snyk is redefining secure AI-driven software delivery for over 4,500 customers worldwide today.


Invariant Labs is a security research lab dedicated to building robust, reliable, and secure AI agents. As an ETH Zurich Spin-off and ETH AI Center Affiliated Startup, which focuses on securing and safeguarding AI Applications. It is led by Marc Fischer, Luca Beurer-Kellner, Prof. Martin Vechev, and Prof. Florian Tramèr.

Media Contact
Cait Mattingly
[email protected]

Origin:
publisher logo
Snyk
Loading...
Loading...
Loading...

You may also like...