Kaspersky has raised the alarm that a widely adopted open-source AI connector can be weaponised by cybercriminals, opening a dangerous new frontier for supply chain attacks.
In recent research, Kaspersky revealed that the Model Context Protocol (MCP), an open-source integration standard for AI systems developed by Anthropic in 2024, could allow attackers to steal sensitive data or execute malicious code if misused.
“Supply chain attacks remain one of the most pressing threats in the cybersecurity space, and the potential weaponisation of MCP we demonstrated follows this trend. With the current hype around AI and the race to integrate these tools, businesses may lower their guard and adopt a seemingly legitimate but unverified MCP. This mistake could lead to catastrophic data leaks,” said Mohamed Ghobashy, incident response specialist at Kaspersky GERT.
The findings underscore growing risks tied to the rapid adoption of AI tools in enterprise workflows.
MCP is designed to let large language models (LLMs) seamlessly access external tools and services, from code repositories and customer relationship management (CRM) systems to cloud and financial data. But according to Kaspersky’s Global Emergency Response Team (GERT), this openness is also its weakness.
In a controlled lab simulation, Kaspersky's team set up a rogue MCP server that mimicked a legitimate integration point. The test showed how unsuspecting developers could have browser passwords, credit card details, cryptocurrency wallet files, API tokens, and cloud configurations silently siphoned from their machines, all while the victim sees only the expected outputs.
While Kaspersky stressed that no active exploitation has yet been observed, the company warned that attackers could extend such abuse to install backdoors, deploy ransomware, or trigger malicious code execution through AI-integrated environments.
Kaspersky’s proof-of-concept used Cursor, an AI coding assistant, as the client, but researchers caution that any LLM-based app could be similarly vulnerable. Both Cursor and Anthropic have been notified of the findings.
To mitigate risks, Kaspersky advised organisations to strictly vet MCP servers before use, run them in isolated environments, monitor logs for anomalies such as unusual data flows or SQL commands, and maintain a whitelist of approved connectors.
The firm also recommended businesses consider managed security services, such as Kaspersky Managed Detection and Response (MDR), to bolster defences against evolving AI-enabled cyberthreats.
