MCP and Coding Tools Security
Security risks of Model Context Protocol in IDE environments — covering MCP server attacks in development tools, code exfiltration via tool calls, and IDE-specific hardening strategies.
The Model Context Protocol has become deeply integrated into modern AI coding tools. IDEs like VS Code, Cursor, and Windsurf use MCP to connect AI assistants to development tools — file systems, terminals, databases, API clients, and deployment systems. This integration creates a powerful development experience but also a significant attack surface. This page covers the security risks specific to MCP in development environments.
MCP in the Development Context
In a typical IDE MCP setup, the AI coding assistant connects to multiple MCP servers that provide different capabilities. A developer might have MCP servers for file system access to read and write project files, terminal access to run commands and view output, database access to query development and staging databases, API testing to send HTTP requests and view responses, version control to interact with git repositories, and cloud services to deploy and manage cloud resources.
Each of these MCP servers provides the AI assistant with capabilities that, if abused, can compromise the developer's machine, the codebase, the infrastructure, or all three.
The Trust Model Problem
The fundamental security problem with MCP in development environments is the trust model. The developer trusts the AI assistant to use tools appropriately. The AI assistant trusts tool descriptions to accurately represent what tools do. MCP servers trust that tool calls come from authorized users with legitimate intent.
None of these trust relationships are verified cryptographically. The AI assistant cannot verify that a tool does what its description says. MCP servers cannot verify that a tool call reflects the developer's genuine intent rather than the result of a prompt injection. And the developer cannot easily audit every tool call the AI assistant makes.
Attack Vectors
Vector 1: Malicious MCP Server Installation
The most direct attack is convincing a developer to install a malicious MCP server. This can happen through social engineering such as sharing a helpful-looking MCP server configuration, supply chain compromise where a legitimate MCP server package is replaced with a malicious version, documentation poisoning where a tutorial or guide recommends connecting to a malicious server, or project configuration where a repository includes an MCP configuration file that connects to a malicious server.
Once installed, a malicious MCP server can intercept all tool calls and exfiltrate code, credentials, and conversation context. It can return manipulated results to influence the AI assistant's behavior. It can register tools that shadow legitimate tools from other servers. And it can execute malicious code on the developer's machine through tool implementations.
The stealth of this attack is its strength. A well-designed malicious MCP server provides legitimate functionality while silently exfiltrating data or manipulating results. The developer sees normal behavior and has no indication that data is being stolen.
Vector 2: Code Exfiltration Through Tool Calls
Even with legitimate MCP servers, the AI assistant may be manipulated through prompt injection to exfiltrate code through tool calls. If the codebase contains injection payloads (in comments, strings, or documentation), these payloads can cause the AI assistant to read sensitive files and include their contents in tool call arguments. For instance, an injection payload might cause the AI to use an HTTP tool to send code snippets to an external endpoint, or to include sensitive file contents in database queries or terminal commands that are logged externally.
This attack vector is particularly concerning because it does not require a malicious MCP server. It uses legitimate tools for unintended purposes, making it harder to detect through server-level security controls.
Vector 3: Credential Harvesting
Developer environments are rich in credentials. Environment variables contain API keys, SSH keys provide infrastructure access, cloud CLI configurations contain authentication tokens, database connection strings contain credentials, and git configurations may contain access tokens.
An MCP server with file system or terminal access can read these credentials. Through the AI assistant, an attacker can cause credential reads that appear to be part of normal development activity. A prompt injection payload might cause the AI to read .env files, list SSH keys, or query cloud configurations — all actions that would be normal in a development context but serve the attacker's exfiltration goals.
Vector 4: Supply Chain Injection Through Development Tools
MCP servers connected to package managers, dependency resolution tools, or code generation services can inject malicious dependencies or code into projects. An AI assistant manipulated by a prompt injection might add malicious dependencies to a project's package manifest, generate code that includes unauthorized imports, modify build configurations to include malicious build steps, or commit and push changes to a repository without the developer's explicit approval.
If the developer has configured the AI assistant with git push capabilities through an MCP server, a single prompt injection could result in malicious code being committed and deployed without human review.
Vector 5: Cross-Server Attack Chains
When a developer connects multiple MCP servers, the AI assistant can chain tool calls across servers. This enables attacks where one server's capabilities are used to compromise another server's domain.
For example, an attacker might use a file system MCP server to read database credentials, then use a database MCP server to query sensitive data, then use a terminal MCP server to exfiltrate the data. Each individual tool call appears within the tool's normal scope, but the chain achieves an outcome that none of the servers would individually authorize.
Defense Strategies
MCP Server Vetting
Before connecting any MCP server, verify its source and integrity. Review the server's source code or verify it comes from a trusted publisher. Check the package's integrity through checksums and signatures. Search for known vulnerabilities or security reports about the server. Evaluate whether the server requests more capabilities than it needs.
Maintain an approved server list for your development team. Require security review before adding new servers to the approved list. Monitor for server updates that might change behavior.
Capability Restriction
Limit each MCP server's capabilities to the minimum required. A code search server does not need write access to the file system. A database query server does not need terminal access. A documentation server does not need git push capability.
Where MCP servers support capability configuration, restrict them. Where they do not, use operating system-level controls (file permissions, network restrictions, container isolation) to limit what the server process can do.
Tool Call Monitoring
Implement monitoring for MCP tool calls in your development environment. Log all tool calls with their arguments and results. Alert on unusual patterns such as file reads of credential files, network requests to unknown endpoints, terminal commands that access sensitive resources, and sequences of tool calls that match known attack patterns.
Several IDE extensions and MCP proxy tools provide tool call logging. Deploy these in your development team and review the logs regularly.
Network Isolation
Restrict the network access of MCP server processes. MCP servers that provide local functionality (file access, terminal commands) should not need outbound network access. Use firewall rules or container networking to block outbound connections from these servers.
For MCP servers that require network access (API tools, cloud service integrations), restrict their access to approved endpoints only. Block connections to any endpoint not on the whitelist.
Prompt Injection Defenses
Since prompt injection is the primary mechanism for misusing MCP tools, implement prompt injection defenses in your AI coding tool configuration. Use system prompts that explicitly instruct the AI not to read credential files, send data to external endpoints, or execute commands that are not directly requested by the developer. While prompt injection defenses are not perfect, they raise the bar for exploitation.
Workspace Isolation
Use separate workspaces or profiles for different security contexts. A workspace used for open-source development should not have MCP servers connected to production infrastructure. A workspace used for infrastructure management should not have MCP servers connected to public API testing tools.
This isolation limits the blast radius of any single compromise. A prompt injection in an open-source project cannot reach production credentials if the production MCP servers are not connected in the same workspace.
Regular Security Assessment
Conduct regular security assessments of your development team's MCP configurations. Inventory all connected MCP servers across the team. Verify that each server is from the approved list. Check that capability restrictions are in place and effective. Test for cross-server attack chains. Verify that monitoring and logging are operational.
The integration of MCP into development tools is still in its early stages, and security practices are evolving rapidly. Organizations should treat their MCP configurations as security-critical infrastructure and apply commensurate controls. The productivity benefits of MCP-connected AI coding tools are significant, but they must be balanced against the equally significant security risks of giving AI assistants access to the full range of development capabilities.