logo

Claude Code Vulnerabilities Expose Remote Code Execution and API Key Theft Risks

April 15, 2026 Newsletter

image

Claude Code Vulnerabilities Expose Remote Code Execution and API Key Theft Risks

New research shows how malicious Claude Code project files could trigger remote code execution and leak Anthropic API keys. Here is what the Claude Code vulnerabilities mean for AI coding tool security, software supply chain risk, and enterprise defense.

Security teams should pay close attention to the latest research into Anthropic’s Claude Code. According to reporting from The Register and Check Point Research, three flaws in Claude Code’s collaboration and project-configuration model could let an attacker execute code on a developer’s machine or steal Anthropic API keys simply by planting malicious settings in a repository and waiting for someone to clone and open it. Anthropic patched the reported issues before public disclosure, but the broader lesson remains: AI coding assistants are creating a new software supply chain attack surface inside the development workflow itself.

 

This matters because the attack path is not a traditional exploit chain. The risk came from repository-controlled configuration files that developers may treat as routine project metadata. In this case, Check Point found that Claude Code’s project settings could be manipulated so code executed automatically or sensitive credentials were exposed before a user fully understood the trust implications of opening an untrusted project. The Register specifically notes that the issue turns configuration files into a meaningful new attack surface as enterprises adopt AI coding tools more broadly.

 

What Happened in Claude Code

 

Check Point described three distinct abuse paths:

 

1) Hooks-based remote code execution

Claude Code supports project-level settings in .claude/settings.json. Check Point found that the Hooks feature, which runs user-defined shell commands at certain lifecycle events, could be weaponized because those hook definitions lived in the repository itself. That meant a malicious contributor could embed commands that ran on another developer’s machine when the project was opened.

 

2) MCP consent bypass leading to code execution

Claude Code also supports Model Context Protocol (MCP) server definitions via .mcp.json. After Anthropic improved warnings following the earlier hook issue, Check Point found a separate path using repository-controlled settings that could auto-approve MCP servers and cause commands to run before the user could meaningfully review the trust prompt. The researchers said this could again lead to full remote code execution.

 

3) API key theft through redirected API traffic

The third issue involved the ANTHROPIC_BASE_URL setting. Check Point showed that an attacker-controlled repository could override this value, redirect Claude Code’s outbound API traffic, and capture the user’s Anthropic API key before trust was confirmed. The researchers also showed that because Claude Workspaces can share files across API keys in the same workspace, a stolen key could create a wider blast radius than simple billing abuse.

 

NVD identifies two of the patched issues as CVE-2025-59536 and CVE-2026-21852. NVD says CVE-2025-59536 affected Claude Code versions before 1.0.111 and could allow code execution before the startup trust dialog was accepted. NVD says CVE-2026-21852 affected versions before 2.0.65 and could allow malicious repositories to exfiltrate data, including Anthropic API keys, before the user confirmed trust.

 

Why This Is Bigger Than One Vendor

 

From an InfoSight perspective, this is not just a Claude Code story. It is a warning about how quickly trust boundaries are shifting in AI-assisted development.

 

Historically, defenders focused on source code, dependencies, CI/CD pipelines, build agents, and developer endpoints. Now there is another layer to govern: AI tool configuration files that can influence command execution, external tool access, network destinations, and secret handling.

 

That changes the enterprise risk model in three important ways:

 

AI tool settings are now part of the software supply chain

If repository-controlled settings can trigger behavior on a developer workstation, then those settings must be treated with the same scrutiny as scripts, pipeline definitions, and infrastructure-as-code. They are no longer “just config.” They are executable trust decisions.

 

Developer productivity tools can become initial access paths

The attacker in this scenario did not need to phish a password or exploit a browser. They only needed a developer to interact with a repository containing malicious project files. That makes AI coding assistants a realistic entry point for endpoint compromise and downstream lateral movement.

 

Secret exposure risk expands fast in shared AI workflows

Once an API key is exposed, the issue is no longer isolated to one user session. In shared-workspace designs, stolen credentials can grant access to project artifacts, shared contexts, and other development assets. That turns one exposed key into a potential confidentiality and integrity event, not just a cost overrun.

 

What Security Leaders Should Do Now

Organizations using Claude Code or any AI coding assistant should respond with a control-first mindset:

 

1) Treat AI configuration files as high-risk code artifacts

Review and monitor files such as .claude/settings.json and .mcp.json the same way you review scripts, CI pipeline files, and privileged automation. Block unreviewed changes to AI-related config in shared repositories.

 

2) Restrict AI coding tools in untrusted repositories

Developers should not run AI agents with broad local permissions inside unvetted or third-party codebases. Enforce safer defaults for external repositories, proof-of-concept code, and community projects.

 

3) Sandbox developer tooling where possible

Use endpoint controls, containerized dev environments, and least-privilege workstation policies to reduce the impact if a malicious repo attempts local command execution.

 

4) Lock down secrets and rotate exposed keys fast

Store API keys in secure secret-management workflows, reduce key scope, and monitor for unusual API destinations or sudden spikes in usage. If exposure is suspected, rotate immediately.

 

5) Expand code review and detection logic

Update secure development standards to explicitly include AI assistant settings, AI plugin/server definitions, and tool-specific trust prompts. Detection engineering should watch for abnormal outbound API traffic, unauthorized config changes, and suspicious shell execution originating from developer tools.

 

The Bottom Line

The Claude Code vulnerabilities are a clear example of where AI developer tool security is heading. Even though Anthropic patched the reported issues, the underlying lesson is more important than the patch itself: AI-enabled development tools can quietly introduce execution paths and credential exposure risks that traditional AppSec and endpoint controls may not fully account for.

 

For InfoSight, the takeaway is straightforward. Organizations need to expand software supply chain security beyond packages and pipelines and into the AI tooling layer. That means stronger governance over developer environments, tighter control over repository-borne configuration, better visibility into secret use, and a security program built to handle the operational reality of AI-assisted coding.

 

In this new environment, productivity gains are real. So is the attack surface.

Stay ahead of evolving threats with expert insights

Subscribe to our newsletter to keep you updated on the latest cybersecurity insights & resources.

One follow-up from a security expert—no spam, ever.