Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program.

The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users’ computers with their privileges.

“Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: ‘folderOpen’ auto-execute the moment a developer browses a project,” Oasis Security said in an analysis. “A malicious .vscode/tasks.json turns a casual ‘open folder’ into silent code execution in the user’s context.”

Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it.

With this option disabled, an attacker can make available a project in GitHub (or any platform) and include a hidden “autorun” instruction that instructs the IDE to execute a task as soon as a folder is opened, causing malicious code to be executed when the victim attempts to browse the booby-trapped repository in Cursor.

“This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks,” Oasis Security researcher Erez Schwartz said.

To counter this threat, users are advised to enable Workplace Trust in Cursor, open untrusted repositories in a different code editor, and audit them before opening them in the tool.

Audit and Beyond

The development comes as prompt injections and jailbreaks have emerged as a stealthy and systemic threat plaguing AI-powered coding and reasoning agents like Claude Code, Cline, K2 Think, and Windsurf, allowing threat actors to embed malicious instructions in sneaky ways to trick the systems into performing malicious actions or leaking data from software development environments.

Software supply chain security outfit Checkmarx, in a report last week, revealed how Anthropic’s newly introduced automated security reviews in Claude Code could inadvertently expose projects to security risks, including instructing it to ignore vulnerable code through prompt injections, causing developers to push malicious or insecure code past security reviews.

“In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe,” the company said. “The end result: a developer – whether malicious or just trying to shut Claude up – can easily trick Claude into thinking a vulnerability is safe.”

Another problem is that the AI inspection process also generates and executes test cases, which could lead to a scenario where malicious code is run against production databases if Claude Code isn’t properly sandboxed.

The AI company, which also recently launched a new file creation and editing feature in Claude, has warned that the feature carries prompt injection risks due to it running in a “sandboxed computing environment with limited internet access.”

Specifically, it’s possible for a bad actor to “inconspicuously” add instructions via external files or websites – aka indirect prompt injection – that trick the chatbot into downloading and running untrusted code or reading sensitive data from a knowledge source connected via the Model Context Protocol (MCP).

“This means Claude can be tricked into sending information from its context (e.g., prompts, projects, data via MCP, Google integrations) to malicious third parties,” Anthropic said. “To mitigate these risks, we recommend you monitor Claude while using the feature and stop it if you see it using or accessing data unexpectedly.”

That’s not all. Late last month, the company also revealed browser-using AI models like Claude for Chrome can face prompt injection attacks, and that it has implemented several defenses to address the threat and reduce the attack success rate of 23.6% to 11.2%.

“New forms of prompt injection attacks are also constantly being developed by malicious actors,” it added. “By uncovering real-world examples of unsafe behavior and new attack patterns that aren’t present in controlled tests, we’ll teach our models to recognize the attacks and account for the related behaviors, and ensure that safety classifiers will pick up anything that the model itself misses.”

CIS Build Kits

At the same time, these tools have also been found susceptible to traditional security vulnerabilities, broadening the attack surface with potential real-world impact –

  • A WebSocket authentication bypass in Claude Code IDE extensions (CVE-2025-52882, CVSS score: 8.8) that could have allowed an attacker to connect to a victim’s unauthenticated local WebSocket server simply by luring them to visit a website under their control, enabling remote command execution
  • An SQL injection vulnerability in the Postgres MCP server that could have allowed an attacker to bypass the read-only restriction and execute arbitrary SQL statements
  • A path traversal vulnerability in Microsoft NLWeb that could have allowed a remote attacker to read sensitive files, including system configurations (“/etc/passwd”) and cloud credentials (.env files), using a specially crafted URL
  • An incorrect authorization vulnerability in Lovable (CVE-2025-48757, CVSS score: 9.3) that could have allowed remote unauthenticated attackers to read or write to arbitrary database tables of generated sites
  • Open redirect, stored cross-site scripting (XSS), and sensitive data leakage vulnerabilities in Base44 that could have allowed attackers to access the victim’s apps and development workspace, harvest API keys, inject malicious logic into user-generated applications, and exfiltrate data
  • A vulnerability in Ollama Desktop arising as a result of incomplete cross-origin controls that could have allowed an attacker to stage a drive-by attack, where visiting a malicious website can reconfigure the application’s settings to intercept chats and even alter responses using poisoned models

“As AI-driven development accelerates, the most pressing threats are often not exotic AI attacks but failures in classical security controls,” Imperva said. “To protect the growing ecosystem of ‘vibe coding’ platforms, security must be treated as a foundation, not an afterthought.”


Source: thehackernews.com…

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *