Author: Mark

  • Iranian-Backed Pay2Key Ransomware Resurfaces with 80% Profit Share for Cybercriminals

    An Iranian-backed ransomware-as-a-service (RaaS) named Pay2Key has resurfaced in the wake of the Israel-Iran-U.S. conflict last month, offering bigger payouts to cybercriminals who launch attacks against Israel and the U.S.

    The financially motivated scheme, now operating under the moniker Pay2Key.I2P, is assessed to be linked to a hacking group tracked as Fox Kitten (aka Lemon Sandstorm).

    “Linked to the notorious Fox Kitten APT group and closely tied to the well-known Mimic ransomware, […] Pay2Key.I2P appears to partner with or incorporate Mimic’s capabilities,” Morphisec security researcher Ilia Kulmin said.

    “Officially, the group offers an 80% profit share (up from 70%) to affiliates supporting Iran or participating in attacks against the enemies of Iran, signaling their ideological commitment.”

    Last year, the U.S. government revealed the advanced persistent threat’s (APT) modus operandi of carrying out ransomware attacks by covertly partnering with NoEscape, RansomHouse, and BlackCat (aka ALPHV) crews.

    The use of Pay2Key by Iranian threat actors goes back to October 2020, with the attacks targeting Israeli companies by exploiting known security vulnerabilities.

    Pay2Key.I2P, per Morphisec, emerged on the scene in February 2025, claiming over 51 successful ransom payouts in four months, netting it more than $4 million in ransom payments and $100,000 in profits for individual operators.

    While the group's financial motives are apparent and evidently effective, there is also an underlying ideological agenda: the campaign appears to be a case of cyber warfare waged against targets in Israel and the U.S.

    A notable aspect of the latest variant of Pay2Key.I2P is that it’s the first known RaaS platform to be hosted on the Invisible Internet Project (I2P).

    “While some malware families have used I2P for [command-and-control] communication, this is a step further – a Ransomware-as-a-Service operation running its infrastructure directly on I2P,” Swiss cybersecurity company PRODAFT said in a post shared on X in March 2025. The post was subsequently reposted by Pay2Key.I2P’s own X account.

    What’s more, Pay2Key.I2P has been observed posting on a Russian darknet forum, offering anyone the chance to deploy the ransomware binary in exchange for a $20,000 payout per successful attack, marking a shift in RaaS operations. The post was made by a user named “Isreactive” on February 20, 2025.

    “Unlike traditional Ransomware-as-a-Service (RaaS) models, where developers take a cut only from selling the ransomware, this model allows them to capture the full ransom from successful attacks, only sharing a portion with the attackers who deploy it,” Kulmin noted at the time.

    “This shift moves away from a simple tool-sale model, creating a more decentralized ecosystem, where ransomware developers earn from attack success rather than just from selling the tool.”

    As of June 2025, the ransomware builder includes an option to target Linux systems, indicating that the threat actors are actively refining and improving the locker’s functionality. The Windows counterpart, on the other hand, is delivered as a Windows executable within a self-extracting (SFX) archive.

    It also incorporates various evasion techniques that allow it to run unimpeded, disabling Microsoft Defender Antivirus and deleting malicious artifacts deployed as part of the attack to minimize the forensic trail.
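    As a basic triage step on Windows endpoints, defenders can verify that real-time protection has not been silently switched off. The Python sketch below shells out to the built-in Get-MpComputerStatus PowerShell cmdlet; it is a minimal illustration of the idea, not part of Morphisec's tooling, and assumes a host where Defender and PowerShell are available.

    ```python
    # Minimal triage sketch (assumption: Windows host with PowerShell and the
    # Defender module available). It asks Get-MpComputerStatus whether real-time
    # protection is still enabled -- ransomware such as Pay2Key is reported to
    # switch it off before encryption.
    import json
    import subprocess

    def defender_realtime_enabled() -> bool:
        """Return True if Microsoft Defender reports real-time protection as enabled."""
        cmd = [
            "powershell.exe", "-NoProfile", "-Command",
            "Get-MpComputerStatus | Select-Object RealTimeProtectionEnabled | ConvertTo-Json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return bool(json.loads(out).get("RealTimeProtectionEnabled", False))

    if __name__ == "__main__":
        if not defender_realtime_enabled():
            print("[!] Real-time protection is OFF -- investigate immediately.")
        else:
            print("[+] Real-time protection is enabled.")
    ```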

    “Pay2Key.I2P represents a dangerous convergence of Iranian state-sponsored cyber warfare and global cybercrime,” Morphisec said. “With ties to Fox Kitten and Mimic, an 80% profit incentive for Iran’s supporters, and over $4 million in ransoms, this RaaS operation threatens Western organizations with advanced, evasive ransomware.”

    The findings come as the U.S. cybersecurity and intelligence agencies have warned of retaliatory attacks by Iran after American airstrikes on three nuclear facilities in the country.

    Operational technology (OT) security company Nozomi Networks said it has observed Iranian hacking groups like MuddyWater, APT33, OilRig, Cyber Av3ngers, Fox Kitten, and Homeland Justice targeting transportation and manufacturing organizations in the U.S.

    “Industrial and critical infrastructure organizations in the U.S. and abroad are urged to be vigilant and review their security posture,” the company said, adding it detected 28 cyber attacks related to Iranian threat actors between May and June 2025.


    Source: thehackernews.com…

  • Critical Wing FTP Server Vulnerability (CVE-2025-47812) Actively Being Exploited in the Wild

    Jul 11, 2025 | Ravie Lakshmanan | Cyber Attack / Vulnerability

    A recently disclosed maximum-severity security flaw impacting the Wing FTP Server has come under active exploitation in the wild, according to Huntress.

    The vulnerability, tracked as CVE-2025-47812 (CVSS score: 10.0), is a case of improper handling of null ('\0') bytes in the server's web interface, which allows for remote code execution. It has been addressed in version 7.4.4.

    “The user and admin web interfaces mishandle '\0' bytes, ultimately allowing injection of arbitrary Lua code into user session files,” according to an advisory for the flaw on CVE.org. “This can be used to execute arbitrary system commands with the privileges of the FTP service (root or SYSTEM by default).”

    What makes it even more concerning is that the flaw can be exploited via anonymous FTP accounts. A comprehensive breakdown of the vulnerability entered the public domain towards the end of June 2025, courtesy of RCE Security researcher Julien Ahrens.

    Cybersecurity company Huntress said it observed threat actors exploiting the flaw to download and execute malicious Lua files, conduct reconnaissance, and install remote monitoring and management software.

    “CVE-2025-47812 stems from how null bytes are handled in the username parameter (specifically related to the loginok.html file, which handles the authentication process),” Huntress researchers said. “This can allow remote attackers to perform Lua injection after using the null byte in the username parameter.”

    “By taking advantage of the null-byte injection, the adversary disrupts the anticipated input in the Lua file which stores these session characteristics.”
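    A minimal hunting sketch along these lines is shown below. It assumes you have plain-text HTTP access logs from Wing FTP Server or a fronting reverse proxy, and that the username parameter appears in the logged request line; exploitation delivered in POST bodies may not show up this way, so treat it as one signal among several rather than definitive detection.

    ```python
    # Hedged hunting sketch: flag requests to loginok.html whose username parameter
    # carries a URL-encoded null byte (%00), the pattern described for CVE-2025-47812.
    # Assumptions: plain-text access logs with the query string recorded verbatim;
    # null bytes sent in POST bodies will not appear here.
    import re
    import sys

    SUSPICIOUS = re.compile(r'loginok\.html\?[^ "]*username=[^&" ]*%00', re.IGNORECASE)

    def scan(log_path: str) -> None:
        with open(log_path, "r", errors="replace") as fh:
            for lineno, line in enumerate(fh, 1):
                if SUSPICIOUS.search(line):
                    print(f"{log_path}:{lineno}: possible null-byte login attempt -> {line.strip()}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)
    ```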

    Evidence of active exploitation was first observed against a single customer on July 1, 2025, merely a day after details of the exploit were disclosed. Upon gaining access, the threat actors are said to have run enumeration and reconnaissance commands, created new users as a form of persistence, and dropped Lua files designed to deliver a ScreenConnect installer.

    There is no evidence that the remote desktop software was actually installed, as the attack was detected and stopped before it could progress any further. It’s currently not clear who is behind the activity.

    Data from Censys shows that there are 8,103 publicly-accessible devices running Wing FTP Server, out of which 5,004 have their web interface exposed. The majority of the instances are located in the U.S., China, Germany, the U.K., and India.

    In light of active exploitation, it's essential that users move quickly to apply the latest patches and update their Wing FTP Server installations to version 7.4.4 or later.


    Source: thehackernews.com…

  • Securing Data in the AI Era

    Jul 11, 2025 | The Hacker News | Data Security / Enterprise Security

    The 2025 Data Risk Report: Enterprises face potentially serious data loss risks from AI-fueled tools. Adopting a unified, AI-driven approach to data security can help.

    As businesses increasingly rely on cloud-driven platforms and AI-powered tools to accelerate digital transformation, the stakes for safeguarding sensitive enterprise data have reached unprecedented levels. The Zscaler ThreatLabz 2025 Data Risk Report reveals how evolving technology landscapes are amplifying vulnerabilities, highlighting the critical need for a proactive and unified approach to data protection.

    Drawing on insights from more than 1.2 billion blocked transactions recorded by the Zscaler Zero Trust Exchange between February and December 2024, this year’s report paints a clear picture of the data security challenges that enterprises face. From the rise of data leakage through generative AI tools to the undiminished risks stemming from email, SaaS applications, and file-sharing services, the findings are both eye-opening and urgent.

    The 2025 Data Risk Report sheds light on the multifaceted data security risks enterprises face in today’s digitally enabled world. Some of the most noteworthy trends include:

    • AI apps are a major data loss vector: AI tools like ChatGPT and Microsoft Copilot contributed to millions of data loss incidents in 2024, many of them involving Social Security numbers.
    • SaaS data loss is surging: Spanning 3,000+ SaaS apps, enterprises saw more than 872 million data loss violations.
    • Email remains a leading source of data loss: Nearly 104 million transactions leaked billions of instances of sensitive data.
    • File-sharing data loss spikes: Among the most popular file-sharing apps, 212 million transactions saw data loss incidents.

    There has never been a more critical time to rethink your enterprise’s approach to data security. The 2025 ThreatLabz Data Risk Report offers a comprehensive look at where risks lie, what drives them, and how organizations can respond effectively to secure their sensitive data in today’s rapidly evolving, AI-driven ecosystem.

    To learn more about Zscaler Zero Trust Architecture and Zero Trust + AI, visit zscaler.com/security


    Source: thehackernews.com…

  • CISA Adds Citrix NetScaler CVE-2025-5777 to KEV Catalog as Active Exploits Target Enterprises

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Thursday added a critical security flaw impacting Citrix NetScaler ADC and Gateway to its Known Exploited Vulnerabilities (KEV) catalog, officially confirming the vulnerability has been weaponized in the wild.

    The shortcoming in question is CVE-2025-5777 (CVSS score: 9.3), an instance of insufficient input validation that could be exploited by an attacker to bypass authentication when the appliance is configured as a Gateway or AAA virtual server. It’s also called Citrix Bleed 2 owing to its similarities with Citrix Bleed (CVE-2023-4966).

    “Citrix NetScaler ADC and Gateway contain an out-of-bounds read vulnerability due to insufficient input validation,” the agency said. “This vulnerability can lead to memory overread when the NetScaler is configured as a Gateway (VPN virtual server, ICA Proxy, CVPN, RDP Proxy) OR AAA virtual server.”

    Although multiple security vendors have since reported that the flaw has been exploited in real-world attacks, Citrix has yet to update its own advisories to reflect this aspect. As of June 26, 2025, Anil Shetty, senior vice president of engineering at NetScaler, said, “there is no evidence to suggest exploitation of CVE-2025-5777.”

    However, security researcher Kevin Beaumont, in a report published this week, said the Citrix Bleed 2 exploitation started as far back as mid-June, adding one of the IP addresses carrying out the attacks has been previously linked to RansomHub ransomware activity.

    Data from GreyNoise shows that exploitation efforts are originating from 10 unique malicious IP addresses located in Bulgaria, the United States, China, Egypt, and Finland over the past 30 days. The primary targets of these efforts are the United States, France, Germany, India, and Italy.

    The addition of CVE-2025-5777 to the KEV catalog comes as another flaw in the same product (CVE-2025-6543, CVSS score: 9.2) has also come under active exploitation in the wild. CISA added the flaw to the KEV catalog on June 30, 2025.

    “The term ‘Citrix Bleed’ is used because the memory leak can be triggered repeatedly by sending the same payload, with each attempt leaking a new chunk of stack memory — effectively ‘bleeding’ sensitive information,” Akamai said, warning of a “drastic increase of vulnerability scanner traffic” after exploit details became public.

    “This flaw can have dire consequences, considering that the affected devices can be configured as VPNs, proxies, or AAA virtual servers. Session tokens and other sensitive data can be exposed — potentially enabling unauthorized access to internal applications, VPNs, data center networks, and internal networks.”

    Because these appliances often serve as centralized entry points into enterprise networks, attackers can pivot from stolen sessions to access single sign-on portals, cloud dashboards, or privileged admin interfaces. This type of lateral movement—where a foothold quickly becomes full network access—is especially dangerous in hybrid IT environments with weak internal segmentation.

    To mitigate this flaw, organizations should immediately upgrade to the patched builds listed in Citrix’s June 17 advisory, including version 14.1-43.56 and later. After patching, all active sessions—especially those authenticated via AAA or Gateway—should be forcibly terminated to invalidate any stolen tokens.

    Admins are also encouraged to inspect logs (e.g., ns.log) for suspicious requests to authentication endpoints such as /p/u/doAuthentication.do, and review responses for unexpected XML data like <InitialValue> fields. Since the vulnerability is a memory overread, it does not leave traditional malware traces—making token hijack and session replay the most urgent concerns.
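    The following Python sketch automates that kind of log review. It assumes ns.log and its rotated .gz copies have been collected off the appliance into a local directory; log formats vary across firmware builds, so any hits should be treated as leads for manual review rather than confirmation of compromise.

    ```python
    # Minimal sketch of the log review described above: walk ns.log and its rotated
    # .gz copies, flagging doAuthentication.do requests and any <InitialValue> strings.
    # Assumptions: logs have been copied to a local directory named below.
    import gzip
    import pathlib

    INDICATORS = ("/p/u/doAuthentication.do", "<InitialValue>")

    def iter_lines(path: pathlib.Path):
        opener = gzip.open if path.suffix == ".gz" else open
        with opener(path, "rt", errors="replace") as fh:
            yield from fh

    def hunt(log_dir: str) -> None:
        for path in sorted(pathlib.Path(log_dir).glob("ns.log*")):
            for lineno, line in enumerate(iter_lines(path), 1):
                if any(indicator in line for indicator in INDICATORS):
                    print(f"{path.name}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        hunt("./netscaler_logs")  # assumed local copy of the appliance's /var/log
    ```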

    The development also follows reports of active exploitation of a critical security vulnerability in OSGeo GeoServer GeoTools (CVE-2024-36401, CVSS score: 9.8) to deploy NetCat and the XMRig cryptocurrency miner in attacks targeting South Korea by means of PowerShell and shell scripts. CISA added the flaw to the KEV catalog in July 2024.

    “Threat actors are targeting environments with vulnerable GeoServer installations, including those of Windows and Linux, and have installed NetCat and XMRig coin miner,” AhnLab said.

    “When a coin miner is installed, it uses the system’s resources to mine the threat actor’s Monero coins. The threat actor can then use the installed NetCat to perform various malicious behaviors, such as installing other malware or stealing information from the system.”


    Source: thehackernews.com…

  • Critical mcp-remote Vulnerability Enables Remote Code Execution, Impacting 437,000+ Downloads

    Jul 10, 2025 | Ravie Lakshmanan | Vulnerability / AI Security

    Cybersecurity researchers have discovered a critical vulnerability in the open-source mcp-remote project that could result in the execution of arbitrary operating system (OS) commands.

    The vulnerability, tracked as CVE-2025-6514, carries a CVSS score of 9.6 out of 10.0.

    “The vulnerability allows attackers to trigger arbitrary OS command execution on the machine running mcp-remote when it initiates a connection to an untrusted MCP server, posing a significant risk to users – a full system compromise,” Or Peles, JFrog Vulnerability Research Team Leader, said.

    The mcp-remote tool emerged following Anthropic's release of the Model Context Protocol (MCP), an open-source framework that standardizes the way large language model (LLM) applications integrate and share data with external data sources and services.

    It acts as a local proxy, enabling MCP clients like Claude Desktop to communicate with remote MCP servers, as opposed to running them locally on the same machine as the LLM application. The npm package has been downloaded more than 437,000 times to date.

    The vulnerability affects mcp-remote versions from 0.0.5 to 0.1.15. It has been addressed in version 0.1.16 released on June 17, 2025. Anyone using mcp-remote that connects to an untrusted or insecure MCP server using an affected version is at risk.
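    Teams unsure whether an affected copy is installed can sweep their projects for it. The sketch below is a rough check under stated assumptions: it expects a standard npm node_modules layout and compares the declared package version against the affected range, so lockfile-only or bundled copies will not be seen.

    ```python
    # Quick dependency sweep (a sketch, not an official tool): walk node_modules and
    # flag any installed mcp-remote whose version falls in the affected 0.0.5-0.1.15 range.
    import json
    import pathlib

    AFFECTED_MIN, AFFECTED_MAX = (0, 0, 5), (0, 1, 15)

    def parse(version: str):
        # Best-effort semver parse; pre-release suffixes like "0.1.16-beta" are truncated.
        return tuple(int(part.split("-")[0]) for part in version.split(".")[:3])

    def check(project_root: str) -> None:
        for pkg in pathlib.Path(project_root).rglob("node_modules/mcp-remote/package.json"):
            version = json.loads(pkg.read_text()).get("version", "0.0.0")
            if AFFECTED_MIN <= parse(version) <= AFFECTED_MAX:
                print(f"[!] {pkg.parent}: mcp-remote {version} is affected -- upgrade to 0.1.16+")
            else:
                print(f"[+] {pkg.parent}: mcp-remote {version} looks patched")

    if __name__ == "__main__":
        check(".")
    ```

    Running "npm ls mcp-remote" inside each project gives a similar answer for lockfile-managed installs.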

    “While previously published research has demonstrated risks from MCP clients connecting to malicious MCP servers, this is the first time that full remote code execution is achieved in a real-world scenario on the client operating system when connecting to an untrusted remote MCP server,” Peles said.

    The shortcoming has to do with how a malicious MCP server operated by a threat actor could embed a command during the initial communication establishment and authorization phase, which, when processed by mcp-remote, causes it to be executed on the underlying operating system.

    While the issue leads to arbitrary OS command execution on Windows with full parameter control, it results in the execution of arbitrary executables with limited parameter control on macOS and Linux systems.

    To mitigate the risk posed by the flaw, users are advised to update the library to the latest version and only connect to trusted MCP servers over HTTPS.

    “While remote MCP servers are highly effective tools for expanding AI capabilities in managed environments, facilitating rapid iteration of code, and helping ensure more reliable delivery of software, MCP users need to be mindful of only connecting to trusted MCP servers using secure connection methods such as HTTPS,” Peles said.

    “Otherwise, vulnerabilities like CVE-2025-6514 are likely to hijack MCP clients in the ever-growing MCP ecosystem.”

    The disclosure comes after Oligo Security detailed a critical vulnerability in the MCP Inspector tool (CVE-2025-49596, CVSS score: 9.4) that could pave the way for remote code execution.

    Earlier this month, two other high-severity security defects were uncovered in Anthropic’s Filesystem MCP Server, which, if successfully exploited, could let attackers break out of the server’s sandbox, manipulate any file on the host, and achieve code execution.

    The two flaws, per Cymulate, are listed below –

    • CVE-2025-53110 (CVSS score: 7.3) – A directory containment bypass that makes it possible to access, read, or write outside of the approved directory (e.g., “/private/tmp/allowed_dir”) by using the allowed directory prefix on other directories (e.g., “/private/tmp/allow_dir_sensitive_credentials”), thereby opening the door to data theft and possible privilege escalation (see the illustrative sketch after this list)
    • CVE-2025-53109 (CVSS score: 8.4) – A symbolic link (aka symlink) bypass stemming from poor error handling that can be used to point to any file on the file system from within the allowed directory, allowing an attacker to read or alter critical files (e.g., “/etc/sudoers”) or drop malicious code, resulting in code execution by making use of Launch Agents, cron jobs, or other persistence techniques
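    To make the first issue concrete, the sketch below illustrates the general class of bug rather than the server's actual code: a naive string-prefix check accepts a sibling directory that merely shares the allowed prefix, whereas a real-path containment check rejects it. The sibling path used here is hypothetical and chosen purely for illustration.

    ```python
    # Illustrative sketch of the bug class behind CVE-2025-53110 (not the server's
    # actual code): string-prefix matching versus true directory containment.
    import os

    ALLOWED = "/private/tmp/allowed_dir"

    def naive_is_allowed(path: str) -> bool:
        # Buggy: a sibling directory that merely shares the prefix passes this check.
        return path.startswith(ALLOWED)

    def safer_is_allowed(path: str) -> bool:
        # Resolve symlinks and ".." first, then require genuine containment.
        real = os.path.realpath(path)
        base = os.path.realpath(ALLOWED)
        return os.path.commonpath([real, base]) == base

    if __name__ == "__main__":
        # Hypothetical sibling path that shares the allowed prefix.
        probe = "/private/tmp/allowed_dir_sensitive_credentials/secrets.txt"
        print("naive check lets it through:", naive_is_allowed(probe))   # True (the bug)
        print("containment check blocks it:", safer_is_allowed(probe))   # False
    ```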

    Both shortcomings impact all Filesystem MCP Server versions prior to 0.6.3 and 2025.7.1, which include the relevant fixes.

    “This vulnerability is a serious breach of the Filesystem MCP Server's security model,” security researcher Elad Beber said about CVE-2025-53110. “Attackers can gain unauthorized access by listing, reading or writing to directories outside the allowed scope, potentially exposing sensitive files like credentials or configurations.”

    “Worse, in setups where the server runs as a privileged user, this flaw could lead to privilege escalation, allowing attackers to manipulate critical system files and gain deeper control over the host system.”


    Source: thehackernews.com…

  • Fake Gaming and AI Firms Push Malware on Cryptocurrency Users via Telegram and Discord

    Jul 10, 2025 | Ravie Lakshmanan | Cryptocurrency / Cybercrime

    Cryptocurrency users are the target of an ongoing social engineering campaign that employs fake startup companies to trick users into downloading malware that can drain digital assets from both Windows and macOS systems.

    “These malicious operations impersonate AI, gaming, and Web3 firms using spoofed social media accounts and project documentation hosted on legitimate platforms like Notion and GitHub,” Darktrace researcher Tara Gould said in a report shared with The Hacker News.

    The elaborate social media scam has been running for some time now, with a previous iteration in December 2024 leveraging bogus videoconferencing platforms to dupe victims into joining a meeting under the pretext of discussing an investment opportunity after approaching them on messaging apps like Telegram.

    Users who ended up downloading the purported meeting software were stealthily infected by stealer malware such as Realst. The campaign was codenamed Meeten by Cado Security (which was acquired by Darktrace earlier this year) in reference to one of the phony videoconferencing services.

    That said, there are indications that the activity may have been ongoing since at least March 2024, when Jamf Threat Labs disclosed the use of a domain named “meethub[.]gg” to deliver Realst.

    The latest findings from Darktrace show that the campaign not only remains an active threat, but has also adopted a broader range of themes related to artificial intelligence, gaming, Web3, and social media.

    Furthermore, the attackers have been observed leveraging compromised X accounts associated with companies and employees, primarily those that are verified, to approach prospective targets and give their fake companies an illusion of legitimacy.

    “They make use of sites that are used frequently with software companies such as X, Medium, GitHub, and Notion,” Gould said. “Each company has a professional looking website that includes employees, product blogs, whitepapers and roadmaps.”

    One such non-existent company is Eternal Decay (@metaversedecay), which claims to be a blockchain-powered game and has shared digitally altered versions of legitimate pictures on X to give the impression that they are presenting at various conferences. The end goal is to build an online presence that makes these firms appear as real as possible and increases the likelihood of infection.

    Some of the other identified companies are listed below –

    • BeeSync (X accounts: @BeeSyncAI, @AIBeeSync)
    • Buzzu (X accounts: @BuzzuApp, @AI_Buzzu, @AppBuzzu, @BuzzuApp)
    • Cloudsign (X account: @cloudsignapp)
    • Dexis (X account: @DexisApp)
    • KlastAI (X account: Links to Pollens AI’s X account)
    • Lunelior
    • NexLoop (X account: @nexloopspace)
    • NexoraCore
    • NexVoo (X account: @Nexvoospace)
    • Pollens AI (X accounts: @pollensapp, @Pollens_app)
    • Slax (X accounts: @SlaxApp, @Slax_app, @slaxproject)
    • Solune (X account: @soluneapp)
    • Swox (X accounts: @SwoxApp, @Swox_AI, @swox_app, @App_Swox, @AppSwox, @SwoxProject, @ProjectSwox)
    • Wasper (X accounts: @wasperAI, @WasperSpace)
    • YondaAI (X account: @yondaspace)

    The attack chains begin when one of these adversary-controlled accounts messages a victim through X, Telegram, or Discord, urging them to test out their software in exchange for a cryptocurrency payment.

    Should the target agree to the test, they are redirected to a fictitious website, where they are prompted to enter a registration code provided by the employee in order to download either a Windows Electron application or an Apple disk image (DMG) file, depending on the operating system used.

    On Windows systems, opening the malicious application displays a Cloudflare verification screen to the victim while it covertly profiles the machine and proceeds to download and execute an MSI installer. Although the exact nature of the payload is unclear, it’s believed that an information stealer is run at this stage.

    The macOS version of the attack, on the other hand, leads to the deployment of the Atomic macOS Stealer (AMOS), a known infostealer malware that can siphon documents as well as data from web browsers and crypto wallets, and exfiltrate the details to an external server.

    The DMG binary is also equipped to fetch a shell script that’s responsible for setting up persistence on the system using a Launch Agent to ensure that the app starts automatically upon user login. The script also retrieves and runs an Objective-C/Swift binary that logs application usage and user interaction timestamps, and transmits them to a remote server.
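    Defenders can narrow the haystack by reviewing per-user Launch Agents for recently added entries, as in the hedged sketch below. The campaign's specific plist names are not published here, so this only surfaces candidates for manual review rather than confirming infection.

    ```python
    # Hedged triage sketch for the persistence technique described above: list the
    # current user's LaunchAgents and mark recently modified entries for review.
    import pathlib
    import plistlib
    import time

    def list_launch_agents(max_age_days: int = 30) -> None:
        agents_dir = pathlib.Path.home() / "Library" / "LaunchAgents"
        cutoff = time.time() - max_age_days * 86400
        for plist_path in sorted(agents_dir.glob("*.plist")):
            recent = plist_path.stat().st_mtime >= cutoff
            try:
                program = plistlib.loads(plist_path.read_bytes()).get("ProgramArguments", [])
            except Exception:
                program, recent = ["<unreadable plist>"], True
            flag = "[RECENT]" if recent else "        "
            print(f"{flag} {plist_path.name}: {' '.join(map(str, program))}")

    if __name__ == "__main__":
        list_launch_agents()
    ```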

    Darktrace also noted that the campaign shares tactical similarities with those orchestrated by a traffers group called Crazy Evil that’s known to dupe victims into installing malware such as StealC, AMOS, and Angel Drainer.

    “While it is unclear if the campaigns […] can be attributed to CrazyEvil or any sub teams, the techniques described are similar in nature,” Gould said. “This campaign highlights the efforts that threat actors will go to make these fake companies look legitimate in order to steal cryptocurrency from victims, in addition to the use of newer evasive versions of malware.”


    Source: thehackernews.com…

  • Four Arrested in £440M Cyber Attack on Marks & Spencer, Co-op, and Harrods

    Jul 10, 2025 | Ravie Lakshmanan | Cybercrime / Ransomware

    The U.K. National Crime Agency (NCA) on Thursday announced that four people have been arrested in connection with cyber attacks targeting major retailers Marks & Spencer, Co-op, and Harrods.

    The arrested individuals include two men aged 19, a third aged 17, and a 20-year-old woman. They were apprehended in the West Midlands and London on suspicion of Computer Misuse Act offenses, blackmail, money laundering, and participating in the activities of an organized crime group.

    All four suspects were arrested at their homes, and their electronic devices were seized for further forensic analysis. Their names were not disclosed.

    “Since these attacks took place, specialist NCA cybercrime investigators have been working at pace and the investigation remains one of the Agency’s highest priorities,” Deputy Director Paul Foster, head of the NCA’s National Cyber Crime Unit, said in a statement.

    “Today’s arrests are a significant step in that investigation but our work continues, alongside partners in the U.K. and overseas, to ensure those responsible are identified and brought to justice.”

    According to the Cyber Monitoring Centre (CMC), the April 2025 cyber attacks targeting Marks & Spencer and Co-op have been classified as a “single combined cyber event” with a financial impact of anywhere between £270 million ($363 million) and £440 million ($592 million).

    The NCA did not name the “organized crime group” the individuals are part of, but it’s believed that some of these attacks have been perpetrated by a decentralized cybercrime group called Scattered Spider, which is notorious for its advanced social engineering ploys to breach organizations and deploy ransomware.

    “While ransomware is an ever-present threat, Scattered Spider represents a persistent and capable adversary whose operations have been historically effective even against organizations with mature security programs,” Grayson North, Senior Security Consultant at GuidePoint Security, told The Hacker News.

    “The success of Scattered Spider is not exactly the result of any new or novel tactics, but rather their expertise in social engineering and willingness to be extremely persistent in attempting to gain initial access to their targets.”

    The majority of individuals associated with the financially driven group are young, native English speakers, which gives them an edge when attempting to gain the trust of their targets by making fake calls to IT help desks while posing as employees.

    Scattered Spider is part of The Com, a larger loose-knit collective that’s responsible for a wide range of crimes, including social engineering, phishing, SIM swapping, extortion, sextortion, swatting, kidnapping, and murder.

    “Scattered Spider demonstrates a calculated and opportunistic targeting strategy, rotating across industries and geographies based on visibility, payout potential, and operational heat,” Halcyon pointed out.

    Google-owned Mandiant said Scattered Spider has a habit of focusing on a single sector at a time, while keeping their core tactics, techniques, and procedures (TTPs) consistent. This includes setting up phishing domains that closely mimic legitimate corporate login portals and are designed to trick employees into revealing their credentials.

    “This means that organizations can take proactive steps like training their help desk staff to enforce robust identity verification processes and deploying phishing-resistant MFA to defend against these intrusions,” said Charles Carmakal, CTO, Mandiant Consulting at Google Cloud.
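    One way to operationalize these recommendations on the domain side is to screen newly registered domains for close mimics of your own login-portal branding. The sketch below is illustrative only (the feed file and brand terms are assumptions, and it is not a Mandiant or Google tool); it uses simple string similarity to flag candidates for analyst review.

    ```python
    # Illustrative lookalike-domain screen: compare the first label of each candidate
    # domain against your organization's portal names and flag close matches.
    import difflib

    BRAND_TERMS = ["examplecorp", "examplecorp-sso", "examplecorp-okta"]  # assumed names

    def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
        label = domain.split(".")[0].lower()
        return any(
            term in label or difflib.SequenceMatcher(None, label, term).ratio() >= threshold
            for term in BRAND_TERMS
        )

    if __name__ == "__main__":
        # Assumed export of a newly-registered-domain feed, one domain per line.
        with open("new_domains.txt") as fh:
            for domain in (line.strip() for line in fh if line.strip()):
                if is_lookalike(domain):
                    print(f"[!] possible credential-phishing lookalike: {domain}")
    ```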


    Source: thehackernews.com…

  • New ZuRu Malware Variant Targeting Developers via Trojanized Termius macOS App

    Jul 10, 2025 | Ravie Lakshmanan | Endpoint Security / Vulnerability

    Cybersecurity researchers have discovered new artifacts associated with an Apple macOS malware called ZuRu, which is known to propagate via trojanized versions of legitimate software.

    SentinelOne, in a new report shared with The Hacker News, said the malware has been observed masquerading as the cross‑platform SSH client and server‑management tool Termius in late May 2025.

    “ZuRu malware continues to prey on macOS users seeking legitimate business tools, adapting its loader and C2 techniques to backdoor its targets,” researchers Phil Stokes and Dinesh Devadoss said.

    ZuRu was first documented in September 2021 by a user on Chinese question-and-answer website Zhihu as part of a malicious campaign that hijacked searches for iTerm2, a legitimate macOS Terminal app, to direct users to fake sites that tricked unsuspecting users into downloading the malware.

    Then in January 2024, Jamf Threat Labs said it discovered a piece of malware distributed via pirated macOS apps that shared similarities with ZuRu. Some of the other popular software that has been trojanized to deliver the malware include Microsoft’s Remote Desktop for Mac, along with SecureCRT and Navicat.

    The fact that ZuRu primarily relies on sponsored web searches for distribution indicates the threat actors behind the malware are more opportunistic than targeted in their attacks, while also ensuring that only those looking for remote connections and database management are compromised.

    Like the samples detailed by Jamf, the newly discovered ZuRu artifacts employ a modified version of the open-source post-exploitation toolkit known as Khepri to enable attackers to gain remote control of infected hosts.

    “The malware is delivered via a .dmg disk image and contains a hacked version of the genuine Termius.app,” the researchers said. “Since the application bundle inside the disk image has been modified, the attackers have replaced the developer’s code signature with their own ad hoc signature in order to pass macOS code signing rules.”

    The altered app packs in two extra executables within Termius Helper.app, a loader named “.localized” that’s designed to download and launch a Khepri command-and-control (C2) beacon from an external server (“download.termius[.]info”) and “.Termius Helper1,” which is a renamed version of the actual Termius Helper app.

    “While the use of Khepri was seen in earlier versions of ZuRu, this means of trojanizing a legitimate application varies from the threat actor’s previous technique,” the researchers explained.

    “In older versions of ZuRu, the malware authors modified the main bundle’s executable by adding an additional load command referencing an external .dylib, with the dynamic library functioning as the loader for the Khepri backdoor and persistence modules.”

    Besides downloading the Khepri beacon, the loader is designed to set up persistence on the host. It also checks whether the malware is already present at a pre-defined path on the system (“/tmp/.fseventsd”) and, if so, compares the MD5 hash value of the payload against the one hosted on the server.

    A new version is subsequently downloaded if the hash values don’t match. It’s believed that the feature likely serves as an update mechanism to fetch new versions of the malware as they become available. But SentinelOne also theorized it could be a way to ensure that the payload has not been corrupted or modified after it was dropped.
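    A quick host-triage pass can check for the reported drop path and for the ad hoc code signature described earlier. The sketch below is a starting point under those assumptions; a hit warrants deeper investigation rather than automatic conviction, and a clean result does not prove absence.

    ```python
    # Host-triage sketch based on the indicators above: look for the reported loader
    # drop path and check whether the installed Termius app is only ad hoc signed.
    import pathlib
    import subprocess

    LOADER_PATH = pathlib.Path("/tmp/.fseventsd")
    APP_PATH = "/Applications/Termius.app"  # assumed install location

    def check_loader() -> None:
        if LOADER_PATH.exists():
            print(f"[!] Suspicious file present: {LOADER_PATH}")

    def check_signature(app_path: str = APP_PATH) -> None:
        # codesign prints signing details to stderr; "Signature=adhoc" indicates the
        # developer's signature has been replaced, as described for this ZuRu variant.
        result = subprocess.run(["codesign", "-dv", app_path],
                                capture_output=True, text=True)
        details = result.stderr + result.stdout
        if "Signature=adhoc" in details:
            print(f"[!] {app_path} is ad hoc signed -- inspect its bundle contents")

    if __name__ == "__main__":
        check_loader()
        check_signature()
    ```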

    The modified Khepri tool is a feature-packed C2 implant that allows file transfer, system reconnaissance, process execution and control, and command execution with output capture. The C2 server used to communicate with the beacon is “ctl01.termius[.]fun.”

    “The latest variant of macOS.ZuRu continues the threat actor’s pattern of trojanizing legitimate macOS applications used by developers and IT professionals,” the researchers said.

    “The shift in technique from Dylib injection to trojanizing an embedded helper application is likely an attempt to circumvent certain kinds of detection logic. Even so, the actor’s continued use of certain TTPs – from choice of target applications and domain name patterns to the reuse of file names, persistence and beaconing methods – suggest these are offering continued success in environments lacking sufficient endpoint protection.”


    Source: thehackernews.com…

  • AMD Warns of New Transient Scheduler Attacks Impacting a Wide Range of CPUs

    Jul 10, 2025 | Ravie Lakshmanan | Vulnerability / Hardware Security

    Semiconductor company AMD is warning of a new set of vulnerabilities affecting a broad range of chipsets that could lead to information disclosure.

    The attacks, called Transient Scheduler Attacks (TSA), manifest in the form of a speculative side channel in its CPUs that leverages the execution timing of instructions under specific microarchitectural conditions.

    “In some cases, an attacker may be able to use this timing information to infer data from other contexts, resulting in information leakage,” AMD said in an advisory.

    The company said the issues were uncovered as part of a study published by Microsoft and ETH Zurich researchers that tested modern CPUs against speculative execution attacks like Meltdown and Foreshadow by stress testing the isolation between security domains such as virtual machines, the kernel, and processes.

    Following responsible disclosure in June 2024, the issues have been assigned the below CVE identifiers –

    • CVE-2024-36350 (CVSS score: 5.6) – A transient execution vulnerability in some AMD processors may allow an attacker to infer data from previous stores, potentially resulting in the leakage of privileged information
    • CVE-2024-36357 (CVSS score: 5.6) – A transient execution vulnerability in some AMD processors may allow an attacker to infer data in the L1D cache, potentially resulting in the leakage of sensitive information across privileged boundaries
    • CVE-2024-36348 (CVSS score: 3.8) – A transient execution vulnerability in some AMD processors may allow a user process to infer the control registers speculatively even if the UMIP feature is enabled, potentially resulting in information leakage
    • CVE-2024-36349 (CVSS score: 3.8) – A transient execution vulnerability in some AMD processors may allow a user process to infer TSC_AUX even when such a read is disabled, potentially resulting in information leakage

    AMD has described TSA as a “new class of speculative side channels” affecting its CPUs, stating it has released microcode updates for impacted processors –

    • 3rd Gen AMD EPYC Processors
    • 4th Gen AMD EPYC Processors
    • AMD Instinct MI300A
    • AMD Ryzen 5000 Series Desktop Processors
    • AMD Ryzen 5000 Series Desktop Processors with Radeon Graphics
    • AMD Ryzen 7000 Series Desktop Processors
    • AMD Ryzen 8000 Series Processors with Radeon Graphics
    • AMD Ryzen Threadripper PRO 7000 WX-Series Processors
    • AMD Ryzen 6000 Series Processors with Radeon Graphics
    • AMD Ryzen 7035 Series Processors with Radeon Graphics
    • AMD Ryzen 5000 Series Processors with Radeon Graphics
    • AMD Ryzen 7000 Series Processors with Radeon Graphics
    • AMD Ryzen 7040 Series Processors with Radeon Graphics
    • AMD Ryzen 8040 Series Mobile Processors with Radeon Graphics
    • AMD Ryzen 7000 Series Mobile Processors
    • AMD EPYC Embedded 7003
    • AMD EPYC Embedded 8004
    • AMD EPYC Embedded 9004
    • AMD EPYC Embedded 97X4
    • AMD Ryzen Embedded 5000
    • AMD Ryzen Embedded 7000
    • AMD Ryzen Embedded V3000

    The company also noted that instructions that read data from memory may experience what’s referred to as “false completion,” which occurs when CPU hardware expects the load instructions to complete quickly, but there exists a condition that prevents it from happening –

    In this case, dependent operations may be scheduled for execution before the false completion is detected. As the load did not actually complete, data associated with that load is considered invalid. The load will be re-executed later in order to complete successfully, and any dependent operations will re-execute with the valid data when it is ready.

    Unlike other speculative behavior such as Predictive Store Forwarding, loads that experience a false completion do not result in an eventual pipeline flush. While the invalid data associated with a false completion may be forwarded to dependent operations, load and store instructions which consume this data will not attempt to fetch data or update any cache or TLB state. As such, the value of this invalid data cannot be inferred using standard transient side channel methods.

    In processors affected by TSA, the invalid data may however affect the timing of other instructions being executed by the CPU in a way that may be detectable by an attacker.

    The chipmaker said it has identified two variants of TSA, TSA-L1 and TSA-SQ, based on the source of the invalid data associated with a false completion: either the L1 data cache or the CPU store queue.

    In a worst-case scenario, successful attacks carried out using TSA-L1 or TSA-SQ flaws could lead to information leakage from the operating system kernel to a user application, from a hypervisor to a guest virtual machine, or between two user applications.

    While TSA-L1 is caused by an error in the way the L1 cache uses microtags for data-cache lookups, TSA-SQ vulnerabilities arise when a load instruction erroneously retrieves data from the CPU store queue when the necessary data isn’t yet available. In both cases, an attacker could infer any data that is present within the L1 cache or used by an older store, even if they were executed in a different context.

    That said, exploiting these flaws requires an attacker to already have access to a machine and the ability to run arbitrary code on it. The flaws are not exploitable through malicious websites.

    “The conditions required to exploit TSA are typically transitory as both the microtag and store queue will be updated after the CPU detects the false completion,” AMD said.

    “Consequently, to reliably exfiltrate data, an attacker must typically be able to invoke the victim many times to repeatedly create the conditions for the false completion. This is most likely possible when the attacker and victim have an existing communication path, such as between an application and the OS kernel.”


    Source: thehackernews.com…

  • What Security Leaders Need to Know About AI Governance for SaaS

    Generative AI is not arriving with a bang; it's slowly creeping into the software that companies already use on a daily basis. Whether it is video conferencing or CRM, vendors are scrambling to integrate AI copilots and assistants into their SaaS applications. Slack can now provide AI summaries of chat threads, Zoom can provide meeting summaries, and office suites such as Microsoft 365 contain AI assistance in writing and analysis. This trend means that the majority of businesses are awakening to a new reality: AI capabilities have spread across their SaaS stack practically overnight, with no centralized control.

    A recent survey found 95% of U.S. companies are now using generative AI, up massively in just one year. Yet this unprecedented usage comes tempered by growing anxiety. Business leaders have begun to worry about where all this unseen AI activity might lead. Data security and privacy have quickly emerged as top concerns, with many fearing that sensitive information could leak or be misused if AI usage remains unchecked. We’ve already seen some cautionary examples: global banks and tech firms have banned or restricted tools like ChatGPT internally after incidents of confidential data being shared inadvertently.

    Why SaaS AI Governance Matters

    With AI woven into everything from messaging apps to customer databases, governance is the only way to harness the benefits without inviting new risks.

    What do we mean by AI governance?

    In simple terms, it refers to the policies, processes, and controls that ensure AI is used responsibly and securely within an organization. Done right, AI governance keeps these tools from becoming a free-for-all and instead aligns them with a company's security requirements, compliance obligations, and ethical standards.

    This is especially important in the SaaS context, where data is constantly flowing to third-party cloud services.

    1. Data exposure is the most immediate worry. AI features often need access to large swaths of information – think of a sales AI that reads through customer records, or an AI assistant that combs your calendar and call transcripts. Without oversight, an unsanctioned AI integration could tap into confidential customer data or intellectual property and send it off to an external model. In one survey, over 27% of organizations said they banned generative AI tools outright after privacy scares. Clearly, nobody wants to be the next company in the headlines because an employee fed sensitive data to a chatbot.

    2. Compliance violations are another concern. When employees use AI tools without approval, it creates blind spots that can lead to breaches of laws like GDPR or HIPAA. For example, uploading a client’s personal information into an AI translation service might violate privacy regulations – but if it’s done without IT’s knowledge, the company may have no idea it happened until an audit or breach occurs. Regulators worldwide are expanding laws around AI use, from the EU’s new AI Act to sector-specific guidance. Companies need governance to ensure they can prove what AI is doing with their data, or face penalties down the line.

    3. Operational risk is another reason to rein in AI sprawl. AI systems can introduce biases or make poor decisions (hallucinations) that impact real people. A hiring algorithm might inadvertently discriminate, or a finance AI might give inconsistent results over time as its model changes. Without guidelines, these issues go unchecked. Business leaders recognize that managing AI risks isn't just about avoiding harm; it can also be a competitive advantage. Those who use AI ethically and transparently can generally build greater trust with customers and regulators.

    The Challenges of Managing AI in the SaaS World

    Unfortunately, the very nature of AI adoption in companies today makes it hard to pin down. One big challenge is visibility. Often, IT and security teams simply don’t know how many AI tools or features are in use across the organization. Employees eager to boost productivity can enable a new AI-based feature or sign up for a clever AI app in seconds, without any approval. These shadow AI instances fly under the radar, creating pockets of unchecked data usage. It’s the classic shadow IT problem amplified: you can’t secure what you don’t even realize is there.

    Compounding the problem is the fragmented ownership of AI tools. Different departments might each introduce their own AI solutions to solve local problems – Marketing tries an AI copywriter, engineering experiments with an AI code assistant, customer support integrates an AI chatbot – all without coordinating with each other. With no real centralized strategy, each of these tools might apply different (or nonexistent) security controls. There’s no single point of accountability, and important questions start to fall through the cracks:

    1. Who vetted the AI vendor’s security?

    2. Where is the data going?

    3. Did anyone set usage boundaries?

    The end result is an organization using AI in a dozen different ways, with loads of gaps that an attacker could potentially exploit.

    Perhaps the most serious problem is the lack of data provenance with AI interactions. An employee could copy proprietary text and paste it into an AI writing assistant, get a polished result back, and use that in a client presentation – all outside normal IT monitoring. From the company’s perspective, that sensitive data just left their environment without a trace. Traditional security tools might not catch it because no firewall was breached and no abnormal download occurred; the data was voluntarily given away to an AI service. This black box effect, where prompts and outputs aren’t logged, makes it extremely hard for organizations to ensure compliance or investigate incidents.

    Despite these hurdles, companies can’t afford to throw up their hands.

    The answer is to bring the same rigor to AI that’s applied to other technology – without stifling innovation. It’s a delicate balance: security teams don’t want to become the department of no that bans every useful AI tool. The goal of SaaS AI governance is to enable safe adoption. That means putting protection in place so employees can leverage AI’s benefits while minimizing the downsides.

    5 Best Practices for AI Governance in SaaS

    Establishing AI governance might sound daunting, but it becomes manageable by breaking it into a few concrete steps. Here are some best practices that leading organizations are using to get control of AI in their SaaS environment:

    1. Inventory Your AI Usage

    Start by shining a light on the shadow. You can’t govern what you don’t know exists. Take an audit of all AI-related tools, features, and integrations in use. This includes obvious standalone AI apps and less obvious things like AI features within standard software (for example, that new AI meeting notes feature in your video platform). Don’t forget browser extensions or unofficial tools employees might be using. A lot of companies are surprised by how long the list is once they look. Create a centralized registry of these AI assets noting what they do, which business units use them, and what data they touch. This living inventory becomes the foundation for all other governance efforts.
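    As a starting point, many teams bootstrap this inventory from third-party app or OAuth grant exports pulled from their identity provider or SaaS admin consoles. The sketch below assumes such an export as a CSV with illustrative column names ("app_name", "owner", "scopes") and uses crude keyword matching, so its output is a candidate list to review, not a finished registry.

    ```python
    # Minimal inventory sketch, assuming a CSV export of third-party app/OAuth grants
    # with illustrative columns: app_name, owner, scopes. Keyword matching is crude
    # and will produce false positives; treat hits as candidates for manual review.
    import csv
    import sys

    AI_KEYWORDS = ("ai", "gpt", "copilot", "assistant", "llm", "chat")

    def flag_ai_apps(csv_path: str) -> None:
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                name = row.get("app_name", "").lower()
                if any(keyword in name for keyword in AI_KEYWORDS):
                    print(f"[AI?] {row.get('app_name')} | owner={row.get('owner')} | scopes={row.get('scopes')}")

    if __name__ == "__main__":
        flag_ai_apps(sys.argv[1] if len(sys.argv) > 1 else "oauth_grants.csv")
    ```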

    2. Define Clear AI Usage Policies

    Just as you likely have an acceptable use policy for IT, make one specifically for AI. Employees need to know what’s allowed and what’s off-limits when it comes to AI tools. For instance, you might permit using an AI coding assistant on open-source projects but forbid feeding any customer data into an external AI service. Specify guidelines for handling data (e.g. “no sensitive personal info in any generative AI app unless approved by security”) and require that new AI solutions be vetted before use. Educate your staff on these rules and the reasons behind them. A little clarity up front can prevent a lot of risky experimentation.

    3. Monitor and Limit Access

    Once AI tools are in play, keep tabs on their behavior and access. Principle of least privilege applies here: if an AI integration only needs read access to a calendar, don’t give it permission to modify or delete events. Regularly review what data each AI tool can reach. Many SaaS platforms provide admin consoles or logs – use them to see how often an AI integration is being invoked and whether it’s pulling unusually large amounts of data. If something looks off or outside policy, be ready to intervene. It’s also wise to set up alerts for certain triggers, like an employee attempting to connect a corporate app to a new external AI service.

    4. Continuous Risk Assessment

    AI governance is not a set and forget task. AI changes too quickly. Establish a process to re-evaluate risks on a regular schedule – say monthly or quarterly. This could involve rescanning the environment for any newly introduced AI tools, reviewing updates or new features released by your SaaS vendors, and staying up to date on AI vulnerabilities. Make adjustments to your policies as needed (for example, if research exposes a new vulnerability like a prompt injection attack, update your controls to address it). Some organizations form an AI governance committee with stakeholders from security, IT, legal, and compliance to review AI use cases and approvals on an ongoing basis.

    5. Cross-Functional Collaboration

    Finally, governance isn’t solely an IT or security responsibility. Make AI a team sport. Bring in legal and compliance officers to help interpret new regulations and ensure your policies meet them. Include business unit leaders so that governance measures align with business needs (and so they act as champions for responsible AI use in their teams). Involve data privacy experts to assess how data is being used by AI. When everyone understands the shared goal – to use AI in ways that are innovative and safe – it creates a culture where following the governance process is seen as enabling success, not hindering it.

    To translate theory into practice, use this checklist to track your progress:

    By taking these foundational steps, organizations can use AI to increase productivity while ensuring security, privacy, and compliance are protected.

    How Reco Simplifies AI Governance

    While establishing AI governance frameworks is critical, the manual effort required to track, monitor, and manage AI across hundreds of SaaS applications can quickly overwhelm security teams. This is where specialized platforms like Reco’s Dynamic SaaS Security solution can make the difference between theoretical policies and practical protection.

    👉 Get a demo of Reco to assess the AI-related risks in your SaaS apps.


    Source: thehackernews.com…