Tag: Cyber Security

  • U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits


Aug 28, 2025 | Ravie Lakshmanan | Artificial Intelligence / Malware


    The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced a fresh round of sanctions against two individuals and two entities for their role in the North Korean remote information technology (IT) worker scheme to generate illicit revenue for the regime’s weapons of mass destruction and ballistic missile programs.

    “The North Korean regime continues to target American businesses through fraud schemes involving its overseas IT workers, who steal data and demand ransom,” said Under Secretary of the Treasury for Terrorism and Financial Intelligence John K. Hurley. “Under President Trump, Treasury is committed to protecting Americans from these schemes and holding the guilty accountable.”

    The key players targeted include Vitaliy Sergeyevich Andreyev, Kim Ung Sun, Shenyang Geumpungri Network Technology Co., Ltd, and Korea Sinjin Trading Corporation. The latest effort expands the scope of sanctions imposed against Chinyong Information Technology Cooperation Company in May 2023.


Chinyong, according to insider risk management firm DTEX, is one of many IT companies that have deployed North Korean IT workers to engage in freelance IT work and cryptocurrency theft. It has offices in China, Laos, and Russia.

    The years-long IT worker threat, also tracked as Famous Chollima, Jasper Sleet, UNC5267, and Wagemole, is assessed to be affiliated with the Workers’ Party of Korea. At its core, the scheme works by embedding North Korean IT workers in legitimate companies in the U.S. and elsewhere, securing these jobs using fraudulent documents, stolen identities, and false personas on GitHub, CodeSandbox, Freelancer, Medium, RemoteHub, CrowdWorks, and WorkSpace.ru.

Select cases have also involved the threat actors clandestinely introducing malware into company networks to exfiltrate proprietary and sensitive data, then extorting the victims in exchange for not leaking the information.

    In a report published Wednesday, Anthropic revealed how the employment fraud operation has leaned heavily on artificial intelligence (AI)-powered tools like Claude to create convincing professional backgrounds and technical portfolios, tailor resumes to specific job descriptions, and even deliver actual technical work.

    “The most striking finding is the actors’ complete dependency on AI to function in technical roles,” Anthropic said. “These operators do not appear to be able to write code, debug problems, or even communicate professionally without Claude’s assistance. Yet they’re successfully maintaining employment at Fortune 500 companies (according to public reporting), passing technical interviews, and delivering work that satisfies their employers.”

    The Treasury Department said Andreyev, a 44-year-old Russian national, has facilitated payments to Chinyong and has worked with Kim Ung Sun, a North Korean economic and trade consular official based in Russia, to conduct multiple financial transfers worth nearly $600,000 by converting cryptocurrency to cash in U.S. dollars since December 2024.

    Shenyang Geumpungri, the department added, is a Chinese front company for Chinyong that consists of a delegation of DPRK IT workers, generating over $1 million in profits for Chinyong and Sinjin since 2021.


    “Sinjin is a DPRK [Democratic People’s Republic of Korea] company subordinate to the U.S.-sanctioned DPRK Ministry of People’s Armed Forces General Political Bureau,” the Treasury said. “The company has received directives from DPRK government officials regarding the DPRK IT workers that Chinyong deploys internationally.”

The announcement comes a little over a month after the Treasury Department sanctioned a North Korean front company (Korea Sobaeksu Trading Company) and three associated individuals (Kim Se Un, Jo Kyong Hun, and Myong Chol Min) for their involvement in the IT worker scheme. In parallel, an Arizona woman was handed an eight-year prison sentence for running a laptop farm that enabled the actors to connect remotely to companies’ networks.

    Last month, the department also sanctioned Song Kum Hyok, a member of a North Korean hacking group called Andariel, alongside a Russian national (Gayk Asatryan) and four entities (Asatryan LLC, Fortuna LLC, Korea Songkwang Trading General Corporation, and Korea Saenal Trading Corporation) for their participation in the sanctions-evading scheme.


    Source: thehackernews.com…

  • Someone Created the First AI-Powered Ransomware Using OpenAI's gpt-oss:20b Model


    Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock.

    Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real-time. The open-weight language model was released by OpenAI earlier this month.

    “PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption,” ESET said. “These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS.”
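
The mechanism ESET describes (hard-coded prompts sent to a locally served model, with the returned Lua executed on the host) can be sketched against Ollama's standard `/api/generate` endpoint. This is a hypothetical reconstruction for illustration; PromptLock's actual prompts are not public, and the prompt text below is invented.

```python
import json

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # default local Ollama endpoint

# Hypothetical hard-coded prompt; the real prompts embedded in PromptLock
# have not been published.
LUA_PROMPT = (
    "Write a Lua script that walks a directory tree and prints the path "
    "and size of every file it finds."
)

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,    # e.g. "gpt-oss:20b"
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }

payload = build_generate_request("gpt-oss:20b", LUA_PROMPT)
body = json.dumps(payload)

# To actually send it (requires a running Ollama server with the model pulled):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# lua_script = json.loads(urllib.request.urlopen(req).read())["response"]
```

Because the script text comes back freshly generated on each run, no two executions need produce byte-identical payloads, which is exactly why ESET warns that IoCs may vary.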

The ransomware code also embeds instructions to craft a custom ransom note based on the files affected and on whether the infected machine is a personal computer, company server, or power distribution controller. It’s currently not known who is behind the malware, but ESET told The Hacker News that PromptLock artifacts were uploaded to VirusTotal from the United States on August 25, 2025.


    “PromptLock uses Lua scripts generated by AI, which means that indicators of compromise (IoCs) may vary between executions,” the Slovak cybersecurity company pointed out. “This variability introduces challenges for detection. If properly implemented, such an approach could significantly complicate threat identification and make defenders’ tasks more difficult.”

Assessed to be a proof-of-concept (PoC) rather than fully operational malware deployed in the wild, PromptLock uses the SPECK 128-bit encryption algorithm to lock files.
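
SPECK is a lightweight ARX (add-rotate-xor) block cipher family. ESET's report says only "SPECK 128-bit," so the variant below, SPECK128/128 (128-bit block, 128-bit key), is an assumption; the sketch shows how compact the round function is, which makes it attractive for generated code.

```python
MASK = (1 << 64) - 1            # SPECK128 operates on two 64-bit words
ALPHA, BETA, ROUNDS = 8, 3, 32  # rotation constants and round count for SPECK128/128

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(k1, k0):
    """Derive the 32 round keys from a 128-bit key (two 64-bit words)."""
    keys, l, k = [k0], k1, k0
    for i in range(ROUNDS - 1):
        l = ((k + ror(l, ALPHA)) & MASK) ^ i
        k = rol(k, BETA) ^ l
        keys.append(k)
    return keys

def encrypt(x, y, keys):
    for k in keys:
        x = ((ror(x, ALPHA) + y) & MASK) ^ k
        y = rol(y, BETA) ^ x
    return x, y

def decrypt(x, y, keys):
    for k in reversed(keys):          # invert each round in reverse order
        y = ror(y ^ x, BETA)
        x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y
```

Note that SPECK provides confidentiality only; a real ransomware deployment would still need key management around it, which the PoC reportedly does not fully implement.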

    Besides encryption, analysis of the ransomware artifact suggests that it could also be used to exfiltrate data or even destroy it, although the functionality to actually perform the erasure appears not yet to be implemented.

    “PromptLock does not download the entire model, which could be several gigabytes in size,” ESET clarified. “Instead, the attacker can simply establish a proxy or tunnel from the compromised network to a server running the Ollama API with the gpt-oss-20b model.”

    The emergence of PromptLock is another sign that AI has made it easier for cybercriminals, even those who lack technical expertise, to quickly set up new campaigns, develop malware, and create compelling phishing content and malicious sites.

    Earlier today, Anthropic revealed that it had banned accounts created by two different threat actors that used its Claude AI chatbot to commit large-scale theft and extortion of personal data targeting at least 17 distinct organizations, and developed several variants of ransomware with advanced evasion capabilities, encryption, and anti-recovery mechanisms.

    The development comes as large language models (LLMs) powering various chatbots and AI-focused developer tools, such as Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Butterfly Effect Manus, Google Jules, Lenovo Lena, Microsoft GitHub Copilot, OpenAI ChatGPT Deep Research, OpenHands, Sourcegraph Amp, and Windsurf, have been found susceptible to prompt injection attacks, potentially allowing information disclosure, data exfiltration, and code execution.

    Despite incorporating robust security and safety guardrails to avoid undesirable behaviors, AI models have repeatedly fallen prey to novel variants of injections and jailbreaks, underscoring the complexity and evolving nature of the security challenge.


    “Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions,” Anthropic said. “New forms of prompt injection attacks are also constantly being developed by malicious actors.”

    What’s more, new research has uncovered a simple yet clever attack called PROMISQROUTE – short for “Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion” – that abuses ChatGPT’s model routing mechanism to trigger a downgrade and cause the prompt to be sent to an older, less secure model, thus allowing the system to bypass safety filters and produce unintended results.

    “Adding phrases like ‘use compatibility mode’ or ‘fast response needed’ bypasses millions of dollars in AI safety research,” Adversa AI said in a report published last week, adding the attack targets the cost-saving model-routing mechanism used by AI vendors.
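
The class of flaw PROMISQROUTE abuses can be illustrated with a deliberately naive router. Everything here is hypothetical and simplified; the real routing logic inside ChatGPT is not public, and the model names and trigger phrases are invented for illustration.

```python
# Hypothetical cost-saving router that chooses a model tier based on
# attacker-controlled prompt text -- the core weakness PROMISQROUTE targets.

DOWNGRADE_PHRASES = ("use compatibility mode", "fast response needed")

def route(prompt: str) -> str:
    """Naive router: send 'simple' requests to a cheaper legacy model."""
    text = prompt.lower()
    if any(p in text for p in DOWNGRADE_PHRASES):
        return "legacy-small"    # older model with weaker safety filtering
    return "frontier-large"      # default, most heavily aligned model

# Because the routing decision is driven by user input, appending a trigger
# phrase forces the downgrade:
assert route("Summarize this article") == "frontier-large"
assert route("Summarize this article. Fast response needed!") == "legacy-small"
```

The defensive takeaway is that untrusted prompt content alone should never select the safety tier; routing should be keyed to authenticated request metadata, with every tier held to the same safety bar.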


    Source: thehackernews.com…

  • Storm-0501 Exploits Entra ID to Exfiltrate and Delete Azure Data in Hybrid Cloud Attacks


    The financially motivated threat actor known as Storm-0501 has been observed refining its tactics to conduct data exfiltration and extortion attacks targeting cloud environments.

    “Unlike traditional on-premises ransomware, where the threat actor typically deploys malware to encrypt critical files across endpoints within the compromised network and then negotiates for a decryption key, cloud-based ransomware introduces a fundamental shift,” the Microsoft Threat Intelligence team said in a report shared with The Hacker News.

    “Leveraging cloud-native capabilities, Storm-0501 rapidly exfiltrates large volumes of data, destroys data and backups within the victim environment, and demands ransom — all without relying on traditional malware deployment.”

    Storm-0501 was first documented by Microsoft almost a year ago, detailing its hybrid cloud ransomware attacks targeting government, manufacturing, transportation, and law enforcement sectors in the U.S., with the threat actors pivoting from on-premises to cloud for subsequent data exfiltration, credential theft, and ransomware deployment.

    Assessed to be active since 2021, the hacking group has evolved into a ransomware-as-a-service (RaaS) affiliate delivering various ransomware payloads over the years, such as Sabbath, Hive, BlackCat (ALPHV), Hunters International, LockBit, and Embargo.


    “Storm-0501 has continued to demonstrate proficiency in moving between on-premises and cloud environments, exemplifying how threat actors adapt as hybrid cloud adoption grows,” the company said. “They hunt for unmanaged devices and security gaps in hybrid cloud environments to evade detection and escalate cloud privileges and, in some cases, traverse tenants in multi-tenant setups to achieve their goals.”

    Typical attack chains involve the threat actor abusing their initial access to achieve privilege escalation to a domain administrator, followed by on-premises lateral movement and reconnaissance steps that allow the attackers to breach the target’s cloud environment, thereby initiating a multi-stage sequence involving persistence, privilege escalation, data exfiltration, encryption, and extortion.

    Initial access, per Microsoft, is achieved through intrusions facilitated by access brokers like Storm-0249 and Storm-0900, taking advantage of stolen, compromised credentials to sign in to the target system, or exploiting various known remote code execution vulnerabilities in unpatched public-facing servers.

    In a recent campaign targeting an unnamed large enterprise with multiple subsidiaries, Storm-0501 is said to have conducted reconnaissance before laterally moving across the network using Evil-WinRM. The attackers also carried out what’s called a DCSync Attack to extract credentials from Active Directory by simulating the behavior of a domain controller.
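
DCSync abuse of the kind described above leaves a detectable trail: Windows Security event 4662 recording a directory-replication control access right requested by an account that is not a domain controller. The sketch below runs on synthetic event records (the event schema and account names are illustrative); the two GUIDs are the well-known DS-Replication-Get-Changes rights from the Active Directory schema.

```python
# Control-access-right GUIDs for directory replication (AD schema):
DS_REPL_GET_CHANGES     = "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"
DS_REPL_GET_CHANGES_ALL = "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2"
REPL_RIGHTS = {DS_REPL_GET_CHANGES, DS_REPL_GET_CHANGES_ALL}

def suspicious_dcsync(events, dc_accounts):
    """Flag 4662 events claiming replication rights from non-DC identities."""
    return [
        e for e in events
        if e["event_id"] == 4662
        and e["properties"].lower() in REPL_RIGHTS
        and e["subject"] not in dc_accounts
    ]

# Synthetic sample: a legitimate DC replication plus a rogue user request.
events = [
    {"event_id": 4662, "subject": "DC01$", "properties": DS_REPL_GET_CHANGES},
    {"event_id": 4662, "subject": "jdoe",  "properties": DS_REPL_GET_CHANGES_ALL},
]
hits = suspicious_dcsync(events, dc_accounts={"DC01$"})
```

In production this filtering is usually done in a SIEM query rather than a script, but the logic is the same: replication rights requested by anything other than a known domain controller warrant investigation.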

    “Leveraging their foothold in the Active Directory environment, they traversed between Active Directory domains and eventually moved laterally to compromise a second Entra Connect server associated with a different Entra ID tenant and Active Directory domain,” Microsoft said.

    “The threat actor extracted the Directory Synchronization Account to repeat the reconnaissance process, this time targeting identities and resources in the second tenant.”

These efforts ultimately enabled Storm-0501 to identify a non-human synced identity on that tenant that held a Global Admin role in Microsoft Entra ID and lacked multi-factor authentication (MFA) protections. This subsequently opened the door to a scenario where the attackers reset the user’s on-premises password, causing the change to be synced to the user’s cloud identity via the Entra Connect Sync service.

Armed with the compromised Global Admin account, the intruders accessed the Azure Portal, registered a threat actor-owned Entra ID tenant as a trusted federated domain to create a backdoor, and then elevated their access to critical Azure resources, setting the stage for data exfiltration and extortion.


    “After completing the exfiltration phase, Storm-0501 initiated the mass-deletion of the Azure resources containing the victim organization data, preventing the victim from taking remediation and mitigation action by restoring the data,” Microsoft said.

    “After successfully exfiltrating and destroying the data within the Azure environment, the threat actor initiated the extortion phase, where they contacted the victims using Microsoft Teams using one of the previously compromised users, demanding ransom.”

    The company said it has enacted a change in Microsoft Entra ID that prevents threat actors from abusing Directory Synchronization Accounts to escalate privileges. It has also released updates to Microsoft Entra Connect (version 2.5.3.0) to support Modern Authentication to allow customers to configure application-based authentication for enhanced security.

    “It is also important to enable Trusted Platform Module (TPM) on the Entra Connect Sync server to securely store sensitive credentials and cryptographic keys, mitigating Storm-0501’s credential extraction techniques,” the tech giant added.


    Source: thehackernews.com…

  • ShadowSilk Hits 35 Organizations in Central Asia and APAC Using Telegram Bots


    A threat activity cluster known as ShadowSilk has been attributed to a fresh set of attacks targeting government entities within Central Asia and Asia-Pacific (APAC).

According to Group-IB, nearly three dozen victims have been identified, with the intrusions mainly geared towards data exfiltration. The hacking group shares toolset and infrastructure overlaps with campaigns undertaken by threat actors dubbed YoroTrooper, SturgeonPhisher, and Silent Lynx.

    Victims of the group’s campaigns span Uzbekistan, Kyrgyzstan, Myanmar, Tajikistan, Pakistan, and Turkmenistan, a majority of which are government organizations, and to a lesser extent, entities in the energy, manufacturing, retail, and transportation sectors.

    “The operation is run by a bilingual crew – Russian-speaking developers tied to legacy YoroTrooper code and Chinese-speaking operators spearheading intrusions, resulting in a nimble, multi-regional threat profile,” researchers Nikita Rostovcev and Sergei Turner said. “The exact depth and nature of cooperation of these two sub-groups remains still uncertain.”


    YoroTrooper was first publicly documented by Cisco Talos in March 2023, detailing its attacks targeting government, energy, and international organizations across Europe since at least June 2022. The group is believed to be active as far back as 2021, per ESET.

    A subsequent analysis later that year revealed that the hacking group likely consists of individuals from Kazakhstan based on their fluency in Kazakh and Russian, as well as what appeared to be deliberate efforts to avoid targeting entities in the country.

    Then earlier this January, Seqrite Labs uncovered cyber attacks orchestrated by an adversary dubbed Silent Lynx that singled out various organizations in Kyrgyzstan and Turkmenistan. It also characterized the threat actor as having overlaps with YoroTrooper.

ShadowSilk represents the latest evolution of the threat actor, leveraging spear-phishing emails bearing password-protected archives as the initial access vector to drop a custom loader that hides command-and-control (C2) traffic behind Telegram bots to evade detection and deliver additional payloads. Persistence is achieved by modifying the Windows Registry so the payloads run automatically after a system reboot.

The threat actor also employs public exploits for Drupal (CVE-2018-7600 and CVE-2018-7602) and the WP-Automatic WordPress plugin (CVE-2024-27956), alongside a diverse toolkit comprising reconnaissance and penetration-testing tools such as FOFA, Fscan, Gobuster, Dirsearch, Metasploit, and Cobalt Strike.

    Furthermore, ShadowSilk has incorporated into its arsenal JRAT and Morf Project web panels acquired from darknet forums for managing infected devices, and a bespoke tool for stealing Chrome password storage files and the associated decryption key. Another notable aspect is its compromise of legitimate websites to host malicious payloads.

    “Once inside a network, ShadowSilk deploys web shells [like ANTSWORD, Behinder, Godzilla, and FinalShell], Sharp-based post-exploitation tools, and tunneling utilities such as Resocks and Chisel to move laterally, escalate privileges and siphon data,” the researchers said.


    The attacks have been observed paving the way for a Python-based remote access trojan (RAT) that can receive commands and exfiltrate data to a Telegram bot, thereby allowing the malicious traffic to be disguised as legitimate messenger activity. Cobalt Strike and Metasploit modules are used to grab screenshots and webcam pictures, while a custom PowerShell script scans for files matching a predefined list of extensions and copies them into a ZIP archive, which is then transmitted to an external server.
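
The collection step described above amounts to a filtered walk-and-zip, a pattern defenders can hunt for (many document reads by one process followed by a single outbound transfer). A minimal sketch of that behavior, with a hypothetical extension list since Group-IB has not published the actual one:

```python
import os
import pathlib
import zipfile

# Hypothetical target list; the real predefined extensions are not public.
TARGET_EXTS = {".doc", ".docx", ".pdf", ".xls", ".xlsx"}

def collect(root: str, archive_path: str) -> list[str]:
    """Walk `root` and copy files with matching extensions into a ZIP archive."""
    matched = []
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if pathlib.Path(name).suffix.lower() in TARGET_EXTS:
                    full = os.path.join(dirpath, name)
                    zf.write(full, arcname=os.path.relpath(full, root))
                    matched.append(full)
    return matched
```

The equivalent PowerShell one-liner (`Get-ChildItem -Recurse | Where-Object` piped to `Compress-Archive`) is what the researchers describe; the Python version is shown only to make the logic concrete.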

    The Singaporean company has assessed that the operators of the YoroTrooper group are fluent in Russian, and are likely engaged in malware development and facilitating initial access.

    However, a series of screenshots capturing one of the attackers’ workstations — featuring images of the active keyboard layout, automatic translation of Kyrgyzstan government websites into Chinese, and a Chinese language vulnerability scanner — indicates the involvement of a Chinese-speaking operator, it added.

    “Recent behavior indicates that the group remains highly active, with new victims identified as recently as July,” Group-IB said. “ShadowSilk continues to focus on the government sector in Central Asia and the broader APAC region, underscoring the importance of monitoring its infrastructure to prevent long-term compromise and data exfiltration.”


    Source: thehackernews.com…

  • Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors


Aug 27, 2025 | Ravie Lakshmanan | Cyber Attack / Artificial Intelligence

    Anthropic on Wednesday revealed that it disrupted a sophisticated operation that weaponized its artificial intelligence (AI)-powered chatbot Claude to conduct large-scale theft and extortion of personal data in July 2025.

“The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions,” the company said. “Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.”

    “The actor employed Claude Code on Kali Linux as a comprehensive attack platform, embedding operational instructions in a CLAUDE.md file that provided persistent context for every interaction.”

    The unknown threat actor is said to have used AI to an “unprecedented degree,” using Claude Code, Anthropic’s agentic coding tool, to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration.

    The reconnaissance efforts involved scanning thousands of VPN endpoints to flag susceptible systems, using them to obtain initial access and following up with user enumeration and network discovery steps to extract credentials and set up persistence on the hosts.

    Furthermore, the attacker used Claude Code to craft bespoke versions of the Chisel tunneling utility to sidestep detection efforts, and disguise malicious executables as legitimate Microsoft tools – an indication of how AI tools are being used to assist with malware development with defense evasion capabilities.


The activity, codenamed GTG-2002, is notable for employing Claude to make “tactical and strategic decisions” on its own, allowing it to decide which data to exfiltrate from victim networks and to craft targeted extortion demands by analyzing the financial data to determine an appropriate ransom amount, ranging from $75,000 to $500,000 in Bitcoin.

    Claude Code, per Anthropic, was also put to use to organize stolen data for monetization purposes, pulling out thousands of individual records, including personal identifiers, addresses, financial information, and medical records from multiple victims. Subsequently, the tool was employed to create customized ransom notes and multi-tiered extortion strategies based on exfiltrated data analysis.

    “Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators,” Anthropic said. “This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real-time.”

To prevent such “vibe hacking” threats from occurring in the future, the company said it developed a custom classifier to screen for similar behavior and shared technical indicators with “key partners.”

    Other documented misuses of Claude are listed below –

    • Use of Claude by North Korean operatives related to the fraudulent remote IT worker scheme to create elaborate fictitious personas with persuasive professional backgrounds and project histories, complete technical and coding assessments during the application process, and assist with their day-to-day work once hired
    • Use of Claude by a U.K.-based cybercriminal, codenamed GTG-5004, to develop, market, and distribute several variants of ransomware with advanced evasion capabilities, encryption, and anti-recovery mechanisms, which were then sold on darknet forums such as Dread, CryptBB, and Nulled to other threat actors for $400 to $1,200
    • Use of Claude by a Chinese threat actor to enhance cyber operations targeting Vietnamese critical infrastructure, including telecommunications providers, government databases, and agricultural management systems, over the course of a 9-month campaign
    • Use of Claude by a Russian-speaking developer to create malware with advanced evasion capabilities
    • Use of Model Context Protocol (MCP) and Claude by a threat actor operating on the xss[.]is cybercrime forum with the goal of analyzing stealer logs and building detailed victim profiles
    • Use of Claude Code by a Spanish-speaking actor to maintain and improve an invite-only web service geared towards validating and reselling stolen credit cards at scale
    • Use of Claude as part of a Telegram bot that offers multimodal AI tools to support romance scam operations, advertising the chatbot as a “high EQ model”
    • Use of Claude by an unknown actor to launch an operational synthetic identity service that rotates between three card validation services, aka “card checkers”

    The company also said it foiled attempts made by North Korean threat actors linked to the Contagious Interview campaign to create accounts on the platform to enhance their malware toolset, create phishing lures, and generate npm packages, effectively blocking them from issuing any prompts.

    The case studies add to growing evidence that AI systems, despite the various guardrails baked into them, are being abused to facilitate sophisticated schemes at speed and at scale.

    “Criminals with few technical skills are using AI to conduct complex operations, such as developing ransomware, that would previously have required years of training,” Anthropic’s Alex Moix, Ken Lebedev, and Jacob Klein said, calling out AI’s ability to lower the barriers to cybercrime.

    “Cybercriminals and fraudsters have embedded AI throughout all stages of their operations. This includes profiling victims, analyzing stolen data, stealing credit card information, and creating false identities allowing fraud operations to expand their reach to more potential targets.”


    Source: thehackernews.com…

  • ShadowSilk Hits 36 Government Targets in Central Asia and APAC Using Telegram Bots

    ShadowSilk Hits 36 Government Targets in Central Asia and APAC Using Telegram Bots

    A threat activity cluster known as ShadowSilk has been attributed to a fresh set of attacks targeting government entities within Central Asia and Asia-Pacific (APAC).

    According to Group-IB, nearly three dozen victims have been identified, with the intrusions mainly geared towards data exfiltration. The hacking group shares toolset and infrastructural overlaps with campaigns undertaken by threat actors dubbed YoroTrooper, SturgeonPhisher, and Silent Lynx.

    Victims of the group’s campaigns span Uzbekistan, Kyrgyzstan, Myanmar, Tajikistan, Pakistan, and Turkmenistan, a majority of which are government organizations, and to a lesser extent, entities in the energy, manufacturing, retail, and transportation sectors.

    “The operation is run by a bilingual crew – Russian-speaking developers tied to legacy YoroTrooper code and Chinese-speaking operators spearheading intrusions, resulting in a nimble, multi-regional threat profile,” researchers Nikita Rostovcev and Sergei Turner said. “The exact depth and nature of cooperation of these two sub-groups remains still uncertain.”

    Cybersecurity

    YoroTrooper was first publicly documented by Cisco Talos in March 2023, detailing its attacks targeting government, energy, and international organizations across Europe since at least June 2022. The group is believed to be active as far back as 2021, per ESET.

    A subsequent analysis later that year revealed that the hacking group likely consists of individuals from Kazakhstan based on their fluency in Kazakh and Russian, as well as what appeared to be deliberate efforts to avoid targeting entities in the country.

    Then earlier this January, Seqrite Labs uncovered cyber attacks orchestrated by an adversary dubbed Silent Lynx that singled out various organizations in Kyrgyzstan and Turkmenistan. It also characterized the threat actor as having overlaps with YoroTrooper.

    ShadowSilk represents the latest evolution of the threat actor, leveraging spear-phishing emails as the initial access vector to deliver password-protected archives containing a custom loader that hides command-and-control (C2) traffic behind Telegram bots to evade detection and fetch additional payloads. Persistence is achieved by modifying the Windows Registry so the payloads run automatically after a system reboot.
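
    Registry Run-key persistence of this kind is straightforward to audit from the defender's side. As a minimal illustrative sketch (the marker list and sample entries below are assumptions for demonstration, not ShadowSilk indicators), one can flag autorun commands that point into user-writable directories:

```python
# Flag Windows Run-key entries whose commands point at user-writable
# locations, a common trait of registry-based persistence.
# Heuristic sketch only; real detection needs far more context.

SUSPICIOUS_MARKERS = ("\\appdata\\", "\\temp\\", "\\downloads\\", "\\public\\")

def flag_run_entries(entries):
    """entries: iterable of (value_name, command) pairs read from a Run key."""
    flagged = []
    for name, command in entries:
        lowered = command.lower()
        if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
            flagged.append((name, command))
    return flagged

entries = [
    ("OneDrive", r"C:\Program Files\Microsoft OneDrive\OneDrive.exe"),
    ("Updater",  r"C:\Users\victim\AppData\Roaming\svc\loader.exe"),
]
print(flag_run_entries(entries))  # only the AppData-based loader is flagged
```

    On a live host the pairs would come from enumerating HKCU/HKLM Run keys; the heuristic itself is platform-neutral string matching.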

    The threat actor also employs public exploits for Drupal (CVE-2018-7600 and CVE-2018-7602) and the WP-Automatic WordPress plugin (CVE-2024-27956), alongside leveraging a diverse toolkit comprising reconnaissance and penetration-testing tools such as FOFA, Fscan, Gobuster, Dirsearch, Metasploit, and Cobalt Strike.

    Furthermore, ShadowSilk has incorporated into its arsenal JRAT and Morf Project web panels acquired from darknet forums for managing infected devices, and a bespoke tool for stealing Chrome password storage files and the associated decryption key. Another notable aspect is its compromise of legitimate websites to host malicious payloads.

    “Once inside a network, ShadowSilk deploys web shells [like ANTSWORD, Behinder, Godzilla, and FinalShell], Sharp-based post-exploitation tools, and tunneling utilities such as Resocks and Chisel to move laterally, escalate privileges and siphon data,” the researchers said.

    Identity Security Risk Assessment

    The attacks have been observed paving the way for a Python-based remote access trojan (RAT) that can receive commands and exfiltrate data to a Telegram bot, thereby allowing the malicious traffic to be disguised as legitimate messenger activity. Cobalt Strike and Metasploit modules are used to grab screenshots and webcam pictures, while a custom PowerShell script scans for files matching a predefined list of extensions and copies them into a ZIP archive, which is then transmitted to an external server.
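
    The collection step described above, matching files by extension and packing them into a ZIP for transmission, can be sketched in a few lines. The extension list and paths here are illustrative assumptions, not those of the actor's actual PowerShell script:

```python
# Sketch of the staging logic: walk a directory tree, pick files whose
# extensions match a target list, and pack them into a single ZIP.
# TARGET_EXTENSIONS is an assumed example list.
import zipfile
from pathlib import Path

TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf"}

def stage_files(root, archive_path):
    """Copy matching files under root into a ZIP; return the matched paths."""
    matched = []
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
                zf.write(path, arcname=path.relative_to(root))
                matched.append(path)
    return matched
```

    The same shape is common in legitimate backup scripts, which is part of why this kind of collection activity blends in until the archive leaves the network.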

    The Singaporean company has assessed that the operators of the YoroTrooper group are fluent in Russian, and are likely engaged in malware development and facilitating initial access.

    However, a series of screenshots capturing one of the attackers’ workstations — featuring images of the active keyboard layout, automatic translation of Kyrgyzstan government websites into Chinese, and a Chinese language vulnerability scanner — indicates the involvement of a Chinese-speaking operator, it added.

    “Recent behavior indicates that the group remains highly active, with new victims identified as recently as July,” Group-IB said. “ShadowSilk continues to focus on the government sector in Central Asia and the broader APAC region, underscoring the importance of monitoring its infrastructure to prevent long-term compromise and data exfiltration.”


    Source: thehackernews.com…

  • The 5 Golden Rules of Safe AI Adoption

    The 5 Golden Rules of Safe AI Adoption

    Aug 27, 2025The Hacker NewsEnterprise Security / Data Protection

    Employees are experimenting with AI at record speed. They are drafting emails, analyzing data, and transforming the workplace. The problem is not the pace of AI adoption, but the lack of control and safeguards in place.

    For CISOs and security leaders like you, the challenge is clear: you don’t want to slow AI adoption down, but you must make it safe. A policy sent company-wide will not cut it. What’s needed are practical principles and technological capabilities that create an innovative environment without an open door for a breach.

    Here are the five rules you cannot afford to ignore.

    Rule #1: AI Visibility and Discovery

    The oldest security truth still applies: you cannot protect what you cannot see. Shadow IT was a headache on its own, but shadow AI is even slipperier. It is not just ChatGPT; it is also the embedded AI features that exist in many SaaS apps, and any new AI agents that your employees might be creating.

    The golden rule: turn on the lights.

    You need real-time visibility into AI usage, both stand-alone and embedded. AI discovery should be continuous and not a one-time event.

    Rule #2: Contextual Risk Assessment

    Not all AI usage carries the same level of risk. An AI grammar checker used inside a text editor doesn’t carry the same risk as an AI tool that connects directly to your CRM. Wing enriches each discovery with meaningful context so you can get contextual awareness, including:

    • Who the vendor is and their reputation in the market
    • Whether your data is being used for AI training, and whether that is configurable
    • Whether the app or vendor has a history of breaches or security issues
    • The app’s compliance adherence (SOC 2, GDPR, ISO, etc.)
    • If the app connects to any other systems in your environment

    The golden rule: context matters.

    Avoid leaving gaps big enough for attackers to exploit. Your AI security platform should give you contextual awareness to make the right decisions about which tools are in use and whether they are safe.

    Rule #3: Data Protection

    AI thrives on data, which makes it both powerful and risky. If employees feed sensitive information into applications with AI without controls, you risk exposure, compliance violations, and devastating consequences in the event of a breach. The question is not if your data will end up in AI, but how to ensure it is protected along the way.

    The golden rule: data needs a seatbelt.

    Put boundaries around what data can be shared with AI tools and how it is handled, both in policy and by utilizing your security technology to give you full visibility. Data protection is the backbone of safe AI adoption. Enabling clear boundaries now will prevent potential loss later.

    Rule #4: Access Controls and Guardrails

    Letting employees use AI without controls is like handing your car keys to a teenager and yelling, “Drive safe!” without driving lessons.

    You need technology that enables access controls to determine which tools are being used and under what conditions. This is new for everyone, and your organization is relying on you to make the rules.

    The golden rule: zero trust. Still!

    Make sure your security tools enable you to define clear, customizable policies for AI use, like:

    • Blocking AI vendors that don’t meet your security standards
    • Restricting connections to certain types of AI apps
    • Triggering a workflow to validate the need for a new AI tool
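
    As a rough sketch of how such guardrail policies might be evaluated in code (the field names and rules are illustrative assumptions, not any specific product's API):

```python
# Minimal policy-evaluation sketch for a discovered AI app.
# Fields and thresholds are invented for illustration.

def evaluate_ai_app(app):
    """Return 'block', 'review', or 'allow' for a discovered AI app."""
    if app.get("trains_on_customer_data") and not app.get("training_opt_out"):
        return "block"      # data used for training with no opt-out
    if not app.get("soc2_compliant"):
        return "review"     # route through a validation workflow
    return "allow"

print(evaluate_ai_app({"trains_on_customer_data": True}))   # block
print(evaluate_ai_app({"soc2_compliant": True}))            # allow
```

    The point is not the specific rules but that policy is expressed as explicit, testable conditions rather than a document nobody reads.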

    Rule #5: Continuous Oversight

    Securing your AI is not a “set it and forget it” project. Applications evolve, permissions change, and employees find new ways to use the tools. Without ongoing oversight, what was safe yesterday can quietly become a risk today.

    The golden rule: keep watching.

    Continuous oversight means:

    • Monitoring apps for new permissions, data flows, or behaviors
    • Auditing AI outputs to ensure accuracy, fairness, and compliance
    • Reviewing vendor updates that may change how AI features work
    • Being ready to step in when AI is breached

    This is not about micromanaging innovation. It is about making sure AI continues to serve your business safely as it evolves.

    Harness AI wisely

    AI is here, it is useful, and it is not going anywhere. The smart play for CISOs and security leaders is to adopt AI with intention. These five golden rules give you a blueprint for balancing innovation and protection. They will not stop your employees from experimenting, but they will stop that experimentation from turning into your next security headline.

    Safe AI adoption is not about saying “no.” It is about saying: “yes, but here’s how.”

    Want to see what’s really hiding in your stack? Wing’s got you covered.



    Source: thehackernews.com…

  • Blind Eagle’s Five Clusters Target Colombia Using RATs, Phishing Lures, and Dynamic DNS Infra

    Blind Eagle’s Five Clusters Target Colombia Using RATs, Phishing Lures, and Dynamic DNS Infra

    Cybersecurity researchers have discovered five distinct activity clusters linked to a persistent threat actor known as Blind Eagle between May 2024 and July 2025.

    These attacks, observed by Recorded Future Insikt Group, targeted various victims, but primarily within the Colombian government across local, municipal, and federal levels. The threat intelligence firm is tracking the activity under the name TAG-144.

    “Although the clusters share similar tactics, techniques, and procedures (TTPs) such as leveraging open-source and cracked remote access trojans (RATs), dynamic domain providers, and legitimate internet services (LIS) for staging, they differ significantly in infrastructure, malware deployment, and other operational methods,” the Mastercard-owned company said.

    Blind Eagle has a history of targeting organizations in South America since at least 2018, with the attacks reflecting both cyber espionage and financially driven motivations. This is evidenced in their recent campaigns, which have involved banking-related keylogging and browser monitoring as well as targeting government entities using various remote access trojans (RATs).

    Cybersecurity

    Targets of the group’s attacks include the judiciary and tax authorities, along with entities in the financial, petroleum, energy, education, healthcare, manufacturing, and professional services sectors. The operations predominantly span Colombia, Ecuador, Chile, and Panama, and, in some cases, Spanish-speaking users in North America.

    Attack chains typically involve the use of spear-phishing lures impersonating local government agencies to entice recipients into opening malicious documents or clicking on links concealed using URL shorteners like cort[.]as, acortaurl[.]com, and gtly[.]to.

    Blind Eagle makes use of compromised email accounts to send the messages and leverages geofencing tricks to redirect users to official government websites when attempting to navigate to attacker-controlled infrastructure outside of Colombia or Ecuador.

    “TAG-144’s command-and-control (C2) infrastructure often incorporates IP addresses from Colombian ISPs alongside virtual private servers (VPS) such as Proton666 and VPN services like Powerhouse Management, FrootVPN, and TorGuard,” Recorded Future said. “This setup is further enhanced by the use of dynamic DNS services, including duckdns[.]org, ip-ddns[.]com, and noip[.]com.”

    The threat group has also taken advantage of legitimate internet services, such as Bitbucket, Discord, Dropbox, GitHub, Google Drive, the Internet Archive, lovestoblog.com, Paste.ee, Tagbox, and lesser-known Brazilian image-hosting websites, for staging payloads in order to obscure malicious content and evade detection.

    Recent campaigns orchestrated by the threat actor have employed a Visual Basic Script file as a dropper to execute a dynamically generated PowerShell script at runtime, which, in turn, reaches out to an external server to download an injector module that’s responsible for loading Lime RAT, DCRat, AsyncRAT, or Remcos RAT.

    The regional focus aside, the hacking group has consistently relied on the same techniques since its emergence, underscoring how “well-established methods” continue to yield high success rates in the region.

    Recorded Future’s analysis of Blind Eagle’s campaigns has uncovered five clusters of activity –

    • Cluster 1 (from February through July 2025), which has targeted Colombian government entities exclusively with DCRat, AsyncRAT, and Remcos RAT
    • Cluster 2 (from September through December 2024), which has targeted Colombian government and entities in the education, defense, and retail sectors with AsyncRAT and XWorm
    • Cluster 3 (from September 2024 through July 2025), which is characterized by the deployment of AsyncRAT and Remcos RAT
    • Cluster 4 (from May 2024 through February 2025), which is associated with malware and phishing infrastructure attributed to TAG-144, with the phishing pages mimicking Banco Davivienda, Bancolombia, and BBVA
    • Cluster 5 (from March through July 2025), which is associated with Lime RAT and a cracked AsyncRAT variant observed in Clusters 1 and 2

    The digital missives used in these campaigns come with an SVG attachment, which then reaches out to Discord CDN to retrieve a JavaScript payload that, for its part, fetches a PowerShell script from Paste.ee. The PowerShell script is designed to decode and execute another PowerShell payload that obtains a JPG image hosted on the Internet Archive and extracts from it an embedded .NET assembly.
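
    Payloads smuggled inside images this way are often simply appended after the JPEG end-of-image (EOI) marker, which makes a basic integrity check possible. The following is a simplified defender-side heuristic (it will not catch true steganography, only trailing-data polyglots):

```python
# A well-formed JPEG ends with the EOI marker FF D9; embedded payloads
# are frequently appended after it. Flag any trailing bytes past the
# final EOI marker. Simplified heuristic only.

JPEG_EOI = b"\xff\xd9"

def trailing_bytes_after_eoi(data: bytes) -> int:
    """Return the number of bytes after the final EOI marker (0 if clean)."""
    idx = data.rfind(JPEG_EOI)
    if idx == -1:
        return len(data)  # no EOI at all: not a well-formed JPEG
    return len(data) - (idx + len(JPEG_EOI))

clean = b"\xff\xd8...image data...\xff\xd9"
tampered = clean + b"MZ\x90\x00embedded assembly bytes"
print(trailing_bytes_after_eoi(clean))     # 0
print(trailing_bytes_after_eoi(tampered))  # non-zero
```

    Running a check like this over images fetched from CDNs or archive hosts is a cheap way to surface this class of staging abuse.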

    Identity Security Risk Assessment

    Interestingly, the cracked version of AsyncRAT used in the attacks has been previously observed in connection with intrusion activity mounted by threat actors Red Akodon and Shadow Vector, both of which have targeted Colombia over the past year.

    Nearly 60% of the observed Blind Eagle activity during the analysis period has targeted the government sector, followed by education, healthcare, retail, transportation, defense, and oil verticals.

    “Although TAG-144 has targeted other sectors and has occasionally been linked to intrusions in additional South American countries such as Ecuador, as well as Spanish-speaking victims in the US, its primary focus has consistently remained on Colombia, particularly on government entities,” Recorded Future said.

    “This persistent targeting raises questions about the threat group’s true motivations, such as whether it operates solely as a financially driven threat actor leveraging established tools, techniques, and monetization strategies, or whether elements of state-sponsored espionage are also at play.”


    Source: thehackernews.com…

  • Salesloft OAuth Breach via Drift AI Chat Agent Exposes Salesforce Customer Data

    Salesloft OAuth Breach via Drift AI Chat Agent Exposes Salesforce Customer Data

    Aug 27, 2025Ravie LakshmananCloud Security / Threat Intelligence

    A widespread data theft campaign has allowed hackers to breach sales automation platform Salesloft to steal OAuth and refresh tokens associated with the Drift artificial intelligence (AI) chat agent.

    The activity, assessed to be opportunistic in nature, has been attributed to a threat actor that Google Threat Intelligence Group and Mandiant track as UNC6395.

    “Beginning as early as August 8, 2025, through at least August 18, 2025, the actor targeted Salesforce customer instances through compromised OAuth tokens associated with the Salesloft Drift third-party application,” researchers Austin Larsen, Matt Lin, Tyler McLellan, and Omar ElAhdan said.

    In these attacks, the threat actors have been observed exporting large volumes of data from numerous corporate Salesforce instances, with the likely aim of harvesting credentials that could then be used to compromise victim environments. These include Amazon Web Services (AWS) access keys (AKIA), passwords, and Snowflake-related access tokens.

    Cybersecurity

    UNC6395 has also demonstrated operational security awareness by deleting query jobs, although Google is urging organizations to review relevant logs for evidence of data exposure, alongside revoking API keys, rotating credentials, and performing further investigation to determine the extent of compromise.
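
    The log review Google recommends can start with a simple pattern sweep for the kinds of secrets UNC6395 harvested. AWS access key IDs follow a documented AKIA-prefixed format; the sketch below uses AWS's published example key and is illustrative, not an exhaustive secret scanner:

```python
# Scan exported text (logs, Salesforce object dumps) for AWS access key
# IDs: "AKIA" followed by 16 uppercase alphanumeric characters, per the
# documented AWS format. A real sweep would add patterns for passwords,
# Snowflake tokens, and other secret types.
import re

AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_key_ids(text):
    """Return AWS access key IDs found in a blob of exported text."""
    return AWS_KEY_RE.findall(text)

sample = "password=hunter2 aws_access_key_id=AKIAIOSFODNN7EXAMPLE"
print(find_aws_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

    Any hit in exfiltrated data should be treated as a compromised credential and rotated, not merely monitored.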

    Salesloft, in an advisory issued August 20, 2025, said it identified a security issue in the Drift application and that it has proactively revoked connections between Drift and Salesforce. The incident does not affect customers who do not integrate with Salesforce.

    “A threat actor used OAuth credentials to exfiltrate data from our customers’ Salesforce instances,” Salesloft said. “The threat actor executed queries to retrieve information associated with various Salesforce objects, including Cases, Accounts, Users, and Opportunities.”

    The company is also recommending that administrators re-authenticate their Salesforce connection to re-enable the integration. The exact scale of the activity is not known. However, Salesloft said it has notified all affected parties.

    In a statement Tuesday, Salesforce said a “small number of customers” were impacted, stating the issue stems from a “compromise of the app’s connection.”

    “Upon detecting the activity, Salesloft, in collaboration with Salesforce, invalidated active Access and Refresh Tokens, and removed Drift from AppExchange. We then notified affected customers,” Salesforce added.

    The development comes as Salesforce instances have become an active target for financially motivated threat groups like UNC6040 and UNC6240 (aka ShinyHunters), the latter of which has since joined hands with Scattered Spider (aka UNC3944) to secure initial access.

    Identity Security Risk Assessment

    “What’s most noteworthy about the UNC6395 attacks is both the scale and the discipline,” Cory Michal, CSO of AppOmni, said. “This wasn’t a one-off compromise; hundreds of Salesforce tenants of specific organizations of interest were targeted using stolen OAuth tokens, and the attacker methodically queried and exported data across many environments.”

    “They demonstrated a high level of operational discipline, running structured queries, searching specifically for credentials, and even attempting to cover their tracks by deleting jobs. The combination of scale, focus, and tradecraft makes this campaign stand out.”

    Michal also pointed out that many of the targeted and compromised organizations were themselves security and technology companies, indicating that the campaign may be an “opening move” as part of a broader supply chain attack strategy.

    “By first infiltrating vendors and service providers, the attackers put themselves in position to pivot into downstream customers and partners,” Michal added. “That makes this not just an isolated SaaS compromise, but potentially the foundation for a much larger campaign aimed at exploiting the trust relationships that exist across the technology supply chain.”


    Source: thehackernews.com…