Author: Mark

  • ThreatsDay Bulletin: AI Tools in Malware, Botnets, GDI Flaws, Election Attacks & More


    Nov 06, 2025 | Ravie Lakshmanan | Cybersecurity / Hacking News

    Cybercrime has stopped being a problem of just the internet – it’s becoming a problem of the real world. Online scams now fund organized crime, hackers rent out violence as a service, and even trusted apps and social platforms are turning into attack vectors.

    The result is a global system where every digital weakness can be turned into physical harm, economic loss, or political leverage. Understanding these links is no longer optional — it’s survival.

    For a full look at the most important security news stories of the week, keep reading.

    1. AI speeds triage but human skill still needed

      Check Point has demonstrated how ChatGPT can be used for malware analysis, tipping the balance when it comes to taking apart sophisticated trojans like XLoader, which is protected by multiple layers of encryption and designed so that its code decrypts only at runtime. Specifically, the research found that cloud-based static analysis with ChatGPT can be combined with MCP for runtime key extraction and live debugging validation. “The use of AI doesn’t eliminate the need for human expertise,” security researcher Alexey Bukhteyev said. “XLoader’s most sophisticated protections, such as scattered key derivation logic and multi-layer function encryption, still require manual analysis and targeted adjustments. But the heavy lifting of triage, deobfuscation, and scripting can now be accelerated dramatically. What once took days can now be compressed into hours.”
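      As a rough illustration of the general pattern (not Check Point’s actual tooling or prompts), the sketch below assumes the OpenAI Python SDK, an API key in the environment, and a pre-extracted disassembled function as input; the model is asked only to describe suspected obfuscation and propose next steps, leaving key extraction and validation to the analyst.

```python
# Illustrative sketch only: asks an LLM to triage a disassembled function.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def triage_function(disassembly: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to describe likely obfuscation and suggest next manual steps."""
    prompt = (
        "You are assisting with static malware analysis.\n"
        "Given the following disassembled function, describe any obfuscation or "
        "encryption patterns you see and suggest concrete deobfuscation steps. "
        "Do not guess decrypted content; flag anything that requires runtime analysis.\n\n"
        f"{disassembly}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "push ebp\nmov ebp, esp\nxor eax, eax\n..."  # placeholder disassembly
    print(triage_function(sample))
```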

    Every hack or scam has one thing in common — someone takes advantage of trust. As security teams improve their defenses, attackers quickly find new tricks. The best way to stay ahead isn’t to panic, but to stay informed, keep learning, and stay alert.

    Cybersecurity keeps changing fast — and our understanding needs to keep up.


    Source: thehackernews.com…

  • From Tabletop to Turnkey: Building Cyber Resilience in Financial Services


    Introduction

    Financial institutions are facing a new reality: cyber resilience has moved from being a best practice to an operational necessity, and now to a prescriptive regulatory requirement.

    Crisis management or tabletop exercises, long relatively rare in the context of cybersecurity, are now mandatory for financial services (FSI) organizations in several regions, as a series of regulations has introduced the requirement: DORA (Digital Operational Resilience Act) in the EU; CPS 230 and CORIE (Cyber Operational Resilience Intelligence-led Exercises) in Australia; the MAS TRM (Monetary Authority of Singapore Technology Risk Management) guidelines; FCA/PRA Operational Resilience in the UK; the FFIEC IT Handbook in the US; and the SAMA Cybersecurity Framework in Saudi Arabia.

    What makes complying with these regulatory requirements complex is the cross-functional collaboration required between technical and non-technical teams. For example, simulation of the technical aspects of a cyber incident – in other words, red teaming – is required, if not at precisely the same time, then certainly within the same resilience program, in the same context, and with many of the same inputs and outputs. This requirement is strongest in the regulations based on the TIBER-EU framework, particularly CORIE and DORA.

    There’s Always Excel

    As requirements become more prescriptive and best practices more established, what used to be a tabletop exercise driven by a simple Excel file with a short series of events, timestamps, personas, and comments has grown into a collection of scenarios, scripts, threat landscape analyses, threat actor profiles, TTPs and IOCs, folders of threat reports, hacking tools, injects, and reports – all of which must be reviewed, prepared, rehearsed, played, analyzed, and reported on, at least once per year, if not per quarter, if not continuously.

    While Excel is a stalwart in each of the cyber, financial, and GRC domains, even it has its limits at these levels of complexity.

    Blending Tabletop and Red Team Simulation

    Over the past several years, Filigran has advanced OpenAEV to the point where you can design and execute end-to-end scenarios that blend human communications with technical events. Initially launched as a crisis simulation management platform, it later incorporated breach and attack simulation and has since grown into a holistic adversarial exposure management platform, providing a unique capability to assess both technical and human readiness.

    Simulations are more realistic when ransomware encryption alerts are followed by emails from confused users

    There are many advantages to blending these two capabilities in one tool. For a start, it greatly simplifies the preparation work for the scenario. Following threat landscape research in OpenCTI (a threat intelligence platform), a relevant intelligence report can be used both to generate the technical injects based on the attacker’s TTPs and to build content – such as attacker communications, third-party Security Operations Centre and Managed Detection and Response communications, and internal leadership communications – from the intelligence and timing in the same report.
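    For illustration, here is a minimal, hypothetical sketch of what such a blended scenario might look like as data; the Scenario and Inject structures and their field names are invented for this example and are not OpenAEV’s actual data model.

```python
# Hypothetical sketch of a blended scenario: field names are illustrative,
# not OpenAEV's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Inject:
    offset_minutes: int          # minutes after scenario start
    kind: str                    # "technical" (red team) or "email" (tabletop)
    title: str
    detail: str

@dataclass
class Scenario:
    name: str
    source_report: str           # e.g. the threat intel report the injects were built from
    injects: List[Inject] = field(default_factory=list)

scenario = Scenario(
    name="Ransomware tabletop + technical simulation",
    source_report="intel-report-ransomware-2025-10",
    injects=[
        Inject(0,  "technical", "Initial access simulation",
               "Emulate a phishing payload execution on a test endpoint"),
        Inject(20, "technical", "Encryption behaviour simulation",
               "Trigger benign file-rename activity to exercise EDR detections"),
        Inject(25, "email", "Confused user report",
               "Simulated email from a user reporting files they can no longer open"),
        Inject(45, "email", "MDR escalation",
               "Simulated third-party SOC/MDR notification to the crisis team"),
    ],
)

# Print a simple run sheet ordered by time.
for inj in sorted(scenario.injects, key=lambda i: i.offset_minutes):
    print(f"T+{inj.offset_minutes:>3} min [{inj.kind:9}] {inj.title}: {inj.detail}")
```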

    Keeping Track of the Team

    Using a single tool also deduplicates logistics before, during, and after the exercise. “Players” in the exercise, along with their teams and organizational units, can be synchronized with enterprise Identity and Access Management sources, so that the recipients of alerts from technical events during the exercise are the same people who receive simulated crisis emails from the tabletop components, the same people who receive the automated feedback questionnaires for the ‘hot wash’ review immediately after the exercise, and the same people who appear in the final reports for auditor review.

    OpenAEV can synchronise current team participant and analyst details from multiple identity sources

    Similarly, if the same exercise is run again after lessons learnt have been put into place, as part of the demonstrable continual improvement required under DORA and CORIE, this synchronization will maintain a current contact list for the individuals in these roles, for the alternate phone tree and out-of-band crisis communications channels that must likewise be kept up to date, and for third parties such as MSSP, MDR, and upstream supply chain providers.

    Similar efficiencies exist in threat landscape tracking, threat report mapping, and other features. As with all business processes, streamlining logistics makes for greater efficiency, enabling shorter preparation times, and more frequent simulations.

    Choosing your timing

    With CORIE and DORA being relatively recently enforced regulations, most organizations will be just starting their journey in running tabletop and red team scenarios, with much refinement in the process still to come. For such organizations, running blended simulations may feel too large a first step.

    This is fine. Scenarios can be run in OpenAEV in more discrete ways. Most typically, this might involve running a red team simulation on the first day, to test detective and preventative technical controls and SOC response processes. The tabletop exercise would then be run on the second day, and can be tweaked to reflect findings and timings from the technical exercise.

    Simulations can be scheduled to repeat over days, weeks, or months

    More interestingly, simulations can be scheduled and run over much longer periods of time – even months. This permits automation and management of trickier but very real scenarios, such as leaving signs of intrusion on hosts in advance and challenging the SOC, IR, and CTI teams to show that they can retrieve logs from archive in order to search for patient zero, the first system compromised. This can be hard to model realistically in a day’s simulation, but it is all too common a requirement in reality.
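    As a simple illustration of the kind of task such a long-running scenario exercises, the sketch below scans gzipped archived logs for a known indicator and reports the earliest matching host and timestamp as a rough proxy for patient zero; the archive path, log format, and indicator are hypothetical.

```python
# Illustrative only: hypothetical archive layout, log format, and indicator.
# Scans gzipped logs for an indicator and reports the earliest matching entry.
import glob
import gzip
from datetime import datetime

ARCHIVE_GLOB = "/archive/proxy/2025-*/**/*.log.gz"   # hypothetical path
INDICATOR = "evil-c2.example.com"                     # hypothetical IOC

def find_patient_zero(pattern: str, indicator: str):
    earliest = None
    for path in glob.glob(pattern, recursive=True):
        with gzip.open(path, "rt", errors="replace") as fh:
            for line in fh:
                if indicator not in line:
                    continue
                # Assume log lines start with "2025-03-14T09:21:07 host-name ..."
                parts = line.split(maxsplit=2)
                if len(parts) < 2:
                    continue
                try:
                    ts = datetime.fromisoformat(parts[0])
                except ValueError:
                    continue
                if earliest is None or ts < earliest[0]:
                    earliest = (ts, parts[1], path)
    return earliest

if __name__ == "__main__":
    hit = find_patient_zero(ARCHIVE_GLOB, INDICATOR)
    if hit:
        print(f"Earliest sighting: {hit[0]} on {hit[1]} (from {hit[2]})")
    else:
        print("Indicator not found in the archived logs.")
```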

    Practice makes Perfect

    Aside from the regulatory requirements, insurance conditions, risk management, and other external drivers, the ability to streamline attack simulations and tabletop exercises around current, relevant threats – with all the technical integrations, scheduling, and automation that enable this – means that your security, leadership, and crisis management teams will develop a muscle memory and flow that engenders confidence in your organization’s ability to handle a real crisis when the next one occurs.

    Having access to a tool like OpenAEV – which is free for community use, with a library of common ransomware and threat scenarios, technical integrations with SIEMs and EDRs, and an extensible, open source integration ecosystem – is one of many ways in which we can improve our cyber defenses and cyber resilience. And, not to forget, our compliance.

    And when your team is fully rehearsed and confident at handling crisis situations, then it’s no longer a crisis.

    Ready to Take the Next Step?

    To dive deeper into how organizations can turn regulatory mandates into actionable resilience strategies, join one of Filigran’s upcoming expert-led sessions:

    Operationalizing Incident Response: Compliance-Ready Tabletop Exercises with an AEV Platform



    Source: thehackernews.com…

  • Bitdefender Named a Representative Vendor in the 2025 Gartner® Market Guide for Managed Detection and Response


    Nov 06, 2025 | The Hacker News | United States

    Bitdefender has once again been recognized as a Representative Vendor in the Gartner® Market Guide for Managed Detection and Response (MDR) — marking the fourth consecutive year of inclusion. According to Gartner, more than 600 providers globally claim to deliver MDR services, yet only a select few meet the criteria to appear in the Market Guide. While inclusion is not a ranking or comparative assessment, we believe it underscores Bitdefender’s human-driven approach to MDR and our continued alignment with Gartner’s rigorous inclusion standards.

    To be included, providers must demonstrate consistent visibility through Gartner client inquiries or Peer Insights reviews, focus on delivering end-user-oriented services rather than purely technological solutions, and represent a variety of company sizes and geographies.

    We believe independent analyst research like the Gartner Market Guide for Managed Detection and Response is a valuable resource for organizations assessing MDR providers. The report outlines the evolving MDR landscape, identifies its core components, and highlights emerging trends — including the growing emphasis on proactive exposure management.

    Download the Report

    Why MDR Adoption Is Accelerating

    The MDR market continues to expand rapidly, fueled by two key forces: the rising sophistication of cyber threats and the ongoing shortage of skilled in-house security talent. While large enterprises have long had access to around-the-clock monitoring and expert-led response, small and mid-sized organizations are increasingly recognizing the same need — often without the capacity to build and maintain full Security Operations Centers (SOCs).

    For these organizations, MDR delivers human-led, enterprise-grade protection with proactive exposure management — without the complexity or cost of running it internally. Bitdefender MDR integrates advanced detection technologies, global threat intelligence, and expert-led response, giving organizations access to elite analysts who monitor, investigate, and neutralize threats 24×7. This approach enhances resilience, reduces alert fatigue, and allows internal teams to focus on strategic initiatives instead of managing constant alerts.

    Organizations leveraging MDR typically experience faster detection, reduced dwell time, and increased confidence in handling advanced attacks such as ransomware or supply-chain compromises. Many also report improved compliance readiness and more efficient recovery from incidents. As threat actors exploit vulnerabilities across cloud, identity, and endpoint layers, MDR fills a critical role by delivering continuous visibility and active defense.

    Bitdefender MDR stands out for its focus on proactive threat hunting — identifying hidden adversaries before damage occurs — and its use of AI-driven analytics to surface only the most relevant, high-priority alerts. This blend of human expertise and advanced technology enables rapid containment and minimal business disruption, delivering measurable security outcomes for organizations of all sizes.

    Choosing the Right MDR Partner

    When selecting an MDR provider, prioritize services that can proactively reduce exposure, hunt for emerging threats, and enable rapid incident containment.

    An MDR service that accomplishes these goals doesn’t just reinforce defenses — it transforms your security posture. By minimizing exposure, detecting threats early, and responding with speed and accuracy, you gain stronger protection and lasting peace of mind. Your team can operate confidently knowing expert defenders are watching over your environment 24×7, ready to act before anomalies escalate into breaches.

    Join your industry peers in downloading the Gartner Market Guide for Managed Detection and Response to take the next step in your MDR journey. According to the 2025 Bitdefender Cybersecurity Assessment, 64% of IT and security professionals say independent evaluations and research from organizations like Gartner and MITRE influence their cybersecurity purchasing decisions — underscoring the importance of trusted third-party insights in shaping effective security strategies.



    Source: thehackernews.com…

  • Hackers Weaponize Windows Hyper-V to Hide Linux VM and Evade EDR Detection


    Nov 06, 2025 | Ravie Lakshmanan | Malware / Network Security

    The threat actor known as Curly COMrades has been observed exploiting virtualization technologies as a way to bypass security solutions and execute custom malware.

    According to a new report from Bitdefender, the adversary is said to have enabled the Hyper-V role on selected victim systems to deploy a minimalistic, Alpine Linux-based virtual machine.

    “This hidden environment, with its lightweight footprint (only 120MB disk space and 256MB memory), hosted their custom reverse shell, CurlyShell, and a reverse proxy, CurlCat,” security researcher Victor Vrabie, along with Adrian Schipor and Martin Zugec, said in a technical report.


    Curly COMrades was first documented by the Romanian cybersecurity vendor in August 2025 in connection with a series of attacks targeting Georgia and Moldova. The activity cluster is assessed to have been active since late 2023, operating with interests aligned with Russia.

    These attacks were found to deploy tools like CurlCat for bidirectional data transfer, RuRat for persistent remote access, Mimikatz for credential harvesting, and a modular .NET implant dubbed MucorAgent, with early iterations dating back all the way to November 2023.

    In a follow-up analysis conducted in collaboration with Georgia CERT, additional tooling associated with the threat actor has been identified, alongside attempts to establish long-term access by weaponizing Hyper-V on compromised Windows 10 hosts to set up a hidden remote operating environment.

    “By isolating the malware and its execution environment within a VM, the attackers effectively bypassed many traditional host-based EDR detections,” the researchers said. “The threat actor demonstrated a clear determination to maintain a reverse proxy capability, repeatedly introducing new tooling into the environment.”

    Besides using Resocks, Rsockstun, Ligolo-ng, CCProxy, Stunnel, and SSH-based methods for proxy and tunneling, Curly COMrades has employed various other tools, including a PowerShell script designed for remote command execution and CurlyShell, a previously undocumented ELF binary deployed in the virtual machine that provides a persistent reverse shell.


    Written in C++, the malware is executed as a headless background daemon to connect to a command-and-control (C2) server and launch a reverse shell, allowing the threat actors to run encrypted commands. Communication is achieved via HTTP GET requests to poll the server for new commands and HTTP POST requests to transmit the results of the command execution back to the server.

    “Two custom malware families – CurlyShell and CurlCat – were at the center of this activity, sharing a largely identical code base but diverging in how they handled received data: CurlyShell executed commands directly, while CurlCat funneled traffic through SSH,” Bitdefender said. “These tools were deployed and operated to ensure flexible control and adaptability.”


    Source: thehackernews.com…

  • SonicWall Confirms State-Sponsored Hackers Behind September Cloud Backup Breach


    Nov 06, 2025 | Ravie Lakshmanan | Incident Response / Cloud Security

    SonicWall has formally attributed the September security breach, which led to the unauthorized exposure of firewall configuration backup files, to state-sponsored threat actors.

    “The malicious activity – carried out by a state-sponsored threat actor – was isolated to the unauthorized access of cloud backup files from a specific cloud environment using an API call,” the company said in a statement released this week. “The incident is unrelated to ongoing global Akira ransomware attacks on firewalls and other edge devices.”

    The disclosure comes nearly a month after the company said an unauthorized party accessed firewall configuration backup files for all customers who have used the cloud backup service. In September, it had initially claimed that the threat actors accessed the cloud-stored backup files of less than 5% of its customers.


    SonicWall, which engaged the services of Google-owned Mandiant to investigate the breach, said it did not affect its products or firmware, or any of its other systems. It also said it has adopted various remedial actions recommended by Mandiant to harden its network and cloud infrastructure, and that it will continue to improve its security posture.

    “As nation-state–backed threat actors increasingly target edge security providers, especially those serving SMB and distributed environments, SonicWall is committed to strengthening its position as a leader for partners and their SMB customers on the front lines of this escalation,” it added.

    SonicWall customers are advised to log in to MySonicWall.com, check their devices, and reset the credentials for any impacted services. The company has also released an Online Analysis Tool and a Credentials Reset Tool to identify services that require remediation and to perform credential-related security tasks, respectively.


    Source: thehackernews.com…

  • Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly


    Nov 05, 2025 | Ravie Lakshmanan | Artificial Intelligence / Threat Intelligence

    Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VB Script) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion.

    “PROMPTFLUX is written in VBScript and interacts with Gemini’s API to request specific VBScript obfuscation and evasion techniques to facilitate ‘just-in-time’ self-modification, likely to evade static signature-based detection,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News.

    The novel feature is part of its “Thinking Robot” component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint.

    The prompt sent to the model is both highly specific and machine-parsable, requesting VB Script code changes for antivirus evasion and instructing the model to output only the code itself.

    The regeneration capability aside, the malware saves the new, obfuscated version to the Windows Startup folder to establish persistence and attempts to propagate by copying itself to removable drives and mapped network shares.

    “Although the self-modification function (AttemptToUpdateSelf) is commented out, its presence, combined with the active logging of AI responses to ‘%TEMP%\thinking_robot_log.txt,’ clearly indicates the author’s goal of creating a metamorphic script that can evolve over time,” Google added.


    The tech giant also said it discovered multiple variations of PROMPTFLUX incorporating LLM-driven code regeneration, with one version using a prompt to rewrite the malware’s entire source code every hour by instructing the LLM to act as an “expert VB Script obfuscator.”

    PROMPTFLUX is assessed to be in the development or testing phase, with the malware currently lacking any means to compromise a victim network or device. It’s currently not known who is behind the malware, but signs point to a financially motivated threat actor that has adopted a broad, geography- and industry-agnostic approach to target a wide range of users.

    Google also noted that adversaries are going beyond utilizing AI for simple productivity gains to create tools that are capable of adjusting their behavior in the midst of execution, not to mention developing purpose-built tools that are then sold on underground forums for financial gain. Some of the other instances of LLM-powered malware observed by the company are as follows –

    From a Gemini point of view, the company said it observed a China-nexus threat actor abusing its AI tool to craft convincing lure content, build technical infrastructure, and design tooling for data exfiltration.

    In at least one instance, the threat actor is said to have reframed their prompts by identifying themselves as a participant in a capture-the-flag (CTF) exercise to bypass guardrails and trick the AI system into returning useful information that can be leveraged to exploit a compromised endpoint.

    “The actor appeared to learn from this interaction and used the CTF pretext in support of phishing, exploitation, and web shell development,” Google said. “The actor prefaced many of their prompts about exploitation of specific software and email services with comments such as ‘I am working on a CTF problem’ or ‘I am currently in a CTF, and I saw someone from another team say …’ This approach provided advice on the next exploitation steps in a ‘CTF scenario.’”

    Other instances of Gemini abuse by state-sponsored actors from China, Iran, and North Korea to streamline their operations, including reconnaissance, phishing lure creation, command-and-control (C2) development, and data exfiltration, are listed below –

    • The misuse of Gemini by a suspected China-nexus actor on various tasks, ranging from conducting initial reconnaissance on targets of interest and phishing techniques to delivering payloads and seeking assistance on lateral movement and data exfiltration methods
    • The misuse of Gemini by Chinese nation-state actor APT41 for assistance on code obfuscation and developing C++ and Golang code for multiple tools, including a C2 framework called OSSTUN
    • The misuse of Gemini by Iranian nation-state actor MuddyWater (aka Mango Sandstorm, MUDDYCOAST or TEMP.Zagros) to conduct research to support the development of custom malware to support file transfer and remote execution, while circumventing safety barriers by claiming to be a student working on a final university project or writing an article on cybersecurity
    • The misuse of Gemini by Iranian nation-state actor APT42 (aka Charming Kitten and Mint Sandstorm) to craft material for phishing campaigns that often involve impersonating individuals from think tanks, translating articles and messages, researching Israeli defense, and developing a “Data Processing Agent” that converts natural language requests into SQL queries to obtain insights from sensitive data
    • The misuse of Gemini by North Korean threat actor UNC1069 (aka CryptoCore or MASAN) – one of the two clusters alongside TraderTraitor (aka PUKCHONG or UNC4899) that has succeeded the now-defunct APT38 (aka BlueNoroff) – to generate lure material for social engineering, develop code to steal cryptocurrency, and craft fraudulent instructions impersonating a software update to extract user credentials
    • The misuse of Gemini by TraderTraitor to develop code, research exploits, and improve their tooling

    Furthermore, GTIG said it recently observed UNC1069 employing deepfake images and video lures impersonating individuals in the cryptocurrency industry in their social engineering campaigns to distribute a backdoor called BIGMACHO to victim systems under the guise of a Zoom software development kit (SDK). It’s worth noting that some aspects of the activity share similarities with the GhostCall campaign recently disclosed by Kaspersky.

    The development comes as Google said it expects threat actors to “move decisively from using AI as an exception to using it as the norm” in order to boost the speed, scope, and effectiveness of their operations, thereby allowing them to mount attacks at scale.

    “The increasing accessibility of powerful AI models and the growing number of businesses integrating them into daily operations create perfect conditions for prompt injection attacks,” it said. “Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option.”


    Source: thehackernews.com…

  • Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data


    Nov 05, 2025 | Ravie Lakshmanan | Artificial Intelligence / Vulnerability

    Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI’s ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users’ memories and chat histories without their knowledge.

    The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI’s GPT-4o and GPT-5 models. OpenAI has since addressed some of them.

    These issues expose the AI system to indirect prompt injection attacks, allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News.

    The identified shortcomings are listed below –

    • Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added in the comment section, causing the LLM to execute them
    • Zero-click indirect prompt injection vulnerability in Search Context, which involves tricking the LLM into executing malicious instructions simply by asking about a website in the form of a natural language query, owing to the fact that the site may have been indexed by search engines like Bing and OpenAI’s crawler associated with SearchGPT.
    • Prompt injection vulnerability via one-click, which involves crafting a link in the format “chatgpt[.]com/?q={Prompt},” causing the LLM to automatically execute the query in the “q=” parameter
    • Safety mechanism bypass vulnerability, which takes advantage of the fact that the domain bing[.]com is allow-listed in ChatGPT as a safe URL to set up Bing ad tracking links (bing[.]com/ck/a) to mask malicious URLs and allow them to be rendered on the chat.
    • Conversation injection technique, which involves inserting malicious instructions into a website and asking ChatGPT to summarize the website, causing the LLM to respond to subsequent interactions with unintended replies due to the prompt being placed within the conversational context (i.e., the output from SearchGPT)
    • Malicious content hiding technique, which involves hiding malicious prompts by taking advantage of a bug in how ChatGPT renders markdown, which causes any data appearing after the first word on the same line as a fenced code block opening (```) to not be rendered
    • Memory injection technique, which involves poisoning a user’s ChatGPT memory by concealing hidden instructions in a website and asking the LLM to summarize the site

    The disclosure comes close on the heels of research demonstrating various kinds of prompt injection attacks against AI tools that are capable of bypassing safety and security guardrails –

    • A technique called PromptJacking that exploits three remote code execution vulnerabilities in Anthropic Claude’s Chrome, iMessage, and Apple Notes connectors to achieve unsanitized command injection, resulting in prompt injection
    • A technique called Claude pirate that abuses Claude’s Files API for data exfiltration by using indirect prompt injections that weaponize an oversight in Claude’s network access controls
    • A technique called agent session smuggling that leverages the Agent2Agent (A2A) protocol and allows a malicious AI agent to exploit an established cross-agent communication session to inject additional instructions between a legitimate client request and the server’s response, resulting in context poisoning, data exfiltration, or unauthorized tool execution
    • A technique called prompt inception that employs prompt injections to steer an AI agent to amplify bias or falsehoods, leading to disinformation at scale
    • A zero-click attack called shadow escape that can be used to steal sensitive data from interconnected systems by leveraging standard Model Context Protocol (MCP) setups and default MCP permissioning through specially crafted documents containing “shadow instructions” that trigger the behavior when uploaded to AI chatbots
    • An indirect prompt injection targeting Microsoft 365 Copilot that abuses the tool’s built-in support for Mermaid diagrams for data exfiltration by taking advantage of its support for CSS
    • A vulnerability in GitHub Copilot Chat called CamoLeak (CVSS score: 9.6) that allows for covert exfiltration of secrets and source code from private repositories and full control over Copilot’s responses by combining a Content Security Policy (CSP) bypass and remote prompt injection using hidden comments in pull requests
    • A white-box jailbreak attack called LatentBreak that generates natural adversarial prompts with low perplexity, capable of evading safety mechanisms by substituting words in the input prompt with semantically-equivalent ones and preserving the initial intent of the prompt

    The findings show that exposing AI chatbots to external tools and systems, a key requirement for building AI agents, expands the attack surface by presenting more avenues for threat actors to conceal malicious prompts that end up being parsed by models.

    “Prompt injection is a known issue with the way that LLMs work, and, unfortunately, it will probably not be fixed systematically in the near future,” Tenable researchers said. “AI vendors should take care to ensure that all of their safety mechanisms (such as url_safe) are working properly to limit the potential damage caused by prompt injection.”
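    To make the idea concrete, here is a minimal, illustrative sketch of the allow-listing pattern behind a mechanism like url_safe (not OpenAI’s implementation); the hosts and redirector paths are examples, including the bing[.]com/ck/a tracking endpoint described above.

```python
# Illustrative sketch of an allow-list check for links before rendering them.
# Not OpenAI's url_safe implementation; hosts and redirector paths are examples.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"bing.com", "www.bing.com", "openai.com"}
# Paths on allowed hosts that act as open redirectors / click trackers.
SUSPICIOUS_PATH_PREFIXES = {"bing.com": ("/ck/a",), "www.bing.com": ("/ck/a",)}

def is_renderable(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    if host not in ALLOWED_HOSTS:
        return False
    # Even on allowed hosts, refuse known redirect/tracking endpoints that can
    # mask an arbitrary destination.
    for prefix in SUSPICIOUS_PATH_PREFIXES.get(host, ()):
        if parsed.path.startswith(prefix):
            return False
    return True

if __name__ == "__main__":
    for candidate in (
        "https://www.bing.com/search?q=example",
        "https://www.bing.com/ck/a?some-tracking-redirect",
        "https://attacker.example/payload",
    ):
        print(candidate, "->", "render" if is_renderable(candidate) else "block")
```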

    The development comes as a group of academics from Texas A&M, the University of Texas, and Purdue University found that training AI models on “junk data” can lead to LLM “brain rot,” warning that “heavily relying on Internet data leads LLM pre-training to the trap of content contamination.”


    Last month, a study from Anthropic, the U.K. AI Security Institute, and the Alan Turing Institute also discovered that it’s possible to successfully backdoor AI models of different sizes (600M, 2B, 7B, and 13B parameters) using just 250 poisoned documents, upending previous assumptions that attackers needed to obtain control of a certain percentage of training data in order to tamper with a model’s behavior.

    From an attack standpoint, malicious actors could attempt to poison web content that’s scraped for training LLMs, or they could create and distribute their own poisoned versions of open-source models.

    “If attackers only need to inject a fixed, small number of documents rather than a percentage of training data, poisoning attacks may be more feasible than previously believed,” Anthropic said. “Creating 250 malicious documents is trivial compared to creating millions, making this vulnerability far more accessible to potential attackers.”

    And that’s not all. Another study from Stanford University scientists found that optimizing LLMs for competitive success in sales, elections, and social media can inadvertently drive misalignment, a phenomenon referred to as Moloch’s Bargain.

    “In line with market incentives, this procedure produces agents achieving higher sales, larger voter shares, and greater engagement,” researchers Batu El and James Zou wrote in an accompanying paper published last month.

    “However, the same procedure also introduces critical safety concerns, such as deceptive product representation in sales pitches and fabricated information in social media posts, as a byproduct. Consequently, when left unchecked, market competition risks turning into a race to the bottom: the agent improves performance at the expense of safety.”


    Source: thehackernews.com…

  • Securing the Open Android Ecosystem with Samsung Knox


    Nov 05, 2025 | The Hacker News | Mobile Security / Enterprise IT

    Raise your hand if you’ve heard the myth, “Android isn’t secure.”

    Android phones, such as the Samsung Galaxy, unlock new ways of working. But, as an IT admin, you may worry about security – after all, work data is critical.

    However, outdated concerns can hold your business back from unlocking its full potential. The truth is, with work happening everywhere, every device connected to your network is a potential security breach point. As threats evolve, so must the tools to defend against them.

    Allow me to introduce Samsung Knox—a built-in security platform that combines hardware and software protections on Samsung Galaxy devices. It’s loaded with features and is designed to safeguard data, provide IT teams with deeper control, and offer a flexible foundation for enterprise needs.

    Let’s take a look at some myths about open source and how Samsung can get you on the right path to success.

    Myth 1: “Isn’t Android more prone to malware and attacks?”

    Common concerns around sideloading and third-party apps can be addressed through Samsung Knox’s enterprise controls, which let IT admins curate approved apps and prevent sideloading. Plus, AI-powered malware defense adds another layer of protection to help keep the Android ecosystem secure. Here’s how:

    Proactive protection at scale:

    • Google Play Protect scans over 200 billion apps daily, ensuring threats are blocked before they spread.
    • According to Google, Managed Google Play devices see an exceptionally low rate of potentially harmful app installs, even when company-published apps are included.

    Extra defense with Samsung Knox on Samsung Galaxy devices:

    • Samsung Message Guard protects Samsung Galaxy devices from zero-click attacks by automatically isolating and scanning suspicious image files received through messaging apps.
    • DEFEX (Defeat Exploit) detects abnormal app behaviours and can terminate them before they become active threats.

    Key point: Android security isn’t about being open or closed—it’s about layered, proactive protection. With Samsung Knox on Samsung Galaxy devices, enterprises get exactly that.

    Myth 2: “Aren’t modern threats about platforms, not people?”

    A growing number of breaches today actually stem from human vulnerabilities—not just the platform itself! Let’s take a look:

    The bottom line is, the biggest risks originate from overlooked basics. For example, failing to update devices with the latest security patches and not implementing the necessary IT policies—this applies to both open and closed platforms!

    Here’s how Samsung Knox helps:

    • Know which device to update, when, and why: Knox Asset Intelligence gives IT admins centralized visibility into this information, and Knox E-FOTA provides precise and stable version control that’s hard to match on other platforms.
    • Manage work devices and data according to your business needs: Samsung Knox enhances the security of Samsung Galaxy devices by providing granular security controls and comprehensive visibility. Users can access these features in multiple ways, including by connecting their own Enterprise Mobility Management systems or by using Knox Suite.

    Key point: Closed systems don’t automatically protect against human error. Enterprises need a layered defense, strong policies, and visibility into device behavior. That’s exactly what Samsung Knox delivers!

    Myth 3: “Android updates are slower and harder to manage, right?”

    With modern Android and Samsung Knox tools, updates are now faster, more flexible, and fully manageable at scale. Let’s take a look:

    Android innovations:

    • Mainline enables critical security and system updates to be pushed directly through Google Play – no need to wait for a full OS upgrade.

    Samsung innovations:

    The Samsung Knox platform on Samsung Galaxy devices enables hard-to-beat detailed scheduling and stable deployment. Using Knox E-FOTA, IT admins can:

    • Target specific firmware versions instead of just the latest release.
    • Block all types of user updates, including over-the-air, USB, and unauthorized installations to unintended versions.
    • Schedule updates based not only on time but also on factors like battery level and network bandwidth.
    • Perform on-prem firmware updates without relying on a cloud network environment.

    Key point: With Knox E-FOTA, you gain a strategic level of control that turns mobile updates from a support burden into a predictable, business-aligned process!

    The reality: Samsung Knox transforms Android security

    Samsung Galaxy devices, secured by Samsung Knox, are redefining what mobile security looks like for enterprises. By addressing old vulnerabilities, tackling human-driven threats, and giving IT strategic update control, Samsung Knox shifts Android from “perceived risk” to enterprise-grade resilience.

    The result? Government-grade protection, centralized visibility, and smarter management. Don’t take my word for it; find out for yourself by trying Samsung Knox.



    Source: thehackernews.com…

  • U.S. Sanctions 10 North Korean Entities for Laundering $12.7M in Crypto and IT Fraud


    Nov 05, 2025 | Ravie Lakshmanan | Cybercrime / Ransomware

    The U.S. Treasury Department on Tuesday imposed sanctions against eight individuals and two entities within North Korea’s global financial network for laundering money for various illicit schemes, including cybercrime and information technology (IT) worker fraud.

    “North Korean state-sponsored hackers steal and launder money to fund the regime’s nuclear weapons program,” said Under Secretary of the Treasury for Terrorism and Financial Intelligence John K. Hurley.

    “By generating revenue for Pyongyang’s weapons development, these actors directly threaten U.S. and global security. The Treasury will continue to pursue the facilitators and enablers behind these schemes to cut off the DPRK’s illicit revenue streams.”


    The names of sanctioned individuals and entities are listed below –

    • Jang Kuk Chol (Jang) and Ho Jong Son, who are said to have helped manage funds, including $5.3 million in cryptocurrency, on behalf of First Credit Bank (aka Cheil Credit Bank), which was previously subjected to sanctions in September 2017 for facilitating North Korea’s missile programs
    • Korea Mangyongdae Computer Technology Company (KMCTC), an IT company based in North Korea that has dispatched two IT worker delegations to the Chinese cities of Shenyang and Dandong, and has used Chinese nationals as banking proxies to conceal the origin of funds generated as part of the fraudulent employment scheme
    • U Yong Su, KMCTC’s current president
    • Ryujong Credit Bank, which has provided financial assistance in sanctions avoidance activities between China and North Korea
    • Ho Yong Chol, Han Hong Gil, Jong Sung Hyok, Choe Chun Pom, and Ri Jin Hyok, who are representatives of North Korean financial institutions in Russia and China and are said to have facilitated transactions worth millions of dollars on behalf of the sanctioned banks

    A portion of the $5.3 million has been linked to a North Korean ransomware actor known to have targeted U.S. victims in the past and to have handled revenue from IT worker operations.

    Describing North Korean cyber actors as orchestrating espionage, disruptive attacks, and financial theft at a scale “unmatched” by any other country, the Treasury said the Pyongyang-affiliated cybercriminals have stolen over $3 billion, mostly in digital assets, over the past three years using sophisticated malware and social engineering.

    The department also accused the regime of leveraging its IT army located across the world to gain employment at companies by obfuscating their nationality and identities, and of funneling a huge chunk of their income back to the Democratic People’s Republic of Korea (DPRK).


    “In some instances, DPRK IT workers engage other foreign freelance programmers to establish business partnerships,” it added. “They collaborate with these non-North Korean freelance workers on projects which were originally commissioned to those workers and split the revenue.”

    According to TRM Labs, the cryptocurrency wallet addresses linked to First Credit Bank show “consistent inbound flows resembling salary payments” and that “these flows likely represent income from IT workers employed abroad under false identities.”

    In all, the wallets controlled by the bank are said to have received more than $12.7 million between June 2023 and May 2025, indicating sustained activity spanning over two years.

    “Together, these individuals and entities form a central component of Pyongyang’s sanctions-evasion architecture, enabling the regime to move millions of dollars through both traditional and digital channels, including cryptocurrency, to fund weapons programs and cyber operations,” the blockchain intelligence firm said.


    Source: thehackernews.com…

  • Why SOC Burnout Can Be Avoided: Practical Steps


    Behind every alert is an analyst: tired eyes scanning dashboards, long nights spent on false positives, and the constant fear of missing something big. It’s no surprise that many SOCs face burnout before they face their next breach. But this doesn’t have to be the norm. The path out isn’t through working harder, but through working smarter, together.

    Here are three practical steps every SOC can take to prevent burnout and build a healthier, more resilient team.

    Step 1: Reduce Alert Overload with Real-Time Context

    SOC burnout often starts with alert fatigue. Analysts waste hours dissecting incomplete data because traditional systems provide only fragments of the story. By giving teams the full behavioral context behind alerts, leaders can help them prioritize faster and act with confidence.

    Leading SOCs are already turning to advanced solutions like ANY.RUN’s interactive sandbox to cut through the noise. Instead of static logs, they see the full attack chain unfold in real time, from the first process execution to network connections, registry changes, and data exfiltration attempts. Every action is visualized step by step, giving analysts instant clarity on what’s malicious and what’s safe.

    Check out a recent attack fully exposed in real time

    Real-time analysis of ClickUp abuse, fully exposed in 60 seconds

    For instance, in this analysis session, analysts exposed the entire phishing attack chain in just 60 seconds, uncovering how attackers abused ClickUp to deliver a fake Microsoft 365 login page. This fast, real-time detection turned what could have been hours of log review into a clear, actionable case.

    See how your SOC can achieve 3× higher efficiency and eliminate analyst burnout with real-time, connected analysis.

    Talk to ANY.RUN Experts

    Here’s what SOC teams gain from real-time interactive analysis:

    1. Safe, hands-on investigation: Analysts can interact with live samples inside an isolated environment, reducing the risk of human error in production systems.
    2. Full attack chain exposure: Visibility into every process, file, and network action helps identify the threat’s origin, intent, and lateral movement.
    3. IOC extraction in seconds: Behavioral data is automatically captured, making it easy to feed verified indicators directly into detection systems.
    4. Fewer false positives: Clear behavioral evidence allows teams to confirm or dismiss alerts faster, improving confidence and focus.

    Result: Faster triage, reduced noise, and a calmer, more efficient SOC.

    Step 2: Automate Repetitive Work to Protect Analyst Focus

    Even the best SOCs lose countless hours to manual, low-impact tasks: collecting logs, exporting reports, copying IOCs, and updating tickets. These repetitive duties might seem small, but together they drain focus, slow investigations, and feed the burnout cycle.

    Automation breaks this pattern. When systems take care of the routine, analysts can dedicate their time to higher-value work: investigation, detection tuning, and incident response.

    The real breakthrough comes from combining automation with interactive analysis. This pairing saves enormous time while keeping analysts in control. In fact, some sandboxes like ANY.RUN now include automated interactivity: a feature that performs human-like actions such as solving CAPTCHAs, uncovering hidden malicious links behind QR codes, and executing tasks that traditional tools can’t handle without manual input.

    QR code–based phishing fully exposed inside ANY.RUN sandbox; the hidden malicious link and full attack chain revealed in under 60 seconds.

    The sandbox behaves as an analyst would, interacting with the sample autonomously while still allowing experts to step in whenever needed.

    As a result, SOC teams gain both efficiency and flexibility, scaling their capacity without sacrificing precision. According to ANY.RUN’s latest survey, teams using this combination of automation and interactivity achieved remarkable results:

    • 95% of SOC teams sped up threat investigations.
    • Up to 20% decrease in workload for Tier 1 analysts.
    • 30% reduction in Tier 1 → Tier 2 escalations.
    • 3× higher SOC efficiency through faster triage and automated evidence collection.

    Result: A focused, high-performing SOC where automation handles the dull work, and analysts handle what truly matters.

    Step 3: Integrate Real-Time Threat Intelligence to Cut Manual Work

    One of the most exhausting parts of a SOC analyst’s job is chasing outdated data: verifying domains that are already inactive, checking expired IOCs, or switching between disconnected tools just to confirm what’s real. This constant context-switching drains focus and leads straight to burnout.

    The solution is smarter integration. When fresh, verified threat intelligence flows directly into existing tools, analysts spend less time hunting for context and more time acting on it.

    That’s why leading teams use ANY.RUN’s Threat Intelligence Feeds, which gather live IOCs from more than 15,000 SOCs and 500,000 analysts worldwide. Each indicator comes straight from real-time sandbox investigations, meaning the data reflects current phishing kits, redirect chains, and active infrastructure, not last month’s reports.
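    As a rough sketch of what feeding verified indicators directly into existing tooling can look like, the snippet below pulls domain indicators from a hypothetical CSV feed endpoint (not ANY.RUN’s actual feed URL or format) and matches them against local proxy log lines.

```python
# Illustrative only: hypothetical feed URL, token, and CSV layout.
# Pulls domain indicators from a threat intel feed and matches them against proxy logs.
import csv
import io
import urllib.request

FEED_URL = "https://feeds.example.com/iocs/domains.csv"   # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                   # hypothetical token
PROXY_LOG = "proxy.log"

def fetch_domain_iocs() -> set[str]:
    req = urllib.request.Request(FEED_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    reader = csv.DictReader(io.StringIO(text))             # assumes a "domain" column
    return {row["domain"].strip().lower() for row in reader if row.get("domain")}

def match_against_logs(iocs: set[str], log_path: str):
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if any(domain in line.lower() for domain in iocs):
                yield line.rstrip()

if __name__ == "__main__":
    indicators = fetch_domain_iocs()
    for hit in match_against_logs(indicators, PROXY_LOG):
        print("Possible IOC match:", hit)
```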

    Because these feeds integrate smoothly with existing SOC platforms, analysts can:

    1. Access continuously updated data without leaving their familiar environment.
    2. See how threats actually behave by tracing each IOC back to its live sandbox analysis.
    3. Avoid repetitive manual checks for outdated domains or expired indicators.
    4. Act faster with confidence, using evidence backed by current global activity.

    Result: Fewer context switches, faster validation, and analysts who stay sharp instead of overwhelmed.

    Prevent Analyst Burnout with Real-Time Insight and Smarter Workflows

    SOC burnout doesn’t come from the workload alone; it comes from slow tools, outdated data, and constant context switching. When teams gain real-time visibility, automated workflows, and connected intelligence, they move faster, think clearer, and stay motivated longer.

    With these improvements, SOCs can:

    • Stay ahead of evolving threats with always-fresh intelligence
    • Eliminate repetitive manual work through automation
    • Investigate incidents faster with full behavioral context
    • Keep analysts focused, confident, and engaged

    Talk to ANY.RUN experts to discover how your SOC can replace fatigue with focus and transform burnout into better performance.



    Source: thehackernews.com…