Author: Mark

  • Between Buzz and Reality: The CTEM Conversation We All Need

    Between Buzz and Reality: The CTEM Conversation We All Need

    Jun 24, 2025Ravie LakshmananThreat Exposure Management

    I had the honor of hosting the first episode of the Xposure Podcast live from Xposure Summit 2025. And I couldn’t have asked for a better kickoff panel: three cybersecurity leaders who don’t just talk security, they live it.

    Let me introduce them.

    Alex Delay, CISO at IDB Bank, knows what it means to defend a highly regulated environment. Ben Mead, Director of Cybersecurity at Avidity Biosciences, brings a forward-thinking security perspective that reflects the innovation behind Avidity’s targeted RNA therapeutics. Last but not least, Michael Francess, Director of Cybersecurity Advanced Threat at Wyndham Hotels and Resorts, leads the charge in protecting the franchise. Each brought a unique vantage point to a common challenge: applying Continuous Threat Exposure Management (CTEM) to complex production environments.

    Gartner made waves in 2023 with a bold prediction: organizations that prioritize CTEM will be three times less likely to be breached by 2026. But here’s the kicker – only if it’s operationalized.

    Speaking with these seasoned defenders, we unpacked the realities and challenges behind the hype of implementing and operationalizing an effective Exposure Management strategy, addressing the following tough questions:

    • What does a good CTEM program look like and what are the typical challenges that need to be overcome?
    • How do you optimize cyber and risk reporting to influence board-level decisions?
    • And ultimately, how do you measure the success of your CTEM program?

    Challenges, Priorities, and Best Practices

    CTEM isn’t plug-and-play. The panelists’ prescription was clear: start with asset inventory and identity management; weak service accounts, over-permissioned users, legacy logins. None of these are small gaps, they’re wide-open doors that need to be checked frequently. And for all of our panelists, frequency matters – a lot. Because guess what? Adversaries are constantly challenging defenses too. For internal assets, weekly validation is the rule of thumb. For external-facing assets? Daily. As they see it, it’s the only way to maintain a constant handle over their constantly changing environments.

    Surprisingly, Michael pointed to threat intelligence as the backbone of any security testing program. “You need to understand your adversaries, simulate their TTPs, and test your defenses against real-world scenarios, not just patching CVEs.” That’s the key difference between CTEM and vulnerability management. Vulnerability management is about patching. Exposure management is about figuring out whether your controls actually work to block threats.

    Reporting: Translating Cyber to Risk Terms

    In the banking industry, like many other highly regulated industries, Alex couldn’t emphasize enough the need to be prepared to answer hard questions asked from regulators. “You will get challenged on your exposure, your remediation timelines, and your risk treatment. And that’s a good thing. It forces clarity and accountability”.

    But even outside regulated industries, the conversation is changing. Boards do not want to hear about CVSS scores. They want to understand risk – and that’s a completely different discussion. Is the company’s risk profile going up or down? Where is it concentrated? And what are we doing about it?

    Measuring Progress

    Success in CTEM isn’t about counting vulnerabilities; Ben pinned it down when he said he measures the number of exploited attack paths his team has closed. He shared how validating attack paths revealed risky security gaps, like over-permissioned accounts and forgotten assets. Suddenly, risk becomes visible.

    Others took it in another direction with tabletop exercises that walk leadership through real

    attack scenarios. It’s not about metrics, it’s about explaining the risk and the consequences. A shift that moves the discussion from noise to signal, and gives the business clarity on what matters: where we’re exposed, and what we’re doing about it.

    From Concept to Action

    Want to hear how these defenders are putting CTEM into action without drowning in noise?

    This episode dives deep into the real questions: where do you start, how do you stay focused on what’s exploitable, and how do you connect it all to business risk? You’ll hear first-hand how security leaders like Alex, Ben, and Michael are tackling these challenges head-on, with a few surprises along the way…

    🎧Make sure to catch the full conversation on Apple Podcast and Spotify

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Hackers Exploit Misconfigured Docker APIs to Mine Cryptocurrency via Tor Network

    Hackers Exploit Misconfigured Docker APIs to Mine Cryptocurrency via Tor Network

    Jun 24, 2025Ravie LakshmananCloud Security / Cryptojacking

    Docker APIs to Mine Cryptocurrency

    Misconfigured Docker instances are the target of a campaign that employs the Tor anonymity network to stealthily mine cryptocurrency in susceptible environments.

    “Attackers are exploiting misconfigured Docker APIs to gain access to containerized environments, then using Tor to mask their activities while deploying crypto miners,” Trend Micro researchers Sunil Bharti and Shubham Singh said in an analysis published last week.

    In using Tor, the idea is to anonymize their origins during the installation of the miner on compromised systems. The attacks, per the cybersecurity company, commence with a request from the IP address 198.199.72[.]27 to obtain a list of all containers on the machine.

    If no containers are present, the attacker proceeds to create a new one based on the “alpine” Docker image and mounts the “/hostroot” directory – i.e., the root directory (“/”) of the physical or virtual host machine – as a volume inside it. This behavior poses security risks as it allows the container to access and modify files and directories on the host system, resulting in a container escape.

    Cybersecurity

    The threat actors then execute a carefully orchestrated sequence of actions that involves running a Base64-encoded shell script to set up Tor on the container as part of the creation request and ultimately fetch and execute a remote script from a .onion domain (“wtxqf54djhp5pskv2lfyduub5ievxbyvlzjgjopk6hxge5umombr63ad[.]onion”)

    “It reflects a common tactic used by attackers to hide command-and-control (C&C) infrastructure, avoid detection, and deliver malware or miners within compromised cloud or container environments,” the researchers said. “Additionally, the attacker uses ‘socks5h’ to route all traffic and DNS resolution through Tor for enhanced anonymity and evasion.”

    Once the container is created, the “docker-init.sh” shell script is deployed, which then checks for the “/hostroot” directory mounted earlier and modifies the system’s SSH configuration to set up remote access by enabling root login and adding an attacker-controlled SSH key into the ~/.ssh/authorized_keys file.

    The threat actor has also been found to install various tools like masscan, libpcap, zstd, and torsocks, beacon to the C&C server details about the infected system, and ultimately deliver a binary that acts as a dropper for the XMRig cryptocurrency miner, along with the necessary mining configuration, the wallet addresses, and mining pool URLs.

    “This approach helps attackers avoid detection and simplifies deployment in compromised environments,” Trend Micro said, adding it observed the activity targeting technology companies, financial services, and healthcare organizations.

    Cybersecurity

    The findings point to an ongoing trend of cyber attacks that target misconfigured or poorly secured cloud environments for cryptojacking purposes.

    The development comes as Wiz revealed that a scan of public code repositories has uncovered hundreds of validated secrets in mcp.json, .env, and AI agent configuration files and Python notebooks (.ipynb), turning them into a treasure trove for attackers.

    The cloud security firm said it found valid secrets belonging to over 30 companies and startups, including those belonging to Fortune 100 companies.

    “Beyond just secrets, code execution results in Python notebooks should be generally treated as sensitive,” researchers Shay Berkovich and Rami McCarthy said. “Their content, if correlated to a developer’s organization, can provide reconnaissance details for malicious actors.”

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • APT28 Uses Signal Chat to Deploy BEARDSHELL Malware and COVENANT in Ukraine

    APT28 Uses Signal Chat to Deploy BEARDSHELL Malware and COVENANT in Ukraine

    Jun 24, 2025Ravie LakshmananMalware / Threat Intelligence

    BEARDSHELL Malware and COVENANT

    The Computer Emergency Response Team of Ukraine (CERT-UA) has warned of a new cyber attack campaign by the Russia-linked APT28 (aka UAC-0001) threat actors using Signal chat messages to deliver two new malware families dubbed BEARDSHELL and COVENANT.

    BEARDSHELL, per CERT-UA, is written in C++ and offers the ability to download and execute PowerShell scripts, as well as upload the results of the execution back to a remote server over the Icedrive API.

    The agency said it first observed BEARDSHELL, alongside a screenshot-taking tool named SLIMAGENT, as part of incident response efforts in March-April 2024 in a Windows computer.

    While at that time, there were no details available on how the infection took place, the agency said it received threat intelligence from ESET more than a year later that detected evidence of unauthorized access to a “gov.ua” email account.

    Cybersecurity

    The exact nature of the information shared was not disclosed, but it likely pertains to a report from the Slovak cybersecurity company last month that detailed APT28’s exploitation of cross-site scripting (XSS) vulnerabilities in various webmail software such as Roundcube, Horde, MDaemon, and Zimbra to breach Ukrainian government entities.

    Further investigation triggered as a result of this discovery unearthed crucial evidence, including the initial access vector used in the 2024 attack, as well as the presence of BEARDSHELL and a malware framework dubbed COVENANT.

    Specifically, it has come to light that the threat actors are sending messages on Signal to deliver a macro-laced Microsoft Word document (“Акт.doc”), which, when launched, drops two payloads: A malicious DLL (“ctec.dll”) and a PNG image (“windows.png”).

    The embedded macro also makes Windows Registry modifications to ensure that the DLL is launched when the File Explorer (“explorer.exe”) is launched the next time. The primary task of the DLL is to load the shellcode from the PNG file, resulting in the execution of the memory-resident COVENANT framework.

    COVENANT subsequently downloads two more intermediate payloads that are designed to launch the BEARDSHELL backdoor on the compromised host.

    To mitigate potential risks associated with the threat, state organizations are recommended to keep an eye on network traffic associated with the domains “app.koofr[.]net” and “api.icedrive[.]net.”

    The disclosure comes as CERT-UA revealed APT28’s targeting of outdated Roundcube webmail instances in Ukraine to deliver exploits for CVE-2020-35730, CVE-2021-44026, and CVE-2020-12641 via phishing emails that ostensibly contain text about news events but weaponize these flaws to execute arbitrary JavaScript.

    Cybersecurity

    The email “contained a content bait in the form of an article from the publication ‘NV’ (nv.ua), as well as an exploit for the Roundcube XSS vulnerability CVE-2020-35730 and the corresponding JavaScript code designed to download and run additional JavaScript files: ‘q.js’ and ‘e.js,’” CERT-UA said.

    “E.js” ensures the creation of a mailbox rule for redirecting incoming emails to a third-party email address, in addition to exfiltrating the victim’s address book and session cookies via HTTP POST requests. On the other hand, “q.js” features an exploit for an SQL injection flaw in Roundcube (CVE-2021-44026) that’s used to gather information from the Roundcube database.

    CERT-UA said it also discovered a third JavaScript file named “c.js” that includes an exploit for a third Roundcube flaw (CVE-2020-12641) to execute arbitrary commands on the mail server. In all, similar phishing emails were sent to the email addresses of more than 40 Ukrainian organizations.

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • U.S. House Bans WhatsApp on Official Devices Over Security and Data Protection Issues

    U.S. House Bans WhatsApp on Official Devices Over Security and Data Protection Issues

    Jun 24, 2025Ravie LakshmananData Protection / Mobile Security

    U.S. House Bans WhatsApp

    The U.S. House of Representatives has formally banned congressional staff members from using WhatsApp on government-issued devices, citing security concerns.

    The development was first reported by Axios.

    The decision, according to the House Chief Administrative Officer (CAO), was motivated by worries about the app’s security.

    “The Office of Cybersecurity has deemed WhatsApp a high-risk to users due to the lack of transparency in how it protects user data, absence of stored data encryption, and potential security risks involved with its use,” the CAO said in a memo, according to Axios.

    To that end, House staff are prohibited from downloading the app on any device issued by the government, including its mobile, desktop, or web browser versions.

    Cybersecurity

    WhatsApp has pushed back against these concerns, stating messages sent on the platform are end-to-end encrypted by default, and that it offers a “higher level” of security than most of the apps on CAO’s approved list.

    “We disagree with the House Chief Administrative Officer’s characterization in the strongest possible terms,” Meta’s Communication Director Andy Stone said in a post on social media site X.

    “We know members and their staffs regularly use WhatsApp and we look forward to ensuring members of the House can join their Senate counterparts in doing so officially.”

    As “acceptable” alternatives, the CAO’s message has recommended that the staff use apps like Microsoft Teams, Amazon’s Wickr, Signal, and Apple’s iMessage and FaceTime. WhatsApp is the latest app to be banned by the House after TikTok, OpenAI ChatGPT, and DeepSeek.

    Last week, the Meta-owned messaging app said it’s bringing ads in an effort to monetize the platform, but emphasized they are done in a manner without sacrificing user privacy.

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • China-linked Salt Typhoon Exploits Critical Cisco Vulnerability to Target Canadian Telecom

    China-linked Salt Typhoon Exploits Critical Cisco Vulnerability to Target Canadian Telecom

    Jun 24, 2025Ravie LakshmananCyber Espionage / Chinese Hackers

    China-linked Salt Typhoon

    The Canadian Centre for Cyber Security and the U.S. Federal Bureau of Investigation (FBI) have issued an advisory warning of cyber attacks mounted by the China-linked Salt Typhoon actors to breach major global telecommunications providers as part of a cyber espionage campaign.

    The attackers exploited a critical Cisco IOS XE software (CVE-2023-20198, CVSS score: 10.0) to access configuration files from three network devices registered to a Canadian telecommunications company in mid-February 2025.

    The threat actors are also said to have modified at least one of the files to configure a Generic Routing Encapsulation (GRE) tunnel, enabling traffic collection from the network. The name of the targeted company was not disclosed.

    Cybersecurity

    Stating that the targeting likely goes beyond the telecommunications sector, the agencies said the targeting of Canadian devices may permit the threat actors to collect information from the compromised networks and use them as leverage to breach additional devices.

    “In some cases, we assess that the threat actors’ activities were very likely limited to network reconnaissance,” per the alert.

    The agencies further pointed out that edge network devices continue to be an attractive target for Chinese state-sponsored threat actors looking to breach and maintain persistent access to telecom service providers.

    The findings dovetail with an earlier report from Recorded Future that detailed the exploitation of CVE-2023-20198 and CVE-2023-20273 to infiltrate telecom and internet firms in the U.S., South Africa, and Italy, and leveraging the footholds to set up GRE tunnels for long-term access and data exfiltration.

    U.K. NCSC Warns of SHOE RACK and UMBRELLA STAND Malware Targeting Fortinet Devices

    The development comes as the U.K. National Cyber Security Centre (NCSC) revealed two different malware families dubbed SHOE RACK and UMBRELLA STAND that have been found targeting FortiGate 100D series firewalls made by Fortinet.

    While SHOE RACK is a post-exploitation tool for remote shell access and TCP tunneling through a compromised device, UMBRELLA STAND is designed to run shell commands issued from an attacker-controlled server.

    Cybersecurity

    Interestingly, SHOE RACK is partly based on a publicly available tool named reverse_shell, which, coincidentally, has also been repurposed by a China-nexus threat cluster called PurpleHaze to devise a Windows implant codenamed GoReShell. It’s currently not clear if these activities are related.

    The NCSC said it identified some similarities between UMBRELLA STAND and COATHANGER, a backdoor that was previously put to use by Chinese state-backed hackers in a cyber attack aimed at a Dutch armed forces network.

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • DHS Warns Pro-Iranian Hackers Likely to Target U.S. Networks After Iranian Nuclear Strikes

    DHS Warns Pro-Iranian Hackers Likely to Target U.S. Networks After Iranian Nuclear Strikes

    Jun 23, 2025Ravie LakshmananHacktivism / Cyber Warfare

    Pro-Iranian Hackers

    The United States government has warned of cyber attacks mounted by pro-Iranian groups after it launched airstrikes on Iranian nuclear sites as part of the Iran–Israel war that commenced on June 13, 2025.

    Stating that the ongoing conflict has created a “heightened threat environment” in the country, the Department of Homeland Security (DHS) said in a bulletin that cyber actors are likely to target U.S. networks.

    “Low-level cyber attacks against U.S. networks by pro-Iranian hacktivists are likely, and cyber actors affiliated with the Iranian government may conduct attacks against U.S. networks,” the DHS said.

    “Both hacktivists and Iranian government-affiliated actors routinely target poorly secured U.S. networks and Internet-connected devices for disruptive cyber attacks.”

    Cybersecurity

    The development comes after U.S. President Donald Trump announced that the U.S. military had conducted a bombing attack on three Iranian nuclear facilities at Fordo, Natanz, and Isfahan. Trump described the strikes as a “spectacular military success” and warned of “far greater” attacks if Tehran does not make peace.

    The Iran-Israel war of 2025 has triggered a maelstrom in cyberspace, what with hacktivist groups aligned with the two nations targeting the other.

    In response to the U.S. military strikes, a pro-Iranian group named Team 313 claimed it took down Trump’s Truth Social platform in a distributed denial-of-service (DDoS) attack.

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

    Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

    Jun 23, 2025Ravie LakshmananLLM Security / AI Security

    Echo Chamber Jailbreak Tricks LLMs

    Cybersecurity researchers are calling attention to a new jailbreaking method called Echo Chamber that could be leveraged to trick popular large language models (LLMs) into generating undesirable responses, irrespective of the safeguards put in place.

    “Unlike traditional jailbreaks that rely on adversarial phrasing or character obfuscation, Echo Chamber weaponizes indirect references, semantic steering, and multi-step inference,” NeuralTrust researcher Ahmad Alobaid said in a report shared with The Hacker News.

    “The result is a subtle yet powerful manipulation of the model’s internal state, gradually leading it to produce policy-violating responses.”

    While LLMs have steadily incorporated various guardrails to combat prompt injections and jailbreaks, the latest research shows that there exist techniques that can yield high success rates with little to no technical expertise.

    Cybersecurity

    It also serves to highlight a persistent challenge associated with developing ethical LLMs that enforce clear demarcation between what topics are acceptable and not acceptable.

    While widely-used LLMs are designed to refuse user prompts that revolve around prohibited topics, they can be nudged towards eliciting unethical responses as part of what’s called a multi-turn jailbreaking.

    In these attacks, the attacker starts with something innocuous and then progressively asks a model a series of increasingly malicious questions that ultimately trick it into producing harmful content. This attack is referred to as Crescendo.

    LLMs are also susceptible to many-shot jailbreaks, which take advantage of their large context window (i.e., the maximum amount of text that can fit within a prompt) to flood the AI system with several questions (and answers) that exhibit jailbroken behavior preceding the final harmful question. This, in turn, causes the LLM to continue the same pattern and produce harmful content.

    Echo Chamber, per NeuralTrust, leverages a combination of context poisoning and multi-turn reasoning to defeat a model’s safety mechanisms.

    Echo Chamber Attack

    “The main difference is that Crescendo is the one steering the conversation from the start while the Echo Chamber is kind of asking the LLM to fill in the gaps and then we steer the model accordingly using only the LLM responses,” Alobaid said in a statement shared with The Hacker News.

    Specifically, this plays out as a multi-stage adversarial prompting technique that starts with a seemingly-innocuous input, while gradually and indirectly steering it towards generating dangerous content without giving away the end goal of the attack (e.g., generating hate speech).

    “Early planted prompts influence the model’s responses, which are then leveraged in later turns to reinforce the original objective,” NeuralTrust said. “This creates a feedback loop where the model begins to amplify the harmful subtext embedded in the conversation, gradually eroding its own safety resistances.”

    Cybersecurity

    In a controlled evaluation environment using OpenAI and Google’s models, the Echo Chamber attack achieved a success rate of over 90% on topics related to sexism, violence, hate speech, and pornography. It also achieved nearly 80% success in the misinformation and self-harm categories.

    “The Echo Chamber Attack reveals a critical blind spot in LLM alignment efforts,” the company said. “As models become more capable of sustained inference, they also become more vulnerable to indirect exploitation.”

    The disclosure comes as Cato Networks demonstrated a proof-of-concept (PoC) attack that targets Atlassian’s model context protocol (MCP) server and its integration with Jira Service Management (JSM) to trigger prompt injection attacks when a malicious support ticket submitted by an external threat actor is processed by a support engineer using MCP tools.

    The cybersecurity company has coined the term “Living off AI” to describe these attacks, where an AI system that executes untrusted input without adequate isolation guarantees can be abused by adversaries to gain privileged access without having to authenticate themselves.

    “The threat actor never accessed the Atlassian MCP directly,” security researchers Guy Waizel, Dolev Moshe Attiya, and Shlomo Bamberger said. “Instead, the support engineer acted as a proxy, unknowingly executing malicious instructions through Atlassian MCP.”

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • XDigo Malware Exploits Windows LNK Flaw in Eastern European Government Attacks

    XDigo Malware Exploits Windows LNK Flaw in Eastern European Government Attacks

    Jun 23, 2025Ravie LakshmananCyber Espionage / Vulnerability

    Cybersecurity researchers have uncovered a Go-based malware called XDigo that has been used in attacks targeting Eastern European governmental entities in March 2025.

    The attack chains are said to have leveraged a collection of Windows shortcut (LNK) files as part of a multi-stage procedure to deploy the malware, French cybersecurity company HarfangLab said.

    XDSpy is the name assigned to a cyber espionage that’s known to target government agencies in Eastern Europe and the Balkans since 2011. It was first documented by the Belarusian CERT in early 2020.

    In recent years, companies in Russia and Moldova have been targeted by various campaigns to deliver malware families like UTask, XDDown, and DSDownloader that can download additional payloads and steal sensitive information from compromised hosts.

    Cybersecurity

    HarfangLab said it observed the threat actor leveraging a remote code execution flaw in Microsoft Windows that’s triggered when processing specially crafted LNK files. The vulnerability (ZDI-CAN-25373) was publicly disclosed by Trend Micro earlier this March.

    “Crafted data in an LNK file can cause hazardous content in the file to be invisible to a user who inspects the file via the Windows-provided user interface,” Trend Micro’s Zero Day Initiative (ZDI) said at the time. “An attacker can leverage this vulnerability to execute code in the context of the current user.”

    Further analysis of the LNK file artifacts that exploit ZDI-CAN-25373 has uncovered a smaller subset comprising nine samples, which take advantage of an LNK parsing confusion flaw stemming as a result of Microsoft not implementing its own MS-SHLLINK specification (version 8.0).

    According to the spec, the maximum theoretical limit for the length of a string within LNK files is the greatest integer value that can be encoded within two bytes (i.e., 65,535 characters). However, the actual Windows 11 implementation limits the total stored text content to 259 characters with the exception of command-line arguments.

    “This leads to confusing situations, where some LNK files are parsed differently per specification and in Windows, or even that some LNK files which should be invalid per specification are actually valid to Microsoft Windows,” HarfangLab said.

    “Because of this deviation from the specification, one can specifically craft an LNK file which seemingly executes a certain command line or even be invalid according to third party parsers implementing the specification, while executing another command line in Windows.”

    A consequence of combining the whitespace padding issue with the LNK parsing confusion is that it can be leveraged by attackers to hide the command that’s being executed on both Windows UI and third-party parsers.

    The nine LNK files are said to have been distributed within ZIP archives, with each of the latter containing a second ZIP archive that includes a decoy PDF file, a legitimate but renamed executable, and a rogue DLL that’s sideloaded via the binary.

    It’s worth noting this attack chain was documented by BI.ZONE late last month as conducted by a threat actor it tracks as Silent Werewolf to infect Moldovan and Russian companies with malware.

    Cybersecurity

    The DLL is a first-stage downloader dubbed ETDownloader that, in turn, is likely meant to deploy a data collection implant referred to as XDigo based on infrastructure, victimology, timing, tactics, and tooling overlaps. XDigo is assessed to be a newer version of malware (“UsrRunVGA.exe”) that was detailed by Kaspersky in October 2023.

    XDigo is a stealer that can harvest files, extract clipboard content, and capture screenshots. It also supports commands to execute a command or binary retrieved from a remote server over HTTP GET requests. Data exfiltration occurs via HTTP POST requests.

    At least one confirmed target has been identified in the Minsk region, with other artifacts suggesting the targeting of Russian retail groups, financial institutions, large insurance companies, and governmental postal services.

    “This targeting profile aligns with XDSpy’s historical pursuit of government entities in Eastern Europe and Belarus in particular,” HarfangLab said.

    “XDSpy’s focus is also demonstrated by its customized evasion capabilities, as their malware was reported as the first malware attempting to evade detection from PT Security’s Sandbox solution, a Russian cybersecurity company providing service to public and financial organizations in the Russian Federation.”

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • How AI-Enabled Workflow Automation Can Help SOCs Reduce Burnout

    How AI-Enabled Workflow Automation Can Help SOCs Reduce Burnout

    AI-Enabled SoC Workflow Automation

    It sure is a hard time to be a SOC analyst.

    Every day, they are expected to solve high-consequence problems with half the data and twice the pressure. Analysts are overwhelmed—not just by threats, but by the systems and processes in place that are meant to help them respond. Tooling is fragmented. Workflows are heavy. Context lives in five places, and alerts never slow down. What started as a fast-paced, high-impact role has, for many analysts, become a repetitive loop of alert triage and data wrangling that offers little room for strategy or growth.

    Most SOC teams also run lean. Last year, our annual SANS SOC Survey found that a majority of SOCs only consist of just 2–10 full-time analysts, a number unchanged since the survey began tracking in 2017. Meanwhile, the scope of coverage has exploded, ranging from on-prem infrastructure to cloud environments, remote endpoints, SaaS platforms, and beyond. Compounded at scale, this has led to systemic burnout across SOC environments—a legitimate business risk that hinders your organization’s ability to defend itself.

    Addressing the issue isn’t a matter of simply increasing headcount. The longer we treat burnout as a people problem, the longer we ignore what’s really going wrong inside the SOC. The challenge at hand demands a shift in how SOC work is designed and executed, as well as how analysts are positioned for success.

    Enter artificial intelligence (AI). AI implementation at scale offers a practical path forward here by optimizing parts of the job that push analysts toward the door: the repetitive steps, the cognitive overhead, and the lack of visible progress. From streamlining inefficient workflows and supporting skill development to facilitating more impactful team-wide oversight, AI can open wider avenues for making SOC work more sustainable.

    Reducing Alert Fatigue and Repetitive Load with Smarter Automation

    A constant stream of low-context alerts is one of the fastest ways to drain a SOC team. In the SANS SOC Survey, 38% of organizations reported ingesting all available data into their SIEM. While that may expand visibility, it also floods analysts with low-priority noise. And without strong correlation logic or cross-platform integration, assembling a full picture still falls on the analyst. They’re left chasing indicators across disjointed systems, piecing together context manually, and deciding whether escalation is even necessary. It’s inefficient, exhausting, and unsustainable.

    SOC teams have been automating tasks for years, but most of that automation has relied on brittle logic like rigid playbooks and static SOAR flows that break down as soon as the scenario deviates from the expected. AI changes that. AI-powered automation can relieve that pressure by acting as a uniquely powerful contextual aggregator and investigative assistant. When paired with capabilities like those enabled by the new Model Context Protocol (MCP), language models can integrate telemetry, threat intelligence, asset metadata, and user history into a single view, tailoring it to each unique situation the analyst faces. This gives analysts enriched, case-specific summaries instead of raw events. Clarity replaces guesswork. Response decisions happen faster and with greater confidence—two things that directly reduce burnout.

    The key here is that, unlike SOAR, AI enables adaptive automation and even makes it easily accessible via an LLM interface. With AI agents and new standards like MCP and Agent2Agent protocol, a future is now here where analysts can describe what needs to happen in plain language, and the system can dynamically build the automation, deciding which tasks need to be performed and the best way to complete them. Whether it’s retrieving data, correlating signals, or coordinating a response, AI can adjust in real time based on context. That flexibility matters, especially when investigation paths aren’t always clear or linear.

    Building Analyst Confidence Through Smarter Feedback

    Burnout doesn’t only come from long hours. Sometimes it stems from stagnation—doing the same work without growing or getting meaningful feedback. If an analyst doesn’t see progress, frustration takes root quickly. This is an area where AI can offer real support. It allows analysts to refine their own work on the fly—tuning detection logic, troubleshooting false positives, and generating better queries with fast, targeted suggestions. Real-time feedback like this is especially valuable for newer analysts, but even experienced team members benefit from the ability to pressure-test their approach without waiting for peer review.

    These interactions support what researchers call deliberate practice: focused repetition paired with immediate, actionable feedback. That is worth its weight in gold when it comes to retention. According to the SANS SOC Survey, “meaningful work” and “career progression” were ranked as the top two factors in analyst retention—above compensation. Teams that embed growth into the day-to-day workflow are more likely to keep their people. AI can’t replace human mentorship, but it can help replicate some of its most meaningful effects at scale.

    Helping SOC Leaders Manage and Strengthen Their Teams

    SOC leaders have a direct influence on reducing burnout. However, a lack of time and visibility is often their biggest obstacle for making a positive impact. Performance data such as case load, note quality, investigation depth, and response times is scattered across platforms and investigations. Without a way to synthesize it, managers are left guessing who’s struggling and why.

    AI makes that analysis possible. With access to case management and workflow data, models can surface performance trends: which analysts consistently handle certain threat types well, where errors cluster, or when quality is starting to dip. That insight allows managers to coach more effectively and assign work based on capability, not just availability. It also gives them the chance to intervene early. Burnout doesn’t announce itself. It builds slowly, often out of sight. But with the right signals—flagging overload, spotting skill gaps, noticing drop-offs in case quality—leaders can take action before problems become exits.

    Over time, that kind of targeted support reshapes team culture. Performance improves, retention stabilizes, and analysts are more likely to stay and grow in roles where they feel seen, supported, and set up to succeed.

    Let’s Continue the Conversation at SANS Network Security 2025

    SOC burnout rarely shows up all at once. It builds through repetition without learning, pressure without progress, and effort without impact. AI won’t remove every stressor in the SOC, but it can help alleviate friction where it matters most.

    If this topic resonates, join me at SANS Network Security 2025 this September in Las Vegas. I’ll be leading sessions on building healthier, more effective SOCs—including how to apply AI to reduce burnout, streamline workflows, and support analyst growth in real-world environments.

    Register for SANS Network Security 2025 (Sept. 22-27, 2025) here.

    Note: This article was expertly written and contributed by John Hubbard, SANS Senior Instructor. Learn more about his background and courses here.

    Note: This article was written and contributed by John Hubbard, Senior Instructor at the SANS Institute.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

    Google Adds Multi-Layered Defenses to Secure GenAI from Prompt Injection Attacks

    Google has revealed the various safety measures that are being incorporated into its generative artificial intelligence (AI) systems to mitigate emerging attack vectors like indirect prompt injections and improve the overall security posture for agentic AI systems.

    “Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources,” Google’s GenAI security team said.

    These external sources can take the form of email messages, documents, or even calendar invites that trick the AI systems into exfiltrating sensitive data or performing other malicious actions.

    The tech giant said it has implemented what it described as a “layered” defense strategy that is designed to increase the difficulty, expense, and complexity required to pull off an attack against its systems.

    These efforts span model hardening, introducing purpose-built machine learning (ML) models to flag malicious instructions and system-level safeguards. Furthermore, the model resilience capabilities are complemented by an array of additional guardrails that have been built into Gemini, the company’s flagship GenAI model.

    Cybersecurity

    These include –

    • Prompt injection content classifiers, which are capable of filtering out malicious instructions to generate a safe response
    • Security thought reinforcement, which inserts special markers into untrusted data (e.g., email) to ensure that the model steers away from adversarial instructions, if any, present in the content, a technique called spotlighting.
    • Markdown sanitization and suspicious URL redaction, which uses Google Safe Browsing to remove potentially malicious URLs and employs a markdown sanitizer to prevent external image URLs from being rendered, thereby preventing flaws like EchoLeak
    • User confirmation framework, which requires user confirmation to complete risky actions
    • End-user security mitigation notifications, which involve alerting users about prompt injections

    However, Google pointed out that malicious actors are increasingly using adaptive attacks that are specifically designed to evolve and adapt with automated red teaming (ART) to bypass the defenses being tested, rendering baseline mitigations ineffective.

    “Indirect prompt injection presents a real cybersecurity challenge where AI models sometimes struggle to differentiate between genuine user instructions and manipulative commands embedded within the data they retrieve,” Google DeepMind noted last month.

    “We believe robustness to indirect prompt injection, in general, will require defenses in depth – defenses imposed at each layer of an AI system stack, from how a model natively can understand when it is being attacked, through the application layer, down into hardware defenses on the serving infrastructure.”

    The development comes as new research has continued to find various techniques to bypass a large language model’s (LLM) safety protections and generate undesirable content. These include character injections and methods that “perturb the model’s interpretation of prompt context, exploiting over-reliance on learned features in the model’s classification process.”

    Another study published by a team of researchers from Anthropic, Google DeepMind, ETH Zurich, and Carnegie Mellon University last month also found that LLMs can “unlock new paths to monetizing exploits” in the “near future,” not only extracting passwords and credit cards with higher precision than traditional tools, but also to devise polymorphic malware and launch tailored attacks on a user-by-user basis.

    The study noted that LLMs can open up new attack avenues for adversaries, allowing them to leverage a model’s multi-modal capabilities to extract personally identifiable information and analyze network devices within compromised environments to generate highly convincing, targeted fake web pages.

    At the same time, one area where language models are lacking is their ability to find novel zero-day exploits in widely used software applications. That said, LLMs can be used to automate the process of identifying trivial vulnerabilities in programs that have never been audited, the research pointed out.

    According to Dreadnode’s red teaming benchmark AIRTBench, frontier models from Anthropic, Google, and OpenAI outperformed their open-source counterparts when it comes to solving AI Capture the Flag (CTF) challenges, excelling at prompt injection attacks but struggled when dealing with system exploitation and model inversion tasks.

    “AIRTBench results indicate that although models are effective at certain vulnerability types, notably prompt injection, they remain limited in others, including model inversion and system exploitation – pointing to uneven progress across security-relevant capabilities,” the researchers said.

    “Furthermore, the remarkable efficiency advantage of AI agents over human operators – solving challenges in minutes versus hours while maintaining comparable success rates – indicates the transformative potential of these systems for security workflows.”

    Cybersecurity

    That’s not all. A new report from Anthropic last week revealed how a stress-test of 16 leading AI models found that they resorted to malicious insider behaviors like blackmailing and leaking sensitive information to competitors to avoid replacement or to achieve their goals.

    “Models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals,” Anthropic said, calling the phenomenon agentic misalignment.

    “The consistency across models from different providers suggests this is not a quirk of any particular company’s approach but a sign of a more fundamental risk from agentic large language models.”

    These disturbing patterns demonstrate that LLMs, despite the various kinds of defenses built into them, are willing to evade those very safeguards in high-stakes scenarios, causing them to consistently choose “harm over failure.” However, it’s worth pointing out that there are no signs of such agentic misalignment in the real world.

    “Models three years ago could accomplish none of the tasks laid out in this paper, and in three years models may have even more harmful capabilities if used for ill,” the researchers said. “We believe that better understanding the evolving threat landscape, developing stronger defenses, and applying language models towards defenses, are important areas of research.”

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…