Author: Mark

  • SonicWall Urges Password Resets After Cloud Backup Breach Affecting Under 5% of Customers

    Sep 18, 2025 – Ravie Lakshmanan – Data Breach / Network Security

    SonicWall is urging customers to reset credentials after their firewall configuration backup files were exposed in a security breach impacting MySonicWall accounts.

    The company said it recently detected suspicious activity targeting the cloud backup service for firewalls, and that unknown threat actors accessed backup firewall preference files stored in the cloud for less than 5% of its customers.

    “While credentials within the files were encrypted, the files also included information that could make it easier for attackers to potentially exploit the related firewall,” the company said.

    The network security company said it’s not aware of any of these files being leaked online by the threat actors, adding it was not a ransomware event targeting its network.

    “Rather this was a series of brute-force attacks aimed at gaining access to the preference files stored in backup for potential further use by threat actors,” it noted. It’s currently not known who is responsible for the attack.

    As a result of the incident, the company is urging customers to follow the steps below –

    • Log in to MySonicWall.com and verify whether cloud backups are enabled
    • Verify whether affected serial numbers have been flagged in the account
    • Initiate containment and remediation procedures by limiting access to services from the WAN, turning off HTTP/HTTPS/SSH management access, disabling SSL VPN and IPSec VPN access, resetting passwords and TOTPs saved on the firewall, and reviewing logs and recent configuration changes for unusual activity

    In addition, affected customers are also advised to import fresh preferences files provided by SonicWall into their firewalls. The new preferences file includes the following changes –

    • Randomized password for all local users
    • Reset TOTP binding, if enabled
    • Randomized IPSec VPN keys

    “The modified preferences file provided by SonicWall was created from the latest preferences file found in cloud storage,” it said. “If the latest preferences file does not represent your desired settings, please do not use the file.”
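
    The randomization described above is generic good practice for any credential rotation. As a rough, hypothetical illustration (not SonicWall’s actual tooling), the sketch below uses Python’s secrets module to generate random local-user passwords and IPSec pre-shared keys; the function names, lengths, and character sets are assumptions.

    ```python
    import secrets
    import string

    # Hypothetical helpers illustrating credential randomization of the kind
    # described above; lengths and character sets are assumptions, not
    # SonicWall's actual parameters.

    def random_password(length: int = 24) -> str:
        """Generate a random local-user password from letters, digits, and symbols."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def random_ipsec_psk(num_bytes: int = 32) -> str:
        """Generate a random IPSec pre-shared key as a hex string."""
        return secrets.token_hex(num_bytes)

    if __name__ == "__main__":
        print("local user password:", random_password())
        print("IPSec PSK:", random_ipsec_psk())
    ```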

    The disclosure comes as threat actors affiliated with the Akira ransomware group have continued to target unpatched SonicWall devices for obtaining initial access to target networks by exploiting a year-old security flaw (CVE-2024-40766, CVSS score: 9.3).

    Earlier this week, cybersecurity company Huntress detailed an Akira ransomware incident involving the exploitation of SonicWall VPNs in which the threat actors leveraged a plaintext file containing recovery codes of its security software to bypass multi-factor authentication (MFA), suppress incident visibility, and attempt to remove endpoint protections.

    “In this incident, the attacker used exposed Huntress recovery codes to log into the Huntress portal, close active alerts, and initiate the uninstallation of Huntress EDR agents, effectively attempting to blind the organization’s defenses and leave it vulnerable to follow-on attacks,” researchers Michael Elford and Chad Hudson said.

    “This level of access can be weaponized to disable defenses, manipulate detection tools, and execute further malicious actions. Organizations should treat recovery codes with the same sensitivity as privileged account passwords.”


    Source: thehackernews.com…

  • CountLoader Broadens Russian Ransomware Operations With Multi-Version Malware Loader

    Cybersecurity researchers have discovered a new malware loader codenamed CountLoader that has been put to use by Russian ransomware gangs to deliver post-exploitation tools like Cobalt Strike and AdaptixC2, and a remote access trojan known as PureHVNC RAT.

    “CountLoader is being used either as part of an Initial Access Broker’s (IAB) toolset or by a ransomware affiliate with ties to the LockBit, Black Basta, and Qilin ransomware groups,” Silent Push said in an analysis.

    Appearing in three different versions – .NET, PowerShell, and JavaScript – the emerging threat has been observed in a campaign targeting individuals in Ukraine using PDF-based phishing lures and impersonating the National Police of Ukraine.

    It’s worth noting that the PowerShell version of the malware was previously flagged by Kaspersky as being distributed using DeepSeek-related decoys to trick users into installing it.

    The attacks, per the Russian cybersecurity vendor, led to the deployment of an implant named BrowserVenom that can reconfigure all browsing instances to force traffic through a proxy controlled by the threat actors, enabling the attackers to manipulate network traffic and collect data.

    Silent Push’s investigation has found the JavaScript version is the most fleshed out implementation of the loader, offering six different methods for file downloading, three different methods for executing various malware binaries, and a predefined function to identify a victim’s device based on Windows domain information.

    The malware is also capable of gathering system information, setting up persistence on the host by creating a scheduled task that impersonates a Google update task for the Chrome web browser, and connecting to a remote server to await further instructions.

    This includes the ability to download and run DLL and MSI installer payloads using rundll32.exe and msiexec.exe, transmit system metadata, and delete the created scheduled task. The six methods used to download files involve the use of curl, PowerShell, MSXML2.XMLHTTP, WinHTTP.WinHttpRequest.5.1, bitsadmin, and certutil.exe.

    “By using LOLBins like ‘certutil’ and ‘bitsadmin,’ and by implementing an ‘on the fly’ command encryption PowerShell generator, CountLoader’s developers demonstrate here an advanced understanding of the Windows operating system and malware development,” Silent Push said.
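
    As a rough defensive illustration of the LOLBin abuse described above, the sketch below scans an exported process-creation log for certutil, bitsadmin, and curl invocations that reference remote URLs. It assumes a CSV export (e.g., from Sysmon Event ID 1 or an EDR) with CommandLine and UtcTime columns; the column names and regex patterns are assumptions, not a complete detection rule set.

    ```python
    import csv
    import re
    import sys

    # Heuristic patterns for LOLBin-style downloads of the kind CountLoader uses;
    # these are illustrative assumptions, not a vetted rule set.
    LOLBIN_DOWNLOAD_PATTERNS = [
        re.compile(r"certutil(\.exe)?\s+.*-urlcache\s+.*https?://", re.I),
        re.compile(r"bitsadmin(\.exe)?\s+/transfer\s+.*https?://", re.I),
        re.compile(r"\bcurl(\.exe)?\s+.*https?://", re.I),
    ]

    def scan(csv_path: str) -> None:
        with open(csv_path, newline="", encoding="utf-8", errors="ignore") as handle:
            for row in csv.DictReader(handle):
                cmdline = row.get("CommandLine", "")  # column name is an assumption
                if any(p.search(cmdline) for p in LOLBIN_DOWNLOAD_PATTERNS):
                    print(f"[suspect] {row.get('UtcTime', '?')} {cmdline}")

    if __name__ == "__main__":
        scan(sys.argv[1])  # usage: python lolbin_scan.py process_creation.csv
    ```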

    A notable aspect of CountLoader is its use of the victim’s Music folder as a staging ground for malware. The .NET flavor shares some degree of functional crossover with its JavaScript counterpart, but supports only two different types of commands (UpdateType.Zip or UpdateType.Exe), indicating a reduced, stripped-down version.

    CountLoader is supported by an infrastructure comprising over 20 unique domains, with the malware serving as a conduit for Cobalt Strike, AdaptixC2, and PureHVNC RAT, the last of which is a commercial offering from a threat actor known as PureCoder. It’s worth pointing out that PureHVNC RAT is a predecessor to PureRAT, which is also referred to as ResolverRAT.

    Recent campaigns distributing PureHVNC RAT have leveraged the tried-and-tested ClickFix social engineering tactic as a delivery vector, with victims lured to the ClickFix phishing page through fake job offers, per Check Point. The trojan is deployed by means of a Rust-based loader.

    “The attacker lured the victim through fake job advertisements, allowing the attacker to execute malicious PowerShell code through the ClickFix phishing technique,” the cybersecurity company said, describing PureCoder as using a revolving set of GitHub accounts to host files that support the functionality of PureRAT.

    Analysis of the GitHub commits has revealed that the activity was carried out from the UTC+03:00 time zone, which corresponds to several countries, including Russia.

    The development comes as the DomainTools Investigations team has uncovered the interconnected nature of the Russian ransomware landscape, identifying threat actor movements across groups and the use of tools like AnyDesk and Quick Assist, suggesting operational overlaps.

    “Brand allegiance among these operators is weak, and human capital appears to be the primary asset, rather than specific malware strains,” DomainTools said. “Operators adapt to market conditions, reorganize in response to takedowns, and trust relationships are critical. These individuals will choose to work with people they know, regardless of the name of the organization.”


    Source: thehackernews.com…

  • How CISOs Can Drive Effective AI Governance

    AI’s growing role in enterprise environments has heightened the urgency for Chief Information Security Officers (CISOs) to drive effective AI governance. When it comes to any emerging technology, governance is hard – but effective governance is even harder. The first instinct for most organizations is to respond with rigid policies. Write a policy document, circulate a set of restrictions, and hope the risk is contained. However, effective governance doesn’t work that way. It must be a living system that shapes how AI is used every day, guiding organizations through safe transformative change without slowing down the pace of innovation.

    For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slow, and competitors pull ahead with transformative efficiencies that are too powerful to compete with. Either path comes with ramifications that can cost CISOs their job.

    In turn, they cannot lead a “department of no” where AI adoption initiatives are stymied by the organization’s security function. It is crucial to instead find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. Over the course of this article, I’ll share three components that can help CISOs make that shift and drive AI governance programs that enable safe adoption at scale.

    1. Understand What’s Happening on the Ground

    When ChatGPT first arrived in November 2022, most CISOs I know scrambled to publish strict policies that told employees what not to do. It came from a place of positive intent considering sensitive data leakage was a legitimate concern. However, while policies written from that “document backward” approach are great in theory, they rarely work in practice. Due to how fast AI is evolving, AI governance must be designed through a “real-world forward” mindset that accounts for what’s really happening on the ground inside an organization. This requires CISOs to have a foundational understanding of AI: the technology itself, where it is embedded, which SaaS platforms are enabling it, and how employees are using it to get their jobs done.

    AI inventories, model registries, and cross-functional committees may sound like buzzwords, but they are practical mechanisms that can help security leaders develop this AI fluency. For example, an AI Bill of Materials (AIBOM) offers visibility into the components, datasets, and external services that will feed an AI model. Just as a software bill of materials (SBOM) clarifies third-party dependencies, an AIBOM ensures leaders know what data is being used, where it came from, and what risks it introduces.

    Model registries serve a similar role for AI systems already in use. They track which models are deployed, when they were last updated, and how they’re performing to prevent “black box sprawl” and inform decisions about patching, decommissioning, or scaling usage. AI committees ensure that oversight doesn’t fall on security or IT alone. Often chaired by a designated AI lead or risk officer, these groups include representatives from legal, compliance, HR, and business units – turning governance from a siloed directive into a shared responsibility that bridges security concerns with business outcomes.
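
    There is no single standard schema for an AIBOM entry or a model registry record; the sketch below is a minimal, hypothetical illustration of the kind of fields such mechanisms track. All names and values are assumptions, not a reference to any specific tool.

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical, minimal records illustrating what an AIBOM entry and a model
    # registry entry might track; field names and values are assumptions.

    @dataclass
    class AIBOMEntry:
        component: str                      # e.g., an embedding model or external API
        datasets: list[str]                 # training / fine-tuning data sources
        external_services: list[str] = field(default_factory=list)
        data_classification: str = "internal"   # e.g., public / internal / restricted
        known_risks: list[str] = field(default_factory=list)

    @dataclass
    class ModelRegistryRecord:
        model_name: str
        version: str
        deployed_to: str                    # environment or business unit
        last_updated: date
        owner: str                          # accountable team, per the AI committee
        evaluation_notes: str = ""

    registry = [
        ModelRegistryRecord("support-summarizer", "1.3.0", "customer-support",
                            date(2025, 8, 1), "AI committee / CX engineering"),
    ]
    ```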

    2. Align Policies to the Speed of the Organization

    Without real-world forward policies, security leaders often fall into the trap of codifying controls they cannot realistically deliver. I’ve seen this firsthand through a CISO colleague of mine. Knowing employees were already experimenting with AI, he worked to enable the responsible adoption of several GenAI applications across his workforce. However, when a new CIO joined the organization and felt there were too many GenAI applications in use, the CISO was directed to ban all GenAI until one enterprise-wide platform was selected. Fast forward one year later, that single platform still hadn’t been implemented, and employees were using unapproved GenAI tools that exposed the organization to shadow AI vulnerabilities. The CISO was stuck trying to enforce a blanket ban he couldn’t execute, fielding criticism without the authority to implement a workable solution.

    This kind of scenario plays out when policies are written faster than they can be executed, or when they fail to anticipate the pace of organizational adoption. Policies that look decisive on paper can quickly become obsolete if they don’t evolve with leadership changes, embedded AI functionality, and the organic ways employees integrate new tools into their work. Governance must be flexible enough to adapt, or else it risks leaving security teams enforcing the impossible.

    The way forward is to design policies as living documents. They should evolve as the business does, informed by actual use cases and aligned to measurable outcomes. Governance also can’t stop at policy; it needs to cascade into standards, procedures, and baselines that guide daily work. Only then do employees know what secure AI adoption really looks like in practice.

    3. Make AI Governance Sustainable

    Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren’t formally approved. The goal for security leaders shouldn’t be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.

    Sustainable governance also stems from Utilizing AI and Protecting AI, two pillars of the SANS Institute’s recently published Secure AI Blueprint. To govern AI effectively, CISOs should empower their SOC teams to effectively utilize AI for cyber defense – automating noise reduction and enrichment, validating detections against threat intelligence, and ensuring analysts remain in the loop for escalation and incident response. They should also ensure the right controls are in place to protect AI systems from adversarial threats, as outlined in the SANS Critical AI Security Guidelines.

    Learn More at SANS Cyber Defense Initiative 2025

    This December, SANS will be offering LDR514: Security Strategic Planning, Policy, and Leadership at SANS Cyber Defense Initiative 2025 in Washington, D.C. This course is designed for leaders who want to move beyond generic governance advice and learn how to build business-driven security programs that steer organizations to safe AI adoption. It will cover how to create actionable policies, align governance with business strategy, and embed security into culture so you can lead your enterprise through the AI era securely.

    If you’re ready to turn AI governance into a business enabler, register for SANS CDI 2025 here.

    Note: This article was contributed by Frank Kim, SANS Institute Fellow.


    Source: thehackernews.com…

  • SilentSync RAT Delivered via Two Malicious PyPI Packages Targeting Python Developers

    Sep 18, 2025 – Ravie Lakshmanan – Malware / Supply Chain Attack

    Cybersecurity researchers have discovered two new malicious packages in the Python Package Index (PyPI) repository that are designed to deliver a remote access trojan called SilentSync on Windows systems.

    “SilentSync is capable of remote command execution, file exfiltration, and screen capturing,” Zscaler ThreatLabz’s Manisha Ramcharan Prajapati and Satyam Singh said. “SilentSync also extracts web browser data, including credentials, history, autofill data, and cookies from web browsers like Chrome, Brave, Edge, and Firefox.”

    The packages, which are no longer available for download from PyPI, are listed below. They were both uploaded by a user named “CondeTGAPIS.”

    • sisaws (201 Downloads)
    • secmeasure (627 Downloads)

    Zscaler said the package sisaws mimics the behavior of the legitimate Python package sisa, which is associated with Argentina’s national health information system, Sistema Integrado de Información Sanitaria Argentino (SISA).

    However, present in the library is a function called “gen_token()” in the initialization script (__init__.py) that acts as a downloader for next-stage malware. To achieve this, it sends a hard-coded token as input and receives a secondary static token in response, mimicking the legitimate SISA API.

    “If a developer imports the sisaws package and invokes the gen_token function, the code will decode a hexadecimal string that reveals a curl command, which is then used to fetch an additional Python script,” Zscaler said. “The Python script retrieved from PasteBin is written to the filename helper.py in a temporary directory and executed.”
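
    Developers worried about similar tricks can do a rough sweep of their own environment. The sketch below scans installed packages’ __init__.py files for long hexadecimal string literals that decode to download commands, inspired by the sisaws behavior described above; the length threshold and keywords are assumptions, not Zscaler’s detection logic.

    ```python
    import re
    import site
    from pathlib import Path

    # Rough heuristic: flag long hex string literals that decode to something
    # containing a download command or paste-site reference. Thresholds and
    # keywords are assumptions.
    HEX_LITERAL = re.compile(r"['\"]([0-9a-fA-F]{40,})['\"]")
    SUSPICIOUS_KEYWORDS = ("curl ", "pastebin", "http://", "https://")

    def scan_site_packages() -> None:
        roots = {Path(p) for p in site.getsitepackages() + [site.getusersitepackages()]}
        for root in roots:
            for init_file in root.glob("*/__init__.py"):
                text = init_file.read_text(encoding="utf-8", errors="ignore")
                for match in HEX_LITERAL.finditer(text):
                    try:
                        decoded = bytes.fromhex(match.group(1)).decode("utf-8", errors="ignore")
                    except ValueError:
                        continue  # not valid hex after all
                    if any(k in decoded.lower() for k in SUSPICIOUS_KEYWORDS):
                        print(f"[suspect] {init_file}: decodes to {decoded[:80]!r}")

    if __name__ == "__main__":
        scan_site_packages()
    ```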

    Secmeasure, in a similar fashion, masquerades as a “library for cleaning strings and applying security measures,” but harbors embedded functionality to drop SilentSync RAT.

    SilentSync is mainly geared towards infecting Windows systems at this stage, but the malware also includes built-in support for Linux and macOS: it makes Registry modifications on Windows, alters the crontab file on Linux to execute the payload on system startup, and registers a LaunchAgent on macOS.

    The package relies on the secondary token’s presence to send an HTTP GET request to a hard-coded endpoint (“200.58.107[.]25”) in order to receive Python code that’s directly executed in memory. The server supports four different endpoints –

    • /checkin, to verify connectivity
    • /comando, to request commands to execute
    • /respuesta, to send a status message
    • /archivo, to send command output or stolen data
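
    For defenders, the hard-coded server and endpoint names above are straightforward to hunt for in egress logs. A minimal sketch, assuming plain-text proxy or firewall logs with one request per line (adjust to your log format; the endpoint paths alone will produce false positives and need manual review):

    ```python
    import sys

    # Indicators taken from the report above; the log format is an assumption
    # (plain text, one HTTP request per line).
    C2_HOST = "200.58.107.25"
    C2_PATHS = ("/checkin", "/comando", "/respuesta", "/archivo")

    def hunt(log_path: str) -> None:
        with open(log_path, encoding="utf-8", errors="ignore") as handle:
            for lineno, line in enumerate(handle, 1):
                if C2_HOST in line or any(path in line for path in C2_PATHS):
                    print(f"line {lineno}: {line.strip()}")

    if __name__ == "__main__":
        hunt(sys.argv[1])  # usage: python hunt_silentsync.py proxy.log
    ```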

    The malware is capable of harvesting browser data, executing shell commands, capturing screenshots, and stealing files. It can also exfiltrate files and entire directories in the form of ZIP archives. Once the data is transmitted, all the artifacts are deleted from the host to sidestep detection efforts.

    “The discovery of the malicious PyPI packages sisaws and secmeasure highlight the growing risk of supply chain attacks within public software repositories,” Zscaler said. “By leveraging typosquatting and impersonating legitimate packages, threat actors can gain access to personally identifiable information (PII).”


    Source: thehackernews.com…

  • Google Patches Chrome Zero-Day CVE-2025-10585 as Active V8 Exploit Threatens Millions

    Sep 18, 2025 – Ravie Lakshmanan – Vulnerability / Browser Security

    Google on Wednesday released security updates for the Chrome web browser to address four vulnerabilities, including one that it said has been exploited in the wild.

    The zero-day vulnerability in question is CVE-2025-10585, which has been described as a type confusion issue in the V8 JavaScript and WebAssembly engine.

    Type confusion vulnerabilities can have severe consequences as they can be weaponized by bad actors to trigger unexpected software behavior, resulting in the execution of arbitrary code and program crashes.

    Google’s Threat Analysis Group (TAG) has been credited with discovering and reporting the flaw on September 16, 2025.

    As is typically the case, the company did not share any additional specifics about how the vulnerability is being abused in real-world attacks, by whom, or the scale of such efforts. This is done to prevent other threat actors from exploiting the issue before users can apply a fix.

    “Google is aware that an exploit for CVE-2025-10585 exists in the wild,” it acknowledged in a terse advisory.

    CVE-2025-10585 is the sixth zero-day vulnerability in Chrome that has been either actively exploited or demonstrated as a proof-of-concept (PoC) since the start of the year. The other five are CVE-2025-2783, CVE-2025-4664, CVE-2025-5419, CVE-2025-6554, and CVE-2025-6558.

    To safeguard against potential threats, users are advised to update their Chrome browser to versions 140.0.7339.185/.186 for Windows and Apple macOS, and 140.0.7339.185 for Linux. To make sure the latest updates are installed, users can navigate to More > Help > About Google Chrome and select Relaunch.
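
    For fleet audits, a small version-comparison helper can confirm whether a reported Chrome build is at or above the patched release. The patched version numbers come from the advisory text above; how you collect the installed version (e.g., from chrome://version or an inventory tool) is left out and will vary by platform.

    ```python
    # Minimal helper to check a Chrome version string against the patched build
    # named above (140.0.7339.185). Collecting the installed version string is
    # platform-specific and out of scope here.

    PATCHED = (140, 0, 7339, 185)

    def parse(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))

    def is_patched(installed: str) -> bool:
        return parse(installed) >= PATCHED

    if __name__ == "__main__":
        for v in ("140.0.7339.185", "140.0.7339.186", "139.0.7258.66"):
            print(v, "patched" if is_patched(v) else "UPDATE NEEDED")
    ```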

    Users of other Chromium-based browsers, such as Microsoft Edge, Brave, Opera, and Vivaldi, are also advised to apply the fixes as and when they become available.


    Source: thehackernews.com…

  • TA558 Uses AI-Generated Scripts to Deploy Venom RAT in Brazil Hotel Attacks

    The threat actor known as TA558 has been attributed to a fresh set of attacks delivering various remote access trojans (RATs) like Venom RAT to breach hotels in Brazil and Spanish-speaking markets.

    Russian cybersecurity vendor Kaspersky has attributed the activity, observed in summer 2025, to a cluster it tracks as RevengeHotels.

    “The threat actors continue to employ phishing emails with invoice themes to deliver Venom RAT implants via JavaScript loaders and PowerShell downloaders,” the company said. “A significant portion of the initial infector and downloader code in this campaign appears to be generated by large language model (LLM) agents.”

    The findings demonstrate a new trend among cybercriminal groups to leverage artificial intelligence (AI) to bolster their tradecraft.

    Known to be active since at least 2015, RevengeHotels has a history of targeting hospitality, hotel, and travel organizations in Latin America with the goal of installing malware on compromised systems.

    Early iterations of the threat actor’s campaigns were found to distribute emails with crafted Word, Excel, or PDF documents attached, some of which exploit a known remote code execution flaw in Microsoft Office (CVE-2017-0199) to trigger the deployment of Revenge RAT, NjRAT, NanoCoreRAT, and 888 RAT, as well as a piece of custom malware called ProCC.

    Subsequent campaigns documented by Proofpoint and Positive Technologies have demonstrated the threat actor’s ability to refine their attack chains to deliver a wide range of RATs such as Agent Tesla, AsyncRAT, FormBook, GuLoader, Loda RAT, LokiBot, Remcos RAT, Snake Keylogger, and Vjw0rm.

    The main goal of the attacks is to capture credit card data from guests and travelers stored in hotel systems, as well as credit card data received from popular online travel agencies (OTAs) such as Booking.com.

    According to Kaspersky, the latest campaigns involve sending phishing emails written in Portuguese and Spanish bearing hotel reservation and job application lures to trick recipients into clicking on fraudulent links, resulting in the download of a WScript JavaScript payload.

    “The script appears to be generated by a large language model (LLM), as evidenced by its heavily commented code and a format similar to those produced by this type of technology,” the company said. “The primary function of the script is to load subsequent scripts that facilitate the infection.”

    This includes a PowerShell script, which, in turn, retrieves a downloader named “cargajecerrr.txt” from an external server and runs it via PowerShell. The downloader, as the name implies, fetches two additional payloads, including a loader that’s responsible for launching the Venom RAT malware.

    Based on the open-source Quasar RAT, Venom RAT is a commercial tool that’s offered for $650 for a lifetime license. A one-month subscription bundling the malware with HVNC and Stealer components costs $350.

    The malware is equipped to siphon data, act as a reverse proxy, and features an anti-kill protection mechanism to ensure that it runs uninterrupted. To accomplish this, it modifies the Discretionary Access Control List (DACL) associated with the running process to remove any permissions that could interfere with its functioning, and terminates any running process that matches any of the hard-coded processes.

    “The second component of this anti-kill measure involves a thread that runs a continuous loop, checking the list of running processes every 50 milliseconds,” Kaspersky said.

    “The loop specifically targets those processes commonly used by security analysts and system administrators to monitor host activity or analyze .NET binaries, among other tasks. If the RAT detects any of these processes, it will terminate them without prompting the user.”

    The anti-kill feature also comes fitted with the ability to set up persistence on the host using Windows Registry modifications and re-run the malware anytime the associated process is not found in the list of running processes.
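
    A simple place to start hunting for Registry-based persistence of this kind is the standard Run keys. Below is a minimal, Windows-only sketch using the standard-library winreg module; it enumerates only the two most common Run keys, so treat it as a starting point for manual review rather than a complete persistence audit.

    ```python
    import winreg  # Windows-only standard library module

    # Enumerate the two most common auto-run keys; malware can persist in many
    # other locations, so this is only a first pass.
    RUN_KEYS = [
        (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    ]

    def list_run_entries() -> None:
        for hive, path in RUN_KEYS:
            try:
                with winreg.OpenKey(hive, path) as key:
                    index = 0
                    while True:
                        try:
                            name, value, _ = winreg.EnumValue(key, index)
                        except OSError:
                            break  # no more values under this key
                        print(f"{path}\\{name} -> {value}")
                        index += 1
            except OSError:
                continue  # key missing or access denied

    if __name__ == "__main__":
        list_run_entries()
    ```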

    Should the malware be executed with elevated privileges, it proceeds to set the SeDebugPrivilege token and marks itself as a critical system process, thereby allowing it to persist even when there is an attempt to terminate the process. It also forces the computer’s display to remain on and prevents it from entering sleep mode.

    Lastly, the Venom RAT artifacts incorporate capabilities to spread via removable USB drives and terminate the process associated with Microsoft Defender Antivirus, as well as tamper with the task scheduler and Registry to disable the security program.

    “RevengeHotels has significantly enhanced its capabilities, developing new tactics to target the hospitality and tourism sectors,” Kaspersky said. “With the assistance of LLM agents, the group has been able to generate and modify their phishing lures, expanding their attacks to new regions.”


    Source: thehackernews.com…

  • Chinese TA415 Uses VS Code Remote Tunnels to Spy on U.S. Economic Policy Experts

    Sep 17, 2025 – Ravie Lakshmanan – Cyber Espionage / Malware

    A China-aligned threat actor known as TA415 has been attributed to spear-phishing campaigns targeting the U.S. government, think tanks, and academic organizations utilizing U.S.-China economic-themed lures.

    “In this activity, the group masqueraded as the current Chair of the Select Committee on Strategic Competition between the United States and the Chinese Communist Party (CCP), as well as the U.S.-China Business Council, to target a range of individuals and organizations predominantly focused on U.S.-China relations, trade, and economic policy,” Proofpoint said in an analysis.

    The enterprise security company said the activity, observed throughout July and August 2025, is likely an effort on the part of Chinese state-sponsored threat actors to facilitate intelligence gathering amid ongoing U.S.-China trade talks, adding that the hacking group shares overlaps with a threat cluster tracked broadly under the names APT41 and Brass Typhoon (formerly Barium).

    The findings come days after the U.S. House Select Committee on China issued an advisory warning of an “ongoing” series of highly targeted cyber espionage campaigns linked to Chinese threat actors, including a campaign that impersonated the Republican Party Congressman John Robert Moolenaar in phishing emails designed to deliver data-stealing malware.

    The campaign, per Proofpoint, mainly focused on individuals who specialized in international trade, economic policy, and U.S.-China relations, sending them emails spoofing the U.S.-China Business Council that invited them to a supposed closed-door briefing on U.S.-Taiwan and U.S.-China affairs.

    The messages were sent using the email address “uschina@zohomail[.]com,” while also relying on the Cloudflare WARP VPN service to obfuscate the source of the activity. They contain links to password-protected archives hosted on public cloud sharing services such as Zoho WorkDrive, Dropbox, and OpenDrive, within which there exists a Windows shortcut (LNK) along with other files in a hidden folder.

    The primary function of the LNK file is to execute a batch script within the hidden folder, and display a PDF document as a decoy to the user. In the background, the batch script executes an obfuscated Python loader named WhirlCoil that’s also present in the archive.

    “Earlier variations of this infection chain instead downloaded the WhirlCoil Python loader from a Paste site, such as Pastebin, and the Python package directly from the official Python website,” Proofpoint noted.

    The script is also designed to set up a scheduled task, typically named GoogleUpdate or MicrosoftHealthcareMonitorNode, to run the loader every two hours as a form of persistence. It also runs the task with SYSTEM privileges if the user has administrative access to the compromised host.
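
    Defenders can look for the impersonated task names called out above. The rough, Windows-only sketch below parses schtasks output; note that column headers assume an English-locale system, the flagged names come from the report, and matches (especially anything containing “GoogleUpdate”) still need manual review since legitimate updaters use similar names.

    ```python
    import csv
    import io
    import subprocess

    # Task names reported above as used for persistence; case-insensitive substring match.
    SUSPECT_NAMES = ("googleupdate", "microsofthealthcaremonitornode")

    def find_suspect_tasks() -> None:
        # /fo CSV /v produces verbose CSV output; header names assume an English locale.
        output = subprocess.run(
            ["schtasks", "/query", "/fo", "CSV", "/v"],
            capture_output=True, text=True, check=True,
        ).stdout
        for row in csv.DictReader(io.StringIO(output)):
            task_name = (row.get("TaskName") or "").lower()
            if any(name in task_name for name in SUSPECT_NAMES):
                print(row.get("TaskName"), "->", row.get("Task To Run"))

    if __name__ == "__main__":
        find_suspect_tasks()
    ```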

    The Python loader subsequently sets up a Visual Studio Code remote tunnel to establish persistent backdoor access and harvests system information and the contents of various user directories. The data and the remote tunnel verification code are sent to a free request logging service (e.g., requestrepo[.]com) in the form of a base64-encoded blob within the body of an HTTP POST request.

    “With this code, the threat actor is then able to authenticate the VS Code Remote Tunnel and remotely access the file system and execute arbitrary commands via the built-in Visual Studio terminal on the targeted host,” Proofpoint said.


    Source: thehackernews.com…

  • From Quantum Hacks to AI Defenses – Expert Guide to Building Unbreakable Cyber Resilience

    Sep 17, 2025 – The Hacker News – Cyber Resilience / Webinar

    Quantum computing and AI working together will bring incredible opportunities. Together, the technologies will help us extend innovation further and faster than ever before. But imagine the flip side: waking up to news that hackers have used a quantum computer to crack your company’s encryption overnight, exposing your most sensitive data and rendering much of it untrustworthy.

    And with your sensitive data exposed, where does that leave trust from your customers? And the cost to mitigate – if that is even possible with your outdated pre-quantum systems? According to IBM, cyber breaches already cost businesses an average of $4.44 million per incident, and as much as $10.22 million in the US; with quantum and AI working in tandem, experts warn the figure could go much higher.

    In 2025, nearly two-thirds of organizations see quantum computing as the biggest cybersecurity threat looming in the next 3-5 years, while 93% of security leaders are prepping for daily AI-driven attacks. If you’re in tech, finance, healthcare, or any field handling big data, this isn’t sci-fi—it’s the storm brewing right now.

    But what if you could get ahead of it? Build reliable systems with multiple layers of protection that keep your operations rock-solid? That’s what our upcoming webinar, “Building Trust and Resilience for the AI and Quantum 2.0 Era,” is all about.

    It’s a panel of top experts diving into the world where quantum meets AI, and how to make your infrastructure unbreakable. Happening soon—don’t miss out. Sign up for the Webinar now and secure your spot today!

    The Risk Hiding in Quantum and AI Advances

    Let’s keep it real: Quantum 2.0 is exploding with cool stuff like super-fast computing, entanglement for instant communication, and sensors that see the unseen. Throw AI into the mix, and it’s optimizing and analyzing everything from quantum systems to drug discovery to evolving everyday tech. Sounds awesome, right? But here’s the flip side—these technology breakthroughs are also widening the door for cyber bad guys.

    Quantum computers could render much of today’s encryption useless, while AI makes attacks smarter and faster. Experts warn that AI-powered attacks are already growing in sophistication, and many security leaders believe quantum computing will dramatically increase future risks.

    I’ve heard from pros in the field sharing nightmare stories: AI-driven phishing fools 60% of folks, just like old-school tricks, but now it’s GenAI making fakes that look too real. And quantum? It’s not decades away—threats like “harvest now, decrypt later” mean attackers are grabbing encrypted data today, waiting for quantum tech to unlock it. Without the right defenses, sectors like finance and healthcare could face chaos, losing data integrity and facing massive fines.

    The good news? Solutions are available now that can protect you both today and when Q-day arrives.

    What You’ll Walk Away With: Simple Steps to Build Resilience

    In this lively 60-minute panel, you’ll hear from rockstar experts who’ve been shaping this space. They’ll break down the hype and hand you practical ways to protect your world. No jargon overload—just straight talk on breakthroughs and how to turn them into your advantage.

    Here’s a taste of what they’ll cover:

    1. The Buzz on Quantum 2.0: Get the lowdown on how quantum computing, sensing, and comms are changing the game—and how AI supercharges it all for smarter systems.
    2. Why AI and Quantum Need to Play Nice with Security: Learn why crypto-resilient setups are a must, with tips on aligning innovations without leaving weak spots.
    3. Tackling Risks in This New World: Dive into managing threats in AI-quantum mashups, including how to spot and stop emerging dangers before they hit.
    4. Tailored Fixes for Your Industry: Whether you’re in finance, healthcare, or critical infra, grab strategies customized for high-stakes data protection.
    5. Your Roadmap from Start to Finish: Walk through planning, consulting, rollout, and ongoing services to make resilience a reality.
    6. What Leaders Need to Do Right Now: Key moves for bosses to lock in long-term security and keep things running smoothly.

    Watch this Webinar Now

    Meet the Experts

    • Dr. Michael Eggleston, Data & Devices Group Leader, Nokia Bell Labs: Leading advances in quantum tech and sensing.
    • Dr. Michele Mosca, Co-founder, evolutionQ & Programme Chair of the ETSI-IQC Quantum-Safe Cryptography Conference: Pioneer in quantum-safe crypto.
    • Donna Dodson, Former Chief Cybersecurity Advisor, NIST: Innovator in government cybersecurity.
    • Bill Genovese, CIO Advisory Partner, Global Quantum Services & Consulting Leader, Kyndryl: Strategist in emerging tech like quantum and AI.
    • Martin Charbonneau, Head of Quantum-Safe Networks, Nokia: Expert in securing networks against quantum threats.

    Ready to arm yourself with these insights? Sign up for the Webinar now and join the conversation.

    With quantum threats ramping up, adversaries using AI for slicker attacks, and reports like the Global Cybersecurity Outlook warning that 47% of organizations fear GenAI-boosted adversaries, waiting it out isn’t an option. Cyber resilience and agility aren’t just nice-to-have; they’re urgent, as quantum tech could reshape cryptography and pose risks sooner than we think. This webinar isn’t fluff: it’s your shield for the AI-quantum era, blending innovation with rock-hard resilience.

    Seats fill up fast; it’s a quick win for huge peace of mind.

    Save your seat now – See you there!


    Source: thehackernews.com…

  • Rethinking AI Data Security: A Buyer's Guide 

    Sep 17, 2025 – The Hacker News – AI Security / Shadow IT

    Generative AI has gone from a curiosity to a cornerstone of enterprise productivity in just a few short years. From copilots embedded in office suites to dedicated large language model (LLM) platforms, employees now rely on these tools to code, analyze, draft, and decide. But for CISOs and security architects, the very speed of adoption has created a paradox: the more powerful the tools, the more porous the enterprise boundary becomes.

    And here’s the counterintuitive part: the biggest risk isn’t that employees are careless with prompts. It’s that organizations are applying the wrong mental model when evaluating solutions, trying to retrofit legacy controls for a risk surface they were never designed to cover. A new guide (download here) tries to bridge that gap.

    The Hidden Challenge in Today’s Vendor Landscape

    The AI data security market is already crowded. Every vendor, from traditional DLP to next-gen SSE platforms, is rebranding around “AI security.” On paper, this seems to offer clarity. In practice, it muddies the waters.

    The truth is that most legacy architectures, designed for file transfers, email, or network gateways, cannot meaningfully inspect or control what happens when a user pastes sensitive code into a chatbot, or uploads a dataset to a personal AI tool. Evaluating solutions through the lens of yesterday’s risks is what leads many organizations to buy shelfware.

    This is why the buyer’s journey for AI data security needs to be reframed. Instead of asking “Which vendor has the most features?” the real question is: Which vendor understands how AI is actually used at the last mile: inside the browser, across sanctioned and unsanctioned tools?

    The Buyer’s Journey: A Counterintuitive Path

    Most procurement processes start with visibility. But in AI data security, visibility is not the finish line; it’s the starting point. Discovery will show you the proliferation of AI tools across departments, but the real differentiator is how a solution interprets and enforces policies in real time, without throttling productivity.

    The buyer’s journey often follows four stages:

    1. Discovery – Identify which AI tools are in use, sanctioned or shadow. Conventional wisdom says this is enough to scope the problem. In reality, discovery without context leads to overestimation of risk and blunt responses (like outright bans).
    2. Real-Time Monitoring – Understand how these tools are being used, and what data flows through them. The surprising insight? Not all AI usage is risky. Without monitoring, you can’t separate harmless drafting from the inadvertent leak of source code.
    3. Enforcement – This is where many buyers default to binary thinking: allow or block. The counterintuitive truth is that the most effective enforcement lives in the gray area—redaction, just-in-time warnings, and conditional approvals. These not only protect data but also educate users in the moment.
    4. Architecture Fit – Perhaps the least glamorous but most critical stage. Buyers often overlook deployment complexity, assuming security teams can bolt new agents or proxies onto existing stacks. In practice, solutions that demand infrastructure change are the ones most likely to stall or get bypassed.

    What Experienced Buyers Should Really Ask

    Security leaders know the standard checklist: compliance coverage, identity integration, reporting dashboards. But in AI data security, some of the most important questions are the least obvious:

    • Does the solution work without relying on endpoint agents or network rerouting?
    • Can it enforce policies in unmanaged or BYOD environments, where much shadow AI lives?
    • Does it offer more than “block” as a control? For example, can it redact sensitive strings or warn users contextually? (A minimal redaction sketch follows this list.)
    • How adaptable is it to new AI tools that haven’t yet been released?
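
    To make the “more than block” point concrete, here is a minimal, hypothetical redaction sketch in which sensitive-looking strings are replaced before a prompt leaves the browser or gateway. The patterns are illustrative only; a real deployment needs far broader coverage, secrets scanners, and context-aware detection.

    ```python
    import re

    # Illustrative patterns only; real deployments need far broader coverage.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key_block": re.compile(
            r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----"
        ),
    }

    def redact(prompt: str) -> str:
        """Replace sensitive-looking substrings with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
        return prompt

    if __name__ == "__main__":
        sample = "Contact jane.doe@example.com, key AKIAABCDEFGHIJKLMNOP"
        print(redact(sample))
    ```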

    These questions cut against the grain of traditional vendor evaluation but reflect the operational reality of AI adoption.

    Balancing Security and Productivity: The False Binary

    One of the most persistent myths is that CISOs must choose between enabling AI innovation and protecting sensitive data. Blocking tools like ChatGPT may satisfy a compliance checklist, but it drives employees to personal devices, where no controls exist. In effect, bans create the very shadow AI problem they were meant to solve.

    The more sustainable approach is nuanced enforcement: permitting AI usage in sanctioned contexts while intercepting risky behaviors in real time. In this way, security becomes an enabler of productivity, not its adversary.

    Technical vs. Non-Technical Considerations

    While technical fit is paramount, non-technical factors often decide whether an AI data security solution succeeds or fails:

    • Operational Overhead – Can it be deployed in hours, or does it require weeks of endpoint configuration?
    • User Experience – Are controls transparent and minimally disruptive, or do they generate workarounds?
    • Futureproofing – Does the vendor have a roadmap for adapting to emerging AI tools and compliance regimes, or are you buying a static product in a dynamic field?

    These considerations are less about “checklists” and more about sustainability—ensuring the solution can scale with both organizational adoption and the broader AI landscape.

    The Bottom Line

    Security teams evaluating AI data security solutions face a paradox: the space looks crowded, but true fit-for-purpose options are rare. The buyer’s journey requires more than a feature comparison; it demands rethinking assumptions about visibility, enforcement, and architecture.

    The counterintuitive lesson? The best AI security investments aren’t the ones that promise to block everything. They’re the ones that enable your enterprise to harness AI safely, striking a balance between innovation and control.

    This Buyer’s Guide to AI Data Security distills this complex landscape into a clear, step-by-step framework. The guide is designed for both technical and economic buyers, walking them through the full journey: from recognizing the unique risks of generative AI to evaluating solutions across discovery, monitoring, enforcement, and deployment. By breaking down the trade-offs, exposing counterintuitive considerations, and providing a practical evaluation checklist, the guide helps security leaders cut through vendor noise and make informed decisions that balance innovation with control.


    Source: thehackernews.com…

  • Scattered Spider Resurfaces With Financial Sector Attacks Despite Retirement Claims

    Sep 17, 2025 – Ravie Lakshmanan – Threat Intelligence / Cybercrime

    Cybersecurity researchers have tied a fresh round of cyber attacks targeting financial services to the notorious cybercrime group known as Scattered Spider, casting doubt on their claims of going “dark.”

    Threat intelligence firm ReliaQuest said it has observed indications that the threat actor has shifted its focus to the financial sector. This is supported by an increase in lookalike domains potentially linked to the group that target this industry vertical, as well as a recently identified targeted intrusion against an unnamed U.S. banking organization.

    “Scattered Spider gained initial access by socially engineering an executive’s account and resetting their password via Azure Active Directory Self-Service Password Management,” the company said.

    “From there, they accessed sensitive IT and security documents, moved laterally through the Citrix environment and VPN, and compromised VMware ESXi infrastructure to dump credentials and further infiltrate the network.”

    To achieve privilege escalation, the attackers reset a Veeam service account password, assigned Azure Global Administrator permissions, and relocated virtual machines to evade detection. There are also signs that Scattered Spider attempted to exfiltrate data from Snowflake, Amazon Web Services (AWS), and other repositories.
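
    One way to hunt for this pattern after the fact is to correlate self-service password resets with subsequent privileged-role assignments in exported Entra ID (Azure AD) audit logs. The sketch below assumes a JSON-lines export with activityDisplayName, activityDateTime, and targetResources fields and a 24-hour correlation window; the field names, activity strings, and window are assumptions you would adjust to your own export.

    ```python
    import json
    import sys
    from datetime import datetime, timedelta

    # Field names and activity strings below are assumptions about the export
    # format; adjust to match your audit log schema. The 24-hour window is arbitrary.
    WINDOW = timedelta(hours=24)

    def load(path: str) -> list[dict]:
        with open(path, encoding="utf-8") as handle:
            return [json.loads(line) for line in handle if line.strip()]

    def activity(event: dict) -> str:
        return (event.get("activityDisplayName") or "").lower()

    def when(event: dict) -> datetime:
        return datetime.fromisoformat(event["activityDateTime"].replace("Z", "+00:00"))

    def target_of(event: dict) -> str:
        targets = event.get("targetResources") or [{}]
        return targets[0].get("userPrincipalName") or targets[0].get("displayName") or "?"

    def hunt(path: str) -> None:
        events = load(path)
        resets = [e for e in events if "self-service" in activity(e) and "password" in activity(e)]
        role_adds = [e for e in events if "add member to role" in activity(e)]
        for reset in resets:
            t0 = when(reset)
            for role in role_adds:
                t1 = when(role)
                if timedelta(0) <= t1 - t0 <= WINDOW:
                    print(f"SSPR on {target_of(reset)} at {t0}, role change at {t1}")

    if __name__ == "__main__":
        hunt(sys.argv[1])  # usage: python hunt_sspr_escalation.py audit_export.jsonl
    ```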

    Exit or Smokescreen?

    The recent activity undercuts the group’s claims that they were ceasing operations alongside 14 other criminal groups, such as LAPSUS$. Scattered Spider is the moniker assigned to a loose-knit hacking collective that’s part of a broader online entity called The Com.

    The group also shares a high degree of overlap with other cybercrime crews like ShinyHunters and LAPSUS$, so much so that the three clusters formed an overarching entity named “scattered LAPSUS$ hunters.”

    One of these clusters, notably ShinyHunters, has also engaged in extortion efforts after exfiltrating sensitive data from victims’ Salesforce instances. In these cases, the activity took place months after the targets were compromised by another financially motivated hacking group tracked by Google-owned Mandiant as UNC6040.

    The incident is a reminder not to be lulled into a false sense of security, ReliaQuest added, urging organizations to stay vigilant against the threat. As in the case of ransomware groups, there is no such thing as retirement, as it’s very much possible for them to regroup or rebrand under a different alias in the future.

    “The recent claim that Scattered Spider is retiring should be taken with a significant degree of skepticism,” Karl Sigler, security research manager of SpiderLabs Threat Intelligence at Trustwave, said. “Rather than a true disbanding, this announcement likely signals a strategic move to distance the group from increasing law enforcement pressure.”

    Sigler also pointed out that the farewell letter should be viewed as a strategic retreat, allowing the group to reassess its practices, refine its tradecraft, and evade ongoing efforts to put a lid on its activities, not to mention complicate attribution efforts by making it harder to tie future incidents to the same core actors.

    “It’s plausible that something within the group’s operational infrastructure has been compromised. Whether through a breached system, an exposed communication channel, or the arrest of lower-tier affiliates, something has likely triggered the group to go dark, at least temporarily. Historically, when cybercriminal groups face heightened scrutiny or suffer internal disruption, they often ‘retire’ in name only, opting instead to pause, regroup, and eventually re-emerge under a new identity.”


    Source: thehackernews.com…