Trends that inform your 2026 cybersecurity budget
The attack landscape is evolving. The last six months have shown fast industry shifts and a significant increase in the business impact of cyber events. I’ve listed five core themes - each showing an upward trend as we enter 2026 - outlining what’s changing, why this time is different, and the reference data behind it.
1. Social Engineering has new superpowers!
Social engineering technologies are now better, faster and more accessible. This makes the people in your company a more viable target for attack.
Why this time is different: Voice and video cloning is significantly more capable today. Off-the-shelf AI products can now generate life-like images, video and audio of a particular individual with near-perfect likeness. Threat actors can accurately impersonate a CXO or key decision maker, tricking IT and finance teams into raising access requests to sensitive systems and transferring money.
Data Points:
Microsoft VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio - https://microsoft.github.io/VibeVoice/
OpenAI released Sora 2 with a feature called "Cameo." It allows users to upload a short video and voice sample to create a "digital puppet" of themselves (or others) that can be inserted into AI-generated scenes.
Threat intelligence firms are reporting a surge in attacks in late 2025 where deepfake voices are used specifically to call corporate Help Desks. Attackers clone an employee’s voice (often scraped from LinkedIn videos or webinars) and call IT support claiming to have lost their phone or been locked out.
Publicly disclosed attacks targeted Ferrari and WPP in 2024 - https://aimagazine.com/news/charting-the-light-and-dark-of-gen-ai-for-content-creation
2. LLMs in active use in attack campaigns
Research shows that publicly available large language models are being used to launch end-to-end, real-world attack campaigns.
Why this time is different: AI is being used to discover zero days, perform reconnaissance, generate and execute malware, and target application and platform weaknesses - common steps within most killchains. However, AI now means the rate and frequency of attacks will skyrocket - agentic workflows can run autonomously to discover and launch attacks.
Data Points:
The Anthropic Threat Intelligence Report (August 2025) https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf details how cybercriminals are leveraging agentic AI—specifically Claude and Claude Code. The report highlights:
"Vibe Hacking", where cybercriminals used AI coding agents to actively execute intrusion and extortion operations against at least 17 organizations across government, healthcare, and emergency sectors - automating reconnaissance, scanning thousands of VPN endpoints, harvesting credentials, and pivoting through networks.
A "no-code malware" operation, in which an actor with limited technical skills built and sold a Ransomware-as-a-Service (RaaS) platform featuring complex malware capabilities, anti-EDR evasion, and shadow copy deletion.
A sophisticated actor used Claude across 12 of 14 MITRE ATT&CK tactics to target Vietnamese critical infrastructure, using the AI to build custom scanning tools, fuzzing frameworks, and Linux kernel exploits.
A Russian-speaking actor used Claude to create malware with advanced evasion techniques like dynamic API calls.
Anthropic published "Disrupting the first reported AI-orchestrated cyber espionage campaign." https://www.anthropic.com/news/disrupting-AI-espionage detailing a mid-September 2025 incident where a state-sponsored (China) actor used AI agents to execute a large-scale cyberattack with minimal human intervention.
The campaign targeted approximately 30 global entities, including large technology companies, financial institutions, chemical manufacturers, and government agencies.
The AI performed 80–90% of the campaign's tactical work autonomously.
The attackers manipulated Claude Code to act as an "agent" by tricking it into believing it was an employee of a legitimate cybersecurity firm conducting defensive testing.
Claude Code autonomously inspected target systems, mapped infrastructure, and identified high-value databases in a fraction of the time a human team would take.
The AI researched vulnerabilities and wrote its own exploit code to test them.
It harvested usernames and passwords to gain further access.
The AI produced comprehensive files listing stolen credentials and analyzed systems to help the human operators plan the next stage.
3. Targeted attacks are more impactful and on the rise
Entire businesses are being brought to their knees with ransomware targeting core revenue generating assets.
Why this time is different: The scale of the M&S, Qantas and JLR incidents, all within a six-month period. These are organisations with (perceived) reasonable levels of cyber maturity - or at least equipped with a cyber security team and budget to manage risk. Eyewatering rebuilds are costing hundreds of millions, if not billions.
Data Sources:
M&S statutory profit before tax slumped 99% from £391.9m to £3.4m for the first half of the year, compared with the year prior - https://www.bbc.com/news/articles/c93x16zkl9do
The Jaguar Land Rover hack cost the British economy an estimated £1.9bn ($2.55 billion) and affected over 5,000 organisations – https://www.reuters.com/sustainability/boards-policy-regulation/jaguar-land-rover-hack-cost-uk-economy-25-billion-report-says-2025-10-22/
Qantas data breach impacting executive level pay - https://www.reuters.com/sustainability/boards-policy-regulation/qantas-tightens-purse-strings-executive-pay-after-data-breach-fallout-2025-09-05/
4. Painfully complex technology supply chains
We’re seeing complex supply chain flaws (often challenging to manage) hurt business operations and in some cases lead to breaches.
Why this time is different: The assumption that technology companies can appropriately budget and staff for security and resilience is wearing thin - tech companies feel market pressure (economic and competitive) and can fall victim to failures and security incidents. The scale of recent events demonstrates a combination of exploitable vulnerabilities and service downtime - all highly impactful.
Data Sources:
Shai-Hulud (wave 1), September 2025: malicious versions of multiple popular packages were published to npm, containing a script that harvested sensitive data and exfiltrated it to attacker-created public GitHub repos. The malware exhibits worm-like behaviour, with compromised packages automatically publishing malicious versions of any package they can access across the npm ecosystem. https://www.wiz.io/blog/shai-hulud-npm-supply-chain-attack
Shai-Hulud (wave 2), November 2025: a second wave compromised major packages from Zapier, ENS Domains, PostHog, and Postman, leading to GitHub repos populated with stolen victim data. The impact reached 25,000+ repos across ~500 GitHub users. https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack
Salesloft Drift breach, August 2025: a SaaS connector data breach affecting ~700 downstream SaaS customers and providing an entry point for attackers. https://www.roupe.io/content/how-the-salesloft-drift-breach-is-a-sign-of-a-growing-risk-for-our-industry
AWS, October 2025: a large-scale outage caused by two automated systems updating the same DynamoDB data simultaneously. The outage is likely to have affected over 2,000 large organizations and nearly 70,000 organizations in total, with AWS loss estimates ranging from $38m to $581m. https://www.cybcube.com/news/insurance-loss-estimate-for-aws-amazonk-outage
Cloudflare, November 2025: a high-profile outage took down major websites, including X and ChatGPT, due to problems at Cloudflare, which claims to handle 20% of the internet's traffic. https://www.bbc.com/news/articles/c629pny4gl7o
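Defensively, incidents like Shai-Hulud are usually triaged by checking dependency lockfiles against published indicator lists of compromised package versions. A minimal sketch of that check in Python is below; the deny-list entries here are illustrative placeholders, not the real Shai-Hulud indicators.

```python
import json

# Hypothetical deny-list of (package, version) pairs flagged as compromised.
# In practice this would be loaded from a vendor advisory or IOC feed.
COMPROMISED = {
    ("example-lib", "1.2.3"),
    ("another-pkg", "4.5.6"),
}

def find_compromised(lockfile_text: str) -> list[str]:
    """Return 'name@version' matches from an npm v2/v3 package-lock."""
    lock = json.loads(lockfile_text)
    hits = []
    # npm lockfile v2/v3 lists dependencies under "packages",
    # keyed by their install path (e.g. "node_modules/example-lib").
    for path, meta in lock.get("packages", {}).items():
        name = meta.get("name") or path.split("node_modules/")[-1]
        version = meta.get("version")
        if (name, version) in COMPROMISED:
            hits.append(f"{name}@{version}")
    return sorted(hits)

# Minimal example lockfile with one compromised dependency
sample_lock = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "": {"name": "my-app", "version": "1.0.0"},
        "node_modules/example-lib": {"version": "1.2.3"},
        "node_modules/safe-pkg": {"version": "0.9.0"},
    },
})

print(find_compromised(sample_lock))  # ['example-lib@1.2.3']
```

A real response would also rotate any npm, GitHub, and cloud tokens present on machines that installed a flagged version, since credential theft is the worm's payload.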
5. Vibe coding your way into vulnerability sprawl
Entire application and platform ecosystems are being built using AI coding tools. This is now the reality of your own software development stack and the third party business applications you rely on.
Why this time is different: Development speed is now on rocket fuel, a direct result of AI innovation. Applications can be written from plain-English prompts and translated into common application frameworks. The same AI models writing your code were trained on public (and often vulnerable) code repositories. This drives up competition and speed, often at the cost of security, which raises an entirely different question about the value we attribute to security and resilience.
Data Sources:
Google - AI adoption among software development professionals has surged to 90%. 65% are heavily relying on AI for software development, with 37% reporting a moderate amount. Over 80% of respondents indicate that AI has enhanced their productivity. https://blog.google/technology/developers/dora-report-2025/
GitHub - Over 15 million developers were using GitHub Copilot by early 2025, a 400% increase in 12 months. Copilot now writes nearly half of a developer’s code. https://github.com/features/copilot
Gitlab - 97% of organizations are using or planning to use AI in their software lifecycle. 73% of DevSecOps professionals have encountered problems with vibe code (using natural language prompts without understanding the underlying syntax).
Gartner - by 2028, 90% of enterprise software engineers will use AI code assistants. https://github.blog/ai-and-ml/github-copilot/gartner-positions-github-as-a-leader-in-the-2025-magic-quadrant-for-ai-code-assistants-for-the-second-year-in-a-row/#:~:text=September%2022%2C%202025,than%2014%25%20in%20early%202024.
Veracode - 45% of AI-generated code samples failed OWASP Top 10 security tests. AI tools failed to defend against cross-site scripting in 86% of cases. https://www.veracode.com/blog/genai-code-security-report/
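The cross-site scripting failure mode is worth seeing concretely. A minimal, illustrative Python sketch (not drawn from the Veracode samples): AI assistants frequently suggest the first pattern, interpolating user input straight into HTML, when the second is needed.

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # Common AI-suggested pattern: user input interpolated directly
    # into markup, allowing injected <script> tags to execute (XSS).
    return f"<p>{user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Escaping HTML special characters neutralises injected markup.
    return f"<p>{html.escape(user_input)}</p>"

payload = '<script>alert("xss")</script>'
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))   # tag is rendered inert as &lt;script&gt;...
```

In real applications the same principle applies through templating engines (auto-escaping) rather than manual escaping; the point is that "working" AI-generated code and secure code are not the same thing.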
What Next
This is pattern recognition; as we come to the end of 2025, you’ll begin to consider next year’s cybersecurity budget and will already be warming your executives to what your security program will look like. My advice: take the time to:
Prioritise your cyber programmes against the changing landscape - not all security programmes will have the budget to effectively address all of their cyber risks. Cyber breach risks can be accepted, as long as your business is well-informed and capable and willing to invest in rebuilds. Protect the business, not your security agenda.
Bring your security initiatives close to protecting business value drivers - for most that’s identifying critical business technologies, maintaining system uptime and having a resilient and recoverable IT estate.
Consider defensive measures that increase speed and security output (e.g. AI informed vulnerability remediation, automated SOAR workflows, deepfake simulations, and executive level wargames).
Set aside cyber security retainers and recovery budgets to recover from unfortunate events.
Reconsider your security awareness programme - there's no training tool that can appropriately address the social engineering challenges of our industry. Focus on simulation-based exercises that provide real learning experiences, and use these to establish security guardrails for the business.

