AI is now default – but your cyber maturity model may not know it yet

Between risk and opportunity: rebasing cyber maturity for the AI era

  • Article
  • 7 minute read
  • 23 Feb 2026

AI adoption has moved from experimentation to default deployment. Vendors are shipping AI features directly into core products, and organizations are embedding models into customer journeys, operations, and decision-making. In practice, AI is becoming inseparable from the IT estate. This creates a paradox for CIOs, CISOs, and risk leaders: the more AI you deploy to drive business growth, the more your reported cyber maturity can appear to decline. Controls that looked “optimized” in a pre-AI landscape often prove insufficient against autonomous, probabilistic systems and AI-enabled adversaries.

The implication is clear: AI must be treated as part of your security baseline, not as a bolt-on. Coverage and integration for AI risks have to become strategic drivers on the CISO’s agenda, and cyber maturity needs to be rebased for the AI era.

The Double-Edged Sword: AI as Defender and Adversary

The race to leverage AI in cybersecurity is well underway. On the defensive side, AI is reinventing security operations through user behavior analytics, automated incident response, and advanced threat detection. AI can help close the cyber talent gap and automate large parts of risk and compliance activities. Without AI, we cannot achieve the speed, scale, and granularity needed to defend against AI-driven threats.

At the same time, threat actors are rapidly industrializing AI. We are observing the first attacker groups using complex AI architectures, for example chained attack agents that autonomously perform reconnaissance, vulnerability scanning, and exploitation without a human in the loop. One such group, detected by Anthropic, leveraged Model Context Protocol (MCP) servers to orchestrate interconnected AI agents and link them with attack tools. Our PwC Global Threat Intelligence team is tracking tools such as ReaperAI and PromptLock. Additionally, we see growing volumes of deepfake-driven impersonation and disinformation attacks.

Attackers are moving ahead on the path to AI automation. Defenders have no choice but to follow suit – and to do so in a controlled, risk-aware way.

With the newly released draft of NIST IR 8596 (Cyber AI Profile), the AI-specific dimensions “Secure”, “Defend”, and “Thwart” are introduced and mapped into the NIST Cybersecurity Framework (CSF). The message is unambiguous: without securing AI systems, conducting AI-enabled defense, and thwarting AI-enabled attacks, defenders will fall behind.

The Vanishing Divide: AI and IT Convergence

The artificial boundary between AI and “traditional” IT is dissolving rapidly. In the coming years, AI capabilities will be embedded into nearly every critical system, making the convergence essentially complete. That will demand unified processes for risk identification, control design, monitoring, and incident response across both AI and non-AI assets.

Many organizations responded to the AI wave by standing up dedicated AI governance and compliance structures. While these are important developments, they must not operate in silos. AI is becoming part of the fabric of business processes and applications; it needs to be reflected in risk and control approaches as an integrated component, not an add-on.  

Without this integration, organizations face three risks:  

  1. More friction for the business through parallel approval and compliance processes.  
  2. Duplicated effort across cyber, IT risk, and AI governance teams.  
  3. A fragmented risk picture, where AI-related exposure is evaluated separately from your core cyber posture.  

An integrated approach allows you to adjust processes and organization for the AI era without slowing down innovation.  

Managing AI Security: An Integrated Approach 

To remain credible and effective, cybersecurity maturity models must explicitly incorporate AI risks, opportunities, and threat scenarios. This requires security leaders to drive AI security and AI adoption as part of their capability model – not as a separate initiative.

Mirroring the AI-driven transformation on the business side, security organizations will need to reorganize capabilities around managing many integrated AI agents and to rebuild processes around outcomes rather than human-centric process steps. Controls and workflows must be designed so that AI can safely take over defined tasks under appropriate oversight.

At the same time, leadership teams must be prepared for a recalibration – and, in many cases, a downgrading – of reported cyber maturity levels. Capabilities that were previously rated as quantitatively managed and continuously improving may need to be reassessed against new AI-enabled threats such as deepfake fraud, autonomous exploitation, and AI supply-chain risks. The result can be a lower maturity rating, not because you are doing worse, but because the bar has moved.

However, where gaps were already known, AI can help close them more effectively. AI-assisted detection, investigation, and compliance monitoring can rapidly increase coverage and consistency, lifting previously weak capabilities to a higher level.

Our PwC CEO Survey 2026 found that, compared to the prior year, CEOs are less confident about their company’s near-term revenue growth prospects from AI. One reason: AI is often used as a generic assistant – much like a human employee juggling many data sources – rather than being deeply embedded in core processes and technology. To unlock the next step in value creation, AI needs to be plugged into your technology stack and operating model – including security.

On the technical level, this means building AI-ready security architecture. Concretely, organizations need:  

  • A data and telemetry layer that AI systems can reliably access and interpret.  
  • A tool layer prepared for AI integration, for example by deploying Model Context Protocol (MCP) servers so AI agents can orchestrate your security toolset.  
  • A semantic layer that brings together raw security data from multiple sources into a form AI can reason over safely and effectively.  
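The tool-layer bullet above can be sketched in code. The following is a minimal, illustrative Python sketch of an MCP-style tool layer: security tools are registered under names that an AI agent can discover and invoke through a single dispatch point. The tool names, the IoC value, and the return shapes are assumptions for illustration, not a real MCP SDK or threat-intelligence API.

```python
# Minimal sketch of an MCP-style tool layer. Tool names, the example IoC,
# and return values are illustrative assumptions, not a real MCP SDK.

TOOL_REGISTRY = {}

def tool(name):
    """Register a callable under a name an agent can invoke."""
    def decorator(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@tool("lookup_indicator")
def lookup_indicator(ioc: str) -> dict:
    # Placeholder for a threat-intelligence lookup against your platform.
    known_bad = {"203.0.113.7"}  # example value from a documentation IP range
    return {"ioc": ioc, "malicious": ioc in known_bad}

@tool("isolate_host")
def isolate_host(hostname: str) -> dict:
    # Placeholder for an EDR isolation call; a real agent would need approval.
    return {"hostname": hostname, "status": "isolation_requested"}

def invoke(tool_name: str, **kwargs) -> dict:
    """Dispatch an agent's tool call by name, as an MCP server would."""
    if tool_name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOL_REGISTRY[tool_name](**kwargs)

result = invoke("lookup_indicator", ioc="203.0.113.7")
print(result["malicious"])  # → True
```

The point of the pattern is the single, named dispatch surface: an agent only ever sees tool names and structured results, which is what makes the toolset orchestrable and auditable.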

Charting the Path Forward

AI is accelerating faster than most security operating models can absorb. That creates a paradox: the more value the business unlocks through AI, the more your reported security maturity can appear to decline, because yesterday’s controls were not designed for autonomous, probabilistic systems and AI-enabled adversaries. The goal, therefore, is not to “bolt on” AI governance or chase the latest point solutions.

The goal is to raise the security bar while keeping the business fast, by treating AI as inseparable from the IT estate and embedding AI risk into the same maturity logic you already use to run cybersecurity.

The most important shift is mindset and measurement:

Expect to re-baseline maturity: capabilities that looked “optimized” in a pre-AI threat landscape may be “defined” again when tested against deepfake fraud, autonomous recon/exploitation, and AI supply-chain risks. At the same time, AI also gives defenders leverage: you can finally gain the speed, scale, and granularity required to close long-standing coverage gaps that were previously accepted as inevitable.

In short: AI downgrades cyber maturity but upgrades potential.

Your action plan for the AI era

To make this shift tangible, security and risk leaders can focus on five priorities:

Re-evaluate key capabilities explicitly against AI-driven scenarios such as autonomous recon/exploitation, deepfake-enabled social engineering, and AI supply-chain attacks. Update your target maturity levels to reflect the new threat baseline.

Avoid parallel AI governance structures that sit outside existing security and risk processes. Define a single operating rhythm where AI risk and cyber risk are overseen by the same senior forums, with shared metrics and aligned accountability. 

Articulate how AI will be used within security (defensive use cases), how AI systems will be protected, and how AI-enabled threats will be countered. Clarify roles (e.g. AI security lead), decision rights, and integration points into SDLC, risk assessments, and SOC processes.

Upskill security teams to use AI for tasks such as log analysis, threat hunting, incident triage, and policy compliance. Identify concrete use cases where AI can augment analysts rather than increase workload, and set guardrails for responsible use. 
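One way to make “augment analysts rather than increase workload” concrete is a triage guardrail around a model score. The sketch below is a minimal, assumed pattern: `score_alert` stands in for a real model or LLM call, and the alert types and thresholds are illustrative, not taken from any specific product.

```python
# Sketch of AI-assisted incident triage. score_alert is a stand-in for a
# real model or LLM call; alert types and thresholds are illustrative.

REVIEW_THRESHOLD = 0.8   # at or above this, a human analyst must confirm
DISMISS_THRESHOLD = 0.2  # at or below this, auto-close with an audit record

def score_alert(alert: dict) -> float:
    # Stand-in scoring: a real deployment would call a trained model.
    signals = {"impossible_travel": 0.9, "failed_login": 0.3, "port_scan": 0.6}
    return signals.get(alert["type"], 0.5)

def triage(alert: dict) -> str:
    score = score_alert(alert)
    if score >= REVIEW_THRESHOLD:
        return "analyst_review"   # AI prioritizes, the human decides
    if score <= DISMISS_THRESHOLD:
        return "auto_close"       # logged for audit, no analyst time spent
    return "auto_enrich"         # agent gathers context before escalation

print(triage({"type": "impossible_travel"}))  # → analyst_review
```

The guardrail is the design choice: the model never closes or escalates on its own at the high end; it only decides how much analyst attention each alert deserves.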

Define a target architecture in which telemetry, control automation, and AI agents are connected via standardized interfaces. Use mechanisms such as MCP servers and a semantic data layer so AI agents can access tools and data, execute playbooks, and propose actions – always with appropriate human oversight. 

The Authors

Henning Kruse

Senior Manager, PwC Germany

Moritz Anders

Partner, Cyber Security Leader, PwC Germany
