The EU AI Act & the UK’s CSRB

December 2025

What do they mean for real-world AI delivery? 

Over the past few years, my writing has often returned to a single theme: security becomes useful only when it becomes understandable. Whether discussing threat modelling, the “alignment gap” between security and product delivery, or the practical application of OWASP standards, I’ve argued that the role of a security practitioner is not to accumulate frameworks, but to help organisations make use of them. 

The accelerating regulatory landscape for artificial intelligence is now testing this principle in a very real way. 

Two major developments, the EU Artificial Intelligence (AI) Act and the UK’s upcoming Cyber Security and Resilience Bill (CSRB), are reshaping expectations for the design, deployment, and oversight of AI systems. These laws are complex, but behind the legal language lies a simple story: AI is becoming an operational capability that must be governed with the same discipline as any other critical service. 

This article unpacks what that means for UK organisations, in particular public-sector bodies and Financial Services and Insurance (FSI) firms, and how practitioners can use OWASP and related security practices to navigate the change. 

Why this matters now

The EU AI Act, first published in the Official Journal of the European Union in July 2024, is the world’s first comprehensive AI regulation. Even though the UK is not in the EU, the Act has extraterritorial scope: if a UK organisation’s AI system affects people in the EU (or uses an EU provider whose systems fall under the Act), it is in scope. 

The CSRB, meanwhile, updates the UK’s NIS regime by expanding obligations on critical infrastructure, managed service providers, and digital services. For many organisations, AI workloads will sit directly on top of this newly regulated infrastructure. 

Together, these developments create a regulatory “stack”: 

  • The AI Act governs the behaviour and governance of AI systems. 
  • The CSRB governs the resilience and security of the systems hosting them. 

This interplay mirrors what I’ve previously written about in the OWASP Product Security Capability Framework (PSCF): technology decisions are never isolated; they create obligations across governance, engineering, operations, and supplier management, and those obligations must be understood as a whole for compliance to be effective. 

Understanding the EU AI Act without the jargon

The EU AI Act classifies AI systems into four categories (a short code sketch follows the list): 

  1. Unacceptable risk – banned (e.g., social scoring). 
  2. High risk – heavily regulated. 
  3. Limited risk – transparency requirements (e.g., AI chatbots must disclose that they’re AI). 
  4. Minimal risk – largely unregulated. 
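
To make the taxonomy concrete, here is a minimal Python sketch of how a team might encode the four tiers as the seed of an AI inventory. The use-case names and the default-to-high rule are my own illustrative assumptions; the real classification is a legal judgement, not a lookup table.

# Illustrative only: a toy triage helper for the Act's four risk tiers.
# Use-case names and the default-to-HIGH rule are assumptions, not the law.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # permitted, but heavily regulated
    LIMITED = "limited"            # transparency duties (e.g., chatbot disclosure)
    MINIMAL = "minimal"            # largely unregulated

USE_CASE_TIERS = {
    "social_scoring": AIActRiskTier.UNACCEPTABLE,
    "credit_scoring": AIActRiskTier.HIGH,
    "customer_chatbot": AIActRiskTier.LIMITED,
    "email_spam_filter": AIActRiskTier.MINIMAL,
}

def triage(use_case: str) -> AIActRiskTier:
    """Default unknown systems to HIGH so they get reviewed, not ignored."""
    return USE_CASE_TIERS.get(use_case, AIActRiskTier.HIGH)

print(triage("credit_scoring"))   # AIActRiskTier.HIGH
print(triage("something_novel"))  # AIActRiskTier.HIGH (forces a review)

Defaulting unknown systems to high-risk is a deliberate design choice: it turns gaps in the inventory into review work rather than silent exposure.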

For many organisations, the key category is high-risk AI. Uses include: 

  • Creditworthiness and credit scoring 
  • Life and health insurance pricing 
  • Biometric systems linked to essential services 
  • Public-sector decision support (immigration, benefits, healthcare triage) 
  • AI managing critical infrastructure or security functions 

High-risk AI isn’t prohibited; it must be governed: documented, tested, monitored, and controlled to a higher standard. 

If that feels familiar, it should. The structure mirrors principles from software assurance, safety engineering, and even the OWASP Software Assurance Maturity Model (SAMM). The language is new, but the duties of risk assessment, transparency, robustness, and human oversight all align closely with security governance patterns we already know. 

What this means for UK public sector organisations

Public-sector bodies increasingly rely on AI for triage, case management, eligibility assessment, resource planning, and citizen-facing digital services. Some of these systems already approach high-risk classifications; others will fall into scope only if they affect EU residents. 

The significant changes (an illustrative inventory sketch follows this list): 

  • Early classification becomes essential.
    It won’t be enough to deploy an AI model; organisations must prove that they understood the regulatory risk before building it. 
  • “Human oversight” becomes a real operational requirement.
    Not a checkbox. Not a manager reviewing a dashboard once a month. Instead, documented decision pathways, escalation triggers, and the ability to override or disable AI output. 
  • Data quality and fairness become auditable artefacts.
    Public-sector teams will need evidence that training data is relevant, representative, and appropriately governed. 
  • Legacy AI must be retrofitted.
    Inventory work will be unavoidable. Older models, spreadsheets with embedded ML, and external tools with opaque algorithms all require classification, logging, documentation and, in some cases, significant redesign. 
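
To illustrate where that inventory work might land, here is a minimal sketch of a system record in Python. Every field name is an assumption of mine, not a prescribed schema; the substance is that classification, oversight pathways, and data-governance evidence become first-class, auditable attributes of each system.

# A minimal sketch of an AI inventory record. Field names are illustrative
# assumptions; the point is that classification, oversight, and data
# governance become recorded, auditable attributes of every system.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str
    risk_tier: str                    # e.g. "high", "limited"
    affects_eu_subjects: bool         # drives extraterritorial scope
    oversight_pathway: str            # who can override or disable, and how
    escalation_triggers: list[str] = field(default_factory=list)
    training_data_evidence: str = ""  # relevance/representativeness notes
    legacy: bool = False              # flags retrofit work

inventory = [
    AISystemRecord(
        name="benefits-triage-model",
        owner="casework-platform-team",
        risk_tier="high",
        affects_eu_subjects=True,
        oversight_pathway="caseworker override; service owner can disable",
        escalation_triggers=["confidence below 0.7", "demographic drift"],
        legacy=True,
    ),
]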

This is where threat-modelling practices like OWASP Cornucopia help translate regulatory language into practical engineering decisions. By framing harms, misuse scenarios, and bias risks in a workshop format, teams gain clarity without drowning in legal interpretation. 

What this means for UK FSI organisations

FSI organisations already operate under layers of regulation: GDPR, DORA, PRA/FCA requirements, anti-money-laundering provisions, and operational resilience standards. AI introduces one more layer, but it integrates with the others rather than replacing them. 

The most affected use-cases will be: 

  • Credit decisioning and affordability modelling 
  • Insurance underwriting and pricing 
  • Fraud detection and transaction monitoring  
  • Biometric onboarding and digital identity  
  • AI-assisted customer-service automation 

FSI firms are accustomed to model risk management, but the AI Act adds: 

  • Mandatory transparency and explainability for affected EU customers 
  • Model change control with traceability and justification 
  • Continuous monitoring analogous to post-market safety oversight (a drift-check sketch follows this list) 
  • Documented human governance for override conditions 
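
To show what “continuous monitoring” can mean in practice, here is a minimal sketch of a population stability index (PSI) check, a drift metric FSI model-risk teams already use. The bucketing and the 0.2 alert threshold are common conventions, not AI Act requirements.

# A minimal drift check using the population stability index (PSI).
# Bucket edges and the 0.2 alert threshold are illustrative conventions.
import math

def psi(expected: list[float], actual: list[float], buckets: int = 10) -> float:
    """Population stability index between a baseline and a live score sample."""
    lo, hi = min(expected + actual), max(expected + actual)
    width = (hi - lo) / buckets or 1.0  # guard against identical values

    def share(values: list[float], b: int) -> float:
        left, right = lo + b * width, lo + (b + 1) * width
        n = sum(left <= v < right or (b == buckets - 1 and v == hi) for v in values)
        return max(n / len(values), 1e-6)  # avoid log(0) on empty buckets

    return sum(
        (share(actual, b) - share(expected, b))
        * math.log(share(actual, b) / share(expected, b))
        for b in range(buckets)
    )

baseline = [0.20, 0.30, 0.35, 0.40, 0.50, 0.55, 0.60, 0.70]
live = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85]
if psi(baseline, live) > 0.2:  # common rule-of-thumb alert threshold
    print("Drift alert: route to model-risk / post-market monitoring process")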

FSI transformation programmes often struggle not because they lack controls, but because they lack unified language for describing those controls. The AI Act provides that unified language. OWASP provides the engineering vocabulary to support it. Together they produce a repeatable, defensible assurance model. 

The intersection of the AI Act and the UK CSRB

If the AI Act describes what responsible AI requires, the CSRB sets expectations for how that capability must be secured and operated. 

This creates several practical intersections. 

  1. AI incidents become security incidents

A mis-classification event, poisoning attack, or loss of model integrity could become a reportable CSRB incident where it impacts the availability, integrity, or security of digital services. 
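
As a sketch of that mindset, the snippet below runs AI-specific events through the same impact test as any other candidate security incident. The event names and mappings are invented; actual reporting thresholds will come from the legislation and its guidance.

# Illustrative only: AI events assessed with the same impact categories used
# for any security incident. Event names and mappings are invented; real
# CSRB reporting thresholds will come from the legislation and guidance.
AI_EVENT_IMPACT = {
    "model_poisoning_detected": {"integrity"},
    "mass_misclassification": {"integrity", "availability"},
    "model_artifact_tampered": {"integrity", "security"},
    "inference_service_outage": {"availability"},
}

def candidate_security_incident(event: str) -> bool:
    """Any AI event touching availability, integrity, or security is raised
    to the incident process rather than handled as a data-science anomaly."""
    return bool(AI_EVENT_IMPACT.get(event))

for e in ("model_poisoning_detected", "scheduled_retraining"):
    action = "raise incident" if candidate_security_incident(e) else "log only"
    print(f"{e}: {action}")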

  2. Supplier governance becomes more rigorous

High-risk AI typically involves third-party platforms, APIs, or models. CSRB obligations on digital service providers mean UK organisations must demand both AI-specific artefacts (documentation, testing outputs) and resilience evidence (incident response, service integrity, supply-chain controls). 

  3. Secure MLOps becomes essential infrastructure 

Teams must adopt hardened pipelines, audit-ready training processes, model lineage tracking, and monitoring. These are practices long championed in OWASP’s AI Exchange as part of the Security and Privacy guidance. 
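
As one concrete slice of secure MLOps, here is a minimal lineage-recording sketch: hash the training data and the resulting model artifact so every deployment can be traced back to exact inputs. The file paths and the JSON-lines log format are illustrative assumptions.

# A minimal sketch of model lineage recording: hash the training data and
# the model artifact so deployments are traceable to exact inputs.
# Paths and the JSON-lines log format are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_lineage(dataset: Path, model: Path, log: Path) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_path": str(dataset),
        "dataset_sha256": sha256_of(dataset),
        "model_path": str(model),
        "model_sha256": sha256_of(model),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only, audit-friendly

# record_lineage(Path("train.csv"), Path("model.pkl"), Path("lineage.jsonl"))

An append-only log like this is deliberately boring, and boring artefacts are exactly what auditors and incident responders need.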

In previous discussions about secure delivery, I’ve emphasised that maturity comes not from buying tools but from establishing repeatable assurances. The combination of the AI Act and CSRB compels organisations to formalise this operational muscle. 

Where OWASP fits and why it matters 

Much of the technical community will take guidance from OWASP, not because it is regulatory, but because it is practical. OWASP bridges the gap between high-level law and day-to-day engineering. 

The convergence is evident. 

  • OWASP SAMM: AI governance, policy, risk assessment, secure development, validation, operations.  
  • OWASP ASVS: Security controls and logging patterns needed for AI traceability.  
  • OWASP Cornucopia: Threat modelling that captures misuse, bias, manipulation, and safety harms.  
  • OWASP AI Security & Privacy Guidance:  Direct mapping to AI Act duties around robustness, privacy, dataset security, model management and adversarial ML.  
  • OWASP PSCF: A mechanism to unify regulatory requirements, OWASP guidance, and internal controls. 

The advantage OWASP brings is not compliance with the AI Act per se; it gives practitioners a scaffold for implementing compliance without reinventing controls. 

What organisations should do next

No matter what sector you are in, the following steps apply universally: 

  1. Inventory all AI systems — and I do mean all systems: think low-code, embedded, and even the AI in vendor-supplied tools.  
  2. Classify each use-case per the guidance of the EU AI Act and identify CSRB implications.  
  3. Establish or uplift AI governance using SAMM-style maturity models.  
  4. Strengthen secure MLOps — lineage, testing, monitoring, incident readiness.  
  5. Perform threat modelling, perhaps using tools like Cornucopia adapted to AI risk.  
  6. Map controls to the standards (through the OWASP PSCF) for traceability across regulation, standards, and implementation (see the sketch after this list).  
  7. Build human oversight paths that are operational, not symbolic.  
  8. Engage suppliers early — flow down AI Act and CSRB requirements contractually. 
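
For step 6, here is a minimal sketch of what such a traceability mapping might look like as data. The duty and control identifiers are invented examples; the OWASP PSCF defines the real structure a mapping like this would follow.

# Illustrative traceability mapping from regulatory duty to standard to
# internal control. Duty and control identifiers are invented examples.
CONTROL_MAP = [
    {
        "regulatory_duty": "EU AI Act: human oversight for high-risk AI",
        "standard_guidance": "OWASP SAMM: governance / operations practices",
        "internal_control": "CTRL-014: documented override and kill-switch path",
    },
    {
        "regulatory_duty": "CSRB: incident reporting for digital services",
        "standard_guidance": "OWASP ASVS: logging and alerting controls",
        "internal_control": "CTRL-031: AI events feed the security incident process",
    },
]

for row in CONTROL_MAP:
    print(f"{row['regulatory_duty']} -> {row['internal_control']}")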

This is the moment where security, engineering, data science, and compliance must converge. If we frame AI governance as an extension of practices we already understand rather than an entirely new domain, it becomes manageable, measurable, and ultimately, enabling. 

Closing Thoughts

I’ve been known to argue that the role of modern security is to enable trustworthy autonomy — allowing systems and teams to operate at scale while remaining aligned with organisational intent and within its risk appetite. The EU AI Act and CSRB are not barriers to that ambition; they are catalysts. 

For practitioners, this is an opportunity to redefine assurance as something actionable and human-centred. For organisations, it is a chance to build AI capabilities that are safe, transparent, and resilient from the start. 

If approached thoughtfully, these regulations can do more than impose obligations. 

They can push us toward the kind of engineering discipline we’ve always needed: one where security, governance, and innovation finally move in the same direction.