SR 26-2 Is Here: The 2026 Model Risk Guidance That Finally Gives Validators Teeth

On April 17, 2026, the Federal Reserve, the FDIC, and the OCC (collectively, "the agencies") issued SR Letter 26-2, which replaces the prior model risk management guidance, SR 11-7, issued in 2011. This update refines supervisory expectations for how banking organizations should calibrate their model risk management frameworks. The guidance applies most directly to institutions with total assets exceeding $30 billion, though smaller institutions with complex modeling activities are advised to consider its principles.



Scope and Applicability

The guidance formally excludes simple arithmetic calculations, deterministic rule-based processes, and notably, generative artificial intelligence and agentic artificial intelligence models from the definition of a model. However, the agencies explicitly state that traditional statistical, quantitative, and non-generative artificial intelligence models remain within scope. The primary audience is organizations with over $30 billion in assets, reflecting a tailored supervisory approach that recognizes the lower inherent risk profiles of most community banking institutions.

What Is Covered and What Is Not

SR 26-2 draws a clean line between two categories of artificial intelligence. On one side, traditional statistical models and non-generative, non-agentic AI models are fully within scope. This includes logistic regression for credit scoring, random forests for fraud detection, gradient boosting for loss forecasting, and any probabilistic model that applies statistical, economic, or financial theories to produce quantitative estimates. On the other side, generative AI such as ChatGPT-style models and agentic AI that makes autonomous decisions are explicitly excluded from the guidance. 

The agencies state that these technologies are novel and rapidly evolving, and therefore fall outside the guidance's scope. Simple spreadsheet arithmetic and deterministic rule-based processes with no statistical underpinning are also excluded. For practitioners, this means a bank's existing credit risk, market risk, and stress testing models remain subject to the full model risk management framework, while internal productivity chatbots do not.

How to Treat Probabilistic and AI Models in Practice

For probabilistic models and non-generative AI, the guidance applies the same materiality-based framework as any other quantitative model. U.S. banks under scope must assess each model using two dimensions: exposure (portfolio size and financial impact) and purpose (regulatory significance or critical risk decisions). A machine learning fraud detection model affecting $50 million in transactions may require less rigor than a smaller logistic regression model used for regulatory capital calculations, if the latter serves a more critical purpose. The key operational change is that validators of AI models must now have organizational standing to effect change, not just technical expertise. 

For probabilistic models with inherent uncertainty, banks must document assumptions explicitly and monitor performance drift continuously, not annually. Vendor-supplied AI models receive no lighter treatment; proprietary black-box constraints do not excuse banks from validating conceptual soundness. If a vendor will not provide transparency into model design, development data, or assumptions, banks must either conduct independent back-testing using the bank's own data or limit the model to immaterial use cases.

Main Changes and Technical Nuances

The most significant departure from prior guidance is the formal introduction of a materiality-driven framework. Rather than applying uniform rigor to all models, the agencies now require banking organizations to evaluate model risk through two distinct lenses:

  1. Model Exposure: The quantitative significance of a model's output to business decisions, typically measured by portfolio size or financial impact.

  2. Model Purpose: A qualitative assessment of whether the model supports regulatory requirements or manages critical financial risk exposures.

The interaction of exposure and purpose determines model materiality, which then dictates the depth of validation, monitoring, and governance required. Immaterial models require only identification and periodic monitoring for changes in conditions that could elevate their status. Conversely, higher materiality models warrant comprehensive and rigorous oversight throughout the lifecycle.
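The interaction of the two axes can be sketched in code. The following is a minimal illustration only: the thresholds in `EXPOSURE_TIERS` and the tags in `CRITICAL_PURPOSES` are hypothetical values a bank would calibrate itself, not figures from the guidance.

```python
from dataclasses import dataclass

# Hypothetical thresholds and purpose tags -- each bank would calibrate
# these to its own risk appetite; they are not taken from the guidance.
EXPOSURE_TIERS = [(1_000_000_000, "high"), (100_000_000, "medium"), (0, "low")]
CRITICAL_PURPOSES = {"regulatory_capital", "stress_testing", "credit_decisioning"}

@dataclass
class Model:
    name: str
    exposure_usd: float  # quantitative axis: portfolio size / financial impact
    purpose: str         # qualitative axis: what the output is used for

def exposure_tier(exposure_usd: float) -> str:
    for threshold, tier in EXPOSURE_TIERS:
        if exposure_usd >= threshold:
            return tier
    return "low"

def materiality(model: Model) -> str:
    """A critical purpose elevates materiality regardless of exposure tier."""
    tier = exposure_tier(model.exposure_usd)
    if model.purpose in CRITICAL_PURPOSES:
        return "high" if tier != "low" else "medium"
    return tier

fraud = Model("fraud_ml", 50_000_000, "fraud_ops")
capital = Model("capital_logit", 10_000_000, "regulatory_capital")
print(materiality(fraud), materiality(capital))  # the smaller model tiers higher
```

This mirrors the guidance's central point: the $10 million regulatory capital model outranks the $50 million fraud model because purpose, not exposure alone, drives materiality.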

The guidance also introduces a more explicit expectation regarding aggregate model risk. Institutions must assess risk not only at the individual model level but also across portfolios of models. This includes evaluating dependencies, common assumptions, shared data sources, and correlated methodologies that could cause simultaneous failures. A single point of weakness in a shared data pipeline, for example, could manifest as aggregate risk across multiple high-stakes models.

Effective Challenge and Independence

The agencies reinforce the concept of effective challenge as a non-negotiable component of sound governance. Effective challenge is defined as critical analysis performed by objective experts who possess the technical competence to evaluate model risk, sufficient independence to maintain objectivity, and the organizational standing to compel changes. This elevates the requirement beyond mere peer review to a governance mechanism with teeth. Validation functions must be structured to avoid conflicts of interest, particularly misalignment of incentives between model development and validation reporting lines.



Vendor and Third-Party Products

A critical clarification addresses vendor and third-party models. The guidance states that the use of proprietary products, including those where underlying code or methodology is inaccessible, does not diminish the banking organization's risk management responsibilities. Validation of vendor models must include an assessment of conceptual soundness, design, development data, and ongoing performance. Customizations made to vendor models for specific business needs must be documented, justified, and evaluated as part of validation. The inability to inspect proprietary elements is not an acceptable basis for reducing validation rigor.

Model Development, Validation, and Monitoring

The guidance formalizes three components of validation:

  • Conceptual Soundness: Assessing model design, assumptions, qualitative judgments, and data selection.

  • Outcomes Analysis: Comparing model outputs to real-world results, including back-testing and outlier analysis.

  • Ongoing Monitoring: Evaluating performance against changing products, exposures, data relevance, and market conditions.

Notably, the guidance permits limited circumstances where a model may be used prior to completion of validation, such as an urgent business need. In such cases, the institution must apply heightened attention to model limitations, inform relevant stakeholders, and implement compensating controls including usage limits and closer performance monitoring.

Governance and Documentation

The agencies expect a comprehensive model inventory that supports risk management at both individual and aggregate levels. Documentation must be adequate to ensure continuity of operations, track recommendations and exceptions, and support remediation efforts. Internal audit functions are expected to evaluate the effectiveness of model risk management practices rather than duplicate validation activities.

Enforceability Context

While the guidance explicitly states that non-compliance, standing alone, will not result in supervisory criticism, the agencies preserve their authority to act on any violations of law or unsafe or unsound practices stemming from insufficient management of model risk. Practically, the guidance defines the supervisory baseline: in the event of a model failure or material loss, deviations from its principles are likely to be cited as evidence of inadequate risk management.

Implications for GRC Professionals

The 2026 guidance signals a maturation of model risk management from a technical validation exercise to an integrated governance discipline. GRC professionals should prioritize three actions: first, implementing a tiered inventory that clearly distinguishes material from immaterial models; second, assessing aggregate risk across model portfolios, particularly where shared assumptions or data sources exist; and third, reviewing vendor management agreements to ensure that contractual terms do not impede the validation and ongoing monitoring required by the agencies. The exclusion of generative and agentic artificial intelligence is temporary; the principles articulated in this guidance will likely inform future supervisory expectations as those technologies evolve.



Critical Implications of the Revised Model Risk Management Guidance (SR 26-2)


Four Critical Changes for Risk Managers


1. Redesign Model Tiering Using Dual-Axis Materiality Assessment

Risk managers must now classify all AI predictive models using both exposure (quantitative portfolio impact) and purpose (qualitative regulatory or risk significance), replacing single-dimension risk ratings. This materiality-based framework means a fraud detection AI model affecting $50M in transactions may warrant less rigor than a $10M credit decisioning model if the latter supports regulatory capital calculations. Organizations must rebuild model inventories to document both dimensions, as immaterial models by exposure may still be material by purpose. The tiering directly determines validation depth, monitoring frequency, and governance escalation pathways for each AI risk model.

2. Establish Effective Challenge with Organizational Authority

Validators of AI predictive models must now possess not only technical expertise but demonstrable organizational standing and influence to effect change, moving beyond advisory roles. Risk managers must restructure validation teams to ensure challengers can delay model deployment, escalate concerns to executive committees, and mandate remediation with teeth. This represents a fundamental shift from validation as documentation exercise to validation as governance gate, particularly critical for complex AI models where technical reviewers previously lacked business authority. Second-line model risk functions must now be empowered to override first-line deployment timelines when AI model risks are inadequately addressed.

3. Implement Rigorous Vendor Risk Model Governance

Third-party AI models for credit scoring, fraud detection, or risk forecasting no longer receive lighter treatment despite proprietary limitations, requiring the same conceptual soundness validation as internal models. Risk managers must negotiate with vendors for sufficient transparency into model design, development data, assumptions, and performance metrics to conduct meaningful validation, even when source code is unavailable. Ongoing monitoring and outcomes analysis are now explicitly required for vendor AI models, including documentation of any overlays or adjustments made to customize outputs. Where vendors cannot provide adequate validation evidence, risk managers must either conduct independent testing using the bank's own data or limit the model's application to lower-materiality use cases.

4. Deploy Continuous Model Monitoring Infrastructure

Ongoing monitoring is elevated from periodic review to continuous evaluation, requiring risk managers to implement real-time performance tracking for material AI predictive models across changing data distributions and market conditions. Monitoring frameworks must now explicitly assess whether AI models remain fit-for-purpose as products, client bases, or economic environments shift, with predefined thresholds triggering recalibration or redevelopment. Risk managers must establish outcomes analysis comparing AI model predictions to actual results (back-testing) as a standard validation component, not an optional add-on, particularly for models relying on expert judgment or alternative data. The guidance mandates documentation of model deterioration triggers and response procedures, forcing proactive governance rather than reactive remediation when AI risk models fail.

Priority Actions for SR 26-2 Compliance

1. Materiality Triage

Large U.S. banks should redesign model inventories around purpose and exposure, not a single generic risk score. The guidance is explicit that model materiality depends on the business importance of the use case and the significance of the output to decisions, including regulatory and financial risk use. For predictive AI models, credit loss, fraud, liquidity, and capital-related use cases should be tiered above internal analytics or convenience models. Common practice still overweights model complexity and underweights business consequence; that should be corrected.

2. Challenge Authority

Banks should formalize effective challenge as a control with authority, not as a review function. The guidance requires challengers to have sufficient expertise, independence, organizational standing, and influence to effect change throughout the model lifecycle. That means validation functions need documented rights to delay launch, require remediation, and escalate unresolved issues to executive governance forums. Common advice tends to treat validation as commentary; that is not defensible under this guidance.

3. Continuous Monitoring

Scoped banks should move material predictive AI models to ongoing monitoring with explicit deterioration triggers. The guidance requires monitoring for changes in products, exposures, activities, clients, data relevance, and market conditions, and it states that material deterioration may warrant overlays, adjustment, or redevelopment. Monitoring should therefore include pre-defined thresholds for drift, performance decay, and segmentation instability, not just periodic reporting. Common practice often relies on quarterly review cycles; that is too slow for models embedded in live decisioning flows.
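One way to operationalize pre-defined drift thresholds is a Population Stability Index check on the model's score distribution. The sketch below is a stdlib-only illustration; the 0.10/0.25 trigger levels in the comment are common industry rules of thumb, not figures from the guidance.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) score
    distribution and a recent (actual) one, binned at baseline quantiles."""
    exp_sorted = sorted(expected)
    edges = [exp_sorted[int(len(exp_sorted) * i / bins)] for i in range(1, bins)]

    def shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # floor each share so log() is defined for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative triggers (rules of thumb, not regulatory figures):
#   PSI < 0.10 -> stable; 0.10-0.25 -> investigate; > 0.25 -> escalate
```

Running this check on every scoring batch, rather than quarterly, is what turns "periodic reporting" into the continuous monitoring with explicit deterioration triggers described above.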

4. Third-Party Validation

Banks should validate vendor and other third-party predictive models to the same conceptual standard applied to internally developed models. The guidance states that proprietary constraints do not remove the need to understand design, development data, assumptions, and performance. Where source code is unavailable, banks need compensating controls such as benchmarking, documented customization review, independent testing, and ongoing outcomes analysis. Common advice often treats SOC reports or vendor attestations as sufficient coverage; they are not.

5. Use Expansion Gate

Banks should treat any extension of model use as a new risk event requiring formal review. The guidance states that using a model beyond its intended purpose introduces additional uncertainty and requires additional analysis of limitations and controls. That means a predictive model approved for one portfolio, channel, or decision layer should not be repurposed without re-validation and governance sign-off. Common practice often extends models through informal business requests; that is a control weakness, not agility.

6. Aggregate Risk Map

Banks under scope should maintain a live inventory that maps individual and aggregate model risk, including shared data, assumptions, and dependencies. The guidance specifically calls out aggregate risk arising from interactions among models and from common methodologies or inputs that can fail simultaneously. For predictive AI models, that inventory should also identify upstream data feeds, shared calibration logic, and correlated override points. Common advice tends to validate models in isolation; that misses the concentration risk the guidance now makes explicit.
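The dependency-mapping step above can be sketched as an inverted index over the inventory. Every model, feed, and assumption name below is invented for illustration; the point is the mechanic of surfacing inputs shared by two or more models.

```python
from collections import defaultdict

# Hypothetical inventory entries; all names here are invented.
inventory = {
    "pd_model":       {"feeds": {"bureau_feed", "core_banking"}, "assumptions": {"macro_baseline"}},
    "lgd_model":      {"feeds": {"core_banking"},                "assumptions": {"macro_baseline"}},
    "fraud_model":    {"feeds": {"txn_stream"},                  "assumptions": set()},
    "stress_overlay": {"feeds": {"core_banking"},                "assumptions": {"macro_baseline"}},
}

def shared_dependencies(inv):
    """Invert the inventory: which feeds/assumptions back two or more models."""
    usage = defaultdict(set)
    for model, deps in inv.items():
        for dep in deps["feeds"] | deps["assumptions"]:
            usage[dep].add(model)
    return {dep: sorted(models) for dep, models in usage.items() if len(models) > 1}

print(shared_dependencies(inventory))
```

In this toy inventory, `core_banking` and `macro_baseline` each back three models: exactly the single-point-of-weakness concentration the guidance asks institutions to surface.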



About the Author:

Hernan Huwyler is a risk and compliance executive who advises financial institutions on model risk management, AI governance, and control frameworks. He has led validation functions for global banks and regularly writes on the intersection of quantitative risk and regulatory compliance.

#ModelRiskManagement, #SR262, #SR117, #ModelValidation, #EffectiveChallenge, #AIModels, #RiskGovernance, #ModelRisk, #VendorRiskManagement, #FinancialRegulation, #FederalReserve, #FDIC, #OCC, #GRC, #Compliance, #RiskManagement, #AIGovernance, #ModelMateriality, #SecondLineOfDefense, #BankingRegulation


How to Stop Producing Risk Registers Nobody Uses

The Painful Gap Between Risk Reporting and Risk-Informed Decisions

Most Enterprise Risk Management programs fail in the same quiet way. They produce polished registers, colorful heat maps, and quarterly reports that look impressive in board packs. Then the organization makes its next major capital allocation, acquisition, or vendor choice using a single-page summary with one projected number and zero reference to the risk framework that consumed thousands of hours to build.

I've watched this pattern destroy the credibility of risk functions across industries. The risk team works hard. Stakeholders get interviewed. Likelihood and impact get scored. And none of it touches the actual decisions that determine whether the organization wins or loses. The gap between risk reporting quality and decision quality is where ERM programs go to die.

This article addresses that gap directly. It provides a stage-by-stage implementation approach for building an ERM program that changes how your organization decides, plans, and allocates resources. Every recommendation comes from field-tested practice, not theory. If your ERM program currently produces documents that live in SharePoint between annual reviews, this post shows you how to fix that.


Core Framework: The Three Pillars of Decision-Driven ERM

Effective ERM that actually changes decisions rests on three pillars. Each one addresses a different failure mode I've seen repeatedly in organizations that mistake activity for impact.


Pillar 1: Risk-Informed Performance Management

ERM must live inside the performance management system, not alongside it. This means every major risk links to at least one strategic objective and KPI. When risk shows up in performance reviews and operating rhythms, people pay attention. When it lives in a separate portal, they don't.

The most common failure here is creating the linkage on paper but not in practice. I worked with one organization that mapped all 35 risks to strategic objectives in their GRC platform. Beautiful mapping. But the quarterly business reviews still used a completely separate slide deck with no risk content. The fix was simple but politically difficult: we added a mandatory "risk and assumption" section to the existing QBR template and made the business unit head (not the risk team) responsible for completing it. Adoption jumped from near zero to 80% within two quarters because the accountability sat with the person who owned the performance conversation.


Pillar 2: Risk Analysis Embedded in Decision Workflows

Every significant decision, from capital expenditure approvals to vendor selections to product launches, must include explicit risk reasoning. Not a generic "risk section" pasted at the end of a business case. A structured analysis of key assumptions, downside scenarios, and alignment with risk appetite.


Do not try to retrofit risk analysis into existing decision workflows by adding a new form or approval gate. That creates resentment and checkbox behavior. Instead, redesign the decision paper template itself. Add three mandatory questions directly into the body of the document: "What are the top three assumptions this recommendation depends on?" "What happens if each assumption is wrong?" "How does this fit within our stated risk appetite?" When these questions sit inside the template that decision-makers already complete, risk thinking becomes part of the work rather than extra work.


Pillar 3: Distributions Replace Point Estimates

Organizations addicted to single "best guess" numbers make systematically overconfident decisions. Fighting this addiction requires replacing point estimates with ranges, scenarios, and probability distributions for all material assumptions.

Do not try to convert every number in your organization to a distribution. Start by identifying "high-leverage assumptions," the five or six variables that most affect NPV, margin, schedule, or safety in your biggest decisions. Convert those to three-point estimates (minimum, most likely, maximum) first. I made the mistake early in my career of trying to build full stochastic models for everything. The result was analysis paralysis and skepticism from leadership. Starting with just the high-leverage variables keeps the effort manageable and produces results that are visually obvious to executives who have never seen a tornado chart before.
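A three-point estimate can be carried forward as a triangular distribution with nothing beyond the standard library. The sketch below uses a hypothetical demand assumption; note that `random.triangular` takes its arguments in `(low, high, mode)` order, not min/mode/max.

```python
import random

def triangular_samples(low, mode, high, n=10_000, seed=42):
    """Sample a three-point (min / most likely / max) estimate as a
    triangular distribution."""
    rng = random.Random(seed)
    # random.triangular takes (low, high, mode)
    return [rng.triangular(low, high, mode) for _ in range(n)]

# Hypothetical high-leverage assumption: unit demand next year
demand = sorted(triangular_samples(80_000, 100_000, 150_000))
p10, p50, p90 = (demand[int(len(demand) * p)] for p in (0.10, 0.50, 0.90))
print(f"P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")
```

Presenting P10/P50/P90 next to the traditional single number is usually enough to start the conversation; full stochastic modeling can wait.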


Stage 1: Reframe ERM and Align It to the Business Cycle

The first implementation stage kills the annual risk assessment ritual and replaces it with a rolling cadence tied to how the business actually operates.

Map your organization's existing planning calendar: budgeting cycle, strategy refresh, product roadmap reviews, capital planning windows. Then attach risk input as a standard step in each of those existing processes. Risk analysis during budgeting means budget assumptions get challenged. Risk analysis during strategy refresh means strategic bets get stress-tested. Risk analysis during product roadmap reviews means launch decisions include downside scenarios.


The responsible party for each touchpoint is the business owner, not the risk function. The risk function sets the method, provides tools, and samples for quality. But the business leader presents the risk view alongside the performance view. This matters because risk ownership that sits with a central function creates a dynamic where business leaders treat risk as "someone else's job."

What to do: Collapse your risk inventory from whatever unwieldy number it has grown to (I've seen 200+) down to 10 to 20 enterprise-level risks with clear aggregation logic. Local risks roll up into enterprise themes. The board sees 15 risks, not 150. Business units manage their local registers, but reporting flows upward through defined aggregation rules.

The hardest part of this stage is getting the CEO and CFO to agree that risk content belongs in existing performance forums rather than in separate risk committee meetings. I've found the most effective argument is financial: show them a past decision where a single-point estimate led to a materially different outcome than what a range-based analysis would have predicted. One concrete example of a budget miss or project overrun that was foreseeable with basic scenario analysis does more to shift executive behavior than any amount of framework documentation. Find that example in your own organization's recent history. It exists. I guarantee it.


Stage 2: Build Risk Analysis Into Decision Templates and Workflows

This stage addresses the specific mechanics of getting risk reasoning into the documents and approval processes that govern major decisions.

Start by mapping every "decision point" where risk analysis should be mandatory. Board approvals. Capital investments above a defined threshold. Acquisitions. Large contracts. Major technology choices. Key product or market entry decisions. For each type, define a minimum level of analysis. Small decisions get a short qualitative checklist. Large, irreversible, or high-uncertainty bets get full quantitative modeling.


For every significant contract, investment, or vendor choice, attach a one to two page mini risk assessment. The template should cover: objectives, key assumptions, top five risks with likelihood and impact ratings, existing controls, residual risk rating, and proposed mitigations. This format works because it's short enough to complete in an hour but structured enough to surface real issues.


Standardize quick techniques for smaller assessments: what-if questions, simple decision trees, bow-tie diagrams, or 5x5 matrices. Reserve deeper tools like FMEA, HAZOP, or fault-tree analysis for complex technical or safety-critical decisions. Set clear thresholds (contract value, strategic impact, irreversibility, public or ESG exposure) that trigger the more advanced assessment. This way your organization runs dozens of mini-assessments per month with sensible prioritization, not bureaucratic uniformity.

Require that any recommendation comparing Option A to Option B includes risk-adjusted reasoning. Not just base-case numbers. The proposal must show what happens to each option under stress. Which option breaks first? Which option has a wider range of possible outcomes? This single requirement forces genuine analytical thinking and prevents the common dysfunction where the "highest NPV" option wins by default even when its returns depend on a single fragile assumption.

Watch out for "fake risk-based" methods. I've audited vendor and contract risk methodologies across multiple organizations and found that many rely on uncalibrated scoring, arbitrary matrices, or vague checklists that produce a number but do not actually improve the decision. The test is simple: can you show me a specific instance where this risk methodology changed the selection of a vendor, the structure of a contract, or the design of a project? If the answer requires more than 30 seconds of thought, the methodology is theater. Replace it with structured identification, explicit assumptions, harmonized scales, and wherever possible, quantification tied to financial or operational impacts.

Stage 3: Replace Point Estimates With Ranges and Simulations

This is where decision-driven ERM gets quantitatively serious. Most organizations plan using single numbers for exchange rates, commodity prices, demand volumes, system uptime, and dozens of other variables. Every experienced professional knows these numbers are wrong. But the organization plans as if they're certain, then acts surprised when reality differs.

For key drivers, require ranges or probability distributions instead of single numbers. Start with three-point estimates (minimum, most likely, maximum) because they're intuitive and fit into existing spreadsheet workflows. Show P10, P50, and P90 outcomes next to the traditional single case. Standardize a small set of "risk views" for every major item: base case, conservative (P80 to P90), aggressive (P20), and stress case. Make approval documents reference which profile management is accepting.

For large projects, site selections, portfolio decisions, and annual budgets, run Monte Carlo simulations on the combined distributions of key assumptions. Report results in terms executives can act on: probability of loss, probability of meeting budget or schedule, value at specific percentiles, and which variables contribute most to variance. Tornado charts that show "FX drives 40% of your outcome variance" focus mitigation efforts far better than a color-coded heat map ever could.

Build simple internal libraries of typical distributions for recurring drivers. FX volatility ranges. Load factor distributions. Failure rate curves. Price curve bands. When teams can reuse validated assumptions instead of inventing numbers from scratch, the quality of analysis goes up and the time required goes down. I spent months building these libraries at one organization and it cut the time to produce a quantified risk view from two weeks to three days.

The cultural shift matters more than the technical one. I watched a capital allocation committee change their decision after seeing simulation output for the first time. The "highest NPV" option had a 35% probability of delivering negative returns once you modeled realistic input ranges. The second-ranked option had lower expected returns but only a 12% probability of loss. They chose robustness over optimism. That single moment did more to establish the credibility of quantitative risk analysis than two years of framework presentations. Find your version of that moment. Run the simulation on a decision that's already been made and show leadership what they would have seen if they'd had this view at the time. The reaction will tell you whether your organization is ready.

Stage 4: Governance, Ownership, and Culture Infrastructure

Without accountability structures, everything in the previous three stages degrades within 12 months. I've seen it happen. An organization builds beautiful decision templates, runs impressive simulations, and then slowly reverts to old habits because nobody's performance goals include risk-adjusted outcomes.

Define risk ownership at the level of specific "risk objects": products, processes, portfolios, or business units. Each risk object gets a named owner. That owner's performance goals explicitly include risk-adjusted outcomes. Not just revenue. Not just volume. This connects risk management to compensation and career progression, which is the only reliable driver of sustained behavior change.

Run short monthly "risk clinics" with each business unit. These replace the annual committee meeting that tries to cover everything and covers nothing well. In a 60-minute clinic, review changes in the unit's risk profile, challenge key assumptions, and adjust plans. The risk function facilitates. The business unit leads. Keep the format consistent: what changed since last month, what are the top three risks to this quarter's objectives, what decisions are coming up that need risk input.

Build an explicit expectation that major decisions (capex approvals, acquisitions, product launches, outsourcing) must reference key risks and mitigations from the ERM system. Treat the absence of this reference as a process failure. Not a documentation gap. A process failure that gets flagged in the same way a missing financial approval would get flagged. This is a governance design choice that signals organizational seriousness.

The single most common dysfunction I see in ERM governance is the "risk owner in name only" pattern. Someone's name appears next to a risk on the register, but their actual performance review, bonus criteria, and promotion case make zero reference to how they managed that risk. The fix requires executive sponsorship from the CEO or CFO to mandate that risk-adjusted KPIs appear in performance scorecards for anyone who owns a top-20 enterprise risk. Without this, risk ownership is decorative. I failed to get this done at one organization because I tried to push it through the risk committee instead of the compensation committee. The lesson: risk ownership is a people and incentives problem, not a risk framework problem.


Implementation Tips

These four tips apply across all stages and address the patterns that most commonly cause decision-driven ERM programs to stall or revert.

Tip 1: Maintain Method Integrity Over Time

ERM methods degrade naturally. Templates get shortened. Simulation steps get skipped when deadlines are tight. Scoring scales drift as new people join and interpret criteria differently. Schedule a semi-annual "method health check" where the risk function reviews a sample of recent decision papers, mini-assessments, and simulation outputs against the defined standards. Flag deviations. Retrain where needed. Publish a short "quality scorecard" that shows which business units are maintaining standards and which are slipping. Transparency creates peer pressure that formal compliance never matches.

Tip 2: Handle the "Risk Champion" Role Carefully

Many organizations appoint "risk champions" in each business unit to act as liaisons with the central risk function. This works when champions have genuine credibility and seniority in their unit. It fails when the role gets assigned to the most junior person available or treated as administrative overhead. Require that risk champions hold a position at least one level below the unit head. Give them explicit time allocation (minimum 10% of their role). Include champion effectiveness as a factor in their performance review. I've seen champion networks transform ERM adoption when they're staffed with respected operators. I've seen them become an excuse for everyone else to ignore risk when they're staffed with interns.

Tip 3: Document Decision Rationale, Not Just Decision Outcomes

Original implementation tip: Create a simple "decision record" template that captures: the options considered, the risk analysis for each option, the trade-offs discussed, the risk appetite alignment, and the rationale for the final choice. Store these records in a searchable repository. Review a sample annually to check whether risk information was captured, how it influenced the choice, and how outcomes compared to expectations. This feedback loop is where organizational learning happens. Most organizations skip it entirely. The ones that do it consistently develop a pattern-recognition capability that makes future decisions measurably better. One organization I worked with found that 60% of project overruns in a three-year sample traced back to the same two assumption categories that were consistently treated as deterministic when they should have been modeled as ranges.

Tip 4: Be Skeptical of Dashboard-First GRC Platforms

Original implementation tip: Before committing to any ERM or GRC platform, ask the vendor one question: "Show me three examples where your platform's output changed an actual decision at a client organization." If they can only show you dashboards, taxonomies, and workflow automations, proceed with extreme caution. The best platforms provide centralized risk repositories, standardized taxonomies, automated data feeds from incidents and audit findings, scenario analytics, and integration with the BI tools and project portfolio systems your leaders already use daily. The worst platforms produce beautiful screens that no decision-maker ever opens. Run a pilot focused on one specific decision type before scaling. Measure whether the pilot improves option selection or outcome quality, not just reporting speed.

Key References

The following standards and frameworks provide authoritative guidance for building decision-driven ERM programs:


ISO 31000:2018, Risk Management Guidelines, provides the foundational principles and process for integrating risk management into organizational governance and decision-making

COSO ERM Framework (2017), Enterprise Risk Management: Integrating with Strategy and Performance, directly addresses the linkage between risk management and strategic planning

IEC 31010:2019, Risk Assessment Techniques, catalogs and guides the selection of specific risk assessment methods (Monte Carlo, FMEA, bow-tie, fault tree, and others) matched to decision context

ISO 31022:2020, Guidelines for the Management of Legal Risk, extends risk management principles to legal and contractual decision-making

NIST Risk Management Framework (SP 800-37), while focused on information systems, provides a strong model for embedding risk analysis into system acquisition and authorization decisions

The Orange Book (HM Treasury, UK), Management of Risk: Principles and Concepts, offers practical templates for integrating risk analysis into investment and spending decisions

IIA Three Lines Model (2020), provides the governance structure for separating risk ownership, risk oversight, and independent assurance

Closing

When ERM stays a compliance artifact, it consumes budget, absorbs staff time, and produces documents that create an illusion of control. Decisions continue to rely on single-point estimates, gut feel, and the loudest voice in the room. The risk register gets updated annually, presented quarterly, and referenced never. The organization pays the full cost of risk management and receives almost none of the benefit.

When ERM operates as a living decision system, every major choice carries an explicit view of uncertainty, a structured comparison of options under stress, and a clear statement of which risks leadership is consciously accepting. The risk register becomes a hub connected to controls, incidents, KPIs, and projects. Simulations replace single guesses. Performance conversations shift from "you missed the number" to "where did we land in the distribution, and what did we learn?" The difference between these two states determines whether your organization manages risk or merely documents it.

What's one major decision your organization made in the last year that would look completely different if someone had modeled the downside honestly?

Skills for Compliance Officers, Risk Managers, and Auditors

7 Career Capabilities That Will Separate Compliance Officers Who Thrive in 2026 From Those Who Get Replaced by Algorithms


ING just announced 1,250 job cuts in its compliance operations. ABN Amro plans to replace 35% of its AML division with AI. The Dutch audit office published a report questioning whether the €1.4 billion the banking sector spends annually on anti-money laundering checks actually produces effective outcomes.

Read that last sentence again. The government auditor is asking whether the entire manual compliance model works.

This is not a future scenario. This is happening now, across multiple banks, in one of Europe's most regulated markets. And it raises a question that every compliance officer, risk manager, and internal auditor should be asking themselves today: if my primary value comes from executing manual processes that AI can do faster and more consistently, what exactly is my professional future?

The answer depends entirely on skills. Not certifications. Not years of experience. Skills.

I have spent the last fifteen years working with compliance functions across financial services, industrials, and technology companies. The pattern I see repeating is consistent: the professionals who can quantify risk, challenge AI outputs, and translate regulatory complexity into financial terms the business can act on are becoming more valuable every quarter. The ones who built careers around checklist execution, manual alert processing, and qualitative risk scoring are watching their roles disappear. Sometimes gradually. Sometimes overnight.

This post identifies the seven skills that will define professional survival and advancement in compliance, risk, and audit roles through 2026 and beyond. Each one is grounded in what I see organizations actually hiring for, paying premiums for, and struggling to find.

 

Quantification: The Skill That Changes Everything

Here is the dividing line. A compliance officer who says "this risk is high" is offering an opinion. A compliance officer who says "the expected annual loss from this obligation failure is €2.3 million, with a severe but plausible exposure of €9.6 million at the 95th percentile" is offering a decision.

Boards act on the second one. They file the first one.

Quantification means expressing compliance exposure in currency. Expected annual loss. Value at Risk. Conditional Value at Risk. Return on compliance investment. Loss exceedance curves. These are not exotic financial instruments. They are the basic vocabulary of every other risk function in the organization. Credit risk quantifies. Market risk quantifies. Operational risk quantifies. Compliance still shows up with colors.

The Dutch audit office report captures the consequence of this gap perfectly. The Netherlands spends €1.4 billion per year on AML compliance. Nobody can demonstrate whether it works. That is what happens when a function operates for decades without measuring its own effectiveness in terms that finance and strategy teams can use.

Original implementation tip: Start with your five most material compliance obligations. For each one, estimate a frequency (how often could this go wrong, expressed as events per year) and a severity range (what would it cost when it does, expressed as a currency interval with a confidence level). Feed those into a compound Poisson-Lognormal Monte Carlo simulation. You can do this in a free Google Colab notebook with code available on GitHub. No statistics degree required. The output is a loss distribution that tells you more about your compliance exposure in one afternoon than your entire qualitative risk register has told the board in the last five years.
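The frequency-severity approach described above can be sketched in a few lines. This is a minimal illustration, not the notebook the text refers to: the function name, the percentile-based severity inputs, and the example figures (0.5 events per year, per-event loss between €50k at P10 and €2M at P90) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_loss(freq_per_year, sev_p10, sev_p90, n_years=100_000):
    """Compound Poisson-lognormal Monte Carlo for one compliance obligation.

    freq_per_year : expected failure events per year (Poisson lambda)
    sev_p10, sev_p90 : 10th/90th percentile estimates of loss per event,
                       used to back out the lognormal severity parameters
    """
    # Derive lognormal mu/sigma from the two percentile estimates
    z10, z90 = -1.2816, 1.2816  # standard normal quantiles at 10% and 90%
    sigma = (np.log(sev_p90) - np.log(sev_p10)) / (z90 - z10)
    mu = np.log(sev_p10) - z10 * sigma

    counts = rng.poisson(freq_per_year, n_years)  # events in each simulated year
    total = np.zeros(n_years)
    for i, n in enumerate(counts):
        if n:
            total[i] = rng.lognormal(mu, sigma, n).sum()
    return total

# Hypothetical obligation: ~0.5 failures/year, per-event loss P10 €50k, P90 €2M
losses = simulate_annual_loss(0.5, 50_000, 2_000_000)
print(f"Expected annual loss: €{losses.mean():,.0f}")
print(f"95th percentile:      €{np.percentile(losses, 95):,.0f}")
```

The output distribution is exactly the kind of artifact the tip describes: a mean (expected annual loss) plus tail percentiles that can be quoted to the board in currency rather than colors.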

Judgment: The One Thing AI Cannot Automate

AI can process 10,000 transaction alerts in the time it takes a human analyst to review three. It can scan contracts for misaligned clauses, monitor sanctions lists in real time, and flag anomalous expense patterns across the entire organization.

What it cannot do is decide.

An AI model that flags a suspicious transaction has produced a signal. Whether that signal warrants investigation, escalation, a suspicious activity report, or closure with documented rationale requires judgment. Judgment about regulatory expectations in that specific jurisdiction. Judgment about the customer relationship and its commercial context. Judgment about whether the pattern represents genuine risk or a false positive that, if escalated, would waste investigative resources and potentially harm a legitimate customer.

The Dutch audit office report noted that the current system of strict AML controls "does not always lead to useful investigations" and can have "serious consequences for ordinary people." That is a judgment failure, not a technology failure. The controls generated activity. Nobody ensured the activity produced outcomes.

When ING reduces 1,250 FTEs and shifts to AI-driven processing, the compliance professionals who remain need better judgment than the ones who left. They are handling the cases that the algorithm could not resolve. They are calibrating the thresholds that determine what the algorithm escalates. They are explaining to the regulator why a particular decision was made. Every one of those tasks requires experience, context, and the ability to exercise discretion under uncertainty.

Original implementation tip: When you review an AI-generated alert or risk flag, document not just your decision but your reasoning. Write two sentences explaining why you escalated or closed the case. After twelve months, review those documented rationales. You will find patterns in your own judgment that improve future decisions and create an auditable record that regulators value far more than a closed-case count.

AI Fluency: Working With the Machine, Not Around It

AI fluency for compliance professionals has nothing to do with writing code. It has everything to do with understanding what the model is doing well enough to trust it where it is reliable and challenge it where it is not.

This means knowing how to ask the right questions. What data was the model trained on? What assumptions drive the alert thresholds? Where are the known blind spots? What is the false positive rate, and what is the cost of each false positive in analyst time? What happens when the input data quality degrades?

I worked with a financial institution that deployed an AI-powered transaction monitoring system. The compliance team treated it as a black box. Alerts came in, analysts processed them, case counts went into the quarterly report. Nobody asked whether the model was actually catching the right things. When an external review tested the system against known typologies, the detection rate for a specific category of trade-based money laundering was below 12%. The model was generating thousands of alerts for low-risk patterns while missing the high-risk ones entirely.

The compliance team did not lack intelligence or dedication. They lacked the fluency to interrogate the tool they were using every day.

Original implementation tip: Ask your technology team to show you the model's confusion matrix for the last quarter. It will tell you how many true positives, false positives, true negatives, and false negatives the system produced. If nobody can produce this information, you have a tool, but you do not have a control. A control you cannot measure is a control you cannot defend.
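If the technology team cannot produce a confusion matrix, it can be built from nothing more than a list of alerts and their investigation outcomes. A minimal sketch follows; the toy data (10 alerts, 4 confirmed, 2 missed cases) is invented for illustration.

```python
def confusion_metrics(predicted_flags, actual_suspicious):
    """Compute the confusion matrix and basic alert-quality metrics.

    predicted_flags   : list of bools, True where the model raised an alert
    actual_suspicious : list of bools, True where review confirmed real risk
    """
    pairs = list(zip(predicted_flags, actual_suspicious))
    tp = sum(p and a for p, a in pairs)
    fp = sum(p and not a for p, a in pairs)
    fn = sum(not p and a for p, a in pairs)
    tn = sum(not p and not a for p, a in pairs)

    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of alerts that were real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real cases caught
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "precision": precision, "recall": recall}

# Toy quarter: 10 alerts raised (4 confirmed), plus 2 confirmed cases missed
predicted = [True] * 10 + [False] * 90
actual = [True] * 4 + [False] * 6 + [True] * 2 + [False] * 88
print(confusion_metrics(predicted, actual))
```

A recall of 0.67 on this toy data is the number that matters: it says a third of genuinely suspicious cases never generated an alert, which is precisely the failure mode the trade-based money laundering example above describes.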

Regulatory Mapping: Conflicts, Overlaps, and the Before-the-Fact Discipline

Regulatory fluency in 2026 is not about memorizing rules. It is about mapping obligations across jurisdictions, identifying conflicts before they create exposure, and translating regulatory expectations into operational requirements that the business can actually execute.

The complexity is real and accelerating. The EU AI Act imposes requirements on high-risk AI systems that may conflict with data minimization principles under GDPR. Cross-border data localization requirements in one jurisdiction clash with centralized processing mandates in another. Anti-corruption reporting thresholds differ between the FCPA, the UK Bribery Act, and local legislation in every market where the organization operates.

A compliance officer who can identify these conflicts, quantify the exposure on each side, and recommend a documented compliance path with a defensible rationale is providing strategic value. One who simply flags the conflict and asks the business to "seek legal advice" is adding a step to the process without reducing risk.

The most important application of regulatory fluency happens before the organization accepts the obligation. Before signing the contract. Before announcing the ESG commitment. Before entering the new market. Before launching the AI-powered product. At that point, terms can be changed, commitments narrowed, controls built first, and deal structures adjusted. Once the promise is made, the options get slower and more expensive.

Original implementation tip: For every new market entry, major contract, or public commitment, create a one-page obligation conflict map. List the top five obligations the decision creates. For each, identify whether any conflict exists with obligations in other jurisdictions where the organization operates. Where conflicts exist, quantify the exposure for each compliance path and document the chosen approach with its rationale. This single artifact will be the most valuable document in your file if a regulator ever asks why you chose one path over another.

Making Your Decisions Defensible

A compliance decision that cannot be reconstructed and explained six months later is not a decision. It is a liability.

Evidencing is the discipline of documenting risk assessments, treatment decisions, control design rationale, and residual risk acceptance in a way that creates a defensible record. Not for the sake of documentation, but because regulators, auditors, and courts evaluate compliance programs based on what can be demonstrated, not what was intended.

ISO 37301 explicitly links a robust compliance management system to evidence of due diligence that can mitigate corporate liability. In jurisdictions that recognize effective compliance programs as a mitigating or exonerating factor (Spain, France, Brazil, the UK, and increasingly the US through DOJ guidance), the quality of your evidence directly affects the severity of your sanctions.

I have seen organizations with excellent compliance programs receive harsh regulatory treatment because they could not produce the documentation to prove what they had done. And I have seen organizations with modest programs receive favorable treatment because they could demonstrate a clear, documented chain of risk assessment, decision, control, and monitoring.

The difference was not the quality of the compliance work. It was the quality of the evidence.

Original implementation tip: For every risk that exceeds your stated tolerance, create a treatment decision record with five elements: the quantified exposure before treatment, the treatment option selected with its cost, the expected reduction in exposure, the residual risk explicitly accepted, and the name and level of the person who approved the acceptance. This record takes ten minutes to create and can save millions in regulatory proceedings.
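The five-element record above is simple enough to enforce as a structured template rather than free text. A possible shape, with every identifier and figure hypothetical:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class TreatmentDecisionRecord:
    """The five elements of a defensible risk-treatment decision."""
    risk_id: str
    exposure_before_eur: float        # quantified exposure before treatment
    treatment_selected: str           # option chosen, with its annual cost
    treatment_cost_eur: float
    expected_reduction_eur: float     # expected reduction in exposure
    residual_risk_accepted_eur: float # residual risk explicitly accepted
    approved_by: str                  # name and level of the approver
    approval_date: date

record = TreatmentDecisionRecord(
    risk_id="AML-004",                # hypothetical identifiers and figures
    exposure_before_eur=2_300_000,
    treatment_selected="Enhanced transaction screening",
    treatment_cost_eur=400_000,
    expected_reduction_eur=1_300_000,
    residual_risk_accepted_eur=1_000_000,
    approved_by="Chief Risk Officer",
    approval_date=date(2026, 3, 1),
)
print(asdict(record))
```

Storing these as structured records rather than prose makes the annual sample review described earlier trivial: the repository can be queried for every acceptance above a threshold, or every approval below a given seniority level.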

Spending Where It Matters

Compliance budgets are finite. The obligation universe is not. Every organization faces more compliance requirements than it can address with maximum intensity simultaneously. The skill that separates effective compliance leaders from overwhelmed ones is the ability to allocate resources where the expected loss reduction justifies the cost.

This sounds obvious. Watch how many organizations skip it.

The standard approach is to apply roughly uniform compliance intensity across all obligation domains, driven by checklist coverage rather than risk-weighted exposure. The result is predictable: the organization spends heavily on low-risk obligations where the expected loss is modest and underinvests in high-risk obligations where the expected loss is material. The qualitative risk register cannot reveal this misallocation because it does not express exposure in comparable units.

The return on compliance investment formula is simple. Take the reduction in expected annual loss attributable to a control, subtract the annual cost of the control, divide by the annual cost of the control. If a new automated monitoring system costs €400,000 per year and reduces expected annual AML-related compliance losses from €2.8 million to €1.5 million, the ROCI is 225%. That is a compelling business case expressed in language that finance teams understand and approve.
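The formula reduces to three lines of code, shown here with the monitoring-system figures from the text:

```python
def roci(expected_loss_before, expected_loss_after, annual_control_cost):
    """Return on compliance investment: (loss reduction - cost) / cost."""
    reduction = expected_loss_before - expected_loss_after
    return (reduction - annual_control_cost) / annual_control_cost

# The example from the text: EAL falls from €2.8M to €1.5M for €400k/year
print(f"ROCI: {roci(2_800_000, 1_500_000, 400_000):.0%}")  # prints "ROCI: 225%"
```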

Every compliance budget request should be framed this way. "This €200,000 investment reduces our expected annual loss by €750,000" works. "We need this because the regulation requires it" does not.

Original implementation tip: Rank your top ten compliance obligations by expected annual loss. Then rank them by current compliance spending. Compare the two lists. In my experience, the correlation is disturbingly low. The mismatch between where you spend and where your exposure actually sits is the single highest-value finding your quantitative risk assessment will produce.
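The two-list comparison can be made precise with a rank correlation. A minimal sketch, with entirely hypothetical exposure and spend figures chosen to show the kind of mismatch the tip describes:

```python
def rank(values):
    """Rank values descending (1 = largest); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical figures per obligation, in EUR millions
expected_loss = [4.2, 2.9, 2.1, 1.4, 0.9, 0.6, 0.5, 0.3, 0.2, 0.1]
current_spend = [0.2, 0.4, 0.3, 0.9, 1.1, 0.5, 0.8, 1.0, 0.7, 0.6]
print(f"Rank correlation: {spearman(expected_loss, current_spend):.2f}")
```

A correlation near +1 would mean spend tracks exposure; this invented data produces a negative value, the pattern where the heaviest spending sits on the lowest-exposure obligations.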

Speaking the Language of the Business

Every skill described above becomes useless if the compliance officer cannot communicate findings in terms that decision-makers act on. Translation is the ability to convert regulatory complexity, risk quantification, and control recommendations into the financial and operational language that executives, boards, and business unit leaders use to make decisions.

Decision-makers act on euros. They do not act on risk ratings. They do not act on colors. They do not act on compliance jargon.

A compliance officer who tells the board "we have 47 high risks" has produced information that prompts no specific action. A compliance officer who tells the board "our expected annual compliance loss is €4.2 million, with a P80 exposure of €8.1 million and a tail at P99 of €95 million, and our current reserves cover only to the 55th percentile" has produced a statement that triggers a budget discussion, an insurance review, and a strategic conversation about which obligations create the most exposure.

The difference between these two presentations is not sophistication. It is professional utility. The first produces documentation. The second produces decisions.

Original implementation tip: Before every board or committee presentation, test your key message against this question: "Can a CFO act on this statement without asking for additional information?" If the answer is no, rewrite it until the answer is yes. Replace "high risk" with a currency range. Replace "significant exposure" with a percentile from your simulation. Replace "we recommend enhanced controls" with "this €300,000 investment reduces our P80 exposure from €8 million to €3.5 million." The reaction in the room will change immediately.

Understanding the 2026 skills model for GRC roles

The mistake many firms make is treating future skills as a training catalogue problem.

It is not.

This is a control model problem, a governance design problem, and a workforce economics problem. Once AI and automation take over repetitive tasks, the remaining human work changes shape. That means the required skills also change shape. Fast.

A useful way to think about this is through three buckets.

Transferred skills

These are the capabilities that still anchor professional credibility.

Regulatory judgment still matters. Ethical judgment still matters. Clear documentation still matters. Institutional memory still matters. If you cannot explain why a decision was made, or what a regulator is likely to focus on, no analytics tool will save you.

Original implementation tip: do not treat legacy expertise as “old knowledge.” Extract it systematically. Build decision logs from experienced staff before attrition or restructuring removes your best practical judgment.

Sharpened skills

These are existing skills that now need a higher level of precision.

Communication is the best example. In 2026, compliance, risk, and audit professionals must explain complex risks to business leaders who are moving faster, using more technology, and tolerating less ambiguity. The old style of long memos, defensive language, and generic caveats gets ignored.

Risk prioritization also sits here. You now need to distinguish quickly between a control issue, a design issue, a model issue, and a real exposure issue.

Original implementation tip: if your team still writes findings that cannot be converted into a business decision within five minutes, your communication model is already outdated.

New skills

This is where the real shift sits.

Data literacy. AI oversight. Workflow design. Model skepticism. Cross-border digital regulatory fluency. These were once specialist capabilities. They are becoming baseline skills for high-value governance roles.

This does not mean every compliance officer must code, every auditor must become a data scientist, or every risk manager must build models. It means they must understand enough to challenge outputs, spot weak assumptions, and defend positions under scrutiny.

Original implementation tip: stop designing training around job families alone. Design it around decisions your team must make in the next 12 months.

[Suggested visual: a simple three-column diagram titled “Transferred, Sharpened, New Skills for 2026 GRC Roles”]

Stage 1: Build data literacy before you talk about AI

Start here.

Most teams want to jump straight into AI training because it sounds urgent and visible. In practice, the bigger failure point is much more basic. People cannot interpret dashboards, exception reports, alert quality metrics, model performance summaries, or data lineage issues with enough confidence to challenge what they see.

That weakness creates a quiet professional risk. A compliance officer who cannot interrogate data becomes dependent on whoever built the dashboard. A risk manager who cannot question assumptions behind thresholds becomes a consumer of outputs, not an owner of risk judgment. An auditor who cannot test data reliability properly ends up auditing process theatre.

What to implement:

  • Basic data fluency training for all GRC roles
  • A standard review method for dashboard quality
  • A simple model of data quality checks for governance teams
  • Case exercises on false positives, false negatives, and threshold design
  • Role-specific training on how data feeds decisions

The responsible parties should be shared. Compliance leadership defines use cases. Data teams explain structures and limitations. Internal audit helps design challenge routines. Risk functions connect metrics to exposure.

One detail matters a lot here. Use your own data examples. Not vendor demos. Not generic training screenshots. Real internal dashboards, real alert patterns, real escalation logs. Teams learn faster when the examples are familiar and slightly uncomfortable.

I learned this the hard way. Years ago, I helped run analytics training using polished external case studies. People liked the sessions. Nothing changed. When we switched to the organization’s own ugly, inconsistent reports, participation dropped for a week, then quality of challenge went up sharply. That is when the training started working.

Original implementation tip: teach governance teams to ask four questions every time they see a dashboard: what is missing, what has changed, what is the threshold logic, and what decision should this support?

Stage 2: Move from AI enthusiasm to AI oversight

This is where many teams get exposed.

There is a big difference between using AI and governing AI. Most organizations are still much better at the first than the second. They can buy the tool, run the pilot, automate the workflow, and announce efficiency gains. They are far less prepared to answer harder questions about explainability, control effectiveness, model drift, fairness, escalation logic, and accountability.

That gap is now a live governance issue.

For compliance officers, the skill is not coding. It is understanding what the tool is doing, where it can fail, how decisions are documented, and when human review must override automation. For risk managers, the skill is understanding model assumptions and residual exposure. For auditors, the skill is testing whether the governance around the tool is real or decorative.

What to implement:

  • AI use case inventory with clear ownership
  • Minimum control requirements for AI-enabled decisions
  • Challenge sessions for model outputs and thresholds
  • Documentation standards for explainability and overrides
  • Audit steps tailored to automated workflows

Responsible parties should be explicit. The first line owns operational use. Risk sets challenge and model governance expectations. Compliance tests legal and regulatory implications. Audit reviews design and operating effectiveness.

One common failure deserves attention. Firms often reduce headcount before they upgrade capability. That creates the worst possible sequence. Work is automated, people leave, and the remaining staff have not yet learned to govern the new environment. The operation becomes cheaper. The exposure becomes harder to see.

Original implementation tip: never approve an AI-related staff reduction unless the governance capability map has been signed off first. Efficiency without oversight maturity creates hidden regulatory debt.

Stage 3: Strengthen regulatory fluency for digital and cross-border change

Regulatory fluency in 2026 means more than knowing the rulebook.

You need to understand how fast-moving digital regulation, privacy regimes, AI laws, outsourcing standards, conduct expectations, and cross-border obligations interact. That interaction is where real mistakes happen. A team can know each rule in isolation and still fail badly when obligations collide across products, jurisdictions, and data flows.

This is now a daily problem. AI systems cross borders. Customer data moves through vendors. Marketing claims create exposure in one jurisdiction and trigger evidence duties in another. Outsourcing arrangements carry regulatory, contractual, and operational obligations at the same time. Governance teams that cannot work across these layers become bottlenecks, or worse, false comfort providers.

What to implement:

  • Cross-border obligation maps for material products
  • Regulatory horizon scanning tied to business decisions
  • Decision templates for conflicting obligations
  • Joint reviews between legal, compliance, risk, and technology
  • Escalation rules for unresolved jurisdictional conflicts

The critical artifact here is not a long legal memo. It is a decision-ready summary that tells management what changed, why it matters, where the exposure sits, and what options exist.

There is also a capability tradeoff. Small firms cannot build deep expertise in every jurisdiction. Large firms often drown in fragmented expertise. Both need a clearer model of when to centralize interpretation and when to localize execution.

Original implementation tip: when a new digital or cross-border rule appears, do not ask first “what does the rule say?” Ask “which current decisions, products, or claims become harder to defend because of this?”

Stage 4: Replace checklist execution with risk-based prioritization

A lot of compliance, risk, and audit work still suffers from equal treatment of unequal problems.

That made some sense when workflows were mostly manual and teams needed visible consistency. It makes far less sense now. In a world of automated monitoring, large-scale data, and constrained headcount, the real differentiator is prioritization quality.

This skill is becoming central across all three functions.

Compliance officers must know where to intensify monitoring and where a control can be simplified. Risk managers must know which exposures deserve scenario work and which do not. Auditors must know where testing depth should increase and where assurance effort is no longer worth the cost. This is no longer just a planning issue. It is a professional judgment issue.

What to implement:

  • Risk-based planning linked to expected loss or materiality
  • Segmentation of issues by decision impact
  • Dynamic review cycles for emerging risk indicators
  • Monitoring plans tied to control value, not legacy frequency
  • Documentation of why low-value work was reduced

Here is where many teams hesitate. They fear that reducing low-value work will look careless. In reality, supervisors and boards increasingly expect the opposite. They want to know that scarce governance resources are being directed where they matter most.

I have seen teams spend months polishing low-risk control evidence while material third-party and data governance exposures sat under-reviewed. Nobody intended that outcome. It came from inherited planning habits that were never seriously challenged.

Original implementation tip: each year, require every governance team to identify 15 percent of recurring work that no longer justifies its cost. If nobody can name it, the planning process is too passive.

[Suggested visual: sample matrix comparing “effort spent” versus “risk value created”]

Stage 5: Turn judgment into a visible professional skill

Judgment used to hide behind experience.

That is no longer enough. As automated systems handle more routine work, human contribution must become more explicit. This means professionals need to show how they interpret ambiguity, challenge outputs, weigh tradeoffs, and defend decisions.

This is especially important because supervisors, boards, and executives now expect more than procedural compliance. They expect reasoning. They want to know why a case was escalated, why a model output was overruled, why a business request was delayed, or why a regulatory interpretation was considered proportionate.

Judgment also has to be teachable. This is where many organizations struggle. Senior people often have excellent instinct but poor transfer discipline. They know when something feels wrong, but they do not articulate the reasoning path clearly enough for others to learn from it.

What to implement:

  • Decision logs for difficult cases
  • Review sessions on borderline escalations
  • Written rationale requirements for overrides
  • Judgment-based case discussions in team meetings
  • Mentoring focused on reasoning, not just outcomes

The responsible parties here are mostly leaders. Team heads must create space for reasoning, not just throughput. Senior reviewers need to model their thought process in real cases. Audit leaders should document why a finding matters, not only what failed.

One uncomfortable truth: many experienced professionals are less prepared for this shift than they think. Deep experience with manual processes does not automatically translate into strong explicit judgment. I have seen very senior people struggle when asked to explain why they trusted one control output and challenged another. Experience gave them confidence. It had not given them a repeatable method.

Original implementation tip: after any material review or escalation, ask the decision-maker to write five lines explaining the reasoning. Over time, this becomes one of the best judgment training tools in the function.

Stage 6: Build influence across the first, second, and third lines

Governance functions lose value when they arrive late.

This is true in contract review, product change, AI deployment, customer segmentation, vendor onboarding, and issue remediation. If compliance, risk, and audit only appear once the decision is mostly made, they do not shape outcomes. They document concerns around decisions already moving forward.

The skill behind early influence is not authority. It is relevance.

Compliance officers need to explain business implications in terms leaders care about. Risk managers need to connect exposure to choices, timing, and tradeoffs. Auditors need to shift part of their credibility from post-event review to pre-event insight, while preserving independence.

What to implement:

  • Early-stage governance gates for key business changes
  • Short decision memos, not only long reports
  • Financial framing of major compliance exposures
  • Joint workshops with business, legal, data, and operations
  • Clear escalation routes when tradeoffs remain unresolved

A good practical test is simple. Can your team explain, in three minutes, why a proposed control change matters to revenue, cost, risk, or supervisory defensibility? If not, the technical analysis may be fine, but the influence skill is weak.

This is where careers widen or narrow. The professionals who can connect risk to business reality become trusted participants in decision-making. The ones who stay in narrow technical phrasing become background reviewers.

Original implementation tip: teach teams to present every material issue with three elements only. Exposure, decision options, and recommendation. Everything else can sit in the appendix.

Cross-cutting implementation tips for 2026 GRC skills

Skills do not hold unless the operating model supports them.

That is where many development programs break down. They train people in isolation while leaving workflows, incentives, reporting lines, and documentation unchanged. The result is temporary awareness with no durable capability.

Here are four cross-cutting practices that make the shift stick.

1. Tie skills to real decisions

Training works when it connects directly to live work.

Use current alerts, current controls, current dashboards, current regulatory changes. Build training around decisions the function must make this quarter, not abstract capability aspirations.

Original implementation tip: before approving any skills program, ask which three live decisions it will improve within 90 days. If nobody knows, the program is too generic.

2. Protect documentation quality during automation

As automation rises, documentation often gets weaker.

People assume the system record is enough. It usually is not. You still need rationale, evidence of challenge, override logic, and clear ownership of decisions. This matters in supervisory review, audit defense, and internal accountability.

Original implementation tip: for every automated control or AI-supported workflow, define what the system records automatically and what human rationale must still be documented manually.
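One way to make that definition explicit is a simple per-control mapping of what the system captures versus what a human must still write down. The control name and field values below are hypothetical, purely to show the split.

```python
# Hypothetical documentation split for one automated control:
# what the system records on its own versus what a human must
# still document for supervisory review and audit defense.
control_documentation = {
    "control": "sanctions screening auto-disposition",
    "system_records": [
        "match score",
        "screening list version",
        "timestamp",
        "automated disposition",
    ],
    "human_rationale_required": [
        "why a flagged match was cleared",
        "override logic when the disposition was reversed",
        "named owner of the final decision",
    ],
}

# A quick completeness check: every automated control should name
# at least one item of required human rationale.
assert control_documentation["human_rationale_required"], \
    "automation without documented human rationale is a gap"
```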

3. Design for capability transfer, not heroics

Too many GRC functions still depend on a few very experienced people.

That model breaks under restructuring, attrition, or rapid technology change. Capability must sit in methods, playbooks, decision logs, review routines, and mentoring structures. Not only in memory.

Original implementation tip: if a critical governance task can only be defended by one person in the team, you do not have a skill. You have a dependency.

4. Measure whether the skill shift changes outcomes

This is the test that matters.

Did alert review quality improve? Did time to escalation fall? Did issue prioritization become sharper? Did audit findings become more decision-useful? Did management decisions change earlier in the process? If none of these move, your skills program may be producing awareness, not value.

Original implementation tip: define three operational metrics before the training starts and compare them after 90 and 180 days. Skill development without outcome measurement becomes corporate theatre.
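The before-and-after comparison can be as simple as a small script. The metric names and values below are invented purely for illustration; the point is the signed comparison, not the numbers.

```python
# Hypothetical operational metrics captured before a skills program
# and again at the 90-day review. All names and values are illustrative.
baseline = {
    "alert_review_accuracy": 0.81,
    "days_to_escalation": 9.0,
    "decision_useful_findings": 0.40,
}
day_90 = {
    "alert_review_accuracy": 0.86,
    "days_to_escalation": 6.5,
    "decision_useful_findings": 0.55,
}

# For most metrics higher is better; time-to-escalation is the exception.
lower_is_better = {"days_to_escalation"}

def outcome_shift(before, after):
    """Return the signed improvement for each metric (positive = better)."""
    shift = {}
    for metric, base in before.items():
        delta = after[metric] - base
        shift[metric] = -delta if metric in lower_is_better else delta
    return shift

print(outcome_shift(baseline, day_90))
```

If every value in the result hovers near zero at 90 and 180 days, the program is producing awareness, not capability.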

Key references for 2026 compliance skills, risk management skills, and audit skills

The following standards, guidance sources, and institutional references are especially relevant for building 2026-ready skills in compliance, risk, and audit functions:

  • ISO 37301, Compliance management systems
  • ISO 31000, Risk management guidelines
  • IIA Global Internal Audit Standards
  • NIST AI Risk Management Framework
  • EU AI Act
  • GDPR and related EDPB guidance
  • Basel Committee guidance on operational risk and governance
  • FATF guidance on digital transformation, AML, and risk-based controls
  • EBA guidelines on internal governance, outsourcing, and ICT risk
  • DOJ guidance on evaluation of corporate compliance programs
  • ECB supervisory expectations for governance and risk control functions
  • Industry reports from PwC, KPMG, Deloitte, and major banking supervisory bodies on AI, compliance, and governance capability trends

Use these as anchors. But do not stop at reading them. Convert them into capability design, workflow changes, and role-specific expectations.

The capabilities that now matter most

If you need a simple shortlist, this is it.

Analytics

  • Turn alerts into quantified, decision-ready risk signals
  • Data literacy now matters more than checklist experience

Automation

  • Automate routine KYC, redeploy humans to complex judgment
  • Efficiency without capability shift increases regulatory exposure

Governance

  • AI needs oversight, not blind operational dependence
  • Model decisions must remain explainable to supervisors

Judgment

  • Compliance value shifts from processing to defensible decisions
  • AI flags risk, humans must interpret and escalate

Regulation

  • Cross-border compliance now requires multi-jurisdictional legal fluency
  • Digital rules expand faster than legacy compliance models

Prioritization

  • Risk-based planning beats uniform compliance effort allocation
  • Focus resources where expected loss is materially concentrated

Treat these skills as a soft HR topic and you will get a well-designed learning calendar with very little impact on control quality.

Treat them as part of your governance operating model and they become something else entirely. Better judgment. Faster escalation. Stronger supervisory defensibility. Clearer decisions. That is where the real value sits.

The professionals who stay valuable in 2026 will not be the ones who process the most checklists. They will be the ones who can explain, challenge, and defend risk in a system where more of the first pass is done by machines.

The Real Test: Does Your Work Change Decisions?

All seven skills point to a single criterion. Does the compliance risk process demonstrably change organizational decisions?

Does it alter contract terms before signature? Does it delay a market entry until controls are in place? Does it narrow a public commitment to what the organization can actually substantiate? Does it redirect compliance investment from low-exposure obligations to high-exposure ones? Does it produce reserve levels and insurance coverage calibrated to a loss distribution rather than to last year's budget plus 5%?

If the answer to all of these is no, the process is producing documentation, not decisions. And documentation without decision impact is precisely the kind of compliance work that AI will replace.

The professionals who build these seven skills will find themselves more valuable in 2026 than they are today. The compliance function needs fewer people who can process alerts and more people who can interpret signals, calibrate models, quantify exposure, challenge AI outputs, map regulatory conflicts, evidence decisions, and translate risk into financial terms.

One question worth asking yourself this week: which of these seven skills would you be most uncomfortable being tested on in front of your board or your regulator? Start there.

Risk Workshops That Improve Decisions

Why Most Risk Workshops Underperform

Most risk workshops fail for a simple reason. They confuse documentation with decision support.

Teams gather experienced people in a room, collect a long list of concerns, assign broad scores, and leave with a register that looks complete. What they often do not leave with is a sharper understanding of which uncertainties matter most, which scenarios require deeper analysis, and what management should do next. The process produces artifacts, but not always insight.

That gap matters. Risk is the effect of uncertainty on objectives, and risk management should be integrated into governance, strategy, planning, and decision-making, not treated as a separate administrative exercise. Risk management should support value creation, preservation, and realization by informing choices under uncertainty. A workshop that does not inform a decision, refine a treatment plan, or guide further analysis has limited value, no matter how polished the documentation appears.

A well-run risk workshop has a narrower and more useful purpose. It helps participants identify credible risk scenarios, distinguish plausible exposures from noise, structure uncertainty around objectives, and determine where analysis will improve a management decision. It creates shared understanding before quantitative assessment, treatment planning, or escalation. In practical terms, it is a disciplined mechanism for focusing organizational attention.

That is the standard worth aiming for. Not a more colorful discussion. Not a longer register. Better judgment.


 

What A Risk Workshop Is Actually For

Risk workshops are often presented as broad brainstorming sessions. That is part of the problem. Brainstorming can surface possibilities, but risk management requires more than possibility. It requires relevance, plausibility, and decision utility.

Risk work should focus on the sources of risk, events, consequences, likelihood, and controls relevant to defined objectives. Assessment techniques should be selected based on the purpose of the analysis, the available information, and the decision context.


This means a workshop should do four things well.

  • Clarify the objective, scope, and context. Without that, participants discuss risk in generic terms and quickly drift into abstractions.
  • Generate and refine scenario-based risk statements rather than disconnected issue labels. A useful risk statement is not "data privacy risk" or "third-party risk." It is a scenario that links a source of risk, an event, and a consequence in business terms.
  • Filter and prioritize scenarios based on plausibility, materiality, and relevance to a decision-maker. Every risk workshop produces more candidate scenarios than the organization can analyze in depth. Selection is part of the job.
  • Define what happens next. Some scenarios need quantitative assessment. Some need immediate treatment. Some need monitoring. Some can be parked because they are outside the current decision scope.

This orientation changes the quality of the conversation. Instead of asking what risks do we have, the group asks which scenarios could materially affect our objectives in this context, what evidence supports them, and what action follows.

How To Design Risk Workshops That Produce Better Risk Information

Design quality determines workshop quality. Most problems that appear during the session are actually planning failures that happened before the session.

Start with the decision context. The facilitator should define the objective of the workshop in operational terms. For example, the purpose may be to identify and prioritize the risk scenarios that could materially affect the launch of a new product, the resilience of a critical service, the performance of a strategic supplier arrangement, or the compliance exposure associated with a new use of artificial intelligence. This framing is stronger than asking participants to identify top risks because it anchors discussion in an objective and a time horizon.

Participant selection matters just as much. The right group combines operational knowledge, control insight, and informed challenge. Process owners, business leaders, technology specialists, compliance or legal representatives, and internal control professionals often all have a place. In many cases, it is also useful to include one or two constructive challengers who are independent of the area under review and can test assumptions without carrying delivery pressure. Homogeneous groups tend to converge too fast and overlook blind spots. Research on group judgment has repeatedly shown that diversity of information and viewpoint improves estimation quality when discussion is structured well.

Pre-read materials should be concise and evidence-based. The goal is to reduce reliance on memory and anecdote. Depending on the topic, effective pre-read content may include recent incident data, audit findings, control performance indicators, key business metrics, customer impact records, near misses, external threat intelligence, regulatory developments, and prior assessment outputs. Risk management should use the best available information while recognizing its limitations. That principle begins before the workshop starts.

The structure of the session should also be explicit. In most settings, the workshop should move through context, scenario identification, plausibility testing, prioritization, and next-step determination. It should not collapse these stages into one conversation. When groups identify scenarios and score them at the same time, anchoring and groupthink increase. When they discuss controls too early, they often dilute the clarity of the underlying scenario.

A semi-structured discussion guide is usually more effective than a rigid script. Open questions help participants explain objectives, dependencies, assumptions, failure points, and operational realities. More focused prompts then help refine scenarios by asking what could happen, how it would unfold, what vulnerabilities would matter, what consequences would follow, and what evidence supports the view.

How To Keep Risk Workshops Focused On Plausible Scenarios

One of the most important facilitation tasks is distinguishing what is theoretically possible from what is practically useful to analyze.

This is where many workshops lose discipline. A group starts with a legitimate concern and then moves into edge cases, severe but remote events, or scenarios that require so many unusual conditions that they add little value to current decision-making. This is common in cybersecurity, third-party risk, resilience planning, and emerging technology discussions, especially when highly technical participants are in the room.

The facilitator needs a practical filter. A scenario should remain in scope if it is credible in the organization’s operating context, relevant to the stated objective, and capable of informing a decision, treatment plan, or monitoring action. If it fails those tests, it should not consume scarce time.

This does not mean dismissing uncertainty or claiming that rare events never occur. It means using disciplined judgment about what is decision-useful within a defined time horizon. ISO 31000 asks organizations to tailor risk management to their internal and external context. It does not ask organizations to treat all imaginable events as equally deserving of analysis.

In practice, plausible scenarios usually have at least one of three characteristics. They are supported by internal evidence such as incidents, near misses, or control weaknesses. They are consistent with external evidence from peers, industry reporting, or supervisory findings. Or they arise naturally from the organization’s business model, asset profile, dependency structure, or threat environment.

This is where scenario wording matters. A weak scenario says "cyberattack causes business disruption." A stronger scenario says "a threat actor exploits an internet-facing vulnerability in a customer portal, causing unauthorized access to customer data and regulatory reporting obligations in multiple jurisdictions." The second version is easier to challenge, analyze, and act on because it connects a threat, a vulnerability, an event, and consequences.

Practical facilitation also means knowing when to bracket speculative debates. Some questions are valid but not useful in the moment. Participants may raise existential uncertainties, unknown unknowns, or highly remote systemic events. The right response is not to dismiss them aggressively. It is to acknowledge them, record them where appropriate, and return the group to the objective of the session. That keeps the workshop productive without pretending that all uncertainty can be resolved in one meeting.

How To Reduce Bias In Risk Workshops

Risk workshops are vulnerable to predictable judgment errors. This is not a criticism of participants. It is a feature of human decision-making. If the facilitator does not actively manage bias, the workshop will likely overweight recent events, familiar narratives, senior voices, and dramatic scenarios.

Availability bias is one of the most common problems. Participants naturally recall the incidents they have seen recently or those that received media attention. This can distort prioritization. A grounded way to counter this is to begin with evidence. Internal incident logs, near misses, service interruptions, customer complaints, audit exceptions, and external event data help anchor discussion in observed patterns rather than memory alone.

Anchoring is another frequent problem. If the group sees the prior year’s scores, a previous risk register, or an executive view too early, discussion often converges prematurely around those positions. A better method is to gather initial judgments independently before exposing the group to historical views. Independent elicitation before discussion has been shown to improve the quality of group judgment because it preserves information diversity.

Confirmation bias also matters. Participants often look for evidence that supports an existing view of the business, the control environment, or the maturity of a process. The facilitator should deliberately test the opposite case. Useful prompts include asking what evidence would suggest the exposure is worse than currently believed, what assumptions are least secure, or which dependencies could fail together.

Overconfidence is especially damaging in technical and operational workshops. Experts often underestimate uncertainty, particularly when they are deeply familiar with a domain. One practical method is to ask for ranges and assumptions rather than single-point judgments. Where feasible, asking for confidence levels and documenting the rationale improves transparency. In more mature environments, calibrated estimation techniques can improve the quality of expert judgment over time, as shown in Hubbard's applied measurement work (Hubbard 2020).

Groupthink and hierarchy effects require active countermeasures. Smaller breakout groups can help surface divergent views before reconvening. Rotating a challenge role can normalize constructive dissent. In some settings, collecting judgments anonymously through digital tools before open discussion reduces conformity pressure. The facilitator should also protect the session from dominance by senior or highly verbal participants. A workshop should use expertise, not deference, as its decision rule.

How To Use ISO Risk Management Principles In Workshops

Organizations often cite ISO 31000 or COSO in workshop materials, but many do not translate the underlying principles into facilitation practice. That is a missed opportunity.

ISO 31000 states that risk management should be integrated, structured and comprehensive, customized, inclusive, dynamic, and based on the best available information while taking human and cultural factors into account. Those principles are directly relevant to workshop design.

Integrated means the workshop should connect to governance, planning, change initiatives, and management decisions. If the output goes nowhere except a register, integration has failed.

Structured and comprehensive means the workshop should use a clear process for identification, analysis, evaluation, and treatment planning or escalation. This is one reason unbounded brainstorming sessions underperform.

Customized means the workshop must reflect the organization’s context, operating model, regulatory environment, critical assets, and strategic objectives. A generic list of enterprise risks copied from another firm adds little value.

Inclusive means the process should involve relevant stakeholders in a way that improves awareness and informed participation. Inclusive does not mean inviting everyone. It means involving the people who hold the necessary knowledge and accountability.

Dynamic means the workshop should not assume that risk information remains valid indefinitely. Significant incidents, changes in controls, shifts in the external environment, new technologies, and evolving threat patterns all require reassessment.

The best available information principle is particularly important in technical workshops. Information should be traceable where possible, limitations should be recognized, and uncertainty should be documented rather than hidden. This is fully consistent with ISO 31010, which stresses that assessment techniques should be chosen and applied with awareness of assumptions, uncertainties, and constraints.

COSO adds a useful managerial lens. It places risk management within performance and strategy, not outside them. In workshop terms, this means risk scenarios should be linked to business objectives, performance drivers, and decision thresholds. If the discussion cannot explain how a scenario affects objectives, it is not yet ready for management attention.

How to Document Risk Workshops for Analysis and Action

Documentation quality is often poor because teams capture only the final label and score. That is not enough.

A useful workshop record should preserve the logic of the discussion. At minimum, this includes the objective and scope, the scenario wording, the source of risk, relevant vulnerabilities, the potential event, the likely consequences, the key controls already in place, the assumptions used, the evidence cited, the main uncertainties, and any material disagreements. This level of recordkeeping supports later analysis, challenge, and treatment design.
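The minimum record described above can be sketched as a structured schema. The field names are illustrative, not a prescribed standard, and the example reuses the customer-portal scenario discussed earlier with hypothetical details.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioRecord:
    """Preserves the logic of the discussion, not just a label and score."""
    objective: str                 # workshop objective and scope
    scenario: str                  # full wording: source -> event -> consequence
    risk_source: str
    vulnerabilities: list[str]
    potential_event: str
    consequences: list[str]
    key_controls: list[str]
    assumptions: list[str]
    evidence_cited: list[str]
    uncertainties: list[str]
    disagreements: list[str] = field(default_factory=list)
    next_state: str = "undetermined"  # e.g. deeper analysis, treatment, monitoring

# Hypothetical example record
record = ScenarioRecord(
    objective="Resilience of the customer payments service, 12-month horizon",
    scenario=(
        "A threat actor exploits an internet-facing vulnerability in a "
        "customer portal, causing unauthorized access to customer data and "
        "regulatory reporting obligations in multiple jurisdictions"
    ),
    risk_source="external threat actor",
    vulnerabilities=["unpatched internet-facing portal"],
    potential_event="unauthorized access to customer data",
    consequences=["regulatory reporting obligations", "customer harm"],
    key_controls=["vulnerability scanning", "web application firewall"],
    assumptions=["patching cadence remains at current levels"],
    evidence_cited=["penetration test finding 2025-11", "peer incident reports"],
    uncertainties=["actual exploitability of the portal"],
)
```

A record like this makes later challenge possible: an analyst reviewing it months later can see what evidence was cited, what was assumed, and what remained unresolved.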

Where the organization uses risk criteria, those criteria should be applied consistently. However, the workshop should avoid giving false precision to weak judgments. If the data is poor or the uncertainty is high, that should be documented explicitly. ISO 31000 and ISO 31010 both support transparency about information quality and uncertainty (ISO 2018; IEC and ISO 2019). This is better practice than forcing consensus where none exists.

Action capture also needs more rigor. Every material scenario should leave the workshop with one of several defined next states. It may require deeper analysis, immediate treatment planning, monitoring through indicators, escalation to a committee, or closure because it falls outside tolerance and has already been addressed. Ambiguous follow-up is a common reason workshop outputs die in local files.

Where technology platforms are used, they should support traceability rather than bureaucracy. A good system helps record assumptions, assign actions, track deadlines, and link scenarios to owners, controls, metrics, and governance reporting. A weak system turns every workshop into a data entry exercise. The tool should serve the method, not drive it.


 

How To Make Risk Workshops More Data-Driven

A mature workshop does not rely on opinion alone. It combines informed judgment with evidence. The form of evidence depends on the subject matter, but the principle remains consistent.

For operational and technology risks, useful evidence may include incident records, service availability data, vulnerability trends, patching performance, recovery test results, penetration testing findings, and supplier service-level failures. For compliance risks, it may include control testing outcomes, issue closure delays, investigation data, audit findings, complaint patterns, and regulatory observations. For strategic and financial risks, it may include forecast variance, concentration metrics, market indicators, stress scenarios, customer attrition data, and project delivery performance.

The point is not to eliminate expert judgment. It is to discipline it.

NIST guidance on risk assessment and cyber risk management repeatedly emphasizes the need to use data and analytic judgment together, while documenting assumptions and uncertainty (NIST 2012; NIST 2022). The same principle appears in broader decision science literature. Structured judgment supported by relevant evidence generally outperforms unguided intuition, especially for complex risk problems.

One practical technique is to separate evidentiary discussion from prioritization discussion. First, ask what information is available and what it suggests. Then ask what the scenario means for the objective. This simple sequence reduces the tendency to declare a scenario important before examining whether there is evidence to support that view.

Another useful practice is to identify data gaps explicitly. Not all risks can be assessed well in the workshop itself. Some scenarios need further analysis, data collection, or specialist review. Capturing that need is a sign of quality, not weakness. The risk process should distinguish between known exposure, uncertain exposure, and insufficiently understood exposure.

How To Facilitate Risk Workshops For Artificial Intelligence Governance

Artificial intelligence governance creates special demands for risk workshops because many organizations still treat artificial intelligence as a technology category rather than a business capability with distinct risk pathways. That leads to shallow discussions.

A stronger workshop starts by defining the use case. The risk profile of a large language model for internal productivity support is not the same as that of a model used in customer-facing decisions, fraud detection, medical advice, recruitment screening, or autonomous control. Context matters.

The workshop should then map the relevant objectives, dependencies, and failure modes. For artificial intelligence systems, common concerns include data quality, model performance, explainability, bias and fairness, security, privacy, third-party dependency, legal noncompliance, operational resilience, and human oversight. But those labels are only useful if turned into scenarios. For example, a more decision-useful scenario would describe how degraded training data quality causes materially inaccurate outputs in a customer-facing application, leading to consumer harm, remediation cost, and regulatory scrutiny.

International standards now provide stronger scaffolding for this work. ISO 42001 establishes a management system standard for artificial intelligence. ISO 23894 provides guidance on artificial intelligence risk management. The NIST AI Risk Management Framework offers practical governance functions centered on govern, map, measure, and manage (ISO 2023; ISO 2023a; NIST 2023). Across these frameworks, several consistent themes emerge. Artificial intelligence risk management should be context-specific, lifecycle-based, documented, accountable, and tied to human oversight and performance monitoring.

Workshops for artificial intelligence governance should therefore include business owners, technology teams, legal or compliance input, information security, data specialists, and where relevant, model risk or ethics expertise. They should review the intended purpose, training or input data dependencies, model limitations, human intervention points, downstream impacts, third-party model components, and monitoring arrangements. They should also distinguish between development-stage uncertainty and live-operating risk.

This is where risk workshop discipline is essential. Artificial intelligence discussions can become abstract very quickly. A decision-useful workshop keeps returning to the specific use case, the concrete failure scenario, the affected objective, and the management action required.

A Practical Flow for Running a High-Value Risk Workshop

A practical workflow is more useful than a generic checklist because it shows how the pieces fit together.

Before the session, define the objective, scope, and decision context. Select participants who bring knowledge, accountability, and challenge. Circulate targeted pre-read materials with relevant evidence. Prepare a structured discussion guide and decide what outputs the workshop must produce.

At the start of the session, establish the objective, scope boundaries, and working rules. Confirm that the purpose is to improve understanding and decision quality, not to assign blame.

During the identification phase, use scenario-based prompts rather than labels. Ask how the objective could fail, what dependencies matter, what vulnerabilities exist, what events could occur, and what consequences would follow.

During the refinement phase, test plausibility. Ask whether the scenario is credible in the current context, what evidence supports it, whether it aligns with observed internal or external patterns, and whether it would inform a treatment or governance decision.

During the prioritization phase, compare scenarios using agreed criteria tied to objectives, consequences, uncertainty, and urgency. Avoid rushing to false precision.

During the close, assign next actions clearly. Some scenarios require detailed assessment. Some require immediate treatment planning. Some require enhanced monitoring or escalation. Document assumptions, evidence, and unresolved questions before the group leaves.

After the session, consolidate outputs, validate wording where needed, launch follow-up actions, and ensure that material outcomes reach the relevant governance forum. Review the quality of the workshop over time and improve the method. Continual improvement is not an abstract principle. It is the difference between a recurring ritual and an increasingly valuable management practice.

Final Perspective

Risk workshops are not clerical exercises. They are structured judgment forums that should improve how an organization understands uncertainty around its objectives. When designed and facilitated well, they help teams move from vague concern to credible scenario, from anecdote to evidence, and from discussion to action. That is the standard risk leaders should hold.

The most effective workshops do not try to analyze everything. They focus attention where it matters, use the best available information, make uncertainty explicit, and produce outputs that support governance and management decisions. That is true in enterprise risk management generally, and it is especially true in artificial intelligence governance, where fast-moving technology can easily outpace weak facilitation. Strong workshops do not eliminate uncertainty. They make it more intelligible and more governable.

References

  • Committee of Sponsoring Organizations of the Treadway Commission. 2017. Enterprise Risk Management: Integrating with Strategy and Performance.
  • Hubbard, Douglas W. 2020. The Failure of Risk Management: Why It Is Broken and How to Fix It. 2nd ed.
  • IEC and ISO. 2019. IEC 31010: Risk Management, Risk Assessment Techniques.
  • ISO. 2009. ISO Guide 73: Risk Management, Vocabulary.
  • ISO. 2018. ISO 31000: Risk Management, Guidelines.
  • ISO. 2022. ISO/IEC 27005: Information Security, Cybersecurity and Privacy Protection, Guidance on Managing Information Security Risks.
  • ISO. 2023. ISO/IEC 42001: Information Technology, Artificial Intelligence, Management System.
  • ISO. 2023a. ISO/IEC 23894: Information Technology, Artificial Intelligence, Guidance on Risk Management.
  • Kahneman, Daniel, Olivier Sibony, and Cass R. Sunstein. 2021. Noise: A Flaw in Human Judgment.
  • NIST. 2012. Guide for Conducting Risk Assessments. SP 800-30 Rev. 1.
  • NIST. 2022. Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. SP 800-161 Rev. 1.
  • NIST. 2023. AI Risk Management Framework 1.0.