Consumer Duty has fundamentally changed the stakes for AI in financial services contact centres. Before July 2023, firms could deploy AI and measure success primarily through operational metrics: call deflection rates, average handling time, cost per interaction. Those metrics still matter. But they are no longer sufficient.
Under Consumer Duty, every firm must now demonstrate that its customer service - including any AI-assisted or AI-led interactions - delivers good outcomes for retail customers. The FCA is explicit: good intentions are not evidence. Firms need data, audit trails, and governance structures that prove the AI is working in customers' interests.
The practical challenge is that most AI deployment guides are written for technology teams, not compliance-aware financial services leaders. They cover integration, configuration, and go-live. They rarely cover what happens when the FCA asks for evidence of good outcomes six months after deployment.
This guide fills that gap. It covers how to deploy AI in a financial services contact centre in a way that meets Consumer Duty requirements - from pre-deployment governance through to ongoing monitoring and outcome measurement.
Key Takeaways
- Consumer Duty requires AI to deliver good outcomes across all four outcome areas - not just handle queries efficiently
- A named SM&CR Senior Manager must be accountable for AI compliance before go-live
- Pre-deployment governance (impact assessment, escalation triggers, outcome metrics) is where most compliance failures originate
- Standard contact centre metrics are insufficient; firms need outcome-specific measures such as resolution quality rate and vulnerability detection rate
- A phased rollout (pilot, governance review, scaled deployment) builds the FCA evidence trail before it is needed
Consumer Duty is built around four outcomes. Each one has direct implications for how AI is deployed and monitored in a contact centre context.
| Consumer Duty Outcome | What It Requires from AI |
|---|---|
| Products and services | AI must only present or discuss products that are appropriate for the customer's needs and circumstances |
| Price and value | AI interactions must not obscure fees, charges, or terms in ways that disadvantage customers |
| Consumer understanding | AI must communicate clearly and confirm customer comprehension, particularly for complex products |
| Consumer support | AI must help customers achieve their goals - and escalate when it cannot do so effectively |
The consumer support outcome carries the most weight for contact centre AI. It requires firms to ensure customers can get the help they need, when they need it - which means AI cannot simply deflect queries. It must resolve them, or hand them to someone who can.
The critical implication for AI deployment: a system optimised purely for deflection and handling time will likely breach the consumer support outcome. Firms must build resolution quality into their AI success metrics from the outset. The FCA's Consumer Duty final rules (PS22/9) make this explicit: firms cannot treat operational efficiency as a proxy for good customer outcomes.
The most common Consumer Duty compliance failures happen before the AI goes live, not after. Firms that skip governance steps in the rush to deploy create problems that are significantly harder to fix once the system is in production.
Before any AI system touches a customer interaction, a named Senior Manager under SM&CR must be designated as accountable for its performance and compliance. This is not a nominal role. The designated individual must have sufficient visibility into AI performance data and enough authority to intervene when the system falls short of Consumer Duty requirements.
The FCA requires firms to measure outcomes, not just activity. Before deployment, define what a good outcome looks like for each AI use case. For a query-handling virtual agent, that means three things: the query was resolved without escalation, the customer confirmed understanding, and no complaint was raised within 30 days. Those definitions become the basis for your ongoing monitoring framework.
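The three-part good-outcome definition above can be expressed as a simple classification rule. This is an illustrative sketch only - the record structure and field names are assumptions, not a prescribed FCA data model:

```python
from dataclasses import dataclass

# Hypothetical record of one AI-handled interaction; the field names
# are illustrative assumptions, not fields from any specific platform.
@dataclass
class Interaction:
    resolved_without_escalation: bool
    understanding_confirmed: bool
    complaint_within_30_days: bool

def is_good_outcome(i: Interaction) -> bool:
    """Apply the three-part good-outcome definition for a
    query-handling virtual agent: resolved, understood, no complaint."""
    return (
        i.resolved_without_escalation
        and i.understanding_confirmed
        and not i.complaint_within_30_days
    )
```

Encoding the definition this explicitly before go-live means the same rule can be applied consistently across the monitoring period, rather than being reinterpreted at each review.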
Every AI deployment needs documented escalation triggers - the specific conditions under which the AI hands off to a human agent. At a minimum, these should cover signs of customer vulnerability, repeated failure to resolve the query, and an explicit customer request for a human agent.
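Documented triggers are easiest to audit when they are expressed as explicit, named rules rather than buried in model behaviour. A minimal sketch, assuming hypothetical trigger names and context signals:

```python
# Illustrative escalation rules. The trigger names, context keys, and
# thresholds are assumptions for this sketch, not a definitive list -
# each firm must define and document its own.
ESCALATION_TRIGGERS = {
    "vulnerability_signal": lambda ctx: ctx.get("vulnerability_detected", False),
    "unresolved_after_retries": lambda ctx: ctx.get("failed_attempts", 0) >= 2,
    "human_requested": lambda ctx: ctx.get("customer_asked_for_human", False),
}

def should_escalate(ctx: dict) -> list[str]:
    """Return the names of every trigger that fires for this interaction.

    Returning the trigger names, not just a boolean, gives the audit
    trail a reason for each hand-off.
    """
    return [name for name, test in ESCALATION_TRIGGERS.items() if test(ctx)]
```

Logging which named trigger fired on each hand-off is what later makes an "escalation appropriateness" review possible.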
Before go-live, document how the AI deployment affects each of the four Consumer Duty outcomes. This assessment should identify risks, mitigation measures, and the monitoring approach for each outcome area. It becomes your evidence that compliance was considered before deployment - not after a problem emerged.
Why this matters: the FCA's supervisory approach to Consumer Duty is outcomes-focused and retrospective. The FCA's guidance on the Consumer Duty is clear that firms bear the burden of demonstrating compliance; the regulator does not bear the burden of proving a breach. Firms that cannot demonstrate pre-deployment governance are in a significantly weaker position when things go wrong.
Deployment is not the finish line. Under Consumer Duty, firms must continuously monitor and evidence that their AI is delivering good outcomes. This requires an operational monitoring framework built around the outcome metrics defined before go-live.
Standard contact centre metrics - average handling time, first contact resolution, CSAT scores - are useful but insufficient for Consumer Duty purposes. Firms need metrics that speak directly to customer outcomes:
- Resolution quality rate - the proportion of AI-handled queries resolved correctly without escalation
- Escalation appropriateness rate - whether the AI hands off to a human at the right moments
- Complaint rate by interaction type - complaints traced back to specific AI use cases
- Vulnerability detection rate - how reliably the AI identifies and escalates potentially vulnerable customers
- Consumer understanding confirmation rate - how often customers confirm they understood the information provided
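Two of these outcome metrics can be sketched from raw interaction records. The record fields below are illustrative assumptions; the point is that outcome metrics are derived from per-interaction evidence, not aggregated operational dashboards:

```python
from collections import defaultdict

def outcome_metrics(interactions: list[dict]) -> dict:
    """Compute illustrative Consumer Duty outcome metrics.

    Each record is assumed to carry 'resolved_correctly',
    'interaction_type', and 'complaint_raised' - hypothetical
    field names for this sketch.
    """
    n = len(interactions)
    resolution_quality = sum(1 for i in interactions if i["resolved_correctly"]) / n

    # Complaint rate broken down by AI use case, so a problem in one
    # use case is not masked by good performance elsewhere.
    totals = defaultdict(lambda: [0, 0])  # type -> [interactions, complaints]
    for i in interactions:
        bucket = totals[i["interaction_type"]]
        bucket[0] += 1
        bucket[1] += int(i["complaint_raised"])

    return {
        "resolution_quality_rate": resolution_quality,
        "complaint_rate_by_type": {
            t: complaints / total for t, (total, complaints) in totals.items()
        },
    }
```

The per-use-case breakdown matters because a single aggregate complaint rate can hide a failing use case behind several healthy ones.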
Monthly monitoring is the minimum for a newly deployed AI system. Firms should review outcome metrics monthly for the first six months, then move to quarterly reviews once performance is stable - with the ability to revert to monthly monitoring if metrics deteriorate.
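The cadence rule above - monthly for the first six months, quarterly once stable, reverting to monthly on deterioration - is simple enough to state as code, which also makes the policy itself auditable:

```python
def review_cadence(months_live: int, metrics_deteriorating: bool) -> str:
    """Return the monitoring cadence for an AI deployment.

    Sketch of the cadence policy described above: monthly for the
    first six months, quarterly once performance is stable, and back
    to monthly whenever outcome metrics deteriorate.
    """
    if months_live <= 6 or metrics_deteriorating:
        return "monthly"
    return "quarterly"
```

What counts as "deteriorating" (for example, a threshold breach on any outcome metric) is a firm-specific decision that should itself be documented in the monitoring framework.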
All monitoring should be documented and retained. The FCA may request evidence of ongoing oversight as part of supervisory engagement. When it does, the monitoring record is the primary evidence of a firm's Consumer Duty compliance posture.
The question to ask at every review: "If the FCA asked us today to prove this AI is delivering good outcomes for customers, what would we show them?" If the answer is uncertain, the monitoring framework needs strengthening before the next review cycle.
Financial services firms that deploy AI contact centre solutions successfully under Consumer Duty tend to follow a phased approach that builds compliance evidence before scaling.
| Phase | Timeframe | Key Activities | Success Criteria |
|---|---|---|---|
| 1: Controlled pilot | Weeks 1-8 | Deploy on a single low-complexity use case (account queries, scheduling, basic product information) | Outcome metrics collected; no material complaint uplift; escalation triggers validated in live conditions |
| 2: Governance review | Weeks 8-12 | Review pilot data; refine escalation triggers and vulnerability protocols; update impact assessment | Clean outcome data documented; governance framework adjusted and signed off by SM&CR owner |
| 3: Scaled deployment | Month 3 onwards | Expand to additional use cases and higher-complexity interactions; repeat impact assessment for each | Each new use case has its own outcome baseline; monitoring cadence in place before go-live |
This approach takes longer than a full deployment from day one. It also produces a significantly stronger compliance position - and a documented evidence trail that holds up under FCA scrutiny.
Consumer Duty has created a new compliance threshold for AI in financial services contact centres. The firms that will struggle are not those that deploy AI slowly - they are those that deploy it without governance, without outcome metrics, and without a documented evidence base.
The framework in this guide is not a compliance overhead. It is the foundation that makes AI deployment sustainable. A named accountability owner. Pre-defined good outcome metrics. Documented escalation triggers. A phased rollout that generates evidence before it generates risk.
The FCA's position is clear: firms are expected to demonstrate that their AI is working in customers' interests, not simply assert it. The time to build that capability is before go-live, not after a supervisory visit.
Fortay Connect advises UK financial services firms on AI contact centre deployments that meet FCA and Consumer Duty requirements from the outset. If you are planning a deployment or reviewing your current AI governance framework, contact us to discuss your requirements.
1. What does Consumer Duty require from AI in a financial services contact centre?
Consumer Duty requires AI to deliver good outcomes across four areas: appropriate product presentation, transparent pricing, clear communication, and effective customer support. AI cannot simply deflect queries; it must resolve them or escalate to a human agent when it cannot do so effectively.
2. Who is accountable for AI compliance under Consumer Duty?
A named Senior Manager under the Senior Managers and Certification Regime (SM&CR) must be designated as accountable for the AI system's performance and compliance before go-live. This person must have sufficient visibility and authority to act when the AI falls short of Consumer Duty requirements.
3. What metrics does the FCA expect firms to use for AI contact centre oversight?
The FCA expects outcome-based metrics, not just operational ones. Firms should track resolution quality rate, escalation appropriateness rate, complaint rate by interaction type, vulnerability detection rate, and consumer understanding confirmation rate alongside standard measures like CSAT and first contact resolution.
4. What is a Consumer Duty impact assessment for AI deployment?
A pre-deployment impact assessment documents how an AI system affects each of the four Consumer Duty outcomes. It identifies risks, mitigation measures, and the monitoring approach for each outcome area, creating a compliance evidence trail before the system goes live rather than after a problem emerges.
5. How often should firms review AI performance under Consumer Duty?
Monthly reviews are the minimum for a newly deployed AI system. Firms should maintain monthly monitoring for the first six months, then move to quarterly reviews once performance is stable, with the ability to revert to monthly if outcome metrics deteriorate. All reviews must be documented and retained.