AI Agent vs Chatbot: What FCA-Regulated Firms Need to Know


The terms "chatbot" and "AI agent" are used interchangeably across the financial services industry. Vendors use them loosely. Procurement teams conflate them. And firms end up deploying technology that does not match what they actually needed, or taking on compliance obligations they did not anticipate.

The distinction matters enormously in a regulated environment. A chatbot and an AI agent are not different names for the same thing. They represent fundamentally different levels of capability, autonomy, and, critically for FCA-regulated firms, accountability.

Key Takeaways

  • A chatbot matches inputs to pre-written answers. An AI agent reasons, decides, and acts.
  • Chatbots suit high-volume, low-complexity queries; AI agents handle end-to-end customer journeys.
  • AI agents carry a significantly heavier compliance footprint under SM&CR, Consumer Duty, and FCA expectations around explainability.
  • Most firms need both, deployed for different use cases with governance frameworks matched to the risk level of each.
  • The FCA has explicitly flagged agentic AI as an area of increasing regulatory focus; firms deploying autonomous agents now should be building governance infrastructure ahead of that scrutiny.

This piece sets out the practical differences between the two technologies, what each is suited for in a financial services context, and the compliance considerations that come with each.

What a Chatbot Actually Is

A chatbot is a rule-based or intent-matching system. It works by identifying keywords or phrases in a customer's message and returning a pre-defined response. More sophisticated versions use natural language processing (NLP) to handle variations in phrasing, but the underlying logic is the same: the system matches input to a pre-written answer.

Chatbots are deterministic. Given the same input, they will always produce the same output. That predictability is both their strength and their limitation. According to IBM's research on conversational AI, well-scoped chatbot deployments can deflect 40 to 60 per cent of routine inbound contact volume, reducing cost-per-contact significantly. The operative word is "well-scoped."
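The intent-matching logic described above can be sketched in a few lines. This is a minimal illustration of the mechanism, not any vendor's implementation; the intents, keywords, and responses are hypothetical placeholders.

```python
# Minimal sketch of deterministic intent matching: keywords map to
# pre-approved responses, and anything out of scope falls through to a human.
# All intent names, keywords, and replies below are illustrative.

INTENTS = {
    "balance": (["balance", "how much"],
                "You can check your balance in the app under Accounts."),
    "hours":   (["opening hours", "open", "closed"],
                "Branches are open 9am to 5pm, Monday to Friday."),
}

FALLBACK = "Sorry, I can't help with that. Transferring you to an advisor."

def respond(message: str) -> str:
    text = message.lower()
    for keywords, reply in INTENTS.values():
        if any(k in text for k in keywords):
            return reply   # same input always yields the same pre-approved answer
    return FALLBACK        # out-of-scope queries escalate rather than improvise
```

The determinism is visible in the structure: there is no reasoning step, only lookup, which is why the response content itself can be pre-approved by compliance.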

What Chatbots Do Well

  • Answering frequently asked questions with consistent, pre-approved responses
  • Routing customers to the right department or agent
  • Collecting information before a human agent takes over (name, account number, query type)
  • Handling high-volume, low-complexity interactions at scale

Where Chatbots Fall Short

  • They cannot handle queries that fall outside their pre-defined scope
  • They cannot adapt responses based on context, customer history, or changing circumstances
  • They cannot take actions; they can only provide information
  • They frequently frustrate customers when queries are even slightly outside the expected pattern

For financial services firms, chatbots are a reasonable solution for narrow, well-defined use cases. They are not a solution for complex customer journeys, and deploying them as such is one of the most common and costly AI missteps in the sector. The use cases for AI-powered contact centres go well beyond FAQ deflection.

What an AI Agent Actually Is

An AI agent is a fundamentally different category of technology. Rather than matching inputs to pre-defined outputs, an AI agent reasons about a situation, determines what action to take, and executes that action, often across multiple systems simultaneously.

AI agents are built on large language models (LLMs) and designed to handle open-ended tasks. They can access customer data, interpret context, make decisions, and take actions such as updating records, processing requests, or escalating based on their assessment of the situation. The defining characteristic is autonomy: an AI agent does not just answer questions; it acts.
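That reason-decide-act loop can be sketched schematically. Here a simple stub stands in for the LLM planning step, and the tool names, boundary checks, and audit log are assumptions for illustration, not a specific platform's API.

```python
# Toy sketch of an agent's reason-decide-act loop. The plan() function is a
# deterministic stub standing in for LLM reasoning; tools, boundary rules,
# and the audit log are illustrative assumptions.

AUDIT_LOG = []  # every decision is recorded, supporting later explainability

def tool_update_address(customer, new_address):
    customer["address"] = new_address          # the agent acts on records
    return "address updated"

def tool_escalate(customer, reason):
    return f"escalated to human: {reason}"

TOOLS = {"update_address": tool_update_address, "escalate": tool_escalate}

def plan(query, customer):
    """Stub for the reasoning step: choose a tool and its argument."""
    if "address" in query.lower() and customer["verified"]:
        return ("update_address", query.split("to ", 1)[1])
    return ("escalate", "outside defined boundaries or unverified customer")

def handle(query, customer):
    tool_name, arg = plan(query, customer)      # reason and decide
    outcome = TOOLS[tool_name](customer, arg)   # act, possibly changing records
    AUDIT_LOG.append({"query": query, "tool": tool_name, "outcome": outcome})
    return outcome
```

The structural difference from the chatbot is the middle step: the system chooses an action and executes it against customer records, rather than returning a canned answer, which is why the governance burden is so much heavier.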

The Spectrum of AI Agent Capability

AI agents exist on a spectrum from assisted to fully autonomous. Understanding where a deployment sits on that spectrum is essential for scoping the governance framework correctly.

| Agent Type | Autonomy Level | Example in FS Context |
| --- | --- | --- |
| Copilot / assistant | Low: suggests actions, human confirms | Recommends next-best response to a human agent |
| Semi-autonomous agent | Medium: acts within defined boundaries, escalates edge cases | Processes standard account queries, flags complex ones |
| Autonomous agent | High: acts independently across full scope | Handles end-to-end customer journeys without human involvement |

Most financial services deployments in 2026 sit in the semi-autonomous category: capable enough to handle meaningful customer interactions, but with defined boundaries and human oversight for higher-risk decisions. Gartner projects that by 2029, agentic AI will autonomously resolve at least 80 per cent of common customer service issues without human intervention, up from less than 10 per cent in 2023.

What AI agents can do that chatbots cannot:

  • Understand context and adapt responses based on customer history and circumstances
  • Handle multi-turn conversations where the customer's need evolves mid-interaction
  • Take actions: update records, process requests, initiate workflows
  • Detect vulnerability signals and escalate appropriately
  • Operate across voice and digital channels

The Compliance Implications: Why the Distinction Matters for FCA-Regulated Firms

This is where the chatbot vs. AI agent distinction becomes a compliance question, not just a technology one. The FCA's approach to AI in financial services has become progressively more structured. Its AI and Machine Learning discussion paper (DP5/22) and subsequent engagement through the AI Lab have made clear that the regulator's concern is not with AI per se, but with accountability gaps: who is responsible when an automated system causes harm to a customer?

Chatbots: Lower Complexity, Narrower Risk

Because chatbots operate within pre-defined rules and cannot take actions, their compliance footprint is relatively contained. The main risks are:

  • Providing incorrect or outdated information (mitigated by regular content review)
  • Failing to escalate vulnerable customers (mitigated by defined escalation triggers)
  • Creating a poor customer experience that breaches the consumer support outcome under Consumer Duty

These are manageable risks with straightforward mitigations. Chatbot deployments typically require less governance overhead than AI agent deployments, which is precisely why they are the right starting point for firms new to AI in the contact centre.

AI Agents: Greater Capability, Greater Accountability

AI agents introduce a different compliance profile. Because they make decisions and take actions, the governance requirements are significantly more demanding.

Key Compliance Considerations for AI Agent Deployments

  • SM&CR accountability: a named Senior Manager must own the AI agent's risk. Because the agent acts autonomously, the accountability question is sharper than for a passive chatbot. The FCA's Senior Managers and Certification Regime does not create carve-outs for automated decision-making.
  • Explainability: the FCA expects firms to be able to explain how decisions affecting customers were made. AI agents must be deployed on platforms that provide decision logging and audit trails.
  • Consumer Duty, consumer support outcome: an AI agent that takes the wrong action, whether processing an incorrect instruction, failing to identify a vulnerable customer, or providing misleading product information, creates a direct Consumer Duty breach, not just a service failure.
  • Agentic AI oversight: the FCA's Innovation and Technology team has signalled that autonomous AI systems will face increasing scrutiny. Firms deploying fully autonomous agents should treat this as a near-term regulatory risk, not a future consideration.

The governance rule of thumb: the more autonomous the AI, the more robust the governance framework needs to be. A chatbot needs content governance. An AI agent needs accountability governance.
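To make the explainability requirement concrete, here is one possible shape for the decision record an AI agent platform might retain. The field names are assumptions chosen for illustration, not an FCA-mandated schema.

```python
# Illustrative sketch of a per-decision audit record supporting the
# explainability expectation described above. Field names are assumptions,
# not a regulatory standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    customer_ref: str        # pseudonymised customer identifier
    model_version: str       # which model/prompt version made the decision
    inputs_summary: str      # what the agent saw when deciding
    action_taken: str        # what it actually did
    rationale: str           # the agent's stated reasoning for the action
    autonomy_level: str      # copilot / semi-autonomous / autonomous
    human_reviewed: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

A record like this, written at the moment of action rather than reconstructed afterwards, is what lets a named Senior Manager answer the question the regulator will ask: what did the system do, and why.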

Which Technology Is Right for Your Firm?

The honest answer is that most financial services firms need both, deployed for different use cases, with governance frameworks matched to the risk level of each.

Start with a Chatbot If

  • Your primary need is handling high-volume, low-complexity queries (FAQs, account balance, branch hours)
  • You want to reduce inbound call volume without significant governance overhead
  • You are new to AI in the contact centre and want to build confidence before deploying more autonomous systems

Move to an AI Agent When

  • You need to handle complex, multi-turn customer journeys end-to-end
  • You want AI that can take actions, not just provide information
  • You have the governance infrastructure in place to manage a more autonomous system
  • Your contact centre handles regulated activities where outcome quality, not just query deflection, is the measure of success (and you have, or are building, the AI governance infrastructure to support it)

The Sequencing That Works in Practice

The firms getting this right are not choosing between chatbots and AI agents. They are sequencing them deliberately:

  1. Deploy a chatbot for FAQ deflection and routing, building confidence and baseline compliance infrastructure
  2. Use the chatbot interaction data to identify the use cases where AI agents would add the most value
  3. Deploy semi-autonomous AI agents on those use cases with robust governance from day one
  4. Expand agent autonomy incrementally as the compliance track record builds

This approach avoids the two most common mistakes: deploying a chatbot when an AI agent is needed (and wondering why it does not solve the problem), or deploying a fully autonomous agent without the governance infrastructure to manage it safely.

The business case in plain terms: a well-scoped chatbot typically pays back within six to twelve months through call deflection alone. An AI agent, deployed correctly, compounds that return by reducing average handling time, improving first-contact resolution, and cutting the cost of compliance failures. The governance investment is not a cost to minimise; it is what makes the return sustainable.


Fortay Connect helps UK financial services firms select and deploy the right AI technology for their contact centre, whether that is a chatbot, an AI agent, or a combination of both. Contact us to discuss your requirements and we will help you build the business case and governance framework to get it right.

FAQs

1. What is the difference between a chatbot and an AI agent?

A chatbot matches customer inputs to pre-written responses using rules or intent-matching. An AI agent reasons about a situation, decides what action to take, and executes it across systems. The key distinction is autonomy: chatbots inform, AI agents act.

2. Do AI agents require more compliance oversight than chatbots under FCA rules?

Yes. Because AI agents make decisions and take actions autonomously, they trigger sharper accountability requirements under SM&CR, Consumer Duty, and FCA expectations around explainability. A named Senior Manager must own the AI agent's risk.

3. When should a financial services firm deploy an AI agent instead of a chatbot?

When the use case requires taking actions (updating records, processing requests), handling complex multi-turn conversations, or delivering regulated outcomes end-to-end. Chatbots are suited to high-volume, low-complexity queries; AI agents handle the rest.

4. What is the FCA's position on agentic AI in financial services?

The FCA has signalled increasing scrutiny of autonomous AI systems. Its AI and Machine Learning discussion paper (DP5/22) and AI Lab work focus on accountability gaps. Firms deploying autonomous agents should build governance infrastructure ahead of that regulatory focus.

5. Can a financial services firm use both a chatbot and an AI agent?

Yes, and most should. The recommended approach is to deploy a chatbot first for FAQ deflection and routing, then use that interaction data to identify where AI agents add most value, before deploying agents with robust governance in place.