In 2024, agentic AI was barely mentioned in boardroom conversations about contact-centre technology. In 2025, it is the centrepiece of every vendor roadmap and most of the RFPs I review.
BCG's Build for the Future 2025 study quantifies what that shift looks like in value terms: agents already account for 17% of total AI value generated by the companies surveyed. By 2028, that figure is expected to reach 29% — a rise of roughly 70% in three years. And when companies were asked to name their top five priority functions for agentic AI deployment, customer service came first, cited by 50% of respondents.
Source: BCG Build for the Future 2025, n = 1,250 senior executives.
That is not a surprise to anyone who has spent time in contact-centre technology. The contact centre is, structurally, an ideal environment for agentic AI: high-volume, repetitive, data-rich, with clear success metrics, and meaningful consequences when things go wrong. It is also an environment where the gap between vendor demos and production reality is wider than almost anywhere else in enterprise software.
I want to talk about what agentic readiness actually means in this context. Not the slide. The reality.
What an agentic contact centre actually looks like
The vendor version: a customer contacts the company, an AI agent handles the entire interaction — authenticates the customer, understands intent, retrieves information from multiple systems, resolves the issue, and closes the interaction. No human involved unless the customer requests one.
That version exists. I have seen it work. I have also seen it fail spectacularly in ways that cost companies customers and regulatory goodwill in the same afternoon.
The production version — the one that actually scales — looks different.
An agentic contact centre does not replace human agents wholesale. It redesigns how work is distributed between human and digital workers. The AI agent handles the structured, predictable, high-volume interactions: balance enquiries, policy lookups, appointment scheduling, status updates, standard complaints with clear resolution paths. The human agent handles the exceptions, the emotionally complex interactions, the situations requiring judgement that the AI cannot yet model reliably.
What changes is the proportion. In a well-deployed agentic contact centre, the AI is handling 60-70% of total interaction volume — not just deflecting calls before they reach an agent, but completing them. The human agents are handling interactions that genuinely require human capability.
This is not science fiction. Genesys, Cognigy, Amazon Connect, and several specialist vendors all have production deployments at this scale. What differentiates the deployments that succeed from the ones that struggle is not the technology.
The three prerequisites
BCG is clear about what must be in place before agentic AI can deliver value. I have translated each through the CCaaS lens.
Prerequisite one: Strong data foundations.
Every agentic AI system in a contact centre needs access to data — customer data, product data, policy data, transaction history — in real time, reliably, and with appropriate access controls. This sounds obvious. It is rarely achieved.
Most contact centres I encounter have customer data distributed across three to seven systems of record that were not designed to interoperate. The CRM has profile data. The policy admin system has coverage details. The billing system has payment history. The case management system has interaction history. None of them were built with an API-first architecture in mind.
Building an agentic contact centre on top of this infrastructure is possible — but it requires a data integration layer that most organisations have not invested in. The AI agent is only as capable as the data it can access in the moment.
Before you ask "which vendor should we use for our AI agent?", ask "can our systems provide the right data to the right place in under two seconds?" If the answer is no, the agent will fail at the point of interaction, and that failure will be visible to the customer.
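The two-second question can be made concrete. The sketch below is a minimal, hypothetical Python illustration of the underlying pattern — fan out to each system of record in parallel under a shared latency budget, then report what came back in time. The system names and fetch stubs are placeholders for this example, not any vendor's API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as FuturesTimeout

LATENCY_BUDGET_S = 2.0  # the agent's per-interaction data budget

def fetch_profile(customer_id):   # stub: would call the CRM
    return {"name": "..."}

def fetch_coverage(customer_id):  # stub: would call policy admin
    return {"policy": "..."}

def fetch_payments(customer_id):  # stub: would call billing
    return {"last_payment": "..."}

def gather_context(customer_id):
    """Fan out to every system of record; return whatever arrives in budget."""
    sources = {
        "profile": fetch_profile,
        "coverage": fetch_coverage,
        "payments": fetch_payments,
    }
    context = {}
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = {pool.submit(fn, customer_id): key
                   for key, fn in sources.items()}
        try:
            for future in as_completed(futures, timeout=LATENCY_BUDGET_S):
                context[futures[future]] = future.result()
        except FuturesTimeout:
            pass  # budget exhausted: proceed with partial context
    missing = [key for key in sources if key not in context]
    return context, missing
```

The useful property here is the `missing` list: an agent that knows which data it does not have can degrade gracefully or hand off, rather than failing in front of the customer.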
Prerequisite two: Scaled AI capabilities — specifically, a working conversational AI layer.
Agentic AI in the contact centre is not a first deployment. It is a next step. You need a functioning conversational AI foundation — whether that is a Genesys Bot Flow, a Cognigy AI Agent, or an Amazon Lex integration — before you layer in agentic orchestration.
Organisations that try to skip this step and deploy agentic AI before they have conversational AI working reliably are building on sand. The agent needs to understand intent, manage context across a multi-turn conversation, and hand off gracefully when it reaches the edge of its competence. That capability must exist before you give the agent tools to act.
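That hand-off discipline can be sketched in a few lines. The Python below is illustrative only — the intents, confidence threshold, and data structures are assumptions for the example, not any platform's interface:

```python
from dataclasses import dataclass, field

CONFIDENCE_FLOOR = 0.75  # assumed threshold: below this, hand off rather than guess

# Intents the conversational layer is allowed to resolve on its own
# (illustrative list, not a real intent library).
SUPPORTED_INTENTS = {"balance_enquiry", "status_update", "appointment"}

@dataclass
class Turn:
    utterance: str
    intent: str
    confidence: float

@dataclass
class Conversation:
    turns: list = field(default_factory=list)

    def handle(self, utterance, intent, confidence):
        self.turns.append(Turn(utterance, intent, confidence))
        if confidence < CONFIDENCE_FLOOR or intent not in SUPPORTED_INTENTS:
            return self.hand_off(f"low confidence or unsupported intent: {intent}")
        return {"action": "resolve", "intent": intent}

    def hand_off(self, reason):
        # Graceful hand-off: the human agent receives the full turn
        # history, not a cold transfer.
        return {"action": "hand_off", "reason": reason,
                "transcript": [t.utterance for t in self.turns]}
```

The point of the sketch is the hand-off path: it carries the conversation context with it, which is exactly the capability that must work before the agent is given tools to act.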
Prerequisite three: Clear governance — including for failure.
BCG's research surfaces a number that I think deserves more attention than it gets: 72% of companies already report unmanaged AI-security risks. In a contact centre, an AI agent that behaves incorrectly has an immediate customer-facing impact. It is not a backend process that can be quietly corrected. It is an interaction that a real customer experienced, and which they may share.
Governance for agentic AI in the contact centre means: who approves the intent library? Who reviews interaction logs for anomalies? Who has the authority to pull an agent from production and replace it with a human queue, immediately, if something goes wrong? Who owns the policy for what the agent can and cannot do?
These are not technology questions. They are operational design questions. And most organisations have not answered them before they start deploying.
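One of those operational answers — who can pull the agent from production, and how fast — can be expressed as a simple circuit breaker. A hypothetical sketch; the owner names, states, and queue labels are placeholders, not a product feature:

```python
from enum import Enum

class AgentState(Enum):
    LIVE = "live"
    HUMAN_ONLY = "human_only"  # agent pulled; all traffic to the human queue

class KillSwitch:
    """Operational control: only named owners may pull the agent,
    and every pull is audited."""

    def __init__(self, authorised_owners):
        self.authorised = set(authorised_owners)
        self.state = AgentState.LIVE
        self.audit_log = []

    def pull(self, operator, reason):
        if operator not in self.authorised:
            raise PermissionError(f"{operator} may not pull the agent")
        self.state = AgentState.HUMAN_ONLY
        self.audit_log.append((operator, reason))

    def route(self, interaction):
        queue = "ai_agent" if self.state is AgentState.LIVE else "human_queue"
        return {"interaction": interaction, "queue": queue}
```

The design choice worth noting: routing consults a single state flag, so pulling the agent takes effect on the very next interaction — no redeployment, no change window.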
Where to start, and where not to
The most successful agentic contact-centre deployments I have seen share a common sequencing: they start with the interaction type that has the highest volume, the clearest resolution path, and the lowest consequence for failure.
In insurance, that is usually first-notice-of-loss triage and status updates. In banking, it is balance and transaction enquiries. In telecoms, it is bill explanation and basic technical troubleshooting. In each case, the AI agent can handle the interaction end-to-end without escalation for the majority of cases, the data requirements are well-understood, and a failure — while inconvenient — does not put the customer at financial or safety risk.
Deployments that start with complex, emotionally charged, or high-stakes interaction types — complaints, claims settlements, credit decisions — almost always struggle. Not because the technology is incapable, but because the governance and oversight requirements for those interactions are significantly higher, and most organisations are not yet equipped to meet them.
BCG's recommendation is direct: companies should view agentic AI as the next step in AI implementation, not as the starting point. The prerequisites must be in place. The deployment should target a few high-value, clearly defined workflows, not an enterprise-wide rollout.
The honest risk
I want to close with something that does not usually appear in vendor materials.
Agentic AI in the contact centre creates new risks that traditional automation does not. A static IVR, when it fails, fails in a predictable way — the customer navigates a dead end and reaches an agent. An agentic system, when it fails, can fail in unpredictable ways — it may take an action it should not have taken, or refuse to take one it should, or provide information that is accurate but contextually wrong.
The solution is not to avoid agentic AI. The solution is to build feedback loops — mechanisms to detect when the agent is behaving unexpectedly, and to route those interactions to human review before harm occurs. Future-built organisations, in BCG's framing, are those that design these guardrails in from the beginning, rather than adding them reactively after an incident.
The contact centre is the highest-visibility AI deployment most organisations will ever make. Every interaction with an AI agent is a moment when a customer forms an opinion about the company. Getting it right — in the sense of building something that actually works in production, at scale, over time — requires more rigour than most vendor roadmaps suggest.
That rigour is available. It is not mysterious. But it has to be applied before deployment, not after.
Data in this article draws on BCG's Build for the Future 2025 global study (September 2025), covering 1,250 senior executives across 25 sectors and 68 countries. Courtesy: Boston Consulting Group.