Hearing What Your Customers Are Really Saying

A Deep Dive into Amazon Connect Contact Lens

Most contact centres are flying partially blind. They know their call volumes. They know their average handle times and service levels. They might know their CSAT scores, at least for the fraction of customers who respond to surveys. But what's actually happening inside those conversations? What are customers frustrated about? What are agents struggling to answer? What compliance language is being missed? What patterns are emerging in contact reasons before they become problems?

For the vast majority of organisations, the answer to those questions is: we don't really know. Supervisors listen to a sample of calls. QA teams review a small percentage of interactions. The rest, the overwhelming majority of customer conversations, pass through the contact centre unanalysed, their insights lost.

Contact Lens for Amazon Connect was built to change this. It's Amazon Connect's native analytics and intelligence engine, and it transforms every customer interaction, regardless of channel, from a closed conversation into a searchable, analysable, actionable data point. This post explains what it does, how it works, and why it matters.

What Contact Lens Actually Is

Contact Lens is a set of AI and machine learning capabilities built natively into Amazon Connect. It operates on voice calls, chats, and other contact types, and it works in both real time, during an active interaction, and post-contact, after the interaction ends. It doesn't require a separate tool, a separate login, or a separate data pipeline. It's part of the Amazon Connect platform, and its outputs flow directly into the same dashboards, workflows, and agent interfaces that your team already uses.

The core functions of Contact Lens are real-time transcription, sentiment analysis, keyword and phrase detection, automated categorisation, post-contact summarisation, and agent performance evaluation. Each of these is worth understanding individually, but their real power comes from how they work together as a system.

Real-Time Transcription: The Foundation Layer

Everything Contact Lens does is built on real-time transcription: the conversion of voice conversations into searchable, analysable text as the conversation happens. Amazon Connect's transcription engine is built on AWS's speech recognition technology, supports multiple languages, and handles the acoustic variability of real contact centre environments (different accents, background noise, telephony quality variation) with high accuracy.

For chat interactions, transcription is native: the text is already there. For voice, real-time transcription creates a live text record of the conversation that Contact Lens can then analyse for sentiment, keywords, compliance phrases, and other signals while the call is still in progress.

The transcript is also retained post-contact, creating a searchable record of every interaction. This has obvious compliance and quality assurance value, but it's also the raw material for everything else Contact Lens does.
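To make this concrete, here is a minimal sketch (Python with boto3) of pulling the live transcript segments for an in-progress, Contact Lens-enabled call via the connect-contact-lens API. The instance and contact IDs are placeholders, and the segment field names are read defensively; check them against your own responses.

```python
# A minimal sketch, assuming Contact Lens real-time analysis has been enabled for the
# call in the contact flow. Instance and contact IDs are placeholders for your own values.
import boto3

INSTANCE_ID = "your-connect-instance-id"  # placeholder
CONTACT_ID = "an-active-contact-id"       # placeholder

client = boto3.client("connect-contact-lens")

def fetch_live_transcript(instance_id, contact_id):
    """Yield transcript segments for an in-progress, Contact Lens-enabled contact."""
    next_token = None
    while True:
        kwargs = {"InstanceId": instance_id, "ContactId": contact_id}
        if next_token:
            kwargs["NextToken"] = next_token
        response = client.list_realtime_contact_analysis_segments(**kwargs)
        for segment in response.get("Segments", []):
            transcript = segment.get("Transcript")
            if transcript:
                yield transcript
        next_token = response.get("NextToken")
        if not next_token:
            break

# Print who said what, with the turn-level sentiment Contact Lens assigned
for turn in fetch_live_transcript(INSTANCE_ID, CONTACT_ID):
    print(turn.get("ParticipantRole"), turn.get("Sentiment"), turn.get("Content"))
```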

Sentiment Analysis: Reading the Emotional Register of Every Conversation

Sentiment analysis has been a feature of Contact Lens for some time, but the 2024 enhancements significantly deepened its capability. The current version analyses sentiment at multiple levels of granularity, not just a binary positive/negative classification, but a nuanced, turn-by-turn assessment of emotional trajectory throughout a conversation.

Tonal Detection

New tonal analysis capabilities in Contact Lens detect not just what is being said but how it is being said: the acoustic properties of speech that signal frustration, distress, confusion, or satisfaction. This goes beyond keyword detection (a customer saying "I'm frustrated") to identifying the emotional tone of an utterance even when the words alone don't make it explicit.

In practice, this means Contact Lens can identify a customer becoming increasingly frustrated even when they're still being polite, a signal that, if detected early enough, allows a supervisor to intervene or an agent to shift their approach before the conversation deteriorates.
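As a rough illustration of how that early-warning signal could be consumed, the sketch below flags a call whose recent customer turns trend negative. The scoring, window size, and threshold are our own illustrative choices, not Contact Lens settings; the turn shape mirrors the real-time analysis segments shown earlier.

```python
# A minimal sketch of acting on turn-level sentiment: flag a call whose recent customer
# turns trend negative. Scoring, window, and threshold are illustrative choices only.
SENTIMENT_SCORE = {"POSITIVE": 1, "NEUTRAL": 0, "NEGATIVE": -1}

def is_deteriorating(turns, window=4, threshold=-0.5):
    """Return True when the rolling average sentiment of recent customer turns is low."""
    scores = [
        SENTIMENT_SCORE.get(t.get("Sentiment"), 0)
        for t in turns
        if t.get("ParticipantRole") == "CUSTOMER"
    ]
    if len(scores) < window:
        return False
    return sum(scores[-window:]) / window <= threshold

# Hypothetical turns, shaped like the segments returned by the real-time analysis API
turns = [
    {"ParticipantRole": "CUSTOMER", "Sentiment": "NEUTRAL"},
    {"ParticipantRole": "AGENT", "Sentiment": "NEUTRAL"},
    {"ParticipantRole": "CUSTOMER", "Sentiment": "NEGATIVE"},
    {"ParticipantRole": "CUSTOMER", "Sentiment": "NEGATIVE"},
    {"ParticipantRole": "CUSTOMER", "Sentiment": "NEGATIVE"},
    {"ParticipantRole": "CUSTOMER", "Sentiment": "NEGATIVE"},
]
if is_deteriorating(turns):
    print("Sentiment trending negative: consider a supervisor whisper or barge-in.")
```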

Granular Emotion Recognition

Contact Lens now supports granular emotion categories beyond the basic positive/negative spectrum. This richer emotional vocabulary allows for more precise categorisation of customer sentiment, distinguishing between, for example, a customer who is confused versus one who is angry, or one who is satisfied but disengaged versus one who is genuinely pleased with the resolution.

For QA and coaching purposes, this granularity enables much more targeted agent feedback. Rather than "this call had negative sentiment", supervisors can identify specifically what type of emotional dynamic occurred and at what point in the conversation it emerged.

Generative AI-Powered Post-Contact Summaries

Contact Lens's generative AI-powered post-contact summaries: concise, structured, actionable, without requiring agents to read transcripts or listen back to calls. (Image © Amazon Web Services)

Post-contact summarisation is one of the most immediately impactful Contact Lens features in terms of day-to-day operational value. After every interaction, Contact Lens uses generative AI to produce a concise, structured summary of what was discussed, what the customer's issue was, what was agreed, and what follow-up actions are required.

The summary is available in the agent's after-call work view and in supervisor dashboards, and it's generated automatically. The agent doesn't need to write it. This has several significant consequences.

First, after-call work time drops substantially. Agents no longer spend minutes after each interaction typing notes or filling in wrap-up forms. The summary is already there, accurate and structured. Neo Financial reported saving agents an average of 90 seconds per interaction using this feature, which, at scale across hundreds of agents and thousands of daily contacts, represents a very significant efficiency gain.

Second, quality and consistency of post-contact documentation improves. Human-written notes vary in quality, detail, and accuracy. AI-generated summaries are consistent, complete, and drawn directly from the transcript, reducing the risk of important follow-up actions being missed or poorly documented.

Third, supervisors and team leaders can review interactions more efficiently. Instead of listening to recordings or reading full transcripts, they can scan AI-generated summaries to identify interactions that warrant closer attention. Gamma reported an 8% reduction in average handle time from post-call summarisation alone. Priceline noted that it allows agents to focus on the most important thing: the customer.
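As an illustration of how these summaries can feed downstream systems, here is a minimal sketch that reads a Contact Lens post-call analysis file from the Connect instance's S3 bucket and extracts the generated summary, for example to write into a CRM record. The bucket name, object key, and the PostContactSummary field name are assumptions to verify against your own output files.

```python
# A minimal sketch, assuming the analysis file has already been written to the Connect
# instance's S3 bucket. The bucket name, object key, and the "PostContactSummary" field
# name are assumptions to verify against your own Contact Lens output.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "your-connect-analytics-bucket"              # placeholder
KEY = "Analysis/Voice/your-contact-id_analysis.json"  # placeholder

def load_contact_summary(bucket, key):
    """Return the generative post-contact summary from a Contact Lens output file."""
    obj = s3.get_object(Bucket=bucket, Key=key)
    analysis = json.loads(obj["Body"].read())
    summary = analysis.get("PostContactSummary", {})  # assumed key; check your files
    return summary.get("Content")

# e.g. write this into the CRM record for the contact
print(load_contact_summary(BUCKET, KEY))
```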

Real-world results from Contact Lens post-contact summaries across Neo Financial, Priceline, and Gamma. (Image © Amazon Web Services)

Automated Agent Performance Evaluations

Traditionally, agent quality evaluations are conducted by supervisors or QA specialists who manually review a sample of recorded interactions and complete evaluation forms. This process is time-consuming, inherently limited in coverage (most organisations can evaluate only a small fraction of total contacts), and subject to evaluator inconsistency.

Contact Lens's generative AI-powered automated performance evaluations address this directly. The system uses generative AI to assess interactions against customisable evaluation criteria, answering the same questions a human evaluator would answer, with references to specific points in the transcript that support each assessment.

This means every interaction can be evaluated, not just a sample. It means evaluation results are consistent across agents and over time. And it means supervisors can spend their review time on the interactions and agents that genuinely need attention, rather than working through a randomly sampled queue.

The evaluation framework is customisable. Organisations define the criteria that matter for their context, whether that's compliance language, call control behaviours, empathy markers, resolution accuracy, or any other dimension of quality. The AI applies those criteria consistently across every evaluated contact.
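For teams that want evaluation results programmatically rather than only in the Connect console, the sketch below uses the Amazon Connect API to list the evaluations recorded against a contact. The response field names are read defensively and should be confirmed against your own API output.

```python
# A minimal sketch using the Amazon Connect API (boto3 "connect" client). Response field
# names are read defensively and should be checked against your own output.
import boto3

connect = boto3.client("connect")

INSTANCE_ID = "your-connect-instance-id"  # placeholder
CONTACT_ID = "a-completed-contact-id"     # placeholder

response = connect.list_contact_evaluations(
    InstanceId=INSTANCE_ID,
    ContactId=CONTACT_ID,
)
for evaluation in response.get("EvaluationSummaryList", []):
    print(
        evaluation.get("EvaluationFormTitle"),
        evaluation.get("Status"),
        evaluation.get("Score", {}).get("Percentage"),
    )
```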

Semantic Match for Categorisation

Contact categorisation, understanding why customers are contacting you and routing or tagging interactions accordingly, has traditionally relied on keyword rules. If a customer says "cancel" or "cancellation", tag the contact as a cancellation enquiry. This approach has a fundamental limitation: customers don't always use the words you've anticipated.

Contact Lens's semantic match for categorisation uses natural language understanding to categorise contacts based on meaning rather than literal keyword matching. A customer asking "how do I get out of my contract" and a customer saying "I want to cancel" will both be categorised correctly as cancellation-related, even though the specific words are different.

This dramatically improves the accuracy and coverage of automated categorisation, which in turn improves the quality of contact reason reporting, routing logic that depends on contact type, and any downstream workflows triggered by contact categorisation.
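One practical way to use the resulting categories is contact-reason reporting. The sketch below counts matched categories across a folder of downloaded post-call analysis files; the Categories and MatchedCategories keys reflect the Contact Lens output format as we understand it and should be verified against your own files.

```python
# A minimal sketch over post-call analysis files downloaded to a local folder. The
# "Categories" / "MatchedCategories" keys are assumptions to verify against your files.
import json
from collections import Counter
from pathlib import Path

def count_matched_categories(analysis_dir):
    """Count how often each Contact Lens category matched across a set of contacts."""
    counts = Counter()
    for path in Path(analysis_dir).glob("*.json"):
        analysis = json.loads(path.read_text())
        matched = analysis.get("Categories", {}).get("MatchedCategories", [])
        counts.update(matched)
    return counts

# Top contact reasons for the period, e.g. for a weekly report
print(count_matched_categories("./contact-lens-output").most_common(10))
```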

Real-Time Alerts and Supervisor Monitoring

Contact Lens operates in real time, meaning it can identify patterns and conditions during an active conversation and surface them to supervisors immediately. Configurable real-time alerts can notify supervisors when specific keywords appear, when sentiment drops below a threshold, when a call has been running longer than expected, or when a combination of signals suggests a conversation that warrants intervention.

This transforms supervisor monitoring from a reactive, retrospective activity into a proactive, live capability. Instead of reviewing yesterday's difficult calls in today's QA session, supervisors can see today's difficult calls as they happen and intervene (by connecting to the call, prompting the agent via whisper, or transferring the contact to a specialist) before the customer experience deteriorates.
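As a sketch of how such an alert might reach a supervisor, the following Lambda handler assumes a Contact Lens rule configured with a "generate an EventBridge event" action and forwards a notification to an SNS topic. The event detail field names and the topic ARN are illustrative assumptions, not a documented schema; inspect a real event from your rule before relying on them.

```python
# A minimal sketch of a Lambda handler for a real-time Contact Lens alert. The detail
# field names and the SNS topic ARN are illustrative assumptions.
import json
import boto3

sns = boto3.client("sns")
SUPERVISOR_TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:supervisor-alerts"  # placeholder

def handler(event, context):
    detail = event.get("detail", {})
    contact_id = detail.get("contactId")  # assumed field name; inspect a real event
    rule_name = detail.get("ruleName")    # assumed field name; inspect a real event
    message = f"Contact {contact_id} matched real-time Contact Lens rule: {rule_name}"
    sns.publish(
        TopicArn=SUPERVISOR_TOPIC_ARN,
        Subject="Contact Lens real-time alert",
        Message=message,
    )
    return {"statusCode": 200, "body": json.dumps({"notified": True})}
```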

Contact Lens's 2024 enhancements: automated performance evaluations, improved sentiment analysis with tonal detection, and semantic match for categorisation. (Image © Amazon Web Services)

The Compliance Use Case: Why This Matters in Regulated Industries

For organisations operating in regulated industries such as financial services, insurance, and healthcare, Contact Lens has a particularly valuable compliance application. Regulatory requirements often mandate that specific disclosures, warnings, or language be included in customer conversations. Manually auditing compliance with these requirements across all interactions is not feasible at scale.

Contact Lens can automatically monitor every conversation for required compliance language, flag interactions where it was absent, and generate reports that provide auditable evidence of compliance, or highlight areas of non-compliance for remediation. This reduces compliance risk, supports audit readiness, and gives compliance teams visibility across the full interaction volume rather than a sampled subset.
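To illustrate the idea, here is a minimal offline sketch that checks the agent side of a Contact Lens transcript for a required disclosure phrase. In production this is better expressed as a Contact Lens rule or category; the transcript field names and the disclosure wording here are assumptions to verify against your own output files and compliance requirements.

```python
# A minimal offline sketch. The "Transcript", "ParticipantId", and "Content" keys reflect
# the Contact Lens output file format as we understand it; the disclosure phrase is an
# example only.
import json

REQUIRED_DISCLOSURE = "calls may be recorded for training and compliance purposes"

def disclosure_present(analysis_path):
    """Return True if the agent said the required disclosure at some point in the call."""
    with open(analysis_path) as f:
        analysis = json.load(f)
    agent_text = " ".join(
        turn.get("Content", "").lower()
        for turn in analysis.get("Transcript", [])
        if turn.get("ParticipantId") == "AGENT"
    )
    return REQUIRED_DISCLOSURE in agent_text

if not disclosure_present("your-contact-id_analysis.json"):
    print("Flag for remediation: required disclosure not detected.")
```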

Getting the Most From Contact Lens

Contact Lens is included as part of Amazon Connect. It's not a separate add-on requiring additional procurement. But having access to a capability and fully utilising it are different things. In our experience deploying Contact Lens for New Zealand organisations, the difference between basic activation and genuinely extracting its value lies in three areas.

Configuration: Defining the right keyword categories, sentiment thresholds, compliance rules, and evaluation criteria for your specific context. Generic defaults will give you some insight. Properly configured rules will give you transformative insight.

Integration with workflows: Contact Lens outputs become dramatically more valuable when they feed into operational workflows that trigger supervisor alerts, populate CRM records, drive coaching conversations, and inform routing decisions.

Analytics cadence: Building a regular practice of reviewing Contact Lens data (weekly or monthly reports on contact reasons, sentiment trends, compliance rates, and agent performance distributions) is how the insight translates into continuous improvement rather than a one-time audit.

At Easycoder, we help organisations configure Contact Lens properly from the start, integrate its outputs into their operational workflows, and build the analytics habits that make it a living, improving part of their contact centre operation.

If you're running Amazon Connect and not fully utilising Contact Lens, you're leaving significant operational and customer experience value on the table. We'd love to help you unlock it.
