In our last instalment of this mini-series, we’re going to focus on Agent Evaluations. Within Amazon Connect, Agent Supervisors can search for a contact record, listen to audio recordings, and choose an evaluation form to help score their Agent’s performance. Additionally, Contact Lens provides call summaries, transcriptions, and Agent-Customer sentiment analysis for further insight.
This enables the Supervisor to evaluate any Agent-Customer interaction and understand, for any given contact, how enquiries are handled and issues resolved.
Evaluation feedback can then be used for training purposes to help improve Agent knowledge and performance, and ultimately ensure the best Customer experience.
As part of CloudInteract’s suite of Amazon Connect reports, we visualise a range of metrics to help you understand and track:
· The number of evaluations performed, and how they trend over time
· Evaluation scores, and how they trend over time
· The overall percentage of contacts that were evaluated
· The number of evaluations completed per Agent and/or Queue
· Whether Supervisors are hitting volume targets, e.g. XX evaluations performed per Agent per month
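As a rough illustration of how the metrics above are derived, the sketch below computes evaluation coverage, per-Agent counts, and an average score from contact and evaluation records. The record shape and field names are simplified assumptions for illustration, not the actual Amazon Connect data model:

```python
from collections import Counter

# Hypothetical, simplified records -- not the real Amazon Connect schema.
contacts = [
    {"contact_id": "c1", "agent": "alice", "queue": "sales"},
    {"contact_id": "c2", "agent": "alice", "queue": "sales"},
    {"contact_id": "c3", "agent": "bob", "queue": "support"},
    {"contact_id": "c4", "agent": "bob", "queue": "support"},
]
evaluations = [
    {"contact_id": "c1", "agent": "alice", "score": 82},
    {"contact_id": "c3", "agent": "bob", "score": 74},
]

def evaluation_coverage(contacts, evaluations):
    """Overall percentage of contacts that were evaluated."""
    evaluated = {e["contact_id"] for e in evaluations}
    return 100 * len(evaluated) / len(contacts)

def evaluations_per_agent(evaluations):
    """Number of completed evaluations per Agent."""
    return Counter(e["agent"] for e in evaluations)

def average_score(evaluations):
    """Mean evaluation score across all completed evaluations."""
    return sum(e["score"] for e in evaluations) / len(evaluations)

print(evaluation_coverage(contacts, evaluations))   # 50.0
print(dict(evaluations_per_agent(evaluations)))     # {'alice': 1, 'bob': 1}
print(average_score(evaluations))                   # 78.0
```

Grouping by Queue instead of Agent, or bucketing evaluations by month, follows the same pattern and gives the trend-over-time views.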
Contact Centre Supervisors continually review calls to learn how to improve the Customer experience. Agent Evaluation scores can reveal:
· Quality and depth of training
· Product knowledge
· Clarity of speech and language
· Caller expectation management
· Professional manner and attitude
· Concentration and customer focus
Using Sentiment or CSAT dashboards as a starting point for Agent Evaluation enables the Supervisor to contextualise their analysis and focus on clear hypotheses, such as:
· Agents have been well trained in learning about a new product
· New starters take approx. 4 weeks to achieve consistent CSAT scores
· There is a knowledge shortfall in XX queue
· Agent X has a great method for calming frustrated callers
Not only do evaluations ensure that Supervisors are regularly reviewing their Agents, but they also provide insight into the Supervisor’s own ability to drive their team forward. If scores are consistently poor, has additional training been requested? Has negative behaviour been addressed?
Agents can sometimes request a re-score. Frequently disputed scores could indicate that a Supervisor is showing bias, a lack of understanding, or unrealistic expectations. Perhaps the question scores are incorrectly weighted.
Do the evaluators need further training? Do criteria need to be broken out to pinpoint the disputed questions and clarify what’s being evaluated, so there’s no ambiguity between what the evaluator is scoring against and what the Agent thinks they’re being scored on?
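To make the weighting point concrete, here is a small sketch of how question weights can skew an overall evaluation score. The form, question names, scores, and weights are all invented for illustration:

```python
# Hypothetical evaluation form: each question scored 0-100.
answers = {
    "greeting": 90,
    "product_knowledge": 60,
    "resolution": 80,
}

def weighted_score(answers, weights):
    """Weighted average of question scores; weights need not sum to 1."""
    total_weight = sum(weights[q] for q in answers)
    return sum(answers[q] * weights[q] for q in answers) / total_weight

# Evenly weighted: all three questions count the same.
even = {"greeting": 1, "product_knowledge": 1, "resolution": 1}
# Skewed weighting over-emphasises the greeting.
skewed = {"greeting": 5, "product_knowledge": 1, "resolution": 1}

print(round(weighted_score(answers, even), 1))    # 76.7
print(round(weighted_score(answers, skewed), 1))  # 84.3
```

The same answers produce noticeably different overall scores under the two weightings, which is exactly the kind of mismatch a disputed re-score can surface.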
However Amazon Connect’s Agent Evaluations are used, there is no doubt that they bring opportunities to drive value for your business. Here’s a reminder:
· Highlights areas for further training
· Pinpoints knowledge deficits
· Reveals trending topics and customer expectations
· Questions whether tools are up to scratch, e.g. the knowledge base
· Identifies ‘champions’; live examples of troubleshooting, demonstrating a positive attitude, and remaining calm under pressure