AWS Architecture Blog
Field Notes: Improving Call Center Experiences with Iterative Bot Training Using Amazon Connect and Amazon Lex
This post was co-written by Abdullah Sahin, senior technology architect at Accenture, and Muhammad Qasim, software engineer at Accenture.
Organizations deploying call-center chat bots want to evolve their solutions continuously in response to changing customer demands. When developing a smart chat bot, some requests can be predicted (for example, following a new product launch or a marketing campaign). In other cases this is not possible (for example, after market shifts or natural disasters).
While voice and chat bots are becoming increasingly ubiquitous, keeping them up to date with ever-changing customer demands remains a challenge. A build > deploy > forget approach quickly leads to outdated AI that cannot adapt to dynamic customer requirements.
Call-center solutions that create an ongoing feedback mechanism between incoming calls or chat messages and the chatbot's AI allow for a programmatic approach to predicting and catering to a customer's intent.
This is achieved by doing the following:
- applying natural language processing (NLP) on conversation data
- extracting relevant missed intents
- automating the bot update process
- inserting human feedback at key stages in the process
This post provides a technical overview of one of Accenture’s Advanced Customer Engagement (ACE+) solutions, explaining how it integrates multiple AWS services to continuously and quickly improve chatbots and stay ahead of customer demands.
Call center solution architects and administrators can use this architecture as a starting point for an iterative bot improvement solution, with the goal of increasing call deflection and driving better customer experiences.
Overview of Solution
The goal of the solution is to extract missed intents and utterances from a conversation and present them to the call center agent at the end of the conversation, as part of the after-contact work flow. A simple UI was designed for the agent to select the most relevant missed phrases and forward them to an Analytics/Operations Team for final approval.
Amazon Connect serves as the contact center platform: it handles incoming calls, manages the IVR flows, and escalates to a human agent when needed. Amazon Connect is also used to gather call metadata and call analytics and to handle call center user management. It is the platform from which the other AWS services are called: Amazon Lex, Amazon DynamoDB, and AWS Lambda.
Amazon Lex is the AI service used to build the bot. AWS Lambda serves as the main integration tool and is used to push bot transcripts to DynamoDB, deploy updates to Lex, and populate the agent dashboard used to flag relevant intents missed by the bot. A generic CRM app is used to integrate the agent client and provide a single, integrated dashboard. For example, this addition to the agent's UI, used to review intents, could be implemented as a custom page in Salesforce (Figure 2).
A separate, stand-alone, dashboard is used by an Analytics and Operations Team to approve the new intents, which triggers the bot update process.
Walkthrough
The typical use case for this solution (Figure 4) shows how missing intents in the bot configuration are captured from customer conversations. These intents are then validated and used to automatically build and deploy an updated version of a chatbot. During the process, the following steps are performed:
- Customer intents that were missed by the chatbot are automatically highlighted in the conversation.
- The agent performs a review of the transcript and selects the missed intents that are relevant.
- The selected intents are sent to an Analytics/Ops Team for final approval.
- The operations team validates the new intents and starts the chatbot rebuild process.
During the first call (bottom flow), the bot fails to fulfill the request and the customer is escalated to a live agent. The agent resolves the query and, post call, analyzes the transcript between the chatbot and the customer, identifies conversation parts that the chatbot should have understood, and sends a "missed intent/utterance" report to the Analytics/Ops Team. The team approves and triggers the process that updates the bot.
For the second call, the customer asks the same question. This time, the (trained) bot is able to answer the query and end the conversation.
Ideally, the post-call analysis should be performed, at least in part, by the agent handling the call. Involving the agent in the process is critical for delivering quality results. Any given conversation can have multiple missed intents, some of them irrelevant when looking to generalize a customer’s question.
A call center agent is in the best position to judge what is or is not useful and to mark the missed intents to be used for bot training. This is an important logical triage step. It will occasionally increase the average handling time (AHT), but this should be seen as a time investment with the potential to reduce future call times and increase deflection rates.
One alternative to this setup would be to have a dedicated analytics team review the conversations, offloading this work from the agent. This approach avoids the increase in AHT, but also introduces delay and, possibly, inaccuracies in the feedback loop.
The approval from the Analytics/Ops Team is a sign-off on the agent's work and the trigger for the bot building process.
Prerequisites
The following section focuses on the sequence required to programmatically update intents in existing Lex bots. It assumes an Amazon Connect instance is configured and a Lex bot is already integrated with it. See the Amazon Connect documentation for more information on adding Lex to your Connect flows.
It also does not cover the CRM application, where the conversation transcript is displayed and presented to the agent for intent selection. The implementation details can vary significantly depending on the CRM solution used. Conceptually, most solutions will follow the architecture presented in Figure 1: store the conversation data in a database (DynamoDB here) and expose it through an API (Amazon API Gateway here) to be consumed by the CRM application.
Lex bot update sequence
The core logic for updating the bot is contained in a Lambda function that triggers the Lex update. This adds new utterances to an existing bot, builds it and then publishes a new version of the bot. The Lambda function is associated with an API Gateway endpoint which is called with the following body:
{
  "intent": "INTENT_NAME",
  "utterances": ["UTTERANCE_TO_ADD_1", "UTTERANCE_TO_ADD_2", ...]
}
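On the Lambda side, this body can be parsed and validated before any Lex calls are made. The field names below match the body above; the specific validation rules are an assumption:

```javascript
// Sketch: validate the request body forwarded by API Gateway before
// attempting any Lex updates. Validation rules are assumptions.
function parseUpdateRequest(body) {
  const { intent, utterances } = JSON.parse(body);
  if (typeof intent !== 'string' || intent.length === 0) {
    throw new Error('"intent" must be a non-empty string');
  }
  if (!Array.isArray(utterances) || utterances.length === 0) {
    throw new Error('"utterances" must be a non-empty array of strings');
  }
  return { intent, utterances };
}
```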
Steps to follow:
1. The intent information is fetched from Lex using the getIntent API.
2. The existing utterances are combined with the new utterances and deduplicated.
3. The intent information is updated with the new utterances.
4. The updated intent information is passed to the putIntent API to update the Lex intent.
5. The bot information is fetched from Lex using the getBot API.
6. The intent version present within the bot information is updated with the new intent.
7. The updated bot information is passed to the putBot API to update Lex, with processBehavior set to "BUILD" to trigger a build. The following code snippet shows how this would be done in JavaScript:
// Strip the read-only fields returned by getBot before calling putBot
const { status, failureReason, createdDate, lastUpdatedDate, version, ...botFields } = bot;
const updatedBot = await lexModel
  .putBot({
    ...botFields,
    processBehavior: "BUILD"
  })
  .promise()
8. The last step is to publish the bot. For this, we fetch the bot alias information and then call the putBotAlias API.
const oldBotAlias = await lexModel
.getBotAlias({
name: config.botAlias,
botName: updatedBot.name
})
.promise()
return lexModel
  .putBotAlias({
    name: config.botAlias,
    botName: updatedBot.name,
    botVersion: updatedBot.version,
    checksum: oldBotAlias.checksum,
  })
  .promise()
Conclusion
In this post, we showed how a programmatic bot improvement process can be implemented around Amazon Lex and Amazon Connect. Continuously improving call center bots is a fundamental requirement for increased customer satisfaction. The feedback loop, agent validation, and automated bot deployment pipeline should be considered integral parts of any chatbot implementation.
Finally, the concept of a feedback-loop is not specific to call-center chatbots. The idea of adding an iterative improvement process in the bot lifecycle can also be applied in other areas where chatbots are used.
Accelerating Innovation with the Accenture AWS Business Group (AABG)
By working with the Accenture AWS Business Group (AABG), you can learn from the resources, technical expertise, and industry knowledge of two leading innovators, helping you accelerate the pace of innovation to deliver disruptive products and services. The AABG helps customers ideate and innovate cloud solutions through rapid prototype development.
Connect with our team at accentureaws@amazon.com to learn how to use machine learning in your products and services.
Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.