AWS Contact Center
How contact center leaders can evaluate using generative AI for customer experience
Generative artificial intelligence (AI) is an area of interest for many businesses. Gartner estimates that by 2024, 40% of enterprise applications will have embedded conversational AI, up from less than 5% in 2020. At Amazon Web Services (AWS), customers often ask how they can use generative AI across various business segments, and customer experience (CX) is an area of strong interest.
In part one of this three-part blog post series, we discussed what generative AI is, how it is changing the CX landscape, and the business outcomes it can help deliver. We also showcased an “art of the possible” demo with Amazon Connect.
In this second part of the series, we focus on when to use generative AI versus other methods to solve a business challenge. We will also help you determine the best technology for your use cases by working backwards from your problem statement.
Applying Large Language Models (LLMs) for improved customer experiences
Most LLMs are built using a foundation model (FM). An FM is trained on a broad spectrum of generalized and unlabeled data, and is capable of performing a wide variety of general tasks such as understanding language, generating text and images, and conversing in natural language. An FM provides a foundation on which other models can be built, or it can be used directly.
Because LLM-generated outputs can read as if a human wrote them, they are well suited to interfacing between humans and technology in customer experience applications. However, like any technology, generative AI has pros and cons, and these should be weighed carefully to determine where it is appropriate to apply.
Areas of opportunity
Generative AI provides opportunities to improve customer experiences in ways that aren’t easily achieved with other machine learning technologies. LLMs are incredibly flexible: the same model can perform multiple tasks such as answering questions, summarizing documents, translating languages, and completing sentences. The ability to generate natural, “human-like” content means you no longer need to rely on canned information, which allows for hyper-personalized responses. Let’s look at a real-world use case of generative AI.
Using generative AI, Amazon.com now provides summaries of multiple reviews of a single product. In the example below, Amazon.com has used generative AI to condense over three thousand reviews of a storage cabinet into a single, simple-to-read, data-rich summary. This helps the buyer quickly make a purchase decision instead of spending time reading through multiple reviews.
Generative AI can also be used for knowledge-intensive natural language processing, in which an LLM answers specific questions from a knowledge base archive. This is extremely useful for agent-assistance applications in a contact center.
In the example below, a customer contacted a car rental business to inquire about a cancellation fee.
Using speech transcription, the system detected that a cancellation-related question was asked, and intelligent search located the internal documents relevant to that question. Generative AI then summarized the exact answer derived from those documents and returned a concise response that the agent could relay to the customer. This spares the agent from reading through multiple knowledge articles and formulating a response from scratch, reducing average handle time (AHT).
This “art of the possible” demo provides a complete walkthrough of the scenario.
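For teams that want to see how the pieces fit together, here is a minimal Python sketch of that pattern. It assumes a hypothetical search_knowledge_base helper (which could be backed by Amazon Kendra or your own search index) and uses the Amazon Bedrock runtime for the generative step; the model choice, prompt wording, and sample passage are illustrative rather than the demo’s actual implementation.

```python
import json

import boto3


def search_knowledge_base(question: str) -> list[str]:
    """Hypothetical retrieval step: return passages relevant to the question.

    In practice this could call Amazon Kendra or your own search index.
    """
    return ["Example policy excerpt: cancellations within 24 hours of pickup may incur a fee."]


def answer_agent_question(question: str) -> str:
    """Retrieve relevant documents, then ask an LLM for a concise, grounded answer."""
    passages = "\n".join(search_knowledge_base(question))

    # Grounding the prompt in retrieved documents keeps the answer tied to
    # your own knowledge base and reduces the risk of hallucination.
    prompt = (
        "\n\nHuman: Using only the policy excerpts below, answer the customer's "
        "question in one or two sentences.\n\n"
        f"Excerpts:\n{passages}\n\nQuestion: {question}\n\nAssistant:"
    )

    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # illustrative model choice
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
    )
    return json.loads(response["body"].read())["completion"]


print(answer_agent_question("Is there a fee to cancel my reservation?"))
```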
Areas of challenge
Alongside the opportunities, generative AI brings some challenges as well. These include the accuracy of responses, cost control, speed, and ease of use. Because LLMs are very large and powerful models, they can be slower and costlier than traditional AI models or alternative automation techniques. They can also be riskier than other methods: they can produce outputs that look plausible but are fabricated (a phenomenon called “hallucination”), contain bias, or conflict with your company’s specific values. Not being able to fully control the outputs may have governance or compliance implications that you need to take into consideration.
Addressing the challenges of generative AI
At AWS, we build FMs with responsible AI in mind throughout design, development, deployment, and operations. We consider a range of factors, including accuracy, fairness, intellectual property and copyright considerations, appropriate usage, and privacy. These factors are addressed in the processes used to acquire training data, in the FMs themselves, and in the technology we use to pre-process user prompts and post-process outputs. Although we actively improve our features using feedback from customers, no customer data is used to train the models, which provides more privacy and security for users.
As a consumer of generative AI, there are mitigation strategies you can apply. A key one is keeping a human in the loop. As your company starts to adopt and learn how to use generative AI, begin with internal use cases rather than customer-facing ones. When you interact directly with generative AI models (such as Anthropic’s Claude 2 or Amazon Titan), prompt engineering becomes key: learning how to craft inputs that control the outputs a model gives and shape them to your specific needs. To address privacy or security concerns, you could opt to fully self-host models such as Llama 2 on your own infrastructure. Alternatively, you can use managed services from trusted providers like AWS that have a clear focus on these areas and a strong track record. The best contribution that you, as a consumer of generative AI, can make towards responsible AI is carefully choosing your use cases.
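To make prompt engineering a little more concrete, here is a small illustrative comparison between an unconstrained prompt and one that spells out role, sources, format, and tone. The wording is a hypothetical example, not a recommended template:

```python
# An unconstrained prompt leaves the model free to guess at length, tone,
# and content, and it may invent details you never provided.
vague_prompt = "Write a reply to this customer complaint."

# An engineered prompt states the role, the allowed sources, the format,
# and the tone, which narrows the range of outputs the model can produce.
complaint = "My rental car was not ready at the scheduled pickup time."
engineered_prompt = (
    "You are a customer support assistant for a car rental company.\n"
    "Write a reply to the complaint below.\n"
    "Rules:\n"
    "- Apologize once, briefly, and keep the reply under 80 words.\n"
    "- Only reference facts stated in the complaint; do not promise refunds.\n"
    "- End by offering to connect the customer with an agent.\n\n"
    f"Complaint: {complaint}"
)

print(engineered_prompt)
```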
Working backwards: How to determine if it’s a good use case for generative AI
At Amazon, we talk about ‘working backwards’: starting with the customer’s needs and getting to the root of the problem(s) they face. Instead of starting with the solution, work backwards from the problem to evaluate what the right solution is, then find the right tools for the job (there may be more than one). As you work backwards from a problem, you might identify different approaches to solve it. In some cases, you might pursue a manual process or apply logic and rules. In others, you may use traditional AI and/or turn to generative AI. We’ll explain each option in detail and evaluate its pros and cons. Be aware this is a journey rather than a checklist.
Manual process – A manual process requires a person at each step of the journey. This is good for low-volume, highly sensitive, or complex workloads where decisions may be based on more than just data. For instance, you would want insurance claims dealing with loss of life to be handled by a human exercising high judgment and empathy. However, manual processes are difficult to scale, given the common challenges across an agent or skilled-worker base: training, scheduling, managing churn and attrition, and the cost of labor all factor into the complexity of using this method at scale.
Logic and rules – Simple and repetitive tasks can be solved using a logical, decision-based flow; these do not need generative AI. An example would be routing a customer to an agent using a traditional Interactive Voice Response (IVR) system, or using a menu-driven chatbot for self-service. In those cases, the flow is simple, repeatable, logical, and rule driven. The advantages are low cost (it requires only simple automation and non-complex services) and low risk (it is easy to audit, test, understand, and change), and it is a skillset you likely already have in your organization. The challenge with this approach is that it does not scale to more complex or decision-based requirements.
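As a trivial sketch of why this approach is low cost and easy to audit, a menu-driven route is little more than a lookup table (the queue names here are made up):

```python
# Menu-driven routing: every branch is explicit, deterministic, and easy to
# audit or change. No machine learning is involved.
ROUTES = {
    "1": "billing_queue",
    "2": "reservations_queue",
    "3": "technical_support_queue",
}


def route_caller(menu_choice: str) -> str:
    # Unrecognized input falls back to a general queue.
    return ROUTES.get(menu_choice, "general_queue")


print(route_caller("2"))  # reservations_queue
```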
Traditional AI – As problems become more complex and simple rules and logic would become too complicated, traditional AI comes into play. For instance, you might use traditional conversational AI in your IVR to detect the user’s intent and handle simple tasks like resetting a password. While generative AI would be capable of doing the same, it may be, depending on the use case, like using a sledgehammer to drive a thumbtack. Traditional AI systems are trained to do one type of job, but this means they generally do it very well. If you only need that functionality, then running a smaller single-purpose model will generally be lower cost and faster.
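For example, a single-purpose conversational AI service such as Amazon Lex can classify a caller’s intent with one call. The sketch below assumes you have already built a Lex V2 bot; the bot, alias, and session IDs are placeholders:

```python
import boto3

# Ask an existing Amazon Lex V2 bot (a traditional, single-purpose
# conversational AI model) to classify what the caller wants.
lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="EXAMPLEBOTID",       # placeholder for your bot ID
    botAliasId="EXAMPLEALIAS",  # placeholder for your bot alias ID
    localeId="en_US",
    sessionId="caller-1234",
    text="I forgot my password",
)

# The top interpretation carries the detected intent, for example a
# "ResetPassword" intent you defined when building the bot.
top_interpretation = response["interpretations"][0]
print(top_interpretation["intent"]["name"])
```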
It’s worth noting that traditional AI will still play a large part in contact center use cases such as transcription, classification, and natural language understanding. Generative AI will augment, rather than replace, these existing AI solutions. Amazon Connect was created with AI at its core and leverages traditional AI in many ways. For example, Amazon Connect Contact Lens provides contact center analytics using transcription and comprehension models.
Generative AI – We want to emphasize use cases where generative AI is a differentiator and offers significant improvements over the options noted above. These will often be scenarios where generating new output, something traditional AI does not do, is key. For example, we know from the earlier Amazon.com review example that generative AI is excellent at summarization.
How would that look in the contact center?
A) Let’s look at an example of a call from a customer booking their vacation. Below is a screenshot of the call transcript. Beside the transcript you will see a detailed yet concise summary of the call, created using generative AI.
Any manager or supervisor reviewing this call can quickly browse the summary without spending time reading the entire transcript.
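As a rough sketch of how such a summary could be generated with Amazon Bedrock (the transcript, prompt wording, and model choice are illustrative and not the mechanism behind the screenshot; the request format shown is for Amazon Titan Text and may differ for other models):

```python
import json

import boto3

transcript = (
    "Agent: Thank you for calling, how can I help?\n"
    "Customer: I'd like to book a beach vacation for two in July.\n"
    "Agent: Happy to help. Let's start with your travel dates and budget."
)

# Constraining the prompt gives supervisors the same summary structure on every call.
prompt = (
    "Summarize the contact center call below for a supervisor. Cover the "
    "reason for the call, what was resolved, and any follow-up actions. "
    "Use at most four sentences and only facts from the transcript.\n\n"
    f"Transcript:\n{transcript}"
)

bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative model choice
    body=json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.2},
    }),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```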
B) Another complex example is using the reasoning ability of generative AI to create an agent coaching tool.
One can now take the generated call transcript and its summary and use them to automatically answer evaluation form questions for supervisors. The screenshot below shows a sample transcript and the evaluation form associated with it. The evaluation form was answered automatically using the analysis provided by generative AI.
Unlike with a simple classifier, we get more than a yes/no answer to each question; we can also see the reasoning behind it. This allows supervisors and managers to quickly validate the answers generative AI provided and adjust them if needed.
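One hypothetical way to obtain both the answer and its reasoning is to ask the model to respond to each evaluation question as structured JSON that a supervisor tool can display. The question, prompt wording, and model choice below are illustrative:

```python
import json

import boto3

transcript = (
    "Agent: Thank you for calling, my name is Sam.\n"
    "Customer: Hi, I need to change my reservation.\n"
    "Agent: Of course, I can take care of that for you right away."
)

question = "Did the agent greet the customer and introduce themselves?"

# Asking for a structured answer plus supporting evidence lets a supervisor
# validate (and, if needed, override) the model's judgment.
prompt = (
    "\n\nHuman: You are helping complete an agent evaluation form.\n"
    f"Transcript:\n{transcript}\n\n"
    f"Question: {question}\n"
    "Respond with JSON only, in the form "
    '{"answer": "yes" or "no", "reasoning": "<one sentence citing the transcript>"}'
    "\n\nAssistant:"
)

bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # illustrative model choice
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 200}),
)
completion = json.loads(response["body"].read())["completion"]
result = json.loads(completion.strip())  # the prompt asks the model for JSON only
print(result["answer"], result["reasoning"])
```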
In both examples, generative AI is augmenting human processes, increasing efficiency and giving supervisors and agents more time to focus on customers. However, it’s important to educate your staff on how best to use these outputs, and it may take some iteration to tune your system for optimal results for your specific company and use cases. Generative AI outputs should not be blindly accepted.
While generative AI solved the examples above, it may not be the right fit for every use case, and we want to be responsible in how we choose our use cases. Here are a few questions you can use to decide whether generative AI is the right solution:
- Is this a use case where it would be irresponsible to use AI (one where potential harms from things like inaccurate outputs would outweigh any value)?
- Is this a use case that is already being solved by an existing method that is unlikely to be significantly improved by switching to generative AI?
- Is this a use case where we could use generative AI to enhance, rather than replace, our existing solution?
- How much upfront cost will it take to experiment and evaluate value? Are there existing managed services or out-of-the-box features we can use?
- Can we safely test our implementation, allowing time to iterate and learn?
- Do we have the right skills to make this decision?
Generative AI is a powerful tool, but it also takes time and investment to learn. Each use case and business has its own nuances and will need different considerations.
Conclusion
Generative AI is still a very new technology with massive potential and room for growth. Experimenting with it early will enable your business to adopt it for the right opportunities as new features emerge. To learn more, check out the AWS generative AI hub, where you can find other examples of real-life use cases, educational material, technology such as Amazon Bedrock, and of course, experts you can connect with. We’re ready to collaborate with you on working backwards to find the right technology solution for your customer service use case. We encourage you to talk to your account team about getting started with generative AI and Amazon Connect.
About the authors:
Mike Wallace leads the Americas Solution Architecture Practice for Customer Experience at AWS.
Gillian Armstrong is a Builder Solutions Architect. She is excited about how the Cloud is opening up opportunities for more people to use technology to solve problems, and especially excited about how cognitive technologies, like conversational AI, are allowing us to interact with computers in more human ways. |