AWS for Industries
How retailers use generative AI for smart data operations and software delivery with Amazon Bedrock
Traditionally, large retailers stored data in whichever system processed it and copied only the data they needed into the Enterprise Data Warehouse (EDW). As they grew, retailers began to build data lakes, where they could gather all their data in one place. And those data lakes kept growing. Today, most large retailers have data lakes at petabyte scale.
Data lakes support critical business domains like digital and e-commerce data, including data coming from third-party integrations such as Salesforce and Adobe. Data lakes also support essential business functions across retail, including supply chain, enterprise planning, sales and inventory, Order Management System (OMS), wholesale EDI, real-time store traffic, and more. Some of the complexities in such situations include multiple integrations, different deployment patterns, and ever-growing toolsets. However, the goal for the enterprise is to deliver value by remaining agile and adapting to a changing business and technology landscape.
In this blog, we discuss how enterprises can use generative AI for smarter data operations in complex scenarios. We showcase how a customer used Amazon Bedrock’s generative AI features for test case generation and code lineage, enhancing the software development lifecycle (SDLC).
Challenges
Organizations face all kinds of challenges during the development process. These can be technical challenges with integrations, or support nightmares created because features were released too early. Here is a list of typical challenges you might find familiar:
- Unclear requirements and insufficient test cases: Lack of clearly defined requirements and inadequate testing leading to issues during implementation.
- Code quality and performance issues: Problems with the codebase, including inefficient logic and suboptimal implementation.
- Degraded data quality and accuracy: Deterioration in data quality and reliability due to lack of data governance, data input errors, inconsistent data sources, or outdated or incomplete data.
- Heavy SME (Subject Matter Expert) intensive process: The system requires significant involvement and expertise from Subject Matter Experts, leading to higher costs and slower turnaround.
- High cost of ownership: The overall cost of maintaining and operating the system is high.
- High maintenance for data and operations: The system requires extensive effort to maintain the data and operational aspects.
- Information silos: Existence of isolated data and information repositories, hindering cross-functional collaboration and decision-making.
Enhancing efficiency by leveraging generative AI
While many of the challenges we’ve listed are complex, they can be avoided regardless of where they appear in the software development lifecycle. If you follow some simple tenets and apply general design principles, you can set any generative AI solution up for success. Here are some examples:
- Responsible by Design: Ethical development and deployment through key responsible AI principles like explainability, fairness, and security.
- Portable Modular Design: Design should be modular, allowing reuse of the entire solution or individual components as needed. Modularity eases maintenance, testing, and deployment, and promotes reusability across solutions to keep pace with technological advancements.
- Smart AI Foundation: Along with modular components, a solid foundation in AI creates accelerators for all future initiatives. This means that all implementations focus on end-to-end integration of AI into every business process.
- Flexible and Scalable: Designed to be flexible in terms of the underlying large language model (LLM) used. By selecting the most appropriate LLM for the task, your team has what it needs to optimize for cost and performance.
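As a minimal sketch of the last principle, model choice can be kept out of application code so that teams can swap the underlying LLM per task to optimize for cost and performance. The task names and task-to-model mapping below are hypothetical examples; the model IDs follow Amazon Bedrock’s naming style.

```python
# Sketch: decouple tasks from the underlying LLM so models can be swapped
# to balance cost and performance. The mapping below is a hypothetical
# example, not the customer's actual configuration.

MODEL_REGISTRY = {
    # Cheaper, faster model for high-volume explanation tasks
    "code_explanation": "anthropic.claude-3-haiku-20240307-v1:0",
    # More capable model for nuanced generation tasks
    "test_case_generation": "anthropic.claude-3-sonnet-20240229-v1:0",
}

DEFAULT_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"

def select_model(task: str) -> str:
    """Return the model ID configured for a task, falling back to a default."""
    return MODEL_REGISTRY.get(task, DEFAULT_MODEL)
```

Because callers only name a task, replacing the model behind a task is a one-line configuration change rather than a code change.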
A customer success story
The customer discussed here is a renowned American fashion brand. It is a global leader in designing, marketing, and distributing premium lifestyle products including apparel, accessories, home furnishings, and fragrances. Known for its impeccable craftsmanship and classic aesthetic, its diverse range of products continues to captivate consumers worldwide. The customer’s dedication to quality and innovation has solidified its status as a symbol of elegance and enduring fashion.
The challenge
The customer’s data landscape has evolved into a sprawling “data ocean” over time, growing to over 4 petabytes spread across more than 40 disparate sources, including 5 ERP systems. It serves over 7,000 users producing more than 25,000 reports supported by 2,000 or more data pipelines. Changes to the complex business rules that ensure data accuracy and quality became increasingly difficult. Inconsistent business rules and a heavy reliance on niche domain expertise created knowledge gaps and human dependencies. It took weeks, and sometimes months, to understand the full impact of new initiatives or to fix production issues, as developers were forced to rely on subject matter experts or dig through convoluted code.
The solution
While the technology stack, comprising Amazon Simple Storage Service (Amazon S3), AWS Glue, Amazon EMR, Amazon Kinesis, Amazon QuickSight, Python, Airflow, and MicroStrategy, provided a solid foundation, the pressing need was to reduce time-to-market for new initiatives by driving efficiencies across the entire software development lifecycle. This is where generative AI came to the rescue. Duplicated business rules, complex code, and process dependencies are exactly the dark corners where generative AI can shine a flashlight. Here, we’ve listed just five areas where generative AI helped improve efficiency, reduce complexity, and bring down the cost of running the entire system:
Amazon Bedrock made it possible to enable these capabilities quickly, because of simpler integrations, superior outputs, and fewer limitations, such as token limits, across the respective LLMs. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
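To make the “single API” point concrete, here is a minimal sketch of calling a Bedrock-hosted model through the Converse API with boto3. The helper only assembles the request; the guarded section at the bottom performs the actual call and requires AWS credentials with Bedrock access. The model ID, AWS Region, and the `etl_job.py` file name are illustrative assumptions, not details from the customer’s implementation.

```python
def build_converse_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build a single-turn request body for Amazon Bedrock's Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

if __name__ == "__main__":
    # Assumption: credentials and Bedrock model access are configured.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    request = build_converse_request(
        "anthropic.claude-3-sonnet-20240229-v1:0",
        "Explain what this pipeline code does:\n\n" + open("etl_job.py").read(),
    )
    response = client.converse(**request)
    print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses the same request shape across providers, swapping the foundation model is a matter of changing `model_id`.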
- Code Explainer with Complete Lineage: Used across multiple SDLC stages, including requirements gathering, impact analysis, design, and operations. An automated code explainer covers the entire codebase by integrating with the code repository. The resulting explanation includes a summary, flow, inputs, outputs, SQL explanations, and column-level lineage. In addition, the solution integrates with enterprise collaboration tools, such as Confluence, and data governance tools, such as Atlan, for data democratization.
- Requirements Enrichment: Used across multiple SDLC stages including requirement/user story grooming, impact analysis, and design. The solution is integrated with requirements and project management tools like Jira and Rally. It provides the capability for product owners to refine requirements, problem statements, and success criteria, including example generation.
- Test Case Generation: Primarily used during the testing stage to generate functional, integration, and unit test cases for the entire codebase. Test case generation is integrated with project management tools, like Jira, allowing product owners to generate test cases based on specific requirements and link the test cases back to them.
- Code Optimization: Used in the build stage to develop a code optimizer which scans the entire code repository and generates new, optimized code or suggestions. This solution is used for both enhancements and new projects.
- Chatbot: Used across multiple SDLC stages including requirements gathering, impact analysis, design, and operations. The solution provides an integration with incident management tools like ServiceNow, collaboration tools like Confluence, and other enterprise applications hosting standard operating procedures and policies. It aims to reduce incident analysis and triage time by providing a summary of past incidents related to the issue. In addition, it improves turnaround times, service levels, and reduces dependency on SMEs.
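To illustrate how a capability like test case generation can tie generated tests back to requirements, the sketch below assembles an LLM prompt from a requirement and the code under test. The template wording and the Jira-style story key are hypothetical; the actual solution’s prompts and integrations are not public.

```python
# Hypothetical sketch of a test case generation prompt. Embedding the
# requirement key in the prompt lets generated test cases be linked back
# to the originating Jira story.

TEST_CASE_TEMPLATE = """You are generating test cases for requirement {story_key}.

Requirement:
{requirement}

Code under test:
{code}

Produce unit, integration, and functional test cases. For each test case,
include the requirement key so it can be linked back in Jira."""

def build_test_case_prompt(story_key: str, requirement: str, code: str) -> str:
    """Render the test case generation prompt for one requirement."""
    return TEST_CASE_TEMPLATE.format(
        story_key=story_key, requirement=requirement, code=code
    )
```

The rendered prompt would then be sent to the chosen foundation model, and the response parsed into test cases that are created and linked in the project management tool.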
Figure 1, shown below, illustrates where Amazon Bedrock is now integrated into the process, informing the Advanced Analytics Platform and enriching outputs.
Figure 1 – Integrating LLM results in 3rd party applications
The results
Overall productivity has increased by at least 20 to 30 percent, along with numerous other improvements, including the following:
- Achieved a 20 to 30 percent improvement in development velocity and productivity.
- Enabled developers to generate unit, integration, and functional test cases while maintaining 100 percent coverage.
- Delivered two defect-free, critical business initiatives that expanded the customer’s e-commerce presence.
- Improved turnaround time for issue resolution by up to 30 percent, with up to 80 percent savings in test case generation. Additionally, 100 percent of the scripts are documented with automated refresh capabilities, and P90 resolution days dropped by 30 percent.
- Contributed to infrastructure savings of up to 20 percent in compute cost, a 10 to 30 percent reduction in CPU utilization, a 10 to 20 percent reduction in execution time, and a 10 to 15 percent reduction in memory usage.
The power of partnership
Infosys, an AWS Partner, helped this customer navigate its complex technology stack. Even with several AWS services, including newer offerings like Amazon Bedrock, combined with third-party ISV solutions and home-grown applications, Infosys led the customer to better outcomes. There is a strong three-way partnership among AWS, Infosys, and this customer. Communication is key and includes regular bi-weekly calls to identify opportunities, and to plan, fund, execute, and track these programs. Infosys drove these initiatives to help foster customer confidence, validate with a pilot, and quickly embrace the generative AI offerings from AWS.
Conclusion
In this customer case study, Infosys and AWS jointly demonstrated how Amazon Bedrock can be leveraged by large enterprise customers to automate and simplify the software development lifecycle. Through this effort, the customer has reduced the overall cost of delivering projects and improved time-to-market for new features and functions. This approach, methodology, and use of best practices are scalable within different organizations and can be shared with multiple large enterprise customers to drive value.
Infosys – AWS Partner spotlight
Infosys is an AWS Premier Tier Services Partner and MSP that enables clients to outperform the competition and stay ahead of the innovation curve.