AWS Public Sector Blog

Introducing the Amazon Trusted AI Challenge

Amazon has launched the Amazon Trusted AI Challenge, a global university competition to drive secure innovation in generative artificial intelligence (generative AI) technology. This year’s challenge focuses on responsible AI and specifically on large language model (LLM) coding security.

University students will compete in a tournament-style challenge as either model developer teams or red teams to enhance the AI user experience, prevent misuse, and enable users to build more secure code. Model developer teams will build security features into code-generating models, while red teams will develop automated techniques to test these models. Each round will allow teams to refine their models and techniques based on multi-turn interactions, identifying strengths and weaknesses.

Amazon will select up to 10 teams for the competition starting November 2024, which will run through the academic year. Each selected team will receive $250,000 in sponsorship along with monthly AWS credits, and winning teams can earn an additional $700,000 in cash prizes.

Advancements and opportunities in AI-assisted software development

The Amazon Trusted AI Challenge aims to enhance the safety, reliability, and trustworthiness of the LLMs powering AI-assisted software development tools. Generative AI coding assistants offer unprecedented capabilities, along with an opportunity to ensure they are used responsibly and reliably. This challenge aims to inspire developers, scientists, and researchers to create solutions that strengthen the ability of AI-assisted coding tools to protect users and systems.

Tournament structure

Through four tournaments and a live finals event, red teams will test model developer teams’ AI models to uncover vulnerabilities and improve their security. Red teams will be ranked on their success in forcing models to breach their policies through automated conversational red-teaming. Model developer teams will create code-generating models to enhance security, identify threats, and prevent unintended behavior. They will be ranked on their ability to build and reinforce successful defenses through techniques such as fine-tuning and alignment. The goal is to discover innovative ways for LLM creators to mitigate risks and implement effective safety measures. The top model developer team wins $250,000, with $100,000 for second place. The red team demonstrating the most effective vulnerability identification also wins $250,000, with $100,000 for second place.

Proposals to participate are due September 1, 2024. For more information about the challenge and details on how to apply, visit the Amazon Trusted AI Challenge landing page.