Today, Amazon is announcing the Amazon Trusted Ai Challenge, a global university competition to run secure innovation in generative AI technology. This year’s challenge focuses on responsible AI and specifically on the Large Language Model (LLM) Coding Security.
“We focus on promoting the capabilities of coding LLMs, exploring new techniques to automatically identify possible vulnerabilities and effective secure these models,” said Rohit Prasad, senior vice president and main scientist Amazon AG. “The goal of the Amazon chair of the AI ​​challenge is to see how students’ innovations can help forge a future where Genératif AI is consistently developed in a way that gives confidence while highlighting effective methods to protect LLMS courage to improve their safety.
University students will compete in a tournament style challenge as either model developer teams or red teams to improve the AI ​​user experience, prevent abuse and allow users to build more secure code. Model Development Team will build security features in code generation models, while red teams develop automated techniques to test these models. Each round allows teams to refine their models and techniques based on multiple turns, identifying strengths and weaknesses.
Amazon selects up to 10 teams for the competition from November 2024, which runs through the study year. Each of the 10 selected teams receives $ 250,000 in sponsorship along with monthly AWS credits, and winning teams have a chance to win an additional $ 700,000 in cash prizes.
Progress and opportunities within AI-ASSISTED SOFTWARE DEVELOPMENT
Amazon confidence in the AI ​​Challenge aims to improve security, livelihood and reliability of LLMS that operates AI-ASSIED software development tools. With the emergence of generative AI coding assistants, these technologies demonstrate unmatched innovative capabilities and offered exciting opportunities to ensure responsible and amber use. This challenge seems to inspire developers, researchers and researchers to create solutions that improved AI-assisted coding tools’ ability to protect users and systems.
Tournament structure
Through four tournaments and a live finale event, Red Team will test the Model Developer Teams’ AI models to uncover vulnerabilities and improve their security. Red teams are ranked by their success in forcing models to break their policy through automated conversation red-teaming. Model developing team will create code generation models to improve security, identify threats and prevent unintended behavior. They will be ranked according to their ability to build and reinforce successful defense through techniques such as fine tuning and adaptation. The goal is to discover innovative ways for LLM creators to mitigate risks and effective implication security measures.
Top Model Developer Team wins $ 250,000 with $ 100,000 for second place. The Red Team, which demonstrates the most effective vulnerability identification, also wins $ 250,000 with $ 100,000 for second place.
For more information about the challenge, included rules and frequently asked questions, visit Amazon Trusted Ai Challenge Landing page.