Amazon Research Awards recipients announced

Amazon Research Awards (ARA) provides unrestricted funding and AWS Promotional Credits to academic researchers investigating diverse research topics across multiple disciplines. This cycle, ARA received many outstanding research proposals and today publicly announces 10 awardees representing 10 universities.

This announcement includes awards funded under three calls for proposals during the Winter 2024 and Spring 2024 cycles: AI for Information Security, Foundation Model Development, and Sustainability. The proposals were reviewed for the quality of their scientific content and their potential to influence both the research community and society.

In addition, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open source licenses.

Recipients have access to more than 300 public Amazon datasets and can use AWS AI/ML services and tools through their AWS Promotional Credits. Recipients are also assigned an Amazon research contact, who offers consultation and advice, along with opportunities to attend Amazon events and training sessions.


“Security is critical to Amazon, and artificial intelligence has been instrumental in making progress in this area. The ARA program allows us to engage with the broader academic community to tackle important issues at this intersection of AI and cybersecurity,” said Baris Coskun, senior scientist at GuardDuty. “The response to our AI for Information Security call for proposals has been fantastic, and we have received a large number of high-quality proposals. We look forward to supporting the new recipients in their development of effective new technologies that provide meaningful security value.”



“The response to Amazon’s first Foundation Model CFP was excellent. We awarded the largest Amazon Research Awards grant to date, with $250,000 in cloud credits for work on foundation models that enhance Trainium. Momentum in AI is only getting stronger; with the Build on Trainium program, AWS will invest $110MM to support AI research at universities around the world,” said Emily Webber, principal solutions architect at Annapurna. “We look forward to working with exceptional PIs to develop kernels and algorithms that improve the future of AI for all. The scaling of model growth, in size and applications, provides a strong case for future work at the lowest levels of the stack. There’s never been a better time to dive into computer optimization for AI – join us!”

ARA funds proposals throughout the year in a number of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list the Winter 2024 and Spring 2024 recipients, grouped by research area and sorted alphabetically by last name.

Spring 2024

AI for Information Security

Recipient | University | Research title
Z. Berkay Celik | Purdue University | Time-Preserving Audit Log Reduction: A Scalable Approach to Precise Attack Investigation and Anomaly Detection
Kaize Ding | Northwestern University | Label-Efficient Graph Anomaly Detection for Information Security: Detection, Automation, and Explanation
Christopher Kruegel | University of California, Santa Barbara | Combating False Positives in ML-Based Security Applications with Context-Aware Classification
Sijia Liu | Michigan State University | Advancing Reliable Generative AI: The Role of Machine Learning
Chongjie Zhang | Washington University in St. Louis | Towards Practical Preference-Based Offline Reinforcement Learning for Information Security
Yue Zhao | University of Southern California | Label-Efficient Graph Anomaly Detection for Information Security: Detection, Automation, and Explanation

Sustainability

Recipient | University | Research title
Fengqi You | Cornell University | Large Language Model Co-Pilot for Transparent and Reliable LCA

Winter 2024

Foundation Model Development

Recipient | University | Research title
Lu Cheng | University of Illinois at Chicago | Reliable Large-Scale Model Tuning via Uncertainty Quantification
Samet Oymak | University of Michigan, Ann Arbor | Beyond Transformer: Optimal Architectures for Language Model Training and Tuning
Hua Wei | Arizona State University | Reliable Large-Scale Model Tuning via Uncertainty Quantification
