Optimizing AI/ML workloads for sustainability

In June 2022, Amazon re:MARS, the company’s in-person event exploring advances and practical applications in machine learning, automation, robotics and aerospace (MARS), took place in Las Vegas. The event brought together thought leaders and technical experts building the future of artificial intelligence and machine learning, and included keynote speeches, innovation spotlights and a series of breakout session talks.

Now, in our re:MARS revisited series, Amazon Science takes a look back at some of the keynotes and breakout session talks from the conference. We asked each presenter three questions about their talk, and we provide the full video of their presentation.

On June 27, Amogh Gaikwad, Solutions Developer at Amazon Web Services (AWS), presented the talk “Optimizing AI/ML Workloads for Sustainability”. His session focused on best practices for efficient retraining of multiple machine learning models using minimal computational resources and computationally efficient built-in algorithms.

What was the central theme of your presentation?

Building and training machine learning models to higher accuracy can be an energy-intensive process, demanding large computational resources and, with them, significant energy consumption. This session explores guidance from the sustainability pillar of the AWS Well-Architected Framework for reducing the carbon footprint of AI/ML workloads.

The session covers best practices for efficiently retraining multiple models using minimal computational resources and leveraging computationally efficient built-in algorithms. In addition, customers can learn about the AWS tools available for monitoring models during training and deployment.
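One general technique behind efficient retraining, illustrated here as a minimal sketch (not code from the talk), is warm-starting: initializing each retraining run from the previously trained parameters rather than from scratch, so that adapting to new data takes fewer optimization steps and therefore less compute. The toy model, data, and learning rate below are all hypothetical.

```python
# Sketch: warm-started retraining of a toy model y = w * x by gradient descent.
# Starting from the previous weights means fewer steps when the data drifts.

def train(xs, ys, w=0.0, lr=0.01, tol=1e-6, max_steps=10_000):
    """Fit y = w * x by gradient descent on mean squared error.

    Returns the fitted weight and the number of steps taken to converge.
    """
    n = len(xs)
    for step in range(1, max_steps + 1):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        if abs(grad) < tol:          # converged: gradient is near zero
            return w, step
    return w, max_steps

xs = [1.0, 2.0, 3.0, 4.0]
ys_v1 = [2.0, 4.0, 6.0, 8.0]         # original data, true slope ~2.0
ys_v2 = [2.1, 4.2, 6.3, 8.4]         # drifted data, true slope ~2.1

w1, cold_steps = train(xs, ys_v1)           # initial training from w = 0
w2, warm_steps = train(xs, ys_v2, w=w1)     # retrain from previous weights
_, scratch_steps = train(xs, ys_v2)         # retrain from scratch, for comparison

print(f"initial: {cold_steps} steps, warm retrain: {warm_steps} steps, "
      f"cold retrain: {scratch_steps} steps")
```

On this toy problem the warm-started retrain converges in fewer gradient steps than retraining from scratch; at the scale of deep models, the same idea (along with incremental or transfer learning) can translate into substantial savings in compute and energy.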

In which applications do you expect this work to have the greatest impact?

This guidance will have the greatest impact on machine learning applications that require large, energy-intensive computational resources. In addition, the guidance is applicable to applications where the focus is on reducing carbon emissions and designing machine learning workloads with sustainability in mind.

What are the main points you hope the audience takes away from your talk?

  • How to design ML workloads using the well-architected machine learning lifecycle and sustainability best practices
  • How to optimize resources for developing, training and tuning ML models
  • How to reduce the environmental impact of machine learning workloads in production
  • Familiarity with AWS tools for monitoring machine learning workloads

Amazon re:MARS 2022: Optimizing AI/ML workloads for sustainability
