Sustainable AI can be done today
May 5, 2021
Caroline Adam


AI Strategy
Enterprise Architecture
Ethical AI

As artificial intelligence is getting more and more bad press as a non-sustainable technology, we asked our ethical AI expert, Caroline Adam, to shed some light on these allegations and on how companies can do sustainable AI today and tomorrow.


In this blog post, she outlines four major technology trends that show the AI R&D communities and industry are actively working towards sustainability in AI. She also shares ML6’s key principles for AI development, as we believe that implementing AI can be done sustainably. At ML6, we actively follow and iterate on these development principles to push ourselves and our customers to build responsible and sustainable technology.

Caroline strongly believes that each company that innovates with a broad multidisciplinary team, with enough focus on ethics and sustainability, will increase its agility in the long run. So don’t hesitate to start with AI.


___________________________

Sustainable AI


AI is getting more and more bad press as a non-sustainable technology. Examples are “GPT-3: an AI game-changer or an environmental disaster?” and “Environmental Sustainability And AI”, which mentions that training a single transformer-based ML model generates five times the CO2 of the entire lifetime of an American car. This figure is based on a study by the University of Massachusetts, Amherst: Energy and Policy Considerations for Deep Learning in NLP.

Several of these sustainability concerns are marketed by companies and research institutions that want to sell technology to make AI more sustainable. A lot of the high-level articles that blame AI are based on the training of large NLP models such as the famous GPT-3 model.

In reality, AI uses a wide range of algorithms, so the computational requirements and sustainability vary enormously.

Additionally, depending on, for example, the size of the data set, the way the ML model is retrained and the number of times the ML model is used, the computational resources required will be different for each use case.

On top of that, the value and sustainability depend on the impact on the business process. The ML model can be only a relatively small piece of the puzzle.

The sustainability of an AI use case has to be checked end to end.
It’s essential to assess the technology and the sustainability of the business processes you improve with AI.

AI use cases

AI can be used to optimize the yield of a production process, reduce production defects, minimize machine maintenance, monitor solar and wind parks, optimize battery performance and much more.

In these cases, AI has an immediate impact on the sustainability of the process, the use of natural resources and the behaviour of the people in your organisation. A great example is predictive maintenance for remote infrastructure such as offshore wind farms, which avoids the CO2 emissions of shipping spare parts and sending people out to the remote location.

For plenty of the use cases we’ve implemented, we’ve noticed that the impact on the environment is positive despite the resources used by the AI component and IT stack.

Technology Trends

From a technology point of view, the AI R&D communities and industry are actively working on sustainability.

We notice four trends in the market.

First of all, there is a race between teams to build accurate, extremely large, but compute-intensive models for specific domains. In the NLP space, GPT-3 by OpenAI is a famous example. These models are offered as efficient APIs, so the ML model is not retrained for each individual use case and the impact on the environment is spread over thousands of use cases. The providers of these APIs immediately increase revenue if they improve ML model training and inference efficiency, so there is an economic incentive to invest in sustainability.
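
To illustrate this “call, don’t train” pattern, here is a minimal sketch that reuses a shared pre-trained model through the open-source Hugging Face transformers library; the task and input text are illustrative stand-ins for a hosted model API such as GPT-3’s.

```python
# Minimal sketch: reuse a shared pre-trained model instead of training one per use case.
# A hosted API follows the same pattern; here the open-source Hugging Face
# transformers pipeline stands in for it.
from transformers import pipeline

# Downloads a small pre-trained sentiment model once; no training compute is spent.
classifier = pipeline("sentiment-analysis")

print(classifier("The new monitoring dashboard saves us a site visit every week."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```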

On the other hand, a large number of researchers and companies are working on optimizing the current state-of-the-art ML models so they remain accurate but require less computational power.
A recent example is NFNets by DeepMind: it achieves accuracy similar to EfficientNet-B7 while being 8.7× faster to train, and the largest model sets a new overall state of the art of 86.5% top-1 accuracy without extra data.

At the same time, we see huge investments, for example by IMEC and Google, in more efficient edge and data centre hardware to speed up ML model training and inference. From a software point of view, multiple options such as quantization and model pruning are available to reduce the size and/or computational requirements of existing ML models. We ourselves optimized an ML model to be deployed on embedded camera hardware in agricultural machinery.
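
As a concrete, minimal sketch of what quantization looks like in practice, the snippet below applies post-training dynamic quantization in PyTorch to a toy model; the layer sizes are purely illustrative and not taken from any of our projects.

```python
# Minimal sketch of post-training dynamic quantization in PyTorch: the weights of
# the Linear layers are stored as int8 instead of float32, shrinking the model
# and speeding up CPU inference. The toy model below is purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized_model(x))  # same interface, but a smaller, cheaper model
```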

From a long-term point of view, several initiatives are also starting to share data across industries and work towards open-source ML models trained for specific verticals. Examples are the data spaces and goals of the data strategy proposed by the EU.

Processes

AI only has a positive long-term impact if it’s integrated into the business processes of a company. This needs to be done sustainably.
A quick win is to use compute-intensive AI only when it’s relevant and adds value. In a lot of cases, we’ve seen that the computational impact can be reduced by, for example, using multiple less power-hungry algorithms, often together with a rules-based system, instead of one large complex ML model.
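
As a minimal sketch of that idea, the hypothetical ticket-routing function below lets cheap, deterministic rules handle the clear-cut cases and only calls a small ML model for the ambiguous remainder; the rules, categories and model are illustrative placeholders.

```python
# Minimal sketch of a rules-first design: deterministic rules cover the obvious
# cases at near-zero compute cost, and a small ML model is only invoked for the
# ambiguous remainder. The rules, categories and model are hypothetical.
def classify_ticket(text: str, small_ml_model) -> str:
    text_lower = text.lower()

    # Cheap keyword rules handle the clear-cut tickets.
    if "invoice" in text_lower or "payment" in text_lower:
        return "billing"
    if "password" in text_lower or "login" in text_lower:
        return "account"

    # Only the remaining, ambiguous tickets hit the (small) ML model.
    return small_ml_model.predict([text])[0]
```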

ML6 AI development principles

At ML6, we take our responsibility to build sustainable technology seriously.

To work in a sustainable way we follow these principles.

  1. Use the most cost-efficient algorithm or combine multiple less resource-intensive ML models that meet the requirements of your customer. Our chapters actively follow newly released ML models, hardware and best practices.
  2. Use pre-trained open-source ML models or APIs if the accuracy is within expectations and budget.
  3. Use transfer learning to adapt a pre-trained ML model to new, unseen customer-specific data so that training the entire model from scratch is avoided (see the sketch after this list).
  4. Optimize the ML model size so inference can be done on lower-specification machines.
  5. Develop and deploy using an IT stack that can scale to zero or right-size the edge device.
  6. Monitor the ML model in production and only retrain if it’s required.
  7. Use a carbon-neutral cloud provider and shared infrastructure for large scale ML model training and deployment.
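
To make principle 3 concrete, here is a minimal transfer-learning sketch in Keras, assuming an image-classification task: a pre-trained MobileNetV2 backbone is frozen and only a small classification head is trained on the customer-specific data. The dataset, input size and number of classes are illustrative.

```python
# Minimal sketch of transfer learning: reuse a pre-trained backbone and train only
# a small head on customer-specific data, avoiding a full training run.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # reuse pre-trained weights; no full retraining

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 customer-specific classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=3)  # only the small head's weights are updated
```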

Summary

So don’t hesitate to start with AI.
Innovate with a broad multidisciplinary team with enough focus on ethics and sustainability, and make sure to pick the most appropriate ML algorithm for your use case, budget and sustainability goals.

In plenty of cases, implementing AI in a modern way inspires companies to modernize their entire IT stack. This increases the overall sustainability and agility of the company in the long run.

