Artificial Intelligence’s ability to augment and support progress over the past few decades is undeniable. But when does it become damaging, even contradictory? In our latest Beyond Data podcast episode, AI’s Climate Jekyll & Hyde – friend and foe, Tessa Jones (our VP of Data Science, Research & Development) and Sophie Chase-Borthwick (our Data Ethics & Governance Lead) discuss exactly this with Joe Baguley, Vice President and Chief Technology Officer, EMEA, VMware.

Our speakers explore the multifaceted topic of AI and energy consumption – from whether all applications should be treated as equal in their energy use (or whether some are ‘better’ than others), to creating visibility into, and responsibility for, that consumption among all stakeholders. Here we try to bring clarity to some of the grey areas discussed.

Should we consider all applications equal?

“AI and machine learning are about huge things, huge data sets, huge computation actions … all of those have huge implications in terms of energy,” Joe observes, before dropping in sobering statistics, such as Bitcoin’s total annual energy consumption matching that of Norway. Even considering the often-touted argument that 57% of the energy used for Bitcoin mining comes from renewables, Joe counters: “But those renewables could have been used for something else, right? Those solar panels… and those hydropower stations and those wind turbines, we could be using them for something else.”

This raises the ethical question of whether stricter governance, standards, and precedents should steer energy towards more ‘moral’ applications. Should we be weighing the energy consumption of server farms that help minimize food waste against that of farms focused on mining digital currency, for example?

“Is there an opportunity for [greater] regulation?” Tessa ponders. Would regulation help challenge the status quo, in which all applications’ energy consumption is treated as equal? Sophie observes: “We’ve had certain European nations start to put rules around data center expansion, where you’re allowed and not allowed to build because of the capacity there, which isn’t regulating the use of it. But it does have that knock-on effect that if you literally can’t build the data center support, you have to start thinking about other ways to build [models].”

When considering Sophie’s point on alternative ways to build models, Joe notes: “We’re using AI to deal with the symptoms, but maybe there’s some better ways we could be using AI to deal with the cause as well.”

And this all raises the next question: who should ultimately be making these ongoing moral calls about the environment and energy usage?

Embedding Environmental, Social, and Governance (ESG) by design

Environmental, Social, and Governance (ESG) is shorthand for a framework that helps stakeholders understand how an organization manages risks and opportunities across those three criteria. Our speakers untangle the idea of ESG and how companies could use it to triage the different applications they run.

Joe asks: “Is there an ESG-led marketing opportunity here? Your AI might be the same as my AI, but my AI is better from an ESG perspective. They both get the same results at the same time for the same cost, but this one’s better from an ESG perspective, in terms of sustainability, in terms of social good, in terms of environmental.”

Placing more emphasis on ESG as a criterion for measuring impact and success could help embed sustainability at the heart of an application’s deployment, rather than treating it as a siloed concern. Sophie agrees: “We have privacy by design, we have security by design. Why not have ESG by design?”
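As a purely illustrative sketch of what ‘ESG by design’ could look like in practice, consider wiring an energy budget into a release check, much as privacy and security reviews gate deployments today. Everything below (the budget figure, the function name) is hypothetical:

```python
# Hypothetical sketch: an "ESG by design" gate in a deployment pipeline.
# The budget and names are invented for illustration; the idea is that
# energy cost becomes a release criterion, like privacy or security.

ENERGY_BUDGET_KWH_PER_1K_INFERENCES = 0.5  # made-up organizational budget

def esg_gate(measured_kwh_per_1k: float) -> None:
    """Fail the release if the model exceeds its energy budget."""
    if measured_kwh_per_1k > ENERGY_BUDGET_KWH_PER_1K_INFERENCES:
        raise RuntimeError(
            f"Energy cost {measured_kwh_per_1k:.2f} kWh per 1k inferences "
            f"exceeds budget {ENERGY_BUDGET_KWH_PER_1K_INFERENCES:.2f}; "
            "consider a smaller or distilled model."
        )

esg_gate(0.3)  # within budget: release proceeds
```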

Following on from this thought, our speakers consider the cost implications of AI and ESG, with Joe observing: “There’s a lot of businesses right now that can’t afford AI because it’s expensive… but I believe they will come to a tipping point where they can’t afford not to.”

Are we over-prioritizing accuracy?

“There’s a hyper-focus on the accuracy,” according to Tessa. “It ends up not even being about the motivation for green, it’s a motivation for fast training, fast tuning. Unfortunately, it’s how most data scientists are motivated: be faster without having to compromise their accuracy.”

Often, the gain in accuracy follows a logarithmic curve: good improvements at first, quickly tapering off to minimal returns. Is it useful to be that much more accurate, often by mere fractions of a percentage point? “Some are good, more must be better … people just keep going, as opposed to saying actually good enough is good enough,” Joe summarizes.
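To make the shape of that curve concrete, here is a toy Python sketch; the saturating power-law curve and its constants are invented for illustration, not benchmark data:

```python
# Toy illustration of diminishing returns, not real benchmark data.
# We assume a saturating power-law curve, a common rough shape for
# accuracy-vs-compute scaling: accuracy(c) = ceiling - k * c**(-alpha).

def accuracy(compute_units: float) -> float:
    return 0.95 - 0.25 * compute_units ** -0.3  # made-up constants

prev = accuracy(1)
for step in range(1, 9):
    c = 2 ** step  # double the compute (and, roughly, the energy) each step
    gain = accuracy(c) - prev
    print(f"compute x{c:>4}: accuracy {accuracy(c):.4f}  gain {gain:+.4f}")
    prev = accuracy(c)
```

Each doubling of compute buys a smaller accuracy gain than the last, which is exactly the “good enough is good enough” point.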

Instead of chasing marginally better accuracy each time, we should consider the application holistically. An accuracy gain of 0.01% might come at a heavy cost in energy consumption – is it worth it? Should we expose these costs more openly across a team, so that everyone has the visibility, and feels empowered, to interrogate them more closely?
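One practical way to create that visibility, sketched minimally below, is to log estimated emissions next to accuracy for every experiment. This example uses the open-source codecarbon package (pip install codecarbon); the training and evaluation functions are stand-in stubs, not a real pipeline:

```python
# Minimal sketch of reporting energy cost alongside accuracy, using the
# open-source codecarbon package. train_model() and evaluate() are stubs.
import time

from codecarbon import EmissionsTracker

def train_model():
    time.sleep(2)   # stub: pretend to train
    return "model"

def evaluate(model) -> float:
    return 0.9312   # stub: pretend accuracy

tracker = EmissionsTracker(project_name="food-waste-model")
tracker.start()
model = train_model()
emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run

print(f"accuracy={evaluate(model):.4f}  est. emissions={emissions_kg:.4f} kg CO2eq")
```

Surfacing both numbers on the same line makes the trade-off Tessa and Joe describe visible to the whole team.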


To hear how our speakers untangle these controversial questions and more, tune in now to episode 3 of the Beyond Data podcast: AI’s Climate Jekyll & Hyde – friend and foe.