Sustainable AI

I wanted to use my summer “downtime” (you know, in between weird kid camp hours and vacation chaos) to complete some training, so I went through terra.do’s Sustainable AI program. I confess to falling back on late-summer catch-up to get through my final project, but I’m excited to now be able to share some of my insights here.

Just in case you need a tl;dr, I’ll drop a takeaway here…

Does that mean you should stop using AI? NO. Definitely not. This genie is out of the bottle and there’s no putting that cat back in the bag. You should not forgo what is a tremendous business advantage because you feel bad that tech companies are prioritizing innovation over the environment.

Why do we worry about the environmental impact of AI?

You may already know this, but technology in general, and AI specifically, is not environmentally friendly – at least not by default. Using these tools requires tremendous amounts of energy, which has to be generated somehow – whether by coal, solar, or another source. All that energy also produces heat, which in turn requires more energy or water to cool things down so that the tools can continue to function.

The scale is already significant: in 2023, data centers consumed 4.4% of U.S. electricity, according to the Department of Energy (DOE). And by 2028, that share is estimated to reach somewhere between 6.7% and 12%.

A big contributor to that increase is AI, which is driving demand for data centers through both the training and the “inference” (daily use) of large language models.

How much environmental impact is there from my daily AI use?

It depends. (I know, I’m sorry, but of course it does).

It depends on how much you use AI, which models you use, and whether you’re generating text, images, or video. It even varies based on how you craft your prompts.

The downside of this variability is that it makes impact extremely difficult to calculate. Adding to that, many technology companies do not share their energy or water usage, or break it down in ways that relate cost to prompts, output, or other metrics. Much of the data we would need to make educated decisions about the tools we use is simply not available.

In general, generating text is going to be less impactful than generating an image (or multiple images, depending on which tool you use). That said, using a research model for a complex prompt or series of prompts is going to be more environmentally expensive. According to a recent paper from Google, the “median” prompt on their AI models uses 0.24 Wh of energy, emits 0.03 gCO2e, and consumes 0.26 mL of water.

While I think it’s outstanding that Google has shared this data, there’s no definition or example for a “median” prompt. So while we do get some idea of measurement, we’re still a long way from truly understanding our impact.
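To put those median figures in perspective, here’s a rough back-of-envelope sketch. It assumes every prompt matches Google’s published medians (a big simplification, since prompts vary wildly), and the 50-prompts-a-day usage level is just an illustration:

```python
# Back-of-envelope yearly footprint from a daily prompt count, using the
# per-prompt medians Google reported: 0.24 Wh, 0.03 gCO2e, 0.26 mL of water.
MEDIAN_WH = 0.24      # energy per prompt, watt-hours
MEDIAN_GCO2E = 0.03   # emissions per prompt, grams CO2-equivalent
MEDIAN_ML = 0.26      # water per prompt, milliliters

def yearly_impact(prompts_per_day: float, days: int = 365) -> dict:
    """Scale the median per-prompt figures up to a yearly total."""
    n = prompts_per_day * days
    return {
        "kWh": round(n * MEDIAN_WH / 1000, 4),        # Wh -> kWh
        "kg_co2e": round(n * MEDIAN_GCO2E / 1000, 4),  # g -> kg
        "liters": round(n * MEDIAN_ML / 1000, 4),     # mL -> L
    }

# Example: 50 text prompts per day, every day, for a year
print(yearly_impact(50))
```

At 50 prompts a day, that works out to a few kWh and a few liters of water per year – small individually, but it scales fast across millions of users, and remember these medians exclude heavier tasks like image and video generation.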

How do I minimize my impact when using AI?

Bad news: There is not enough data right now to truly know the impact that you have by using AI.

Does that mean you should stop using AI? NO. Definitely not. This genie is out of the bottle and there’s no putting that cat back in the bag. You should not forgo what is a tremendous business advantage because you feel bad that tech companies are prioritizing innovation over the environment.

Good news: There is enough information that we can find and/or extrapolate to make smart decisions. Here are some action items…

  • You can check the impact of a prompt using the ScaleDown browser extension. This lets you improve your prompt ahead of time, potentially making it shorter and/or more AI-friendly. It also lets you save prompts and monitor your impact.
  • A really simple step – so simple it’s almost silly – is to limit the LLM output. These are “generative” models – they love to listen to (watch??) themselves talk. Adding a simple “…in 500 words or less” to your prompt can get a more succinct output. Or save yourself time and write instructions into your LLM profile to keep the responses tight.
  • If you’re using ChatGPT and want actionable data on your impact, Ecolytics has released OffsetAI. This tool will both monitor the impact of your OpenAI prompts and help you fund carbon offsets to mitigate your impact. It’s not available for other models yet, but there are plans to extend the functionality.
  • As I’ve mentioned, there is a project on Hugging Face which indexes the AI Energy Score of major models. Per the project details, this is updated 2x a year, so while the data is getting old (Feb ’25), new data is due any day now.
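The “limit the output” tip above can even be automated if you’re calling a model through code. This is just a sketch – the helper name, the 200-word default, and the ~1.3 tokens-per-word ratio are all my own illustrative assumptions, not from any particular API:

```python
# Sketch: constrain output length both in the prompt text and via a token cap.
TOKENS_PER_WORD = 1.3  # rough English average; an assumption, not a measured value

def with_brevity(prompt: str, word_limit: int = 200) -> tuple[str, int]:
    """Append a word-limit instruction and compute a matching output-token cap."""
    constrained = f"{prompt.strip()} Respond in {word_limit} words or fewer."
    max_tokens = round(word_limit * TOKENS_PER_WORD)
    return constrained, max_tokens

prompt, cap = with_brevity("Summarize the environmental impact of LLM inference.")
# `cap` could then be passed as the API's output-token limit (e.g. max_tokens),
# so the model physically cannot ramble past your budget.
```

The belt-and-suspenders approach matters: the in-prompt instruction shapes the answer, while a hard token cap guarantees the model stops generating (and consuming energy) at your limit.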

Final Thoughts

My final thoughts are that I HAVE SO MANY MORE THOUGHTS. As you might imagine, there’s a lot to this. Let me know if you want more about choosing a platform, understanding variables like data center locations, or minimizing your impact while meeting your needs. I’m also happy to offer training on this topic for you or your team – please reach out for more!

FRIDAY, SEPT 19

Founders Summit

Join me for FYSO Friday, 12-5PM at Raleigh Founded – Gateway, 
as we discuss Operating in the Age of Disruption.

Where to Find Us


Sept. 17 & Oct. 15: Open Office Hours at Blush (The Coven)

Sept. 23: Open Office Hours at Cary Founded

Sept. 29 – Oct. 3: Online Session for NC Climate Week

All Upcoming Events

A BIG Marit Welcome to Tosh Comer!

We’re thrilled to welcome Tosh Comer to the team! Tosh is an ecosystem builder dedicated to strengthening North Carolina’s entrepreneurial landscape. She’s also a great project and operational guide for the team, and we’re grateful to have her help coordinating our projects and services.

Also, a big welcome BACK to our intern Kairavi Gardé, who is settling in for fall semester at NC State.

Meet the Team

What We’re Reading

Mmmm… reading.

In a stroke of astounding irony, Google itself, in its response to the ruling, highlighted these privacy risks. When Google finds common ground with privacy advocates, that should be a warning sign. The ruling may take a small bite out of Google’s competitive edge, but it does so by turning consumers into collateral damage.

– Tom Snyder, Google’s antitrust ‘victory’ is a loss for consumer privacy

Marit Plus

Fractional digital leadership for your organization.