Environment Variables
Green Software Foundation
123 episodes
2 weeks ago
Each episode we discuss the latest news regarding how to reduce the emissions of software and how the industry is dealing with its own environmental impact. Brought to you by The Green Software Foundation.

Hosted on Acast. See acast.com/privacy for more information.

Categories: Technology, Business, News, Non-Profit, Tech News
Episodes (20/123)
Environment Variables
Software Architecture for Sustainability

Guest host Anne Currie speaks with Karthik Vaidhyanathan, Assistant Professor at IIIT Hyderabad, about integrating sustainability into AI development. They discuss how the world can balance digital growth with renewable energy goals and how AI systems can be designed to be energy-efficient rather than energy-intensive. Karthik shares insights from his research on sustainable AI and MLOps, including dynamically selecting and retraining models to cut energy use and costs without compromising performance. The conversation underscores the importance of dynamic system design and collaboration across academia, industry, and government to make sustainability a core principle in software engineering.


Learn more about our people:

  • Anne Currie: LinkedIn | Website
  • Karthik Vaidhyanathan: LinkedIn | GitHub | Website


Find out more about the GSF:

  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter


Resources:

  • Attention Is All You Need [06:34]
  • Environment Variables Ep119 Backstage: The Green Software Movement Platform [13:16] 
  • SustAInd [33:01]
  • HarmonE [36:00] 
  • SA4S @ SERC [36:33]


If you enjoyed this episode then please either:

  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

2 weeks ago
44 minutes 22 seconds

Environment Variables
The Week in Green Software: Sustainability along the DevOps Lifecycle

Guest host Anne Currie is joined by software engineer and sustainability advocate Julian Gommlich to explore how green practices can be embedded throughout the DevOps lifecycle. They discuss how modern operational practices like continuous delivery, automation, and agile iteration naturally align with sustainability goals, helping teams build more efficient, resilient, and energy-aware systems. The conversation covers real-world examples, from migrating to newer, more efficient software versions to understanding the carbon impact of data centers, and highlights why adopting a DevOps mindset is crucial for driving both environmental and business value in today’s rapidly changing digital landscape.


Learn more about our people:

  • Anne Currie: LinkedIn | Website
  • Julian Gommlich: LinkedIn | Website


Find out more about the GSF:

  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter


Resources:

  • Power in Numbers: Mapping the electricity grid of the future w/ Olivier Corradi [31:02] 
  • Electricity Maps [31:58]
  • Google’s huge new Essex datacentre to emit 570,000 tonnes of CO2 a year [41:06] 
  • Compute Gardener Scheduler
  • Scalable Platform for Reporting Usage and Cloud Emissions 


Events:

  • BetterSoftware – October 3 · Turin, Italy
  • Sustainable AI: Energy, Water, and the Future of Growth – October 6 · San Francisco, USA 
  • Sustainable Coding: Rust Meets the Right to Repair – October 16 · ’s-Hertogenbosch, Netherlands



3 weeks ago
58 minutes 44 seconds

Environment Variables
Building Energy Awareness into Operating Systems

Host Chris Adams speaks with Didi Hoffmann, CTO of Green Coding Solutions, about building energy awareness into operating systems and making sustainability a first-class concern in software development. They discuss Didi’s journey from Linux kernel programming to climate-focused tech, and much more!


Learn more about our people:

  • Chris Adams: LinkedIn | GitHub | Website
  • Didi Hoffmann: LinkedIn | Website

Find out more about the GSF:

  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:

  • Green Coding Solutions [02:32]
  • BioHof Potsdam [07:12]
  • PowerLetrics: An Open-Source Framework for Power and Energy Metrics for Linux | IEEE [12:32]
  • GitHub - green-kernel/powerletrics: Powermetrics for Linux 
  • Green Metrics Tool | green-coding.io [13:13]
  • Green Screen Catalyst Fund [17:15]
  • ProcPower | Prototype Fund [22:19]
  • Green Kernel Wordpress Plugin | GitHub [27:09]
  • GitHub - Green-Software-Foundation/real-time-cloud: Real Time Energy and Carbon Standards for Cloud Providers [36:48] 
  • Blue Angel for Software- Certificate Services | green-coding.io [38:52]
  • Paris Conference December 10th and 11th 2025 | Green IO [56:58]
  • Green Kernel · GitHub
  • Why We’re Exploring SCI Disclosure Certification | GSF  

Events:

  • ecoCompute conference [59:41]
  • Code: ENVIRONMENT-VARIABLES

1 month ago
1 hour 5 minutes 29 seconds

Environment Variables
Sustainable AI
Guest host Anne Currie speaks with Boris Gamazaychikov, Head of AI Sustainability at Salesforce, about aligning artificial intelligence with environmental responsibility. They explore the wide range of energy impacts across AI models, the development of the AI Energy Score benchmarking tool, and why transparency is essential for sustainable choices.

1 month ago
43 minutes 34 seconds

Environment Variables
Backstage: The Green Software Movement Platform

WIN FREE TICKETS TO GREEN IO LONDON:

  • CLICK THIS LINK AND COMMENT BELOW TO WIN 


Learn more about our people:

  • Chris Skipper: LinkedIn | Website
  • Gosia Fricze: LinkedIn | Website


Find out more about the GSF:

  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter


Resources:

  • Green Software Movement | GSF [04:33] 
  • Green Software Practitioner Course | GSF [17:56] 
  • Environment Variables Podcast | Ep 84 Backstage: SOFT (Previously TOSS) Project [24:42] 


Events:

  • Green IO London Conference September 23 & 24 2025 [20:37] 
  • Events - Green Software Movement | GSF


If you enjoyed this episode then please either:

  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!



1 month ago
29 minutes 44 seconds

Environment Variables
The Week in Green Software: AI Energy Scores & Leaderboards
Host Chris Adams is joined by Asim Hussain to explore the latest news from The Week in Green Software. They look at Hugging Face’s AI energy tools, Mistral’s lifecycle analysis, and the push for better data disclosure in the pursuit of AI sustainability. They discuss how prompt design, context windows, and model choice impact emissions, as well as the role of emerging standards like the Software Carbon Intensity for AI, and new research on website energy use.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Asim Hussain: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

News:
  • A Gift from Hugging Face on Earth Day: ChatUI-Energy Lets You See Your AI Chat’s Energy Impact Live [04:02]
  • Our contribution to a global environmental standard for AI | Mistral AI [19:47]
  • AI Energy Score Leaderboard - a Hugging Face Space by AIEnergyScore [30:42]
  • Challenges Related to Approximating the Energy Consumption of a Website | IEEE [55:14]
  • National Drought Group meets to address “nationally significant” water shortfall - GOV.UK 

Resources:
  • GitHub - huggingface/chat-ui: Open source codebase powering the HuggingChat app [07:47]
  • General policy framework for the ecodesign of digital services version 2024 [29:37]
  • Software Carbon Intensity (SCI) Specification Project | GSF [37:35]
  • Neural scaling law - Wikipedia [45:26]
  • Software Carbon Intensity for Artificial Intelligence | GSF [52:25]

Announcement:
  • Green Software Movement | GSF [01:01:45] 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Asim Hussain: ChatGPT, they're all like working towards a space of how do we build a tool where people can literally pour junk into it, and it will figure something out. 

Whereas what we should be doing is asking how you use that context window very carefully. And it is like programming.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Hello and welcome to This Week in Green Software, where we look at the latest news in sustainable software development. I am joined once again by my friend and partner in crime, or occasionally crimes, Asim Hussain of the Green Software Foundation. My name is Chris Adams. I am the Director of Policy and Technology at the Green Web Foundation (no longer the executive director there, as we've moved to a co-leadership model). And Asim, really lovely to see you again. I believe this is the first time we've been on a video podcast together, right?

Asim Hussain: Yeah. I have to put clothes on now, so that's...

Chris Adams: That raises all kinds of questions as to how intimate our podcast discussions were before. Maybe they had a different meaning to you than they did to me, actually.

Asim Hussain: Maybe you didn't know I was naked, but anyway.

Chris Adams: No, and that makes it fine. That's what matters. I also have to say, this is the first time I get to see you rocking the Galactus-style headphones that you've got on here.

Asim Hussain: Yeah, these are old ones that I repaired recently. I got my soldering iron out and repaired the jack at the end there. So I'm very proud of myself for having repaired them. I had the right to repair, Chris. I had the right to repair it.

Chris Adams: Yeah. This is why policy matters.

Asim Hussain: I also have the capability.

Chris Adams: Good on you for saving a bunch of embodied carbon; how that's calculated is something we might touch on. So, if you are new to this podcast, my friends, we're just gonna be reviewing some of the news and stories that have kinda shown up on our respective radars as we work in our corresponding roles in the Green Software Foundation and the Green Web Foundation.

And hopefully this will be somewhat interesting or at least diverting to people as they wash their dishes whilst listening to us. So that's the plan. Asim, should I give you a chance to just briefly introduce what you do at the Green Software Foundation before I go into this?

'Cause I realize I've just assumed that everyone knows who you are. I know who you are, but maybe there are people listening for the first time, for example.

Asim Hussain: Oh yeah. So my name's Asim Hussain. I am a technologist by trade; I've been building software for several decades now. I formed the Green Software Foundation four years ago, and now I'm the executive director. I'm basically in charge of running the foundation and making sure we deliver against our vision of a future where software has zero harmful environmental impacts.

Chris Adams: That's a noble goal to be working for. And Asim, I wanted to check. How long is it now? Is it three years or four years? 'Cause we've been doing this a while.

Asim Hussain: Yeah. Well, four years was in May, so yeah, four years. The next birthday's the fifth birthday.

Chris Adams: Wow. Time flies when the world is burning, I suppose.

Alright, so anyway, as per usual, we'll share all the show notes and any links to stories or projects we discuss, and we'll do our damnedest to make sure they're available for anyone who wants to continue their quest of learning more about sustainability in the field of software.

And I suppose, Asim, it looks like you're sitting comfortably now. Should we start looking at some of the news stories?

Asim Hussain: Let's go for it.

Chris Adams: Alright. Okay. The first one we have is a story from Hugging Face. This is actually from a few months back, but it's one to be aware of if it missed you the first time. Hugging Face released a new tool called ChatUI-Energy that essentially lets you see the energy impact live while using a chat session, a bit like ChatGPT or something like that. Asim, I think we both had a chance to play around with this, and we'll share a link to the story as well as the repo that's online. What's your immediate take when you see this and have a little poke around with it?

Asim Hussain: Well, it's good. It's a really nice addition to a chat interface. For the audience who's not seeing it: every time you do a prompt, it tells you the energy, in watt hours for what I'm seeing right now, but also some other stats, and how much of a phone charge it is. That's probably the most surprising one. I just did a prompt which was 5.7% of a phone charge, which is pretty significant. Actually, I dunno, is that significant? What I'm trying to find out from it, 'cause that's my world, is how that calculation works. What do you really mean by a calculation?

Is it cumulative? Is it session based? What have you calculated in terms of the energy emissions? The little info on the side says it's just the energy of the GPU during inference, so it's not the energy of anything else in the entire user journey of me using a UI to ask a prompt. But we also know that's probably the most significant part. And one of the things I'm seeing as I'm prompting it is that every single prompt's emissions are bigger than the previous prompt's. Oh no, actually, that's not true.

Yeah, it is.

Chris Adams: Ah, this is the thing you've been mentioning about cumulative emissions.

Asim Hussain: Cumulative, yeah, which is a confusing one, 'cause I've had a lot of people who are very good AI engineers go, "Asim, no, that's not true," and other people going, "yeah, it kind of is true." But they've just optimized it to the point where the point at which you get hit by it is at a much larger number.

But the idea is that it used to be an n-squared issue for your prompt and your prompt session history. Every time you put a new prompt in, all of your past session history was sent with your next prompt. And if you are actually building your own chat solution for your company or wherever, that is typically how you would implement it as a toy solution to begin with: take all the text that came before, plus the new text, and send it with the next request.

But what they were explaining to me is that in the more advanced solutions, you know, the ones from Claude or ChatGPT, there's a lot of optimization that happens behind the scenes, so it doesn't really happen that way. I was trying to figure out whether it happens with this interface, and I haven't quite figured it out yet.
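As a rough illustration of the naive pattern described above (a sketch, not the ChatUI-Energy implementation; the `send_to_model` stub and the 4-characters-per-token heuristic are assumptions):

```python
# Toy chat loop showing the naive resend-everything pattern.
# `send_to_model` is a stub standing in for a real inference API call,
# and the 4-chars-per-token estimate is a rough assumption.

history = []  # list of (role, text) pairs

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def send_to_model(context: str) -> str:
    return "ok"  # placeholder reply

def chat(prompt: str) -> str:
    history.append(("user", prompt))
    # Naive approach: concatenate *all* previous turns into every request,
    # so each new turn pays again for the tokens of all earlier turns.
    context = "\n".join(f"{role}: {text}" for role, text in history)
    print(f"sending ~{estimate_tokens(context)} tokens this turn")
    reply = send_to_model(context)
    history.append(("assistant", reply))
    return reply

for i in range(3):
    chat(f"question number {i}")
```

Summing the per-turn counts over n turns gives on the order of n-squared tokens in total, which is the issue Asim describes; server-side caching of earlier tokens is what breaks that relationship in production systems.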

Chris Adams: Oh, okay. So I think what you might be referring to is the fact that when you have a GPU card or something like that, there are new tokens and cached tokens, which are priced somewhat differently now, 'cause cached tokens use a slightly different kind of memory, which might be slightly faster or slightly lower cost to serve. So this is one thing that we don't see. The good news is we can share a link to this; for anyone listening, the source code is all on GitHub, so we can have a look at some of this.

And one of the key things you'll see when you send a message and the numbers update is that it's calculating all this stuff client-side, based on how big each model is likely to be.

Asim Hussain: It's a model.

Chris Adams: You can work it out. So when people ask, should I be using the words please or thank you, am I making things worse by treating this like a human, or should I just be prompting the machine like a machine, is there a carbon footprint to that? This will display some numbers you can see there, but it has all been calculated inside your browser rather than on the server.

So like you said, Asim, there is a bit of a model in play here, but as a way to mess around and have a way into this, it's quite interesting. And even now it's telling that there are so few providers that make any of this available. We're still struggling, even in the third quarter of 2025, to find a commercial service that will expose these numbers to you in a way that lets you meaningfully change the environmental footprint through either your prompting behavior or maybe model choice. I can't think of any large commercial service that's doing this. The only one is possibly GreenPT, which basically puts a front end on Scaleway's inference service, and I'm not sure how much is being exposed to them there, so they'll be making some assumptions as well.
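For a sense of what a purely client-side estimate like this can and can't do, here is a minimal sketch; the parameter counts and the joules-per-token constant are invented for the example, not taken from the ChatUI-Energy code:

```python
# Hypothetical client-side energy estimate: no server telemetry, just a
# lookup of an assumed model size and an assumed energy cost per token.
# Every constant below is illustrative, not a measurement.

ASSUMED_PARAMS_B = {"small-model": 8, "large-model": 70}  # billions of parameters
JOULES_PER_TOKEN_PER_B = 0.02  # assumed joules per generated token per billion params

def estimate_energy_wh(model: str, output_tokens: int) -> float:
    params_b = ASSUMED_PARAMS_B[model]
    joules = params_b * JOULES_PER_TOKEN_PER_B * output_tokens
    return joules / 3600  # convert joules to watt hours

# e.g. 500 generated tokens on the assumed 70B-parameter model:
print(f"{estimate_energy_wh('large-model', 500):.3f} Wh")
```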

Asim Hussain: Do you know, I feel very uncomfortable with the idea of a future where a whole bunch of people are not saying please or thank you, and the reason for it is they're proudly saying, "well, I care about sustainability, so I'm not gonna say please or thank you anymore 'cause it's costing too much carbon." I find that very uncomfortable. We could choose not to say please or thank you in all of our communications, because everything causes emissions no matter what you do. I don't know.

Chris Adams: I'm glad you weren't there, Asim, 'cause I was thinking about that too. There's a carbon cost to breathing out, after all. I guess maybe that's 'cause we're both English and it's kinda hardwired into us. It's the same way that, if you were to step on my toe, I would apologize to you for stepping on my toe, because I'm just English and it's muscle memory, kind of an impulse.

Okay.

Asim Hussain: Yeah.

Chris Adams: That's what we found. We will share a couple of links to both the news article and the project on Hugging Face, and I believe it's also on GitHub, so we can check this out and possibly make a PR to account for the different kinds of caching we just discussed, to see if that actually makes a meaningful difference.

For other people who are curious about this, this is one of the tools which lets you look not only at how etiquette can impact the carbon footprint of using a tool, but also at your choice of model. Some models might be, say, 10 times the size of another, but if they're not 10 times as good, there's an open question about whether it's really worth using them.

And I guess that might be a nice segue to the next story we touch on. But Asim, I'll let you go; you've got something to say.

Asim Hussain: No, I was gonna say, 'cause I've been diving into this a lot recently: how do you efficiently use AI? A lot of the content that's out there about AI's emissions and what to do to reduce them covers choices that, as a consumer of AI, you have absolutely no ability to affect. I mean, unless you are somebody who's quite comfortable taking an open source model and rolling out your own infrastructure or this or that or the other. If you're just somebody who works in a company where the company bought Claude, you're using Claude, end of story, what do you do? I might just derail our whole conversation to talk about this, but I think it's a really interesting area, because what it really boils down to is your use of the context window.

So you have a certain number of tokens in a chat before that chat implodes and you can't use it anymore. Historically, those token limits were quite low, because all the caching stuff hadn't been invented yet, and this and that and the other. That didn't mean the prompts were cheaper before; I think they were still causing a lot of emissions. But they've improved the efficiency, and rather than just saying, "I've improved the efficiency, leave it at that," it's Jevons paradox: "I've improved the efficiency, so let's just give people more tokens to play around with before we lock them out."

So the game we're always playing is how to use that context efficiently. And the please or thank you question, see, I don't think it's that good a question, 'cause it's two tokens in a context window of a million, which is what's coming down the pipeline.

If you wanna be in the green software space and actually have something positive to say about how to have a relationship with AI, it's all about managing that context. The way context works, it's like you've got this intern, and if you flash a document at this intern, you can't then say, "oh, ignore that, forget it, I didn't mean to show you that." It's too late. They've got it, it's in their memory, and you can't get rid of it. The only solution is to literally execute that intern and bury their body, get a new intern, and then make sure they see only the information they need to see, in the order they need to see it, so that when you finally ask them that question, they give you the right answer. And because there's a very limited understanding of how to play with this context space, what people end up doing is just going, "listen, here's my entire fricking document. It's actually 50,000 words long. You've got it, and now I'm gonna ask you, you know, what did I do last Thursday?"

And all of that context is wasted. It's also a very simplistic way of using an AI, which is why a lot of companies are moving towards that space: they know it means their end user doesn't have to be very well versed in the use of the tool in order to get benefit out of it.

So that's why ChatGPT and the rest are all working towards a space of: how do we build a tool where people can literally pour junk into it, and it will figure something out?

Whereas what we should be doing, and I think it's not only what we should be doing, it's what the people who are really looking at how to get real benefit from AI are doing, is asking how you use that context window very carefully. And it is like programming. That's my experience with it so far. I need to feed this AI information. It's gonna get fed in an order that matters. It's gonna get fed in a format that matters.

I need to make sure that the context I'm giving it is exactly right and minimal: minimal for the question that I want answered at the end of it. But we're in this space of abundance, because every AI provider's like, "well, do what you want. Here's a million tokens. Do what you want."

And we're all just chucking our context tokens at it. They're burning money on the other side, because they're not about making a profit at the moment; they're about becoming the winner. So they don't really care about profitability at that level.

Getting back to it again: I think we need to eventually be telling that story of how you actually use the context window carefully. And it's annoyed me that the conversation has landed at please and thank you, 'cause the actual conversation should be about, you know, turning that Excel file into a CSV, because it knows how to parse a CSV and it uses fewer tokens to parse a CSV than an Excel file. Don't dump the whole Excel file; export the sheet that you need in order for it to answer that question. If you f up, don't just carry on; kill the session and start a new one.

There's this advice that we need to be giving that I don't even know yet.
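As a rough illustration of that advice (the token heuristic and the file contents are assumptions made up for the example):

```python
# Illustrative comparison: dumping a whole workbook into the context versus
# sending only the sheet the question needs. The 4-chars-per-token heuristic
# and the example data are assumptions, not real measurements.
import csv, io

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# Stand-in for a 50,000-word export of an entire workbook.
whole_workbook = "lots of unrelated sheets and rows... " * 10_000

# The one sheet the question actually needs, exported as compact CSV.
rows = [["date", "task"], ["2025-08-14", "wrote the quarterly report"]]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
relevant_sheet = buf.getvalue()

question = "What did I do last Thursday?"
print("naive prompt tokens:  ", estimate_tokens(whole_workbook + question))
print("minimal prompt tokens:", estimate_tokens(relevant_sheet + question))
```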

Chris Adams: MVP. Minimal viable prompt.

Asim Hussain: Minimal viable prompt! Yeah, what is the minimal viable prompt? What's frustrating me is that I use Claude a lot, and Claude's got a very limited context window, and I love that.

It was like Twitter. Remember Twitter, when you had to fit it into 140 characters?

It was beautiful.

Chris Adams: Then it went to 280, and now, if you're prepared to be on that website, you can monologue as much as you want.

Asim Hussain: Yeah, you can now monologue, but it was beautiful having to express an idea in that short form. I loved that whole "how do I express this complex thing in a tweet?" With the short context windows, we were kind of forced to do that, and now I'm really scared, because Claude literally two days ago has now gone, right, you've got a million-token context window, and I'm like, oh, damn it. Now I don't even have that constraint personally.

Chris Adams: That's a million-token context window, when you say that, right? So that's enough for a small book, basically. I can dump an entire book into it, then ask questions about it. Well, I guess it depends on the size of your book, really, but that's what you're referring to when you talk about a million context window there.

Asim Hussain: Yeah, yeah. And it's kind of an energy question, but knowing the energy doesn't really help. I've just looked at the ChatUI-Energy window and checked a couple of prompts, and it's told me the energy, and it's that same world: it's just there to make me feel guilty, whereas the actual advice you should be getting is, well, what do I do? How am I supposed to prompt this thing to actually make it consume less energy?

Chris Adams: Oh, I see. So this is "you're showing me the thing and now you're making me feel bad." And this may be why various providers who host chat tools and want people to use them more don't automatically ship features that make people feel bad, without giving them something they can actually do to improve that experience. And it may be that it's harder to share guidance like you've just shared about making a minimal viable prompt. To be honest, in defence of Anthropic, they do actually have some pretty good guidance now, but I'm not aware of any of it that talks in terms of "here's how to do it for the lowest number of tokens," for example.

Asim Hussain: No, I don't see it. I mean, they do have stuff on how to optimize your context window, but at the same time they're living in this world where everybody's now working towards bigger context windows; that's what they have to do.

And I don't know where that leaves us, because the AI advice we would typically have given in the past is, listen, just run your AI in a cleaner region. And you're like, well, I can't bloody do that with Anthropic, can I? It's just whatever it is, you know.

Chris Adams: That's a soluble problem though.

Asim Hussain: Like what I'm just saying, or...?

Chris Adams: Yeah. The idea of saying, "hey, I want to use the service, and I want to have some control over where this is actually served from," that is a thing you can plausibly do. It's maybe not exposed to end users, but it is something that is doable.

And I mean, we've got Mistral's LCA reporting as one of the things to touch on, where they do offer some kind of control, not directly, but basically by saying, "well, because we run our stuff in France, we're already using a low carbon grid." So it's almost like by default you're choosing this, rather than explicitly opting in to the greener option through an active choice, I suppose.

Asim Hussain: They're building some data centers over there as well, aren't they? So it's a big advantage for Mistral to be in France, to be honest with you.

Chris Adams: It definitely does help. Well, we had this on our list, actually, so maybe this is something we can talk about as our next story, because another one on our list since we last spoke was a blog post from Mistral AI, referring, in rather grandiose terms, to "our contribution to a global environmental standard for AI."

This is them sharing, for the first time, lifecycle analysis data about using their models. And it's not just them: they worked with a number of organizations, including France's environment agency, ADEME, and they were following a methodology specifically set out by AFNOR, the French standards body: the frugal AI methodology.

They were also working with, I think, two other organizations; I think one is Sopra Steria, and I forget the name of the other one mentioned here. But it's not just a throwaway quote from, say, Sam Altman. They were working with Hubblo, which is a nonprofit consultancy based in Paris, and Resilio, a Swiss organization, who are both very well respected and peer reviewed in this space.

So you had some things to share about this one as well, 'cause this felt like a real step forward from commercial operators, but still falling somewhat short of where we need to be. So, Asim, when you read this, what were the first things that occurred to you? Were there any real takeaways for you?

Asim Hussain: Well, I'd heard about this on the grapevine last year, because I think one of the researchers from Resilio was at Green IO in Singapore. I was there and he gave a little sneak preview. They didn't say who it was gonna be, they didn't say it was Mistral, but they said, we are working on one.

And he had enough to tease some aspects of it. I suspect some of the actual detail work never got released once it was announced, unless there's a paper I'm missing. But it was a large piece of work.

It's good. It's the first AI company in the world of this size that has done any work in this space and released it, other than a flippant comment from Sam Altman along the lines of "I heard some people seem to care about the energy consumption of AI." So that's good. And I think it's gonna be used as a proxy or an analog for many other situations.

It is lacking a little bit in detail, but that's okay. We should celebrate every organization that leads with some of this stuff. When you're inside these organizations, it's always a very hard headwind to push against.

'Cause there are a lot of reasons not to release stuff like this, especially in a very competitive space like AI. So they took the lead; we should celebrate that. And there's some data here we can use as models: when we now want to look at the emissions of Anthropic or OpenAI or Gemini or something like that, there are some more analogs we can use. But also not a huge amount of surprise, I'd say. It's mostly training and inference,

Chris Adams: Yep.

That turns out to be where the environmental footprint is.

Asim Hussain: Yeah, training and inference, which is good. I mean, obviously hardware and embodied impacts, they kind of separate out from those two. I suspect the data center construction is probably gonna be, I don't know, quite low. Yeah, yeah.

Chris Adams: I looked at this, and it's been very difficult to find any meaningful numbers on what share this might actually make, 'cause as the energy gets cleaner, it's likely that this will be a larger share of emissions. But one thing that was surprising here: this is France, which is a relatively clean grid, maybe between 40 and 60 grams of CO2 per kilowatt hour, which is around 10 times better than the global average, right? Or maybe between 8 and 10 times cleaner. And with the grid being that clean, you would expect the embodied emissions from data centers and so on to represent a larger share. But the high-level, pretty-looking graphic we see here shows it's less than 2% across all these different impact criteria, like carbon emissions, water consumption, or materials, for example.
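As a quick check on those ratios, using approximate, commonly cited ballpark intensities rather than official figures:

```python
# Rough check on the ratio Chris cites. Both intensities are approximate
# ballpark figures, not official numbers.
france_g_per_kwh = 50        # roughly the 40-60 gCO2/kWh range mentioned
global_avg_g_per_kwh = 480   # commonly cited global average ballpark
print(global_avg_g_per_kwh / france_g_per_kwh)  # ~9.6, i.e. between 8x and 10x cleaner
```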

This is one thing I was expecting to be larger, to be honest. The other thing that I noticed when I looked at this is that, dude, there are no energy numbers.

Asim Hussain: Oh, yeah. 

Chris Adams: Yeah. And that's the thing that everyone's continually asking for.

Asim Hussain: It's an LCA. They used the LCA specification, so...

Chris Adams: That's a very good point. You're right, that's a valid response, I suppose, 'cause energy by itself doesn't have a carbon footprint, but generating that electricity does have that impact. So yeah, okay.

Asim Hussain: For the audience: they used a well known, well respected, standardized way of reporting lifecycle emissions, the LCA (lifecycle analysis) methodology, which is an ISO-standardized way of doing it. So they adhered to a standard.

Chris Adams: This actually made me realize something. If this data exists and you are a customer of an AI provider ('cause we were looking at this ourselves, trying to figure out, when people ask us about AI policies, what you would want inside one), the fact that you have a provider here who's actually done this work suggests it's possible to request this information as a customer, under NDA. In the same way that if you're speaking to Amazon, or probably any of the large providers, and you're spending enough money with them, you can have information disclosed to you directly under NDA.

So it may not be available for the world to see, but if you are an organization using, say, Mistral's services, this makes me think they're probably able to provide much more detailed information, so you can at least make informed decisions in a way you might not be able to with some of the competing providers. Maybe that's one thing we do see here: not really a published benefit, but something you're able to use if you're in a decision-making position yourself and you're looking to choose a particular provider.

Asim Hussain: I mean, you should always be picking the providers who've actually got some, you know,

Chris Adams: Optimize for disclosure.

Asim Hussain: Optimize for disclosure, yeah. Always be picking the providers who optimize for disclosure. For the people listening to this, that is the thing that you can do. And Mistral also surfaced in here that there's a pretty linear relationship between your emissions and the size of the model, which is a very useful piece of information for us to know as consumers.

Because then we can go, well, actually, I've heard all these stories about using smaller models, and now you actually have some data behind it, supporting the fact that using a smaller model hasn't got some weird non-linearity to it, where a half-size model is only 10% less emissions. A half-size model is half the emissions. That's a pretty good thing to know. It helps Mistral that they have a lot of small models you can pick and choose from; a lot of this stuff really benefits them. They're the kind of organization whose product offering does benefit a sustainability community.
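A trivial sketch of that linearity claim; the baseline model size and the per-token figure are assumptions for illustration, not Mistral's published numbers:

```python
# If per-token emissions scale linearly with parameter count, halving the
# model halves the emissions. Baseline figures below are assumed.
BASE_PARAMS_B = 120      # assumed reference model size, billions of parameters
BASE_G_PER_TOKEN = 0.5   # assumed emissions at that size, gCO2e per token

def per_token_emissions(params_b: float) -> float:
    return BASE_G_PER_TOKEN * (params_b / BASE_PARAMS_B)

print(per_token_emissions(120))  # 0.5
print(per_token_emissions(60))   # 0.25: half the size, half the emissions
```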

So they have small models you can use. I wonder actually, Chris, 'cause they do say that they're building their own data center in France, but until now they've never said where they've been running their AI. They might have been running it in the East Coast US or something like that.

Chris Adams: I think that wouldn't be very likely, given that most of their customers are probably still based in Western Europe. There is very much a Gallic flavor to the tooling. And Mistral's tools are ones I've been using myself over the last few months, for example.

It's also worth bearing in mind that they took on a significant amount of investment from Microsoft a few years back, and I would be very surprised if they weren't using a French data center to serve French customers. France has, since 2021 I believe, had a law specifically about measuring the environmental footprint of digital services.

I'm just gonna share a link to the name of the law, because I'm gonna butcher the French pronunciation, but it basically translates to the Reduce the Environmental Footprint of Digital Services law.

That's pretty much it. And as a follow-on from that, there's the RGESN, the general ecodesign guidance shared across government websites in France. They've already got a bunch of this stuff out there for how to do greener IT. I suspect that France is probably gonna be the premier country, if you're running a startup, to see something like this happening, much more so than the US right now, especially given the current federal approach, which is basically calling climate change into doubt in the wider sense.

We were talking about disclosure, right, and optimizing for disclosure. That's probably a nice segue to another link we had here, which was the energy score leaderboard. This is one thing we frequently point to; in my line of work we've suggested that if you are looking for particular models, one of the places to look would be the AI Energy Score Leaderboard, which is maintained by Hugging Face.

I share this 'cause it's one of the few places where you can say, I'm looking for a model to help me do something like image generation or captioning text or generating text or various things like this, and you can get an idea of how much power these models use on a standardized setup.

Plus what the satisfaction score might be, based on a standardized set of tests, I suppose. The thing is, though, this looks like it hasn't been updated since February. So for a while I was thinking, oh, Jesus, do we have to be careful about how we recommend this?

But it turns out there's a new release coming out in September; it's updated every six months. And now that I do have to know about AI, this is one thing I'm looking forward to seeing new releases on, because if you look at the leaderboard for various slices, you'll see things like Microsoft Phi-1 or Google Gemma 2 or something like that.

Asim Hussain: That's quite old?

Chris Adams: Yeah, these are old now; six months in generative AI land is quite a long time. There's Phi-4 now, for example, and there are a bunch of these out there. So I do hope that we'll see this updated. And if you feel the same way, then, yeah, go on.

Asim Hussain: I always assumed this was a live leaderboard. The emissions of a model are linked to the model and the version of it, so once you've computed that and put it on the leaderboard, it's not gonna change. Then it's just a case of, as new models come out, you measure them and see how they go on the leaderboard. Because I'm seeing something here: I'm seeing OpenAI GPT. Isn't that the one they just released?

Chris Adams: No, you're thinking GPT-OSS, perhaps

Asim Hussain: Oh.

Chris Adams: That's one they had from a while ago. GPT-OSS, for example, came out less than two weeks ago, I believe. That isn't showing up here.

Asim Hussain: That isn't showing up

Chris Adams: I was actually looking at this thinking, oh, hang on, something being updated every six months? It'd be nice if there was a faster way to expedite getting things disclosed to this. For example, let's say I'm working in a company and someone's written a policy that says only choose models that disclose publicly somewhere. This is one of the logical places you might be looking for this stuff right now, and there's a six-month lag, and I can totally see a bunch of people saying, no, I don't wanna wait for that. But right now there's a six-month update process for this.

Asim Hussain: Which in the AI realm is an eternity. Yeah.

Chris Adams: Yeah. But at the same time, it feels like a thing that should be funded, right? I wish there was a mechanism by which organizations that do want to list things could pay for something like that, so they can actually get this updated and you've got some kind of meaningful, centralized way to see this. Because whether we like it or not, people are rolling this stuff out, and in the absence of meaningful information, or with very patchy disclosure, you do need something. This is one of the best resources I've seen so far, but it would be nice to have it updated.

So this is why I'm looking forward to seeing what happens in September. And if you too realize that timely access to information about models might be useful, it's worth getting in touch with these folks, because I asked them about the update cycle, and basically what they said was, yeah, we're really open to people speaking to us to figure out a way to create a faster, funded mechanism for getting things listed, so you can have this stuff visible. Because as I understand it, this is a labor of love by various people between their day jobs, basically.

So it's not like they've got two or three FTEs working on this all day long, but it's something that is used by hundreds of people. It's the same kind of open source problem that we see again and again. But this is one of the pivotal data sources you could probably cite in the public domain right now.

So this is something that would be really nice to actually have resolved.

Asim Hussain: 'Cause the way Hugging Face works is, they have a lab and they have their own infrastructure. Is that how it works?

Chris Adams: That was either physically theirs, or it was just some space they...

Asim Hussain: Spin up. But yeah, effectively, to get a score here, it's not self-certified, I presume; each of these things has got to get run against the benchmark. Although, if I remember, there was a way of self-certifying. There was literally a way for...

Chris Adams: You could upload your stuff.

Asim Hussain: Yeah, OpenAI could disclose to Hugging Face what the energy of it was. But for most of it, you've gotta run it against the H100, and there's a benchmark.

Chris Adams: Yep, exactly. So there are some manual steps to do that. And this is precisely the thing you'd expect: it's not an insoluble problem to have some way to expedite this, so that people across the industry have some mechanism to do it. 'Cause right now it's really hard to make informed decisions about model choice or anything like that.

Even if you were to architect a more responsibly designed system, particularly in terms of environmental impact here.

Asim Hussain: Because if you were to release a new model and you wanted it listed on the leaderboard, would you have to run every other model again? Why would you need to do that? You'd need to...

Chris Adams: You wouldn't need to do that. It's just that, because you don't have control over when it's released, you have to wait six months until the people who work on it get round to doing that.

Asim Hussain: It's just the time, yeah. Someone's time.

Chris Adams: If you're gonna spend millions of dollars on something like this, then even if it was to cost a figure in the low thousands to get it listed and visible, that would be worth it, so that you've got a functioning way for people to disclose this information to inform decisions. 'Cause right now there's nothing that's easy to find. This is probably the easiest option I've seen so far, and we've only just seen the AI code of practice that came into effect in August in Europe, for example.

But even then, you still don't really have much in the way of public ways to filter or look for something based on the particular task you're trying to achieve.

I wanted to ask you actually, Asim. I can't remember if this came up last time I was speaking to you, but I know that, with your GSF hat on, there's been some work to create a Software Carbon Intensity for AI spec, right? Now, I know there's a thing, like with court cases, where you don't wanna prejudice the discussions too much by talking about things that are still internal. Although there isn't an AI court you can be in contempt of, not yet anyway; who knows, give it another six months. Is there any juicy gossip or anything you can share about what people have been learning? You folks have been diving into this with a bunch of domain experts, and while I do some of this, I'm not involved in those discussions.

I'm aware there has been a bunch of work trying to figure out, okay, how do you standardize around this? What do you measure? Do you count tokens? Do you count a prompt? Is there anything you can share that you're allowed to talk about before it's released?

Asim Hussain: Yeah. I think what we've landed on is that as long as I'm not discussing stuff which is in active discussion, and it's made its way into the spec with broad consensus over it, it's pretty safe to talk about. We do everything in GitHub, so I won't discuss anything which has only been discussed in an issue or a discussion or a comment thread or something. If it's actually made its way into the actual spec, that's pretty safe.

So yeah, there were a lot of conversations at the start, and I was very confused; I didn't really know where things were gonna end up. At the start there were a lot of conversations around, well, how do we deal with training? And there's this thing called inference. It's interesting, 'cause when we look at a lot of other specs that have been created, even the way the Mistral LCA was done, they gave a per-inference, or per-request figure. I've forgotten exactly what they did, but they didn't do per token.

Chris Adams: They do per chat session or per task, I think; something along those lines. Yeah.

Asim Hussain: Something along those lines; it wasn't a per-token thing. But even then, they added the training cost to it. And some of the questions we were asking were: is there a way of adding the training? The training happened ages ago. Is there a function you can use to amortize that training across future inference runs?

We explored lots of ideas, like a decay function: if you were the first person to use a new model, the emissions per token would be higher, because you are amortizing more of the training cost, and for older models, less. So we explored a decay function, yeah. There were lots of ideas.

Chris Adams: Similar to what we have with embodied carbon versus use-time carbon, essentially. You're doing the same thing, with training being the embodied bit and inference being the usage. If you had training and then just three inferences, each of those inferences is massive in terms of the embodied carbon; if there were a billion, it's gonna be much lower for each one.
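A sketch of that amortization idea, with made-up numbers; note this is one of the options that was discussed, not what the spec adopted:

```python
# Illustrative amortization of training emissions over inference, in the
# spirit of embodied carbon. All figures are made up for the example, and
# the SCI for AI spec did not adopt this approach.

TRAINING_EMISSIONS_KG = 500_000  # assumed one-off training footprint
PER_INFERENCE_KG = 0.002         # assumed operational footprint per inference

def amortized_per_inference(expected_lifetime_inferences: int) -> float:
    """Training share plus operational share, per inference."""
    return TRAINING_EMISSIONS_KG / expected_lifetime_inferences + PER_INFERENCE_KG

for n in (3, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} inferences -> {amortized_per_inference(n):,.6f} kgCO2e each")
```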

Asim Hussain: But then you get into really weird problems, because we do that with embodied carbon for hardware, but we do it by saying, do you know what? The lifespan's gonna be four years and that's it, and we're just gonna pretend it's an equal weighting every single day for four years.

Chris Adams: Not with the GHG Protocol. You can't amortize it out like that with the GHG Protocol; you have to account for it in the same year, so your emissions look awful for one year.

Asim Hussain: Ah, in the year that you bought it.

Chris Adams: Yeah, and this is actually one of the problems with the default way of measuring embodied carbon versus other things here. Facebook, for example, have proposed another way of measuring it, which does use this kind of amortization approach, quite a bit closer to how you might do typical amortization of capital expenditure.

Asim Hussain: CapEx, yeah.

Chris Adams: So that's the difference in the models. And these are some of the honestly sometimes tedious details that actually have quite a significant impact, because they create totally different incentives. Especially at the beginning of something: if you said, well, you pay the full cost, then you are incentivized not to use the shiny new model, 'cause it makes you look awful compared to using an existing one, for example.

Asim Hussain: And that's one of the other questions that kept coming up. We didn't pick that solution, and we also didn't pick the solution, though we actually had the conversation, of amortizing it over a year and then having a cliff. That would have incentivized people to use older models, with the idea that older models were the thing.

There were questions that popped up all the time. Like, what do you do when you have an open source model? If I was to fine-tune an open source model and then make a service based off of that, am I responsible for the emissions of the open source model I started with, Llama or whatever it was? Or is the provider?

Because if you were to say no, then you're incentivizing people to just open source their models and go, "meh, the emissions are free now, 'cause I'm using an open source model." So there are lots of these; it's very nuanced. A lot of the conversations we have in the standards space are like that: a small decision can have a cascading series of unintended consequences.

So the thing we really sat down with was: what do you actually want to incentivize? Let's just start there. Okay, we've listed the things we wanna incentivize. Right, now let's design a metric which, through no accident, incentivizes those things. And where it ended up was basically two measures.

there's gonna be two measures. So we didn't, we didn't solve the training one because there isn't a solution to it. It's a different audience cares about the training emissions than that doesn't, consumers, it's not important to you because it doesn't really matter. It doesn't change how you behave with a model.

It doesn't change how you prompt a model just because it had some training emissions in the past. What matters to you most is your direct emissions from your actions you're performing at that given moment in time. So it's likely gonna be like two SCI scores for AI, a consumer and a provider. So the consumer is like inference plus everything else.

And also, what is the functional unit? There's a lot of conversation here as well, and that's likely to land at basically the same unit as how you sell an AI model. If you're selling an LLM, you're typically selling by token. So for us to pick something which isn't the token, in a world where everybody else is thinking token, token, token, would be a very strange choice, and it would make the decision really hard for people when they're evaluating models. They'd be like, oh, it's this many dollars per token for this one and this many dollars per token for that one, but the carbon is quoted per something else entirely. I can't rationalize that. Whereas if it's, well look, that's $2 per token but one gram per token of emissions, and that's $4 per token but half a gram per token of emissions, I can evaluate the cost-carbon trade-off a lot easier. The cognitive load is a lot lower.
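As a sketch of that "same units as how it's sold" point, here's the $2-versus-$4 comparison as simple arithmetic; the models and numbers are made up for illustration:

```python
# Hypothetical models and figures, matching the example above.
models = {
    "model_a": {"usd_per_token": 2.0, "g_co2e_per_token": 1.0},
    "model_b": {"usd_per_token": 4.0, "g_co2e_per_token": 0.5},
}

for name, m in models.items():
    # One way to frame the trade-off: grams of CO2e per dollar spent.
    g_per_usd = m["g_co2e_per_token"] / m["usd_per_token"]
    print(f"{name}: ${m['usd_per_token']}/token, "
          f"{m['g_co2e_per_token']} g/token -> {g_per_usd:.2f} g per $")
```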

Chris Adams: So you're normalizing on the same units, essentially, right?

Asim Hussain: Yeah, to however it's sold. AI is also a very fast-moving space and we dunno where it's gonna land in six months, but we're pretty sure people are gonna figure out how to sell it in a way that makes sense. So we're lining up the carbon emissions with how it's sold.

And the provider one is gonna include the training emissions, but also data and everything else, and that's gonna be probably per version of an AI. So you can imagine OpenAI's ChatGPT would have a consumer score of carbon per token, and also a provider score per version, probably per flop or something, so: per flop of creating ChatGPT 5, it was this much carbon. And that's really how it's gonna land.

It's also not gonna be a total. Forget about totals. Totals are pointless when it comes to changing behavior.

You really want to have a ratio. There's this thing called neural scaling laws, from the paper of the same name.

Chris Adams: Is that the one where you double the size of the model and it's supposed to double the performance? Is that the thing?

Asim Hussain: It's not double, but yeah, that kind of relationship. There's this perfectly logarithmic relationship between model accuracy and model size, between model accuracy and the amount of training data you put into it, and between model size and the amount of compute you put in. It's all logarithmic.
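For reference, the scaling-laws paper (Kaplan et al., 2020) expresses this as power laws, which appear as straight lines on log-log axes; the exponents below are quoted from memory, so treat them as indicative rather than exact:

```latex
% Loss as a power law in parameters N, dataset size D, and compute C:
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},\qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
% with \alpha_N \approx 0.076, \alpha_D \approx 0.095, \alpha_C \approx 0.05:
% the "logarithmic" relationship described above.
```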

So it's often used as the rationale for why we need larger models, because we can prove the gains. But it basically comes down to this: it's not that important to me to know the total training emissions of ChatGPT 5 versus ChatGPT 4. What's far more useful is to know, well, what was the carbon per flop of training 4 versus the carbon per flop of training 5? 'Cause that gives you more interesting information. Have you, did you,

Chris Adams: What does that allow?

Asim Hussain: Bother to do anything? Huh?

Chris Adams: Yeah. What does that allow me to do? If I know if 5 is 10 times worse per flop than 4, 

what that incentivize me to do differently? 'Cause I think I might need a bit of hand help here making this call here.

Asim Hussain: Because, let's say ChatGPT 6 is going to come along. The one thing we know for absolutely sure is that its total is gonna be bigger than ChatGPT 5's. So as a metric, if you're an engineer, somebody trying to make decisions about how to train this model while causing less emissions, a total doesn't really help, because it's just a number that goes higher and higher.

Chris Adams: Oh, it's a bit like carbon intensity of a firm versus absolute emissions. Is that the argument you're using? So it doesn't matter that Amazon's emissions have increased by 20%; the argument is, well, at least they've got more efficient per dollar of revenue, so that's still improvement.

That's the line of reasoning you're using, right?

Asim Hussain: Yeah. It's because of the way the SCI is designed. If you want to do a total, there are LCAs, like the thing that Mistral did. There are existing standards that are very well used and very well respected, and there's a lot of information about how to do them. You can just use those mechanisms to calculate a total. What the SCI is all about is: what is a KPI that a team can use and optimize against, so that over time the product gets more and more efficient?

Obviously, you should also be calculating your totals and making decisions based upon both. But just having a total, I've gotta be honest with you, in terms of changing behavior, I don't think it changes any behavior. Full stop.

Chris Adams: Okay. I wanna put aside the whole "we live in a physical world with physical limits" thing, but I think the argument you're making is essentially that you need something that at least allows you to course correct on the way to reducing emissions in absolute terms. And if you at least have an efficiency figure, that's something you can calibrate and improve over time in a way that you can't with absolute figures, which might be like having a budget between now and 2030, for example.

That's the thinking behind it, right?

Asim Hussain: Yeah. I've actually got an example here. We don't have actual compute figures; no one's ever disclosed the actual compute they used per model. But they have, or they used to, disclose the number of parameters per model, and we know that there's a relationship.

So for GPT-2, 3 and 4, we have some idea of the training emissions and the parameters, not from disclosure but from research. And when you compute the emissions per billion parameters of the model: GPT-2 was 33.3 tons of carbon per billion parameters.

Chris Adams: Okay.

Asim Hussain: GPT-3 went down to 6.86 tons of carbon per billion parameters. So it went down from 33 to 6. So that was a good thing. It feels like a good thing, but we know the total emissions of 3 was higher. Interestingly, GPT-4 went up to 20 tons of carbon per billion parameters. So that's like an interesting thing to know.

It's like you did something efficient between 2 and 3. Whatever it was, we don't know, but something good happened: the carbon emissions per parameter reduced. Then you did something else. Maybe it was bad, maybe it was necessary, maybe it was architectural. But for some reason your emissions,

Chris Adams: You became massively less efficient in that next generation.

Asim Hussain: In terms of carbon. In terms of carbon, you became a lot less efficient in GPT-4. We have no information about GPT-5. I hope it's less than 20 metric tons per billion parameters.
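As a sketch of the intensity calculation being done here: the training totals and parameter counts below are placeholder estimates chosen to reproduce the quoted ratios (published estimates vary by source); only the ratio logic is the point.

```python
# model: (training_emissions_tCO2e, parameters_in_billions) -- hypothetical
estimates = {
    "GPT-2": (50.0, 1.5),
    "GPT-3": (1200.0, 175.0),
}

for model, (total_t, params_b) in estimates.items():
    intensity = total_t / params_b  # tCO2e per billion parameters
    print(f"{model}: {intensity:.1f} tCO2e per billion parameters")
# -> GPT-2: 33.3, GPT-3: 6.9, matching the figures quoted above
```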

Chris Adams: I think I'm starting to follow your argument, and I'm not gonna say whether I agree with it or not, but I think the argument you're making is that the intensity figure is itself a useful signal you can then do something with. There was maybe a regression or a bug in that generation, and you can say, well, what change do I need to make so I can start working my way towards, I don't know, us careering less forcefully towards oblivion, for example. Right?

Asim Hussain: Yeah.

Chris Adams: Okay, I think I understand that now. And I suppose the question I should ask, following on from that: we got into this because we were talking about the SCI for AI, this standard, or presumably an eventual ISO standard, that you'll publish. Is there a rough roadmap for when this is gonna be in the public domain, for example, or when people might be requesting it in commercial agreements, something like that?

Asim Hussain: I can tell you what my hope is. Everything is based upon consensus, and if anybody objects then all the plans basically get put on the back burner. But everything's looking very positive. I'm very hopeful that by the end of Q3, so the end of September, we will have gone into draft. There hasn't been full agreement yet as to what we'll actually publish for that, but I'm hoping we'll be able to publish the whole specification, because what we wanna start doing, and maybe somebody listening is interested, is running case studies. Right now the outline of what we want the calculation to be is being agreed on.

But we need a lot of use cases from very different types of products that have computed scores using it. Not just "I'm a major player and I've got a gazillion servers." There are lots of organizations we're talking to who say, listen, AI is not our central business, but we've built AI solutions internally and we want to be able to measure that. Or even smaller organizations, people who are not even training AI but just consuming APIs and building an AI solution on top of that. So there's a whole range of things we wanna measure. We want to go into draft in September and then work on a number of case studies.

Hopefully, and this is my dream, no one hold me to this, by Q1 or Q2 next year we're out, and we start the ISO process then. But when we come out, we want to come out with: here's a specification. It'll come out with a training course you can take to learn how to compute it. It'll come out with tooling, so you can just plug in values and get your numbers. And it'll come out with a number of case studies from organizations saying, this is exactly how we calculated it, and maybe you can learn from how we did it. So that's our goal.

Chris Adams: Okay, so we're looking at basically the first half of 2026, so there's still time to be involved. And presumably later on in Q3, Q4, some of this will be going out in public for people to respond to, something like a consultation.

Asim Hussain: Yeah, there'll be a public consultation coming up soon.

Chris Adams: That's useful to know, because it takes us to the last story we were looking at, which is also about the challenges of working out the environmental footprint of things, particularly websites.

This is our final link of the podcast: a link to the IEEE, where there's a paper by, I believe, Janne Kalliola and, I'm not gonna pronounce the other person's name very well, Juho Vepsäläinen. I'm so sorry for mispronouncing your names. I'm drawing attention to this 'cause it's the first time in a while I've seen a peer-reviewed article in the IEEE specifically, which is the Institute of Electrical and Electronics Engineers. They looked at both Firefox Profiler and Website Carbon, basically asking: what does using these website carbon calculators actually tell you, and what can you use them for?

And they had some recommendations: okay, we've tried using these tools, what can we learn from that? The thing that was particularly interesting was that they were using Firefox Profiler specifically to look at the footprint directly, and there are two or three insights that come out of this, which I thought was interesting.

One of them: it's really hard to get meaningful numbers around data transfer, which is something we've covered in a lot of detail and which I'm finding very helpful for future discussions around creating something like a Software Carbon Intensity for the Web.

The other thing they spoke about was tools like profilers, which do provide direct measurement and give you some meaningful numbers. But when you look at the charts, the differences between sites aren't that high. For example, they showed comparisons with tools like Website Carbon, which show massively different readings for the carbon footprint of one site versus another, whereas when they used tools like Firefox Profiler, the differences were somewhat more modest. So this gives the impression that some of the tools using the Sustainable Web Design model may be overestimating the effectiveness of changes you might be making as an engineer, versus what gets measured directly.

Now, there's obviously an elephant in the room, which is that this isn't measuring what's happening server side. But this is the first time I've seen a real deep dive by people looking into this who come up with things you can test, or reproduce, to see whether you're getting the same numbers they did. I found it quite noteworthy and really nice to see, and I only found out about it because Janne shared it inside the ClimateAction.tech Slack.

Asim Hussain: So it was a paper inside the IEEE, or an article inside it?

Chris Adams: It's a paper, a peer-reviewed paper in volume 13. They talk about the current state of the art, how people currently try to measure energy consumption on the Web; then some of the tools you can use on end-user devices; then some of the issues with trying to go on data transfer alone, why that isn't necessarily the best thing to be using, and what kind of statements you could plausibly make.

As someone whose organization implemented the Sustainable Web Design model, having something like this is so useful, because we can now cite peer-reviewed work in the public domain and say, hey, we need to update this based on that. Or possibly follow an idea which I believe Professor Daniel Schien shared with me. He said, well, if we've got figures for the top million websites, or the top thousand websites, maybe you could experimentally validate those against what you have in the model already, so you can get better numbers. Yeah, exactly: if you measure the top thousand and compare against the model figures, that gives you an idea of the gap between the model figure and the ground truth, so you can end up with a better figure.

There's a bunch of things you could do that might make this tooling much easier to use and much more likely to give people the signals they're craving to build websites in a more climate-compatible fashion, for example.
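A rough sketch of that validation idea, assuming you have both model estimates and direct measurements per site; the data and the correction logic here are illustrative, not the paper's method:

```python
# Compare model estimates against measured figures and derive a correction.
from statistics import median

# (site, model_estimate_g, measured_g) -- placeholder values
samples = [
    ("site-a", 1.9, 0.6),
    ("site-b", 3.2, 1.1),
    ("site-c", 0.8, 0.4),
]

ratios = [est / meas for _, est, meas in samples]
correction = median(ratios)  # median is robust to a few outlier sites
print(f"model overestimates by ~{correction:.1f}x; "
      f"divide model figures by this to get closer to ground truth")
```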

Asim Hussain: And I think it's important, because when you use a tool and it gives you a value, it's incentivizing a behavior. And it might be incentivizing the wrong behavior. That's one of the things I find: when people get excited about a measurement, I don't, because I need to know the details behind it. 'Cause I know that if you're a little bit wrong, you're incentivizing the wrong thing, and you shouldn't just take it at face value. But it's really hard. It's really bloody hard even for the tool makers to figure out what to do here. So this isn't really a criticism of anybody; it's just really hard to figure this stuff out. But the Firefox stuff is using yours, isn't it? It's using CO2.js, isn't it?

Chris Adams: I'm not sure if this actually uses the carbon figures we use, 'cause we basically package up the numbers from Ember, which is a non-profit think tank who already publish this stuff. I can't remember if this one is using the energy or the carbon figures. But we update the carbon figures every month anyway. I'll need to check; I think they report this in energy, not carbon, actually. Actually, I'll need to reread it. And we're coming up to time, actually.

Asim Hussain: We are coming up to time. But maybe just to call out: we are gonna be running, and you are leading it, the SCI for Web assembly shortly in the foundation. And from my brief scan of it, this looks like a very important pre-read for a lot of the people who are gonna be attending that assembly.

Chris Adams: Yeah, I'm really pleased this came out. When I saw it I thought, oh great, this is a really nice, concise piece that covers this. There was another piece from Daniel Schien talking about, okay, how do you measure network figures, for example? He's put some really good, interesting stuff in there that we don't have enough time to talk about, but we'll share links to it, because yes, this is something we'll be doing and I'm looking forward to it.

And oh, I've just realized we've gone way over.

Asim Hussain: We're well over. You've gotta go. Let's just wrap.

Chris Adams: Dude, really lovely catching up with you again. Oh, the final thing: just quickly talking about the Green Software Movement thing you mentioned. Maybe I can give you space to do that before we wrap up.

'Cause I know this is the project you're working on at the moment.

Asim Hussain: Yeah. So the Movement is a platform we've created; it's movement.greensoftware.foundation. This is where we will be putting a lot more of our attention moving forward in terms of engaging with the broader community. It's also where all of our training is going to be.

So our current training is moving over there, now that we've got a real platform to publish training to. We're gonna create training for all of our products and services, so for the SCI, Impact Framework, SOFT, RTC. We're gonna do training for all of them and have it available on the platform.

And you'll be able to go in, you'll be able to learn about the products that we've created, learn about the foundation, get certified for your training. But also it's a platform where you can connect with other people as well. So you can meet people, have chats, have conversations, connect with people who are local to you.

We've had over 130,000 people take our previous training, which unfortunately is on another platform, so we're gonna be trying to move everybody over. Our goal is ultimately for this to be the platform where you go, at least in terms of the Green Software Foundation, to learn about our products and our standards and get involved; our champions program is moving over there as well. This will be where we put a lot of our effort moving forward, and I recommend people go to it: join, sign up, take the training, and connect with others.

Chris Adams: Alright. Okay. Well, Asim, lovely catching up with you, and I hope you have a lovely rest of the week. And I guess I'll see you in the Slacks or the Zulips or whichever online tools we use to cross paths.

Asim Hussain: Zulip? I don't know what that is. Yeah, sounds good. Alright, mate.

Chris Adams: It's our open source chat tool inside the Green Web Foundation. It runs on Django and it's wonderful. Yeah, it's really good. I cannot recommend it enough. If you are using Slack and you are sick of using Slack, then use Zulip. Zulip is wonderful. Yeah, it's really good.

Asim Hussain: I can check it out. Yeah. All right.

Chris Adams: Take care, man. See you. Bye.

Asim Hussain: Bye. 

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 

 




Hosted on Acast. See acast.com/privacy for more information.

2 months ago
1 hour 4 minutes 39 seconds

Environment Variables
LLM Energy Transparency with Scott Chamberlin
In this episode of Environment Variables, host Chris Adams welcomes Scott Chamberlin, co-founder of Neuralwatt and ex-Microsoft Software Engineer, to discuss energy transparency in large language models (LLMs). They explore the challenges of measuring AI emissions, the importance of data center transparency, and projects that work to enable flexible, carbon-aware use of AI. Scott shares insights into the current state of LLM energy reporting, the complexities of benchmarking across vendors, and how collaborative efforts can help create shared metrics to guide responsible AI development.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Scott Chamberlin: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

News:
  • Set a carbon fee in Sustainability Manager | Microsoft [26:45]
  • Making an Impact with Microsoft's Carbon Fee | Microsoft Report [28:40] 
  • AI Training Load Fluctuations at Gigawatt-scale – Risk of Power Grid Blackout? – SemiAnalysis [49:12]

Resources:
  • Chris’s question on LinkedIn about understanding the energy usage from personal use of Generative AI tools [01:56]
  • Neuralwatt Demo on YouTube [02:04]
  • Charting the path towards sustainable AI with Azure Machine Learning resource metrics | Will Alpine [24:53] 
  • NVApi - Nvidia GPU Monitoring API | smcleod.net [29:44]
  • Azure Machine Learning monitoring data reference | Microsoft 
  • Environment Variables Episode 63 - Greening Serverless with Kate Goldenring [31:18]
  • NVIDIA to Acquire GPU Orchestration Software Provider Run:ai [33:20]
  • Run.AI
  • NVIDIA Run:ai Documentation  
  • GitHub - huggingface/AIEnergyScore: AI Energy Score: Initiative to establish comparable energy efficiency ratings for AI models. [56:20]
  • Carbon accounting in the Cloud: a methodology for allocating emissions across data center users 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Scott Chamberlin: Every AI factory is going to be power constrained in the future. And so what does compute look like if power is the number one limiting factor that you have to deal with? 

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. We talk a lot about transparency on this podcast when talking about green software, because if you want to manage the environmental impact of software, it really helps if you can actually measure it.

And as we've covered on this podcast before, measurement can very quickly become quite the rabbit hole to go down, particularly in new domains such as generative AI. So I'm glad to have our guest, Scott Chamberlin, here today to help us navigate as we plumb these depths. Why am I glad in particular?

Well, in previous lives, Scott not only built the Microsoft Windows operating system's power and carbon tracking tooling, getting deep into the weeds of measuring how devices consume electricity, but he was also key in helping Microsoft Azure work out their own internal carbon accounting standards. He then moved on to Intel to work on a few related projects, including work to expose these kinds of numbers in usable form to developers and to the people making the chips that go in these servers. His new project, Neuralwatt, is bringing more transparency and control to AI language models.

And a few weeks back when I was asking on LinkedIn for pointers on how to understand the energy usage from LLMs I use, he shared a link to a very cool demo showing basically the thing I was asking for: real-time energy usage figures from Nvidia cards directly in the interface of a chat tool. The video's in the show notes if you're curious.

And it is really cool. So Scott, thank you so much for joining us. Is there anything else I missed that you'd like to add to the intro before we dive into this stuff?

Scott Chamberlin: No, that sounds good.

Chris Adams: Cool. Well, Scott, thank you very much once again for joining us. If you are new to this podcast, just a reminder, we'll try and share a link to every single project in the show notes.

So if there are things that are of particular interest, go to podcast.greensoftware.foundation and we'll do our best to make sure that we have links to any papers, projects, or demos, like we said. Alright, Scott, I've done a bit of an intro about your background and everything like that, and you're calling me from a pleasingly green room today.

So maybe I should ask you, can I ask where you're calling from today and a little bit about like the place?

Scott Chamberlin: So I live in the mountains just west of Denver, Colorado, in a small town called Evergreen. I moved here in the big reshuffle just after the pandemic; like a lot of people, I wanted to shift to a slightly different lifestyle. My kids are growing up here, going to high school here, and yeah, I super enjoy it. It gives me the ability to get outside right out my door.

Chris Adams: Cool. All right. Thank you very much for that. So it's a green software podcast and you're calling from Evergreen as well, in a green room, right? Wow.

Scott Chamberlin: That's right. I actually have a funny story I want to share from the first time I was on this podcast. It was me and Henry Richardson from WattTime talking about carbon awareness, and I made some point about how, in the future, I believe everything's going to be carbon aware. And I used the specific example of my robot vacuum: it's certainly gonna be charging in a carbon-aware way at some point in the future.

I shared the podcast with my dad and he listened to it and he comes back to me and says, "Scott, the most carbon reduced vacuum is a broom."

Chris Adams: Well, he's not wrong. I mean, it's manual, but it definitely solves the problem, and it's definitely got lower embodied carbon, that's for sure.

Scott Chamberlin: Yeah.

Chris Adams: Cool. So Scott, thank you very much for that. Now, I spoke a little bit about your career working in ginormous trillion-dollar or multi-billion-dollar tech companies, but you're now working at a startup, Neuralwatt. You mentioned in our prep call that after leaving a couple of the big corporate jobs, you spent a bit of time building your own version of what a cloud might be. We kind of ended up calling it Scott Cloud: the most carbon-aware, battery-backed-up green software cloud possible, pretty much applying everything you learned in the various roles where you were basically paid to become an expert in this.

Can you talk a little bit about, first of all, whether I should be calling it something other than Scott Cloud, and whether there are any particular takeaways you took from it? Because it was quite an interesting project, and probably half the people who listen to this podcast, if they had a bunch of time to build something, would build something similar. So yeah, why did you build that, and were there any things you learned that you'd like to share?

Scott Chamberlin: Sure. So I think it's important to know that I had spent basically every year from about 2019 through about 2022 working to add features to existing systems to reduce their environmental impact, to lower CO2, both embodied as well as runtime carbon. And I came to realize that adding these features onto existing systems is always going to come with a significant amount of compromises and challenges, because it's just a core principle of carbon awareness that there is going to be some trade-off with how the system was already designed.

A lot of times it's fairly challenging to navigate those trade-offs. I tend to approach them fairly algorithmically, doing optimization on them, but I had always thought, in the back of my mind, about what a system would look like if the most important principle we were designing it around was to minimize emissions. Like, if that was the number one thing, and then say performance came second, reliability came second. Security has to come first before everything, but there are not a lot of trade-offs you have to make between carbon awareness and security. So I started thinking, what does a data center architecture look like if this is the most important thing?

So of course, it starts with the highest performance-per-watt hardware you can get your hands on, so really surveying the landscape of what that looked like. Then architecting everything we know about carbon awareness into the platform, so that developers don't necessarily have to put it into their code but get to take advantage of it in a fairly transparent and automatic way. And so you end up having things like location shifting as a fundamental principle of how your platform looks to a developer. The idea was we'd have a data center in France and a data center in the Pacific Northwest of the United States, where you have fairly non-correlated solar and wind values, but also very green base loads, so you're not trying to overcome your base load from the beginning.

That location shifting was basically transparent to the platform, and time shifting was implemented for the appropriate parts. But it was all done with just standard open source software, in a way that minimized carbon while taking a little bit of a hit on performance and latency, but where the developer could continue to focus on performance and latency and get all the benefits of carbon reduction at the same time.
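A minimal sketch of that location-shifting principle, assuming a live feed of grid carbon intensity per region; the region names and values below are hypothetical, and a real system would also respect data-residency and latency constraints:

```python
# Pick the allowed region with the lowest current grid carbon intensity
# before placing a deferrable workload.
def pick_region(intensities_g_per_kwh: dict[str, float],
                allowed: set[str]) -> str:
    candidates = {r: v for r, v in intensities_g_per_kwh.items() if r in allowed}
    return min(candidates, key=candidates.get)

# Hypothetical snapshot of grid intensity, gCO2e/kWh
current = {"france": 45.0, "pacific_northwest": 110.0}
print(pick_region(current, allowed={"france", "pacific_northwest"}))
# -> "france" for this snapshot; the answer flips as the grids change
```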

Chris Adams: Ah, okay. So when you said system, you weren't talking about just an orchestrator like Kubernetes that spins up virtual machines. You're talking about going quite a bit deeper than that, like looking at the hardware itself?

Scott Chamberlin: I started with the hardware itself. 'Cause you have to have batteries, you have to have the ability to store renewable energy when it's available. You have to have low-power chips, low-powered networking, redundancy.

And there are always these challenges when you talk about shifting in carbon awareness of, I guess the word is, leaving your capital resources idle, so you have to take costs into account as well. The other goal was to have this all use location-based, very basic carbon accounting, and get as close as theoretically possible to minimizing the carbon, because it's not possible to get to zero without market-based mechanics when you're dealing with actual hardware. So: get as close to net zero as possible from location-based, very basic emissions accounting. That was the principle.

And on that journey we got pretty far, to the point of being ready to productize it, but then we decided to pivot around energy and AI, which is where I'm at now. So I don't have a lot of numbers for what that close-to-zero theoretical baseline is, but I got pretty close, and it's drastically smaller than what we are using in, say, hyperscale or public cloud today.

Chris Adams: Oh, I see. So rather than retrofitting a bunch of green ideas onto, I guess, hyperscale, big-box, out-of-town-style data centers, which already have a bunch of assumptions baked into them, it was almost like a clean sheet of paper, and that's the thing you spent a bunch of time on. And it sounds like, if you were making some of this stuff transparent, it wasn't really a developer's job to figure out shifting a piece of code to run in, say, Oregon versus France; the system would take care of that. You would just say, I want you to run this in the cleanest possible fashion, and as long as it respects my requirements about security or where the data's allowed to go, it would take care of the rest. That was the idea, right?

Scott Chamberlin: That's the goal, because in the many years I've spent on this, there's a great set of passionate developers who want to minimize the emissions of their code, but it's a small percentage. I think the real change happens if you make it part of the platform, so you get the majority of the benefit, maybe the 80th percentile of the benefit, by making it automatic.

Chris Adams: The default?

Yeah. 

Scott Chamberlin: My software behaves as expected, but I get all the benefits of carbon reduction automatically. 'Cause developers already have so much to care about, and not every developer is actually able to make the trade-offs between performance and CO2 awareness appropriately. It's really hard, and we haven't made it easy for people. So that was the goal: how do you enable the system to do that for you, while the developer focuses on the principles they're used to focusing on, making their software fast, secure, reliable, with a good user experience, that kind of stuff.

Chris Adams: Ah, that's interesting. So the green aspect is almost an implementation detail that doesn't necessarily need to be exposed to developers, a bit like when people talk about designing systems for end users: there's a whole discussion about whether it's fair to expect someone to feel terrible for using Zoom or Netflix, when really it makes more sense to do the work yourself, as a designer or developer, to design the system so it's green by default. So rather than trying to get people to change their behavior massively, you're accepting that people are frail, busy, distracted people, and you're working at that level.

Scott Chamberlin: Yeah, I think that's the exact right term. It is green by default. And that phrase, when I started working on this in Windows, 

so, like you referred to earlier, I created the carbon-aware features in Windows, and there was a debate early on about how we enable these. Should the carbon awareness feature be a user experience? Should the user be able to opt in, opt out, that kind of stuff?

And it was actually my boss I was talking to about this. He's like, "if you're doing this, it has to be the default," right? You're never going to make the impact, at the scale we really need to make this impact, if people have to opt in. It has to be the default. And then sure, they can opt out if there are certain reasons they want a different behavior. But green by default has to be the main way we make impact.

Chris Adams: That's quite an interesting framing, because particularly when you talk about carbon awareness on devices themselves, there's a default, and then there's the thing you said before about how it's really important to leave people in control so they can override it. That feels like quite an important thing.

'Cause I remember when Apple rolled out carbon-aware charging for their phones, for example. Some people were like, "oh, this is really cool, things are slightly greener by default based on what Apple have shown me." But some other people absolutely hated it, because the user experience from their point of view was basically: I've got a phone, I need to charge it up, and I plugged it into my wall. And then overnight it's been a really high-carbon grid period, so my phone hasn't charged, and I wake up, and now I've got to go to work and I've got no phone charge. It feels like this is exactly the thing: if you don't provide a sensible get-out clause, it can lead to a really awful experience as well. So there's quite a lot of thought that needs to go into that kind of default, I suppose.

Scott Chamberlin: Definitely. The user experience of all of these things has to ultimately satisfy the expectations and needs of the users, right? There's another learning experience we had, really a thought experiment, from when we were working in Windows on the ability to change the timer for how fast the device goes to sleep.

Because there's a drastic difference between active mode and the sleep state, which is basically where the device will wake if you touch the mouse: the screen's off and it goes into a low-power state. One of the changes we made in Windows was to lower that value from the defaults.

It's fairly complex how these defaults get set; basically they're set by the OEMs in different power profiles. But we wanted to lower the default that all systems were provided with, and we did some analysis of what the ideal default would be. The question from the user experience point of view was: if we set this too low, will too many people turn it off entirely, rather than keeping the old default? Let's use these values theoretically, I can't remember the exact ones, but say the old default was 10 minutes and the new default was three minutes for going from active to sleep.

If three minutes was not the right value and we got maybe 20% of people turning sleep off entirely, is the carbon impact worse for the overall fleet of Windows devices, because those 20% turned it off after a bad user experience from changing the default? So we had to do all these analyses and have the ability to really look for unintended consequences of changing these.

And that's why the user experience is really critical when you're dealing with some of these things.
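A sketch of that fleet-level trade-off; the power draws, idle hours, and opt-out rates below are hypothetical placeholders, but they show how a "greener" default can backfire if enough users disable sleep entirely:

```python
# Expected average power draw during idle time across a fleet of devices.
IDLE_W, SLEEP_W = 30.0, 1.0     # hypothetical active-idle vs sleep draw, watts
idle_hours_per_day = 4.0        # time a machine sits untouched, on average

def fleet_avg_watts(timer_min: float, opt_out_rate: float) -> float:
    """Blend users who keep the sleep timer with users who turn sleep off."""
    awake_frac = min(timer_min / 60.0 / idle_hours_per_day, 1.0)
    sleeping_user = IDLE_W * awake_frac + SLEEP_W * (1 - awake_frac)
    never_sleeps = IDLE_W
    return (1 - opt_out_rate) * sleeping_user + opt_out_rate * never_sleeps

print(fleet_avg_watts(timer_min=10, opt_out_rate=0.0))   # old default: ~2.2 W
print(fleet_avg_watts(timer_min=3, opt_out_rate=0.20))   # new default: ~7.1 W
# With these made-up inputs, 20% opting out makes the shorter timer worse.
```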

Chris Adams: Ah, okay, that's quite useful nuance to take into account, 'cause there's a whole discussion about setting defaults for green, but then there are these other effects. And now that you've said that, I realize I'm one of these terrible people who does this.

I mean, I'm using a Mac, right? And you see, when people are using a laptop and it starts to dim, they start touching the trackpad to make it brighten again, and you see people do that a few times. There's an application called Caffeine on a Mac that basically stops it going to sleep, right? And so that's great, but it also introduces the question of: is my ADD adult brain gonna remember to switch that back off again? These are the things that come up. So this is something I have direct experience of, and it very much rings true with me, actually.

Okay. So that was the thing you did with, I'm calling it Scott Cloud, but I assume there was another name that we had for that, but that's, that work eventually became something that Neuralwatt. That's like you went from there and move into this stuff, right?

Scott Chamberlin: Right. So Scott Cloud, or Net Zero Cloud, was basically a science experiment, and I wanted to deploy it just to really see, because you learn so much when things are in production and you have real users. But before I did, I started talking to a lot of people I trusted in my network.

One of my old colleagues from Microsoft, a good friend of mine, really dug into it and started pushing me on some serious questions, like, "well, what does this really impact in terms of energy?" That project was a CO2 optimization exercise, and he's like, "well, what's the impact on energy? What's the impact on AI?" And actually Asim Hussain asked the same question. He's like, "you can't release anything today," and this is, let's rewind, like a year ago, "you can't release anything today that doesn't have some story about AI," right? And this was just a basic compute platform with nothing specific about AI.

So both of those comments really struck home. I was like, okay, I gotta figure out this AI stuff, and I've gotta answer the energy question. That wasn't hard, 'cause energy was already being measured as part of the platform; I'd just been focused on CO2. And it turned out there were some really interesting implications once we started to apply some of the optimization techniques to the GPU and how the GPU was being run from an energy point of view. When we looked into it, that ended up being potentially more impactful in the short term than the overall platform. And so that colleague, Chad Gibson, really convinced me in our discussions to spin that piece out of the platform as the basis of the startup we decided to build, which we call Neuralwatt now.

So what Neuralwatt really is, is the legacy of all that work, but the pieces we could take out of it that were focused on GPU energy optimization within the context of AI growth and energy demands. Those are becoming really critical challenges, not just for businesses, but challenges underlying all of our work on green software and on trying to reduce the emissions of compute as a whole.

Right? We're really looking at a new paradigm, with the exponential increase in the energy use of compute, the behaviors that's driving in terms of getting new generators online, and what the user experience looks like when LLMs or other AIs are built into everything. So I felt that was really important to get focused on as quickly as possible, and that's where we jumped off with Neuralwatt.

Chris Adams: Oh, I see. 

Okay. So basically there's the ability to make improvements in the existing fleet of servers already deployed around the world, but you see this thing which is growing really fast. And if we look at things like the International Energy Agency's own report on AI and energy, their various projections say that over the next five years AI looks like it's gonna use about as much energy as all data centers use today. So it makes more sense to try and blunt some of that growth as early as possible. Or that's where you felt you had more chance for leverage, essentially.

Scott Chamberlin: More chance for leverage, more interest in really having an impact. We were really in a period of flat growth in terms of energy for data centers prior to the AI boom, because the increase in use of data centers was basically canceled out by the improvement in the energy efficiency of the systems themselves.

There are a lot of factors that went into why that was relatively balancing out, but the deployment of GPUs and massively parallel compute, and the utilization of those for AI, both training and inference, really changed that equation entirely. So basically from 2019 on, we've gone from relatively flat growth in data center energy to a very steep ramp.

Chris Adams: Okay. Alright. Now, we're gonna come back to Neuralwatt a little bit later, partly because the demo you shared was actually quite cool, and I still haven't seen anything else that provides that kind of live information. But one thing I did learn when I was asking about this, and this probably speaks to your time working in a number of larger companies, is that there's a bit of an art to getting large companies, who are largely driven by, say, profits for the next quarter, to actually invest in transparency or sustainability measures.

One thing I saw, and was surprised by, when I was asking on LinkedIn, okay, if I'm using various AI tools, what's out there that can expose numbers to me, was some work by a guy called Will Alpine, providing metrics on an existing AI pipeline. That's one of the only tools I've seen that exposes the numbers from the actual cloud provider themselves. And as I understood it, it was almost like a passion project, funded by some internal carbon fund or something.

Could you maybe talk a little bit about that, and about what it's like getting large organizations to fund ideas like that? Because I found it really interesting to see, and as I understand it, the way there was actually a pool of resources for employees to do that kind of work was quite novel. It's not something I've seen in many places before.

Scott Chamberlin: Yeah, I think that was great work. I'm a big fan of Will's work, and I had the fortune to collaborate with him at that period of both of our careers. I don't think carbon work is easy to get done anywhere, in my experience, but Microsoft had a little bit of forethought in terms of designing the carbon tax, and we did have the ability to vet projects that could have a material impact against Microsoft's net zero goals and get those funded by the carbon tax that was implemented internally.

The mechanism was: as Microsoft built the capability to audit and report on their carbon, they would assign a dollar value to each team's emissions, and that money went from those teams' budgets into a central budget that was then reallocated for carbon reduction goals.

And I think Will was really at the forefront of identifying that this GPU energy use, we all just said ML back then, but now we all say AI, was a big driver of the growth. He did a ton of work to figure out what that looked like at scale and to work out the mechanics of exposing it within the hyperscale cloud environment. NVIDIA has also done a great job in terms of keeping energy values in their APIs, exposed through their chips and through their drivers, so you can use them fairly easily on a GPU; I would say it's more challenging to do so on CPUs, or the rest of the system. So he did a great job, in collaboration with those interfaces, of getting that exposed into Azure, I think it's ML Studio it's called. So it has been there for many years, this ability to see and audit your energy values if you're using the Azure platform.

Yeah, that was super good work.
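For a flavor of the driver-level counters being described, here's a minimal sketch using NVIDIA's NVML bindings for Python (the `pynvml` module); it assumes an NVIDIA GPU and driver are present:

```python
# Poll the instantaneous power draw of the first GPU via NVML.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the machine

try:
    for _ in range(5):
        milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # reported in mW
        print(f"current draw: {milliwatts / 1000:.1f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```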

Chris Adams: Yeah, so this was the thing. I forget the name of it, and I'm a bit embarrassed to forget it, but I'm just gonna play back what I think you're saying, 'cause when I was reading about this, it was something I hadn't seen in many other organizations. So there's an internal carbon levy, which is basically: for every ton that gets emitted, there's a dollar amount allocated. And that went into a kind of internal, let's call it a carbon war chest, right? So there's a bunch of money you could use, and any member of staff was then able to say, I think we should use some of this to deliver this thing, because we think it's gonna provide some savings or help us hit whatever sustainability targets we have.

And one of the things that came outta that was actual meaningful energy figures if you're using these tools, which is something the other clouds don't give you. You're definitely not gonna get it from Amazon right now; Google will show you the carbon but won't show you the energy. And if you're using ChatGPT, you definitely can't see this stuff. But it sounds like the APIs do exist, so it's largely been a case of staff being prepared, there being will inside the system, and people being able to win some of those fights to get time and money allocated to actually make this available for people, right?

Scott Chamberlin: The NVIDIA APIs definitely exist. I think the challenge is the methodology and the standards, right? Within a cloud, there's a lot of complexity around how cycles and compute get assigned to users, and how do you fairly and accurately account for that? GPUs happen to be a little bit simpler, 'cause we tend to allocate a single chip to a single user at a single time.

Whereas with CPUs there's a lot of hyperthreading; most clouds are moving to oversubscription, or even single hardware threads are starting to get shared between multiple users. So how do we allocate first the energy, all of this starts with energy, and then the CO2, based on location?

And then there's the big complexity in terms of the perception these clouds want to have around net zero. Everyone wants to say they're net zero via a market-based mechanic. And what's the prevailing viewpoint on what is allowed within the GHG Protocol, or what is the perception the marketing team wants to have? Those are a lot of the challenges. At least with GPU energy there are not huge technical challenges, but there are a lot of marketing and accounting and methodology challenges to overcome.

Chris Adams: That's interesting. So, I did an interview with Kate Goldenring, who was working at Fermyon at the time. We'll share a link to that, and I'll also share links about the internal carbon levy and how large organizations have funded this kind of climate and green software work internally, 'cause I think other people working inside their companies will find that useful. But I'm just gonna play back a little of what you said there, and then we'll talk about the demo you shared with me. So it seems like GPUs, the things used as AI accelerators, can provide the numbers.

And that's technically possible a lot of the time, and it sounds like it might be technically slightly less complex at one level than the way people sell cloud computing. 'Cause when we did the interview with Kate Goldenring, and we'll share the link to that, she explained to me that, okay, let's say there's a server and it's got maybe 32 cores inside it. What tends to happen, because not everyone is using all 32 cores at the same time, is that you can pretty much get away with selling maybe 40 or 50 cores, because not everyone's using all the cores at the same time. And that allows you to essentially sell more compute, so you make slightly more money and end up with a much more profitable service. That's been one of the offers of cloud, and from the perspective of customers it provides a degree of efficiency too: if you don't need to build another server, because that one server is able to serve more customers, there's a hardware efficiency argument.

But it sounds like you're saying that with GPUs you don't have that kind of oversubscription thing. So you could get the numbers, but there's a whole bunch of other things that might make it more complicated elsewhere, simply because it's a new domain and we're finding out there are new things that happen with GPUs, for example.
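A sketch of the allocation question being discussed: on a shared host, one common approach (of several possible) is to split measured server energy among tenants in proportion to their CPU-time. The numbers are hypothetical:

```python
# Allocate measured server energy to tenants by their share of CPU-time.
server_energy_kwh = 12.0                      # measured over some window
cpu_seconds = {"tenant_a": 5000.0,            # per-tenant usage, same window
               "tenant_b": 2500.0,
               "tenant_c": 2500.0}

total = sum(cpu_seconds.values())
allocation = {t: server_energy_kwh * s / total for t, s in cpu_seconds.items()}
print(allocation)   # {'tenant_a': 6.0, 'tenant_b': 3.0, 'tenant_c': 3.0}
```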

Scott Chamberlin: Yeah, that's exactly what I was trying to say. And I think we are seeing emerging GPU oversubscription and GPU sharing, so that will probably change at some point, and at scale; the technology is certainly there. I think NVIDIA's acquisition of Run:ai enables some of this GPU sharing. They acquired that company like six months ago, and it's now open source, so people can take advantage of that.

But yes, I think the core principle is: from an embodied emissions point of view and a green software point of view, it's relatively good practice to drive up the utilization of the hardware whose embodied emissions you've already paid for and deployed. There are some performance implications around doing the sharing and how that affects user experience, but today the state of the art with GPUs is that they're mostly just singly allocated. Whether fully utilized or not, they're utilized by a single customer at a time. But that is certainly changing.

Chris Adams: Oh, okay. So I basically, if I'm using a tool, then I'm not sharing it with anyone else in the same way that we typically we'd be doing with cloud and that, okay, that probably helps me understand that cool demo you shared with me then. So maybe we'll just talk a little bit about that. 'cause this was actually pretty, pretty neat when I actually asked that when you showed like, here's a video of literally the thing you wished existed, that was kind of handy.

Right? So, we will share the link to the video, but the key thing that Scott shared with me was that, using tools like, say, ChatGPT or Anthropic's, where I'm asking questions and seeing tokens come out as I ask them, we basically saw charts of real-time energy usage changing depending on what I was actually doing.

And maybe you could talk a little bit about what's actually going on there and how you came to that. Because it sounds like Neuralwatt wasn't just about trying to provide some transparency; there are actually some other things you can do. So not only do you see the energy use, but you can manage some of it in the middle of, say, an LLM session, right?

Scott Chamberlin: So yeah, at the first stage, the question is really just: can we measure what's happening today, and what does it really look like in terms of how you typically deploy, say, a chat interface or an inference system? Like I was mentioning, we have the ability to do that fairly easily, because NVIDIA does great work in this space, by reading those values on the GPU specifically. Again, there are system implications for what's happening on the CPU, the network, the disks, but they tend to be outstripped by far, because these GPUs use so much energy.

But the first step in that demo is really just to show what the behavior is like, because what we ultimately do within the Neuralwatt code is take over all of the energy management of the system, and we train our own models to basically shift the behavior of servers from one that is focused on maximizing performance for the available energy to one that balances performance against energy, in an energy-efficiency mode, essentially. So we are training models that shift the energy behavior of the computer for energy efficiency.

And that's why we want to visualize multiple things. We want to visualize what the user experience trade-off is; again, going back to the user experience, you have to have a great user experience if you're gonna do these things. And we want to visualize the potential gains and the potential value add for our customers in making this shift.

Because, I think, Jensen Huang made a quote at GTC that we love: we are a power-constrained industry. Every AI factory is going to be power constrained in the future. And so what does compute look like if power is the number one limiting factor that you have to deal with?

So that's why we really want to enable servers to operate differently than they have in the past. Essentially, think of it as energy awareness, right? That's the word I come back to. We want the behavior of servers to be energy aware because of these power constraints.

Chris Adams: Ah, okay. Alright. You said a couple of things that I just want to run by you to check. There are all these new awarenesses: there's carbon aware, then there's grid aware, then there's energy aware. This is clearly an area where people are trying to figure out what to call things.

But the thing that you folks at Neuralwatt are doing is basically: okay, yes, you have access to the power numbers and you can make those available. I'm just gonna try and run this by you, and I might need you to correct me on this, but it sounds a little bit like what you're doing is almost throttling the power that gets allocated to a given chip. 'Cause systems like Linux can introduce limits on the power that is allocated to a particular chip, but if you do that, it can have the unintended effect of making things run a little bit too slowly, for example.

But there's a bit of headroom there. And if you're able to go from giving absolute power, take as much power as you want, to having a finite amount allocated, then you can still have a good, useful experience, but you reduce the amount of power that's actually consumed. It sounds like you're doing something a little bit like that with the Neuralwatt thing. So rather than giving it carte blanche to take all the power, you're asking it to work within a power envelope, which means you're not having to use quite so much power to do the same kind of work.

Is that it?

Scott Chamberlin: Yeah. So if you go back to the history, before we had GPUs everywhere, CPUs had a fairly, let's call it moderate, level of sophistication in terms of power management. They have sleep states, they have performance states, and there are components that run in the OS, called CPU governors, that govern how a CPU behaves relative to various constraints.

And so when you allocate, say, a Linux VM in the cloud, I don't know why this is, but a lot of them get default allocated with... the name is slipping my mind, but there are about five default CPU governors in the default Linux distros, and they get allocated with the power-save one, actually.

What that does is limit the top frequencies you can reach; it's essentially balancing power and performance as the default you get allocated. You can check these things, and you can change it to a performance mode, which is basically gonna use all of the capability of the processor at a much higher energy use.
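For anyone who wants to poke at this themselves, here is a minimal sketch, assuming the standard Linux cpufreq sysfs layout; reading works as any user, while switching governors needs root. The governor names ("powersave", "performance") are the stock ones Scott is describing.

    from pathlib import Path

    CPUFREQ = Path("/sys/devices/system/cpu")

    # Read the current frequency governor for every core.
    for gov_file in sorted(CPUFREQ.glob("cpu[0-9]*/cpufreq/scaling_governor")):
        core = gov_file.parent.parent.name          # e.g. "cpu0"
        print(core, gov_file.read_text().strip())   # e.g. "cpu0 powersave"

    # Switch every core to the throughput-oriented governor (root required).
    for gov_file in CPUFREQ.glob("cpu[0-9]*/cpufreq/scaling_governor"):
        gov_file.write_text("performance\n")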

Scott Chamberlin: But on the GPU, it's a lot less sophisticated, right? GPUs don't tend to support any sleep states other than just power off and on, and while they do have different performance states, they're not as sophisticated as the CPU's have historically been. So essentially we are inserting Neuralwatt into the OS and managing this in a more sophisticated manner, around exactly what you're describing.

We're making the trade-off, and we're learning the trade-off through modeling: learning to maintain a great user experience while getting power savings with our technology, and doing this across the system. So yes, I think your description is essentially very good. We're just adding a level of sophistication into the OS beyond what exists today.
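To make the power-envelope idea concrete, here is a minimal sketch of the raw lever underneath, using NVIDIA's NVML bindings (pynvml). To be clear, this is not Neuralwatt's learned controller, just the static capping mechanism it is being contrasted with; it assumes an NVIDIA GPU and root privileges.

    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Current draw and the card's supported cap range, all in milliwatts.
    draw_mw = pynvml.nvmlDeviceGetPowerUsage(gpu)
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    print(f"drawing {draw_mw / 1000:.0f} W, "
          f"cap range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")

    # Hold the card to 70% of its maximum envelope (root required). The
    # driver clocks down as needed to stay underneath the cap.
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, max(min_mw, int(max_mw * 0.7)))

    pynvml.nvmlShutdown()

A static cap like this is the crude version: pick the number too low and latency suffers, which is exactly the gap a learned, workload-aware controller aims to close.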

Chris Adams: Okay. So basically, rather than being able to pull infinite power, it has an upper limit on how much it can pull. And the reason you're doing some of the training is that, based on how people use this, you'd like the upper limit to track what's actually needed, so that you're still giving enough room, but you're delivering some kind of savings.

Scott Chamberlin: Yeah, and it's important to understand that it's fairly complex, which is why we train models to do this rather than...

Chris Adams: ...just sit at one level, one and done. Yeah.

Scott Chamberlin: Because think about an LLM, right? There are essentially two large phases in inference for an LLM: the first phase is really compute heavy, and the second phase is more memory heavy. So we can use different power-performance trade-offs in those phases, and understanding what those phases are, and what the transition looks like from an observable state, is part of what we do. And then the GPU is just one part of the larger system, right?

It engages the CPU, and a lot of these LLMs engage the network. So how do we balance all the trade-offs to maintain a great user experience for the best power efficiency? That's essentially our fitness function when we're training.

Chris Adams: Ah, I think I understand that now. And what you said about those two phases: presumably one of them is taking a model and loading it into something like memory, and then there's a second part, which is doing the lookups against that memory. Because the lookup process, when you're seeing the text come out, is quite memory intensive rather than compute intensive. So if you're able to change how the power is used to reflect that, then you can deliver some kind of savings inside that. And if you scale that up to data center level, that's like 10%, 20%, maybe even... yeah. Do you have an idea of what kind...

Scott Chamberlin: We tend to shoot for at least 20% improvements in what I would call performance per unit of energy. So tokens per joule is the metric I tend to come back to fairly often. But again, how exactly you measure energy on these things, and what the right metric is... I think you need to use a bunch of them.

But I like tokens per joule 'cause it's fairly simple and it's easy to normalize. It gets super interesting in this conversation about inference-time compute and thinking LLMs and stuff like that, 'cause they're generating tons and tons of tokens, and not all of them are exposed; they're essentially there to improve the output. So people use other metrics too, but they're harder to normalize. Long story short, I tend to come back to tokens per joule as my favorite.
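As a rough sketch of how to produce that tokens-per-joule figure: NVML exposes a cumulative energy counter in millijoules (on Volta-class GPUs and newer), so you can difference it around an inference call. The generate function here is a hypothetical stand-in for your inference engine, and the result covers GPU energy only, not the CPU, network, or disk Scott mentioned.

    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    before_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu)  # millijoules
    token_count = generate(prompt)  # hypothetical: run inference, return token count
    after_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(gpu)

    joules = (after_mj - before_mj) / 1000.0
    print(f"{token_count / joules:.2f} tokens per joule (GPU only)")
    pynvml.nvmlShutdown()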

Chris Adams: So it sounds like what you're doing is, through judicious use of power envelopes that more accurately match what's actually being required by a GPU, you're able to deliver some savings that way. And, like you said before when we were talking about Scott Cloud, that's transparent to the user.

I don't have to be thinking about my prompt or anything like that. This is happening in the background, so my experience isn't changed, but I'm basically the recipient of that: 20% of the power is not being turned into carbon dioxide in the sky, for example, but it's basically the same other than that.

Scott Chamberlin: That's the goal, right? We've anchored our work continually on, number one, the user experience has to be great; number two, the developer experience has to be great, which means the developer shouldn't have to care about it. So yeah, it's a single container download, and it runs in the background.

It does all the governance in a fairly transparent way. And all throughout, as well, we actually have a CO2 optimization mode. The default mode is really energy, but we can flip a switch and get an extra degree of variability where we're optimizing on CO2, average or marginal emissions. So we can vary the behavior of the system relative to the carbon intensity of the grid as well.

Chris Adams: Okay. Or possibly, if not the grid, then the 29 gas turbines that are powering that data center today, for example.

Scott Chamberlin: I think that's an emerging problem. And I actually would love to talk to somebody who has a data center with a microgrid and a gas turbine, because I do believe there's additional optimization available for these little microgrids being deployed alongside data centers.

If you were to plumb it all the way through, again, going back to energy awareness: if your servers knew how your microgrid was behaving relative to the macro grid it was connected to, there are so many interesting optimizations available. People are looking at this from the point of view of the site infrastructure, but the reality is that all of the load is generated by the compute on the server.

And that's what we're really trying to do: bring it all the way through to where the load originates, while maintaining that user experience.

Chris Adams: Okay. So you said something interesting there. You mentioned before that GPU power management is a little bit less sophisticated right now; you said it's either all on or all off. And when you've got something that draws the power of multiple thousands of homes, and it can be all on or all off very quickly, that's surely got to have some kind of implications within the data center, but also for anyone else connected to the same grid, right? Because if you are making the equivalent of tens of thousands of people disappear from the grid, then reappear, in less than a second, there's got to be some knock-on effect from that.

You spoke about gas turbines. Do you reckon we're gonna see people finding something in the middle to act as a kind of shock absorber for these changes that propagate through to the grid? Because if you're operating a grid, that feels like the kind of thing that's gonna really mess with your ability to provide a consistent quality of power to everyone else, if the biggest user of energy is also swinging up and down the most.

Scott Chamberlin: Yeah. I don't know if existential problem is the right word, but it's certainly an emerging, very challenging problem,

Chris Adams: Mm-hmm.

Scott Chamberlin: within the space of data centers: essentially, the syncing up of some of these behaviors among the GPUs causes correlated spikes and drops in power. It has implications within your data center infrastructure, to the point where we hear from customers that they're no longer deploying some of their UPSs, their battery backups, within GPU clusters, because they don't have the electronics to handle the load shifting so dramatically. And we're getting emerging challenges on the grid too, in terms of how these loads ramp up or down and affect... I won't get into the generation side of the grid and maintaining frequency, where I'm not an expert, but it has implications for that as well. In the software we can certainly smooth those things out, but there are also, I mean, there are weird behaviors happening right now in terms of trying to manage this.

My favorite, and I don't know if you've heard of this too, Chris: PyTorch has a mode now where they basically burn empty cycles to keep the power from dropping down dramatically when, I think, weights are syncing. I'm not exactly sure when it was implemented, because I've only read about it. But when you need to sync weights across your network, some GPUs have to stop, and what they've implemented is some busy work so that the power doesn't drop dramatically and cause this really spiky behavior.

Chris Adams: Ah.

Scott Chamberlin: So I think what...

Chris Adams: ...you're referring to, yeah, is the PYTORCH_NO_POWERPLANT_BLOWUP=1 flag they had, right? I remember reading about this in SemiAnalysis. It just blew my mind: the idea that you have to essentially keep the GPUs running because the spike would be so damaging to the rest of the grid that they have to simulate some load, so that the change doesn't propagate through to the rest of the grid, basically.

Scott Chamberlin: Correct. And so we look at problems like that. There's a similar problem when you start a training run, where all the GPUs basically start at the same time and create a surge, and we help with some of those situations in our software.

But yes, I think some of the behaviors that are getting implemented, like the no-powerplant-blowup one, are probably not great from a green software point of view, because any time we're doing busy work, that's an opportunity to reduce energy and reduce CO2. There probably are ways of managing that with a bit more sophistication, depending on the scale you're working at, that may have been more appropriate.

So this is definitely...

Chris Adams: ...still needs...

Scott Chamberlin: ...to be looked at a little bit.

Chris Adams: So, in the pre-AI days there was this notion of a thundering herd problem, where everything tries to use the same connection or do the same thing at the same time. It sounds like this PYTORCH_NO_POWERPLANT_BLOWUP=1 is essentially the AI equivalent: seeing that problem coming, realizing it's of a much greater magnitude, and figuring out, okay, we need to find an elegant solution to this in the long run, but right now we're just gonna use this thing. Because it turns out that having incredibly spiky power usage propagating outside the data center wreaks all kinds of havoc, basically, and we probably don't want to do that if we want to keep being connected to the grid.

Scott Chamberlin: Yeah. Spiky behavior at scale is really problematic, yes.
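The exact PyTorch mechanism isn't public beyond what SemiAnalysis reported, so the following is only a toy illustration of the general trick being described: issuing throwaway GPU work during a pause (here, a simulated weight sync) so the card's power draw stays level rather than cliff-diving and snapping back. It assumes PyTorch and a CUDA-capable GPU.

    import threading
    import time

    import torch

    def burn_cycles(stop: threading.Event, size: int = 4096) -> None:
        # Throwaway matmuls whose only purpose is to hold GPU load steady.
        a = torch.randn(size, size, device="cuda")
        b = torch.randn(size, size, device="cuda")
        while not stop.is_set():
            _ = a @ b  # result is discarded
        torch.cuda.synchronize()

    stop = threading.Event()
    filler = threading.Thread(target=burn_cycles, args=(stop,))
    filler.start()
    time.sleep(2.0)  # stand-in for a pause such as a cross-node weight sync
    stop.set()
    filler.join()

As Scott says, this is deliberately wasteful: it spends energy to buy grid stability, which is why he argues the smoothing belongs in a more sophisticated management layer instead.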

Chris Adams: Okay, cool. Dude, I'm so sorry, we're totally down this AI energy spikiness rabbit hole, but I guess it is what happens when...

Scott Chamberlin: It's certainly something customers are really interested in, because, I mean, if we bubble up one level, there's this core challenge in the AI space where the energy people don't necessarily talk the same language as the software people.

I think that's one place where maybe hyperscale has a little bit more of an advantage, 'cause it emerged from software companies, but hyperscale is not the only game in town, especially when we're going to neoclouds and stuff like that. So I think one of our side goals is really: how do we enable the people talking energy and infrastructure to have the same conversations, create requirements, and coordinate with the software people running the loads within the data centers? I think that's the only way to really solve this holistically.

Chris Adams: I think you're right. I mean, to bring this back to the world of green software: the Green Software Foundation did a merger with, I think it's called the SSIA, the Sustainable Servers and Infrastructure Alliance, something like that. We had them on a couple of episodes a while ago, including one with a whole discussion about setting out hardware standards so this conversation can cross that barrier.

Because, as we've learned on this podcast, some changes you might make at the AI level can have all these quite significant implications. Not just the air quality and climate-related issues of having masses and masses of on-premise gas turbines, but there's a whole thing about power quality, which is not something you've traditionally had to think about in relation to other people, but which clearly needs to be on the agenda as we go forward.

Just like responsible developers. But before we go further down there, I should just check, as we're coming up to time: we've spent all this time talking about this, and we've mentioned a couple of projects. Are there any other things, not related to spiky AI power, that you're looking at and think, "hey, I'm on this podcast, I wish more people knew about this project here or that project there"? Anything you've read about in the news, or any people's work that you're really impressed by and wish more people knew about right now?

Scott Chamberlin: Yeah, I mean, I think a lot of people on this podcast probably know about AI Energy Score. I think that's a promising project. I really believe we need the ability to understand both the energy and the CO2 implications of the models we're using, and the ability to compare them and compare the trade-offs.

I do think the level of sophistication needs to get a bit higher, because right now it's super easy to trade off model size and energy. I can go single GPU, but I'm trading off capabilities for that. So how do we... I think this was someone's idea on one of my blog posts: you really have to normalize against the capabilities and the energy at the same time when making decisions about the right model for your use cases, relative to the energy available or the CO2 goals you have. But yeah, I think eventually they'll get there in that project. So I think that's a super promising project.
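A sketch of the normalization Scott is asking for might look like this; the capability scores and energy figures are invented purely for illustration. The idea is to rank models by capability per unit of energy rather than by either number alone.

    # Invented numbers: a capability score on some benchmark suite, and the
    # kWh consumed per thousand queries. The point is the ranking, not the values.
    models = {
        "small-1B": (62.0, 0.4),
        "mid-8B":   (78.0, 1.1),
        "huge-70B": (84.0, 6.5),
    }

    ranked = sorted(models.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (score, kwh_per_1k) in ranked:
        print(f"{name}: {score / kwh_per_1k:.1f} capability points per kWh")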

Chris Adams: We will share a link to that. We've definitely got some material for the AI Energy Score, 'cause it's entirely open source: you can run it for open models, you can run it for private models, and if you are someone with a budget, you can require suppliers to publish the results to the leaderboard, which would be incredibly useful, because this whole thing is about energy transparency.

Scott Chamberlin: Yeah,

Chris Adams: I'm glad you mentioned that. I think that's one of the more useful tools out there, and it's actually relatively easy to write into contracts, or to put into a policy for a team to adopt, for example.

Scott Chamberlin: Correct. Yep. No, a big fan, so.

Chris Adams: Well, that's good news for Boris. Boris, if you're hearing this, then yeah, thumbs up to you and the rest of the team there. I only mention Boris 'cause he's the one on the team I know, and he's in the Climateaction.tech Slack that a few of us tend to be in.

Scott Chamberlin: Yeah, Boris and I talked last week. A big fan of his work. And Sasha Luccioni, who I've actually never met, I think she's also the project lead on that one.

Chris Adams: Oh, Scott, we are coming up to time, and I didn't get a chance to talk about anything going on in France, with things like Mistral sharing some of their environmental impact figures. Literally just two days ago, Mistral, the French competitor to OpenAI, started sharing environmental figures for the first time, and in quite a lot of detail: more so than a single mention from Sam Altman about the energy used by an AI query. We actually got quite a lot of data about the carbon and the water usage and stuff like that.

But no energy, though. That's something we'll have to speak about another time. So hopefully I'll be able to get you on again and we can talk a little bit about that, and about, I don't know, the off-grid data centers of Crusoe and all the things like that. Until then, though, Scott, I've really enjoyed this deep dive with you, and I do hope our listeners have been able to keep up as we've gone progressively deeper into the detail.

And if you have stayed with us, listeners, we'll make sure we've got plenty of show notes, so that people who are curious about any of this stuff have plenty to read over the weekend. Scott, this has been loads of fun. Thank you so much for coming on, and I hope you have a lovely day in Evergreen Town.

Scott Chamberlin: Thanks Chris.

Chris Adams: Alright, take care of yourself, Scott. Thanks. 

Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

Show more...
3 months ago
1 hour 44 seconds

Environment Variables
Real Efficiency at Scale with Sean Varley
Anne Currie is joined by Sean Varley, Chief Evangelist and VP of Business Development at Ampere Computing, a leader in building energy-efficient, cloud-native processors. They unpack the energy demands of AI, why power caps and utilization matter more than raw compute, and how to rethink metrics like performance-per-rack for a greener digital future. Sean also discusses Ampere’s role in the AI Platform Alliance, the company’s partnership with Rakuten, and how infrastructure choices impact the climate trajectory of AI.

Learn more about our people:
  • Anne Currie: LinkedIn | Website
  • Sean Varley: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Ampere Cloud Native Processors – Ultra-efficient ARM-based chips powering cloud and edge workloads [02:30]
  • AI Platform Alliance – Coalition promoting energy-efficient AI hardware [04:55]
  • Ampere + Rakuten Case Study – Real-world deployment with 36% less energy per rack [05:50]
  • Green Software Foundation Real Time Cloud Project – Standardizing real-time carbon data from cloud providers [15:10]
  • Software Carbon Intensity Specification – Measuring the carbon intensity of software [17:45]
  • FinOps Foundation – Financial accountability in cloud usage, with sustainability guidance [24:20]
  • Kepler Project – Kubernetes power usage monitoring [26:30]
  • LLaMA Models by Meta [29:10]
  • Anthropic’s Claude AI [31:25]
  • Anne Currie, Sara Bergman & Sarah Hsu: Building Green Software [34:00]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sean Varley: Because at the end of the day, if you want to be more sustainable, then just use less electricity. That's the whole point, right. 

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Anne Currie: Hello and welcome to the world of Environment Variables, where we bring you the latest news and updates from the world of sustainable software. I'm your guest host today, so you're not hearing the usual dulcet tones of Chris Adams. My name is Anne Currie, and today we'll be diving into a pressing and timely topic: how to scale AI infrastructure sustainably in a world where energy constraints are becoming a hard limit. That means we'll have to be a little bit more clever, and a little bit more careful, when we choose the chips we run on.

It's tempting to believe that innovation alone will lead us towards greener compute, but in reality, real sustainability gains happen when efficiency becomes a business imperative: when performance per watt, cost, and carbon footprint are all measured and all have weight. That's where companies like Ampere come in. With cloud-native, energy-efficient approaches to chip design, they're rethinking how we power the AI boom: not just faster, but smarter. It's a strategy that aligns directly with the Green Software Foundation's mission to reduce carbon emissions from the software lifecycle, particularly in the cloud.

So in this episode, we'll explore what this looks like at scale and what we can learn from Ampere's approach to real-world efficiency. What does it take to make an AI-ready infrastructure that's both powerful, effective, and sustainable? Let's find out. Today we have with us Sean Varley from Ampere.

So Sean, welcome to the show. Can you tell us a little bit about yourself?

Sean Varley: Yeah, absolutely, Anne, and thanks first for having me on the podcast. I'm a big fan, so I'm looking forward to this conversation. I'm the chief evangelist of Ampere Computing. What that means is that we run a lot of the ecosystem building and all of the partnership work that goes on to support our silicon products in the marketplace.

We also build a lot of awareness around some of the concepts you introduced: awareness around sustainability and power efficiency, and how those play out within different workload contexts, and workload contexts change over time. So all of those sorts of things are in scope for the evangelism role.

Anne Currie: That is fantastic. So I'll just introduce myself a little bit as well. My name is Anne Currie. If you haven't heard the podcast before, I am one of the authors of O'Reilly's new book, Building Green Software, and, as I always say, everybody who's listening to this podcast should read Building Green Software; that is entirely why we wrote the book. I'm also the CEO of the training and green consulting company Strategically Green, so hit me up on LinkedIn if you want to talk a little bit about training or consultancy. But back to the podcast. Oh, and I need to remember to say that everything we talk about today will have links in the show notes.

So you don't need to worry about writing down URLs or anything; just look at the show notes. Now, I'm actually gonna start off the podcast by harking back to somebody we had on a couple of months ago, a chap called Charles Humble. The assertion he was making was that we all need to wake up to the fact that there isn't just one chip anymore; there isn't a default chip that everybody uses and that is good enough in all circumstances. When you're setting up infrastructure, in the cloud for example, and you have the dropdown that picks which chip you're going to use, the default might be Intel, for example. But going with the default is no longer a no-brainer. There are lots and lots of options, to the extent that, I mean, Ampere is a new chip company that decided to go into the market. So one of the questions I have is: why? What gap did you see that was worth coming in to fill? Because 10 years ago we would've said there was no real gap, wouldn't we?

Sean Varley: That's right. Yeah. Actually, it was a much more homogenous ecosystem back in those days. And, full disclosure, I came from Intel; I did a lot of time there. But about six or seven years ago, I chose to come to Ampere, and part of this was the evolution of the market, right?

The cloud market came in and changed a lot of different things, because classically, especially in server computing, there's the enterprise and the cloud, and the cloud of course has had a lot of years to grow now. The way the cloud evolved was to really push all of the computing to the top of its performance, the peak performance you could get out of it. But nobody really paid attention to power. Going back 10, 15, 20 years, nobody cared. Those were the early days of Moore's Law, and part of what happened with Moore's Law is that as frequencies grew, so did performance, linearly.

I think that trained a lot of complacency into the industry, and that complacency then became ossified into the way people architected and the metrics they paid attention to when they built chips. But going back about seven or eight years, we actually saw that there was a major opportunity to get equal or better performance for about half the power. And that's what formed some of our interest in building a company like Ampere. Now, of course, Ampere since its inception has been about sustainable computing, and me being personally interested in sustainability and green technology outside of my profession, I was super happy to come to a company that had that at its core.

Anne Currie: That's very interesting. So, Ampere, your chip is an X86 chip, so it's not competing against ARM; it's more competing against Intel and AMD.

Sean Varley: It's actually an ARM chip; it's based on the ARM instruction set. And yeah, it's kind of an interesting dynamic, right? There have been a number of different compute architectures put into the marketplace, and the X86 instruction set, classically from Intel and AMD who followed them, has dominated the marketplace, right?

Well, at least they've dominated the server marketplace. ARM has traditionally been in mobile handsets, embedded computing, things like this. But that architecture's roots were grown in more power-conscious markets, because anything running on a battery you want to be pretty power miserly,

Anne Currie: Yeah.

Sean Varley: to use the word. So yeah, the ARM instruction set and the ARM architecture did offer us some opportunities to get a lift when we were a young company, but it doesn't necessarily have that much of a bearing on what we can do for sustainability overall, because there are many things we can do for sustainability, and the instruction set of the architecture is only one of them.

And it's a much smaller one. It is probably way too detailed to get into on this podcast, but it is one factor. So yes, we are ARM instruction set based, and about four years back we actually started creating our own cores on the instruction set. That's been an evolution for us, because we wanted to maintain this focus on sustainability and low power consumption and, of course, along with that, high performance.

Anne Currie: Oh, that's interesting. So, as you say, the instruction set is only one part of what you're doing to be more efficient, to use less power per operation. What else are you doing?

Sean Varley: Oh, many things. Yeah. The part of this that gets away from the instruction set is how you architect and how you present the compute to the user, which may get further into some of your background and interest around software, because part of what we've done is architect a family of chips that start off with area efficiency in the core.

How we do a lot of that is we focus on cache configuration. We use a lot more of what we call L2 cache, which is right next to the cores; that helps us get performance. We've steered away from the X86 industry approach, which leans on a much larger L3 cache, which takes up a much bigger part of the area of the chip.

So that's one of the things we've done. But we've also decided that many of the features of the X86 architecture are not necessary for high performance or efficiency in the cloud, and part of this is because software has evolved. So what are those things? Turbo, for example. Turbo is a feature that moves the frequency of the actual cores around depending on how much thermal headroom the chip has. If you have a small number of cores active, the frequency can be really high; but if you have a lot of cores doing things, it pulls the frequency back down, because you've only got so much thermal budget in the chip. So we said: we're just gonna run all of our cores at the same frequency.

And we've designed ourselves at a point on the voltage-frequency curve that allows us that thermal headroom. Now, that's just one other concept, but many things have created this capability for us to focus on performance per watt, and all of those things are contributors to how you get more efficient.

Anne Currie: Now that is very interesting. So what was your original motivation? Were you designing with the cloud in mind, or more with devices in mind?

Sean Varley: Yeah, we are absolutely designing for cloud, because cloud is such a big mover in how things evolve, right? I mean, if you're looking at markets, there are always market movers and market makers, and there's a way you can best accomplish getting something done. If our goal is to create a more sustainable computing infrastructure, and now, in the age of AI, that's become even more important, then we need to go after the influencers, right? The people that will actually move the needle. So the cloud was really important, and we've had an overall focus on that market, but our technology is not limited to it. Our technology is by far and away much more power efficient everywhere, from all the way out at the edge, in devices and automotive and networks, all the way into the cloud. But the cloud also gave us a lot of the paradigms that we've been attached to.

So when we talk about cloud native computing, we're really harkening to that software model that was built out of the cloud: what they called serverless in the older days, or now microservices and some of these sorts of concepts.

As software has grown, so have we; we've put together a hardware architecture that meets that software where it is, because what that software is about is lots of processes working together to formulate a big service. Those little processes are very latency sensitive. They need predictability, and that's what we provide with our architecture: lots of cores that all run at the same kind of pace, so you get a high degree of predictability out of the architecture, which then makes the software and the entire service more efficient.

Anne Currie: That is very interesting, and I hadn't realized that. So with things like serverless in the cloud, the software that's actually running on the chip is usually software that was written by the cloud provider.

One of the interesting things about high performance software is that it's hard, really hard, to write. In fact, in Building Green Software I always tell people: don't start there, it's really hard. You need specialist skills. You need to know the difference between L2 caches and L3 caches, and you need to know how to use them. The vast majority of engineers do not have those skills and will never acquire them. But with the cloud providers' managed services, you're just writing a code snippet that runs in Lambda or whatever. You are not writing the code that makes that snippet run; you're not writing the code that talks to the chip. Really super specialist engineers at AWS or Azure or wherever are writing that code.

So is that the move that you were anticipating?

Sean Varley: Absolutely. I mean, that's a big part of it, right? As you just articulated, a lot of the platform-as-a-service kind of code, the managed service that's coming out of a hyperscaler, is built to be cloud native. It's built to be very microservice based.

And it has a lot of what we call SLAs in the industry, right? Service level agreements, which mean you need a lot of different functions to complete on time for the rest of the code to work as it was designed. As you said, it is a much more complex way to do things, but the overall software industry has started to make it a lot easier, with things like containers, which are inherently much more efficient images; images is what I was really going for there. Already you've cut out a lot of the fat in the software. You've gotten down to a function. You mentioned Lambda, for example; a function is about the most atomic piece of code you could potentially write to do something. All of these functions working together need these types of execution architectures to really thrive. And yes, you're right that developers have come a long way in having these serviceable components in the industry. Docker sort of changed the world, what is it, 10 years ago now, maybe longer, and all of a sudden people could go and grab these little units, what they call endpoints in software lingo. If I want to get something done, I can go grab a container that will do it. And the number of containers you can run on a cloud native architecture like Ampere's is vastly better than what you can find in most X86 architectures.

Why? Because these things run on cores. Right. And we have a lot of them.

Anne Currie: Yeah, that is very interesting. Everybody who's listening to the podcast must also read my other book on this very subject, which is called The Cloud Native Attitude. It was about why Docker is so important, why containers are so important.

Because they allowed you to wrap up programs and then move those programs around. Docker basically put a little handle on software that let you move stuff around, start and stop it, and orchestrate it. And what that meant was...

Sean Varley: I love that analogy, by the way, the handle, and you just pick it up and move it anywhere you want it, right.

Anne Currie: Yeah, because really that was all that Docker did. It wrapped something that was a fairly standard Linux concept that had been around quite a long time, and it put a nice little API on it, which was effectively a handle that let other tools move it around.

And then you've got orchestrators like Kubernetes, but you've also got lots of other orchestrators too.

What that meant in the cloud native world was that you could have services written by super experts, or open source, so you had lots of experts from all over the place writing them, tuning them, and improving them, letting, well, not Moore's Law, Wright's Law, the law that systems get better the more you use them, kick in. It gave people a chance to go in and improve things, but with the people doing the improving being specialists, and with that specialist code, which was incredibly hard to write, being shared with others. So you're amortizing the incredibly difficult work. Fundamentally, what you're saying, and you could not be singing more from my hymn sheet on this, is that it's really hard to write code that interfaces well with CPUs and uses them efficiently; code efficiency and operational efficiency are really hard to do. But if you can find a way that doesn't require every single person to write that code, where you share it and leverage it through open source implementations or cloud implementations written by the cloud providers, then suddenly your CPUs can do all kinds of stuff that they couldn't have done previously.

Is that what you're saying?

Sean Varley: Absolutely, and I was gonna tack one little thing onto your line: it's really hard to do this by yourself, right?

And this is where the open source communities and all of those sorts of things have really revolutionized, especially, the cloud, coming back to that topic we were talking about.

Because the cloud has really evolved on the back of open source software, right? And that radically changed how software was written. Now, coming back to your package and your handle: you can go get a function that was written, and probably optimized, by somebody who spent the time to look at how it ran on a specific architecture.

And with things like Docker and GitHub and all these other toolchains, where you can go out and grab containers that are already binary-compiled for the instruction set we were talking about earlier, things become a lot more accessible to a lot more people. In some ways you have to trust that the code was written to get the most out of the architecture, but sometimes there's labeling, right? This was written for that. A classic example in code is that certain types of algorithms get inline assembly to make them as efficient as they can be. All of that was usually done in the service of performance, right? But one of the cool things about doing things in service of performance is that you can usually get better power efficiency out of it too, if you use the right methodologies. Now, if the performance came solely from frequency scaling, that's not necessarily gonna be good for power. But if it's done in what we call a scale-out mechanism, where you get your performance by scheduling things on not just one core but many cores, and they can all work together in service of that one function, then that can actually create a real opportunity for power efficiency.
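A minimal sketch of that scale-out pattern, assuming a many-core box of the kind Sean describes; the per-chunk work function is invented for illustration. Instead of chasing frequency on one core, the work is spread across many fixed-frequency cores.

    from multiprocessing import Pool

    def process_chunk(chunk: list[str]) -> int:
        # Stand-in for a real per-request unit of work.
        return sum(len(item) for item in chunk)

    if __name__ == "__main__":
        items = [f"request-{i}" for i in range(1_000_000)]
        n_workers = 32  # e.g. one worker per physical core
        chunks = [items[i::n_workers] for i in range(n_workers)]
        with Pool(processes=n_workers) as pool:
            total = sum(pool.map(process_chunk, chunks))
        print(total)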

Anne Currie: Yeah, so that maps back to something we talk about in Building Green Software, which is utilization. A machine needs to be really well utilized, because if it's not, it still uses pretty much the same power but it isn't doing anything useful. It's just a waste.

Sean Varley: I'm so glad you brought this up.

Anne Currie: Well go for it. Go for it. You know, you are the expert in this area.

Sean Varley: Yeah, I think you're exactly right; you hit the nail on the head. Part of the problem in the world today is that you have a lot of machines out there that are underutilized, and that low utilization contributes a lot to power inefficiency. Now, I'm gonna come back to some things from when we were talking about processor architecture that are still super relevant to code and efficiency. Going back to when everybody only had one choice on the menu, which was Intel at the time: that architecture instilled some biases, or some habits, pick your word, and people defaulted to a certain type of behavior.

One of the things it trained into everyone out there, especially code writers and infrastructure managers, was that you never went over about 50% utilization of the processor, because if you did, all of the SLAs I was talking about earlier, the service level agreements where things are behaving nicely, went out the window. Nobody could get predictable performance out of their code. Why?

Hyperthreading. Hyperthreading is where you share a core between two execution threads. Once you went over 50%, all of a sudden you were heavily dependent on hyperthreading to get any more performance, and what that does is mess up the predictability of all the other processes operating on that machine.

So the net result was to train people: 50% or below. Now, on our processors, if you're running at 50% or below, that means you're only using half of our complete capacity, right? So we've had to go out and train people: "no, run this thing at 80 or 90% utilization, because that's where you hit the sweet spot." That's where you're going to save 30, 40, 50% of the power required to do something, because that's how we architected the chip. These are the kinds of biases and habits and rules of thumb that we all end up having to combat.
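To see why that retraining matters, here is a back-of-envelope version of the utilization argument, with invented but plausible numbers. Because a server draws substantial power even when idle, each request costs far less energy on a busy machine.

    IDLE_W, PEAK_W = 200.0, 400.0  # assumed power draw at 0% and 100% load

    def watts(utilization: float) -> float:
        # Crude linear power model between idle and peak draw.
        return IDLE_W + (PEAK_W - IDLE_W) * utilization

    # Assume throughput scales with utilization: 1,000 requests/s at 100%.
    for util in (0.2, 0.5, 0.9):
        requests_per_s = 1000 * util
        print(f"{util:.0%} utilized: {watts(util) / requests_per_s:.2f} J/request")

    # 20% utilized: 1.20 J/request
    # 50% utilized: 0.60 J/request
    # 90% utilized: 0.42 J/request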

Anne Currie: Yeah, and it's interesting. As you say, that completely maps back to a world in which we just weren't thinking about power; we just didn't care about the level of waste. Quite often enterprise engineers and architects are very used these days to the ideas of lean and agile.

It's about reduction of waste, and the biggest waste there is, is underutilized machines. And we don't tend to think about it, in part, as you say, because we were trained not to think about it.

Sean Varley: And also, people didn't really care back in the day, going back again 10, 15, 20 years. People didn't really care that much about how much power was consumed by their computing tasks, because it wasn't top of mind, right? And frankly, we consumed a lot less of it, primarily because we had a lot less infrastructure in service worldwide, but also because older chip architectures and older silicon process technology consumed less power. As we've gotten into modern process technology, that whole thing has changed. Now you've got chips that can burn hundreds and hundreds of watts by themselves, not to mention the GPUs, which can burn thousands of watts. And that's just a wholesale shift in the trajectory of power consumption for our industry.

Anne Currie: So you've brought up AI and GPUs there, and obviously there are even more AI-focused chips that are potentially even more power hungry. How does Ampere help? 'Cause Ampere is a CPU, not a GPU or a TPU. How does it fit into this story?

Sean Varley: It fits in a number of different ways. So, maybe a couple of definitions for people. A CPU is a general purpose processor, right? It runs everything; in everyday parlance, it's an omnivore. It can do a lot of different things, and it can do a lot of them pretty well. But what you have is an industry evolving into more specialized computing. That's what a GPU is, but there are many other examples, accelerators and other types of heterogeneous, rather than homogenous, computing, where you've got different specializations. The GPU is just one of those.

In AI, what we've found is that the GPU architecture has driven that overall workload, because there's a lot of computational horsepower required for AI models, to a point where the industry has gone up and to the right in terms of power consumption. So there's a bias now in the industry: well, if you're gonna do AI, it's just gonna take a ton of power. The answer to that is, "maybe..." right? Because there's maybe a little bit of a lack of education about the whole pantheon of AI execution environments, models, frameworks, and all sorts of things.

All of these things matter, because a CPU can do a really good job of the inference operation for AI, and it can do an excellent job of doing it efficiently, coming back to the utilization argument we were talking about earlier. In GPUs, utilization is even more important because, as you said, a GPU sits there and burns a lot of power no matter what. If you're not using it, you definitely don't want it just running the meter. So utilization has become a huge topic in GPU circles. CPUs, by contrast, have a ton of technology in them for low power when not utilized; that's been a famous set of capabilities.

But also, AI is not one thing. AI is the combination of specialized things being run in models plus a lot of generalized stuff that can be, and is, run on CPUs. So where we come in, Ampere's concept for all that, is what we call AI compute: the ability to do a lot of the general purpose stuff and quite a bit of the AI-specific stuff on CPUs, giving you a much more flexible platform for doing either.

Anne Currie: That's interesting. Now I'm going to show my own ignorance here, 'cause I've just thought of this and therefore I'm going to roll with it. There are platforms to help people be more hardware agnostic when it comes to stuff like this, Triton, is it? Do you fit in with anything like that, or does everybody have to work out for themselves which bit of hardware they're going to use?

Sean Varley: Oh man. We could do a whole podcast on this. Okay.

Let me try to break this down in a couple of simple terms. First of all, there are two main operations in AI: there's training and there's inference. Training is very high batch, high consumption, high utilization of a lot of compute.

We tend to think of this as racks full of GPUs, because it's also high precision, and it's a very uniform operation, right? Once you set it, you kind of forget it, and you let it run for, famously, weeks or months, and it turns out a model. But once the model's turned out, it can be run on a lot of different frameworks.

And this is where that platform-of-choice part comes back in, because inference is the operation where you're gonna get some result, some decision, some output out of a model. And that's gonna be, by far and away, the vast majority of AI operations in the future.

We're still training a lot of models, don't get me wrong, but the future is gonna be a lot of inference, and that particular operation doesn't require as high a precision. It doesn't require a lot of the same characteristics that are required in training. And it can be run in a lot of different places, on these open source frameworks.

What you're also starting to see now is specialization in certain model genres. A genre, I would say, is like the Llama genre from Meta; around it you've got much more efficient frameworks, like the C++ implementations of the llama frameworks.

So you've got specialization going on there. All of that can run on CPUs and GPUs and accelerators and lots of other types of things. Now it becomes more of a choice: what do I want to focus on when I do this AI operation? Do I really want to focus on something that's going to get me the fastest result ever? Or can I let it run for a while and give me results as they come? A lot of this use-case-based decision making will dictate a lot of the power efficiency of the actual AI operation.

Anne Currie: That is interesting. Thank you very much for that. So Ampere is basically in that second camp: you're one of the options for inference.

Sean Varley: That's right, yeah. Our whole thought process around this is that we want to provide a very utilitarian resource, if that's the right word. The utilitarianism of it doesn't mean it's low performance or anything like that; it's still high performance.

It's just that you're not necessarily going to need all of the resources of the most expensive or most parameter-laden model. Models come with a lot of parameters; we hear this term, right? Up to trillions of parameters, down to millions of parameters.

And somewhere in the middle is the sweet spot right now, somewhere in the 10 to 30 billion parameter range, and that sort of thing requires optimization and distillation. So we're building a resource that will be the utility belt for the AI of the future, where you need something that runs, say, a Llama 8-billion type of model, which is going to be a workhorse of a lot of the AI operations done in GenAI, for example. That will run really well, and it will also run with a lot less power than it might have required on a GPU. So there are going to be a lot of choices, and there will need to be folks that specialize in doing AI for a lot less power and cost.
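
A hedged sketch of what that looks like in practice, using the llama-cpp-python bindings for the llama.cpp runtime mentioned above. The model file name and thread count are assumptions for illustration; any quantized 8B-class GGUF model would do:

```python
# CPU inference on a quantized 8B-class model via llama.cpp bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-8b-instruct.Q4_K_M.gguf",  # hypothetical 4-bit model file
    n_threads=16,                                # match the physical CPU cores
)

out = llm("Why can CPU inference be power efficient?", max_tokens=64)
print(out["choices"][0]["text"])
```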

Anne Currie: Something that Renee, the CEO of Ampere, mentioned when she and I were on a panel together a few months ago (which is how we come to be talking today) very much interested me: Ampere chips don't have to be water cooled, they can be air cooled. Is that true? Because water use, and AI's terrible water use, is something that comes up a lot. What's the story on that?

Sean Varley: Yes. That is actually one of our design objectives, right? Sustainability is one of our design objectives, and that is what we do. Part of what we've done is we've said, look, our chips run at a certain ceiling from a power perspective, and we can get a lot of performance out of that power envelope.

But that power envelope is going to stay in the range where you can air cool the chip. This provides a lot of versatility, because the modern data center dynamic is, oh, I've got a lot of brownfield, older data centers. Are they going to become obsolete in the age of AI because they can't actually run liquid cooling and things like that? No. We have infrastructure that goes into those types of data centers and will get you a lot of computational horsepower for AI compute, inside a power envelope that was reasonable, or already provisioned, for that data center.

We're talking about racks that run 15 kilowatts, 22 kilowatts; somewhere in that 10 to 25 kilowatt range is the sweet spot in those types of data centers. But what you hear these days is that racks are starting to go to 60 kilowatts, a hundred kilowatts, even higher. Recently, Nvidia has been pushing the industry even higher than that.

Those things require a lot of specialization, and one of the specializations required is direct liquid cooling, what they call DLC. That requires a whole different refit for the data center. And of course the reason it's there is to dissipate a lot of heat, and that requires a lot of water.
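
As a rough back-of-envelope on those rack figures (the per-server wattages below are illustrative assumptions, not Ampere or Nvidia specifications):

```python
# How many servers fit in a rack's power budget, air-cooled vs DLC.
air_cooled_rack_kw = 15    # typical brownfield rack budget mentioned above
dlc_rack_kw = 100          # high-density AI rack needing direct liquid cooling

cpu_server_kw = 0.5        # assumed air-cooled CPU inference server
gpu_server_kw = 10.0       # assumed multi-GPU training server

print(f"CPU servers per 15 kW air-cooled rack: {air_cooled_rack_kw / cpu_server_kw:.0f}")
print(f"GPU servers per 100 kW DLC rack: {dlc_rack_kw / gpu_server_kw:.0f}")
# The exact counts don't matter; the point is that staying in the 10-25 kW
# range lets existing air-cooled facilities host AI compute without a refit.
```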

Anne Currie: Which is fascinating, because the water use implications of AI data centers come up a lot at the moment, and perfectly reasonably so. It's a shame, because places where there is a lot of solar power, for example, often don't have a lot of water.

If you can turn the sun into air conditioning, that's so much better than taking away the lovely clean water that people need to live on.

Sean Varley: Yes. 

Anne Currie: So is that the kind of thing you're envisaging, that it works better in places where there's sunshine?

Sean Varley: Absolutely. We create technology that can very efficiently implement a lot of these types of AI-enabled or traditional compute, and they could be anywhere. They could be at an edge data center, in a much smaller environment where there are only a dozen racks.

But it's equally comfortable somewhere with thousands of racks, because at the end of the day, if you want to be more sustainable, then just use less electricity. That's the whole point, right?

And we can get into a lot of these other schemes for trying to offset carbon emissions and all those sorts of things. I'm not saying they're bad or anything like that, but at the end of the day, our whole mission is to just use less power for these types of operations. And it comes back to many of the concepts we've talked about, right? Utilize your infrastructure. Use efficient coding practices, which comes back to things like containers, and there are even much more refined code practices now for doing really efficient coding. And then utilize a power-efficient hardware platform, the most power-efficient platform for whatever job you're trying to do. Certain things can be done to advertise how much electricity you're consuming to get something done, and I think that's a whole next generation of code: that power-aware capacity for whatever you're going to run at any given moment.
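
On that "power aware" idea, here's a minimal sketch of one way to measure it today on Linux, using the kernel's RAPL interface, which exposes a cumulative CPU-package energy counter in microjoules. Availability and the exact path vary by CPU, and reading it may need elevated permissions; this is an illustration, not a portable meter:

```python
# Sample the package energy counter around a block of work.
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

before = read_uj()
sum(i * i for i in range(10_000_000))  # stand-in for the real workload
used_uj = read_uj() - before           # ignores rare counter wraparound

print(f"~{used_uj / 1e6:.2f} joules consumed by this CPU package")
```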

Anne Currie: Well, that's fantastic. We've talked for quite a long time and that was very information dense; high utilization of time, you might say. That was incredibly interesting, and I really enjoyed it, and I hope the listeners enjoyed it too. Anything we talked about, we'll try to make sure it's in the show notes below. Make sure you read Building Green Software and The Cloud Native Attitude, because they cover a lot of what we talked about here today. Is there anything you want to finish with, Sean?

Sean Varley: Well, I really enjoyed our discussion, Anne, thank you very much for having me. I think these technologies and these concepts are very important. There's a lot of misinformation out there in the world, as we know; it's not just confined to politics,

Anne Currie: Yep.

Sean Varley: but yeah, there's a lot of education that needs to go on in these kinds of environments, and that will help all of us to create something that is much greener and much more efficient. And by the way, it's good practice, because almost every time you do something that's green, you're going to end up saving money too.

Anne Currie: Oh, absolutely. Yes, totally. You can do it because you're a good person, which is good,

but also do it 'cause you're a sensible person who doesn't have a...

Sean Varley: That's great. Yeah. Successful businesses will be green, shall be green! There needs to be a rule of thumb there.

Anne Currie: Yeah. So it is interesting. If you've enjoyed this podcast, listen as well to the episode I did with Charles Humble a few weeks ago. He touched on this too: there's a lot of misinformation and disinformation out there, but a lot of it exists because the situation has changed.

So things that were true 10 years ago are just not true today. It's not deliberate misinformation; it's just that the situation, the context, has changed. You might hear things and think, "but that didn't used to be true, so it can't be true." You can't make that judgment anymore. It might be true now even though it wasn't true then. But yeah, that's the world. We are moving quite quickly.

Sean Varley: Yeah, technology, it moves super fast. 

Anne Currie: Absolutely. I suspect that you and I have both been in the industry for the past 30 years, but it's never moved as fast as it's moving now, has it really?

Sean Varley: Oh, I agree. Yeah. AI has just put an afterburner on the whole thing. Yeah.

Anne Currie: Yeah, it's just astonishing. All the rules have changed and we need to change with them. So thank you very much indeed, and thank you very much for listening. I hope you all enjoyed the podcast, and I will speak to you again soon. So goodbye from me and goodbye from Sean.

Sean Varley: Thank you very much. Bye-bye.

Anne Currie: Bye-bye. 

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

Show more...
3 months ago
47 minutes 30 seconds

Environment Variables
Real Time Cloud with Adrian Cockcroft
Chris Adams is joined by Adrian Cockcroft, former VP of Cloud Architecture Strategy at AWS, a pioneer of microservices at Netflix, and contributor to the Green Software Foundation’s Real Time Cloud project. They explore the evolution of cloud sustainability—from monoliths to microservices to serverless—and what it really takes to track carbon emissions in real time. Adrian explains why GPUs offer rare transparency in energy data, how the Real Time Cloud dataset works, and what’s holding cloud providers back from full carbon disclosure. Plus, he shares his latest obsession: building a generative AI-powered house automation system using agent swarms.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Adrian Cockcroft: LinkedIn | GitHub | Medium

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Serverless vs. Microservices vs. Monolith – Adrian's influential blog post [08:08]
  • Monitorama 2022: Monitoring Carbon – Adrian’s talk at Monitorama Portland [25:08]
  • Real Time Cloud Project – Green Software Foundation [30:23]
  • Google Cloud Sustainability Report (2024) – Includes regional carbon data [33:39]
  • Microsoft Sustainability Report [36:49]
  • AWS Sustainability Practices & AWS Customer Carbon Footprint Tool [39:59]
  • Kepler – Kubernetes-based Efficient Power Level Exporter [48:01]
  • Focus – FinOps Sustainability Working Group [50:10]
  • Agent Swarm by Reuven Cohen – AI agent-based coding framework [01:05:01]
  • Claude AI by Anthropic [01:05:32]
  • GitHub Codespaces [01:11:47]
  • Soopra AI – Chat with an AI trained on Adrian’s blog [01:17:01]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Adrian Cockcroft: We figured out it wasn't really possible to get real time energy statistics out of cloud providers because the numbers just didn't exist.

It turns out the only place you can get real time numbers is on things that are not virtualized.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams.

Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. If you have worked in cloud computing for any length of time, then even if you do not know the name yet yourself, it's very likely that the way you design systems will have been influenced by my guest today, Adrian Cockcroft.

When at Netflix, Adrian led the move to the cloud there, helping popularize many of the patterns we use when deploying applications to the cloud ourselves. His name then became synonymous with serverless throughout the 2010s, when he joined AWS, first leading on open source engagement, and then as a VP focused on what we might now refer to as cloud sustainability.

After leaving AWS, Adrian's kept his fingers in many pies, one of which is the Green Software Foundation's real time cloud project, an initiative to bring transparency and consistency to cloud emissions reporting. With the first dataset release from that project out the door, it seemed a good idea to invite him onto the show to see what's up.

Adrian, thank you so much for joining us today. Can I give you a bit of time to tell us about yourself and what's keeping you busy these days?

Adrian Cockcroft: Yeah, it's great to see you, and thanks also for your contributions to the project; we've had a lot of discussions over the last few years as we've worked on that together. Well, I'm sort of semi-retired. I stopped my big corporate job at Amazon in 2022, and yeah, I spend my time worrying about my family.

I've got elderly parents that live in the UK, so I spend a lot of time with them, and fixing stuff around the house and generally goofing around, doing things I feel like doing rather than stuff that's driven by some corporate agenda. So I'm enjoying that freedom. And, let's see, I spend time on the Green Software Foundation project.

I go to a few conferences and give a few talks, and I try to keep up with what's happening in technology by playing around with whatever the latest tools are. That's been my career over the years; I've generally been an early adopter through my entire career. As you mentioned, we were early adopters in cloud.

Back when people said, "This isn't going to work and you'll be back in the data center soon." People forget that was the initial reaction to what we said. It's a little bit like that now, with people saying all this AI stuff doesn't work and we're going to be giving up on it, whatever. And it's like, well, I'm making bits of it work well enough to be interesting.

We can talk a bit about that later. And then, you probably see behind me various musical instruments; I collect musical instruments that I don't have time to really learn how to play, and I mess around and make bad noises that make me happy. Luckily no one else has to listen to them.

So that, and messing around with cars and things, is the entertainment for me.

Chris Adams: That sounds like quite a fun state of semi-retirement, I have to say. So before we dive into the details of cloud, I have to ask: where are you calling from today? Because you have an English accent, and I have an English accent, but I'm calling from Berlin, and I'm guessing you're not in England.

'Cause I follow you on social media and I see all these kind of cryptic and interesting posts about cars and stuff, and it's usually sunnier than where I am as well. So there's gotta be a story there. What's going on there, Adrian?

Adrian Cockcroft: Well, I lived in England long enough to decide I didn't want to be rained on all the time. I didn't move to America, to California, to go and live somewhere with the same weather as England, so that was one reason I never moved to Seattle when I was working for Amazon.

I used to live in the Bay Area, in Los Gatos, near Netflix. About five years ago we moved down near Monterey, about an hour or two south of the Bay Area, depending on traffic. We're within earshot of a racetrack called Laguna Seca that most people know; I can kind of see it out of my window.

I can see a few dots moving on the horizon, and there are a few cars you can just about hear if they're loud cars. This is where, every August, they have this thing called Monterey Car Week, with the Pebble Beach Concours and the historic races. We used to go to that every year, and we like the messing-around-with-cars, going-to-the-track-occasionally culture.

So we moved down here, and it's been fun. I don't have to commute anywhere, we have a nice place, and the house prices are a lot cheaper down here than they are in the Bay Area itself. Technically we live in Salinas; lots of good vegetables around here, that's where a lot of the growers are.

We actually live out in the countryside, just in the hills near there. So we have a nice place with plenty of room for messing around, and a big house, which requires lots of messing around with. We can talk a bit later about one of the projects I have to try and automate some of that.

Chris Adams: Yeah, that's quite a hint. Alright, well that does explain all the cars and coffee stuff I see. Okay, if you're near a racetrack, that would explain some of the cars as well. Alright, thank you.

Adrian Cockcroft: Well, actually there are cars and coffee events just about everywhere in the world. If you like looking at old cars and hanging out with car people, there's probably one every Saturday morning within 10 miles of pretty much anyone. Anyway, the other thing on that front that's more related to the Green Software Foundation is that we've had a whole bunch of electric cars over the years.

I have one of the original Tesla Roadster cars that was made in 2010; I've had it since 2013. It actually has a sticker on the back saying, "I bought this before Elon went nuts." So I'm keeping that. We used to have a Tesla Model 3, and we replaced it recently with a Polestar 3, which is quite a nice car that initially had very bad software.

But they did a software update recently that basically fixed just about every bug, and it's actually fun driving a car where you don't worry if it's about to do something strange and need a software reset, which was the state it was in when we first got it in April. The difference a bug fix can make: they went and fixed everything that was going wrong with it and transformed the car into something that's actually a fun thing to drive now.

Chris Adams: So it was a bit like turning it off and on again, and then you've got a working car.

Adrian Cockcroft: Yeah, we got really used to pushing the reset button. You hold the volume control down for 30 seconds and that resets the software, and we would be doing that most days that we drove it.

Chris Adams: Oh my God. I didn't realize that was a real thing that people did. Wow.

Adrian Cockcroft: Yeah. It's one of these things where a product can be transformed from something buggy and annoying to "oh, we just fixed all the software; now it actually works properly."

It's interesting to see. It went from really bad to actually pretty good with one software release. Yeah.

Chris Adams: I guess that's the wonders of software. Wow. Alright then, that gives us a nice segue back to some of the cloud and serverless stuff. Before you were helping out on the Green Software Foundation projects, I remember reading a post from you on the evolution from monoliths to microservices to functions.

And I think for a lot of people it really joined the dots between how we think about sustainability and what role things like scale-to-zero designs play when we design cloud services. In that post, you laid out a few things which I found quite interesting. You spoke about the idea that, okay, most of the time when we build services, they may be being used maybe 40 hours a week, and there are 168 hours in a week.

So something like 75% of the time it's doing nothing, just waiting there, yet we've still spent all this time and money building the stuff. And I remember you writing in the post that this actually aligns incentives in a way we haven't seen before, this idea of changing the programming model so that it actually incentivizes the correct behavior.
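
(A quick back-of-envelope check of that figure:)

```python
# 40 busy hours out of a 168-hour week.
used, total = 40, 168
print(f"busy: {used / total:.0%}, idle: {(total - used) / total:.0%}")
# busy: 24%, idle: 76%, i.e. roughly the three-quarters quoted above
```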

That was really profound for me. And I figured, now that I've got you on this podcast, I wanted to ask what drove you to write it in the first place. And for folks who haven't read it, maybe you could talk a little bit about the argument you were making, and why you wanted to write it.

Adrian Cockcroft: Yeah, that's actually one of the highest-traffic blog posts I ever wrote; it got a lot of reads. For context, it was soon after I joined AWS, so probably early 2017, something like that; I joined AWS in 2016. I'd spent a few years basically involved in helping promote microservices as an architecture.

I was also interested in serverless and AWS Lambda as an architecture, and I wanted to connect the dots. When I write things, the approach I take is along the lines of "this is how to think about a thing." I have a systems thinking approach generally, so what I try to do is expose the systems I'm thinking about, and the incentives and feedback loops and reasons why things are the way they are, rather than being prescriptive and saying "just do this and this and the world will be great," which is the more typical instructive style. So I tend to try and explain why things are the way they are, and that post is an example of that type of writing for me. At the time, people were talking a lot about the monolith to microservices transition, what it meant, how to do it, and things like that.

I was trying to explain what we'd done at Netflix, and I was thinking that the next generation of that transition was to serverless. The post was basically trying to connect those dots; that was the overall goal of it. And it's quite a long post. It's one of these things where, when you work with PR people, they say you should write short blog posts and they shouldn't be so technical. Well, this is one of the longest and most technical posts I wrote, and it actually has the highest traffic. So, you know, ignore the PR people. It turns out if you put real content in something, it will get traffic, and that's the value you can provide by trying to explain an idea.

So I think that's generally what it was about. The microservices idea is a tactic for solving a problem; it isn't an end in itself. That's one of the distinctions I was trying to make. If you have a large team working on one code base, they'll keep getting in each other's way.

If you're trying to ship code and the code has a hundred people's contributions in it, and one person has a bug, that stops the shipment for the other 99 people. So there's this blocking effect of bugs in the whole thing, and it also destabilizes the entire thing: you're shipping completely new code when you ship a new monolith. Whereas when you have, say, a hundred microservices with one person working on each, they can ship independently. Yes, you have some interaction things you have to debug, but 99 of those services didn't change when you pushed your code, so it's easy to isolate where the problem is and roll it back.

So there's a bunch of things that make it easier. And then we thought, well, you've got the microservice, which does a thing, but it contains a bunch of functions. If you blow that up into individual functions, then you don't actually need all those functions all the time. Some code paths are very busy: maybe every request goes through this part of the code, but one time in a hundred or a thousand it does something else. So what you can do is break those into separate Lambda functions, and then the code paths that don't get executed very often just aren't running.

The code gets called, then it stops and doesn't get called again for a long time, whereas the busy functions tend to stay in memory and get called a lot. So that way the memory footprint and the execution footprint are tuned to what's actually going on.
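
A hedged sketch of that hot/cold split (the function names and event fields are hypothetical, purely to illustrate the shape):

```python
# Each code path becomes its own Lambda handler, so the rare path consumes
# nothing between invocations instead of sitting resident in a monolith.

def hot_path_handler(event, context):
    """Runs on every request; stays warm because traffic keeps it resident."""
    return {"statusCode": 200, "body": f"processed {event.get('order_id')}"}

def cold_path_handler(event, context):
    """Runs maybe once per thousand requests (say, a refund); has essentially
    zero footprint the rest of the time, bar the shared control plane."""
    return {"statusCode": 200, "body": f"refunded {event.get('order_id')}"}
```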

That was the second thing. And the third thing was that a lot of applications, particularly corporate internal ones, as you mentioned, are only used during work hours. Those are the perfect ones to build serverless. They're internal, and they only exist for as long as anybody is actually trying to use them.

They aren't just sitting there idle most of the time because you need to have a wiki, or a thing that people check in with in the morning. Or anything salespeople use at the end of the quarter or the end of the month: those sorts of things get super busy and then sit idle the rest of the time, so you need very high concurrency for short periods.

Anything like that is the area where I think serverless is particularly good. Later on I did another series of talks where I basically said serverless first. Not serverless only, but start trying to build something with serverless, because you'll build it super quickly. One of the books I should reference is by David Anderson; I think it's called The Value Flywheel Effect, and we'll give a link in the show notes. I talked to him and helped him find the publisher for that book, and I think I wrote a foreword for it, or at least put some nice words on the cover.

That book talks about people developing entire applications in a few days. Then you get to tune it and optimize it, and maybe you take some part of it where you say, really, I need a container here, something like that. But you can build it rapidly. The tagline I used to use was: in the time it takes to have meetings about how you're going to configure Kubernetes, you could have finished building your entire application serverless.

You get these internal discussions about exactly what version of Kubernetes to use and how to set it up, and it's like, I could have finished building the whole thing with the amount of effort you just put into trying to figure out how to configure something. That's the slightly flippant view I have on that.

Anyway, the other thing is that effectively the carbon footprint of a serverless application is minimal, but you do have to think about the additional systems that are running there all the time when you are not running. A little bit of a future segue, but AWS just changed their own accounting model to include those support services, so that when you look at the carbon footprint of a Lambda app that isn't running, you actually have a carbon footprint, because the Lambda service needs to be there, ready.

So you get a share of the shared service attributed to each customer that's using it, right? It's a little bit deeper, and it's an interesting change in the model to be explicit that that's what they're doing.
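
A worked illustration of that attribution idea. AWS hasn't published the exact formula, so the allocation key (share of invocations) and every number here are assumptions:

```python
# Even an idle serverless customer picks up a slice of the always-on service.
control_plane_kg = 1_000.0       # assumed monthly footprint of the shared service
my_invocations = 2_000_000
all_invocations = 4_000_000_000

my_direct_kg = 0.35              # assumed footprint of my actual executions
my_share_kg = control_plane_kg * (my_invocations / all_invocations)

print(f"attributed total: {my_direct_kg + my_share_kg:.3f} kg CO2e")
```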

Chris Adams: Ah, I see. Okay. So on one level, some of this post was about how the unit of code, the unit of change, can become smaller, but there's also a corresponding thing on the hardware level. I remember when I was reading this, it was like, okay, I'm shipping a monolithic piece of code and I've got a physical server to begin with.

That was how we were starting maybe 10, 20 years ago, and over time the unit has become smaller and smaller, which has made it a bit easier to build things quickly. But the flip side we've seen is that if you just look at, say, the Lambda function, that's not a really accurate representation of all the stuff that's actually there.

You can't pretend there isn't infrastructure that has to be there. And it sounds like the accounting is now starting to reflect that: someone needs to pay for the capacity, in the same way that someone has to pay for the electricity grid even when you're not using the grid. There is still a cost to make that capacity available for you to use. That's basically what it seems to be a reference to.

Adrian Cockcroft: Yeah. And just going back to the car analogy: people own cars, people lease cars, people rent cars, right? If you rent a car for a day, you can say, well, my carbon footprint of renting the car is one day's worth of car ownership. Except that in order for you to rent a car for the day, there has to be a fleet of cars sitting around idle, ready for you to rent one. So really you want to take into account the overhead of your car rental company managing a fleet, and it's maybe got, whatever, 70% utilization of the fleet, so 30% of the cars are sitting around waiting for somebody. You basically have to uplevel your "I just need a car for a day" to add the extra overhead of running that service, right?

So it kind of follows that same thing. And if you rent a car for every single day, so you have a car every day of the year but it's a rental car, that's an expensive way to own a car. Even at a monthly rate, it's still more expensive than buying or leasing a car, because you're paying for some overhead.

So it's a bit like owning a car, maybe leasing a car, and renting a car, with the monolith, microservices, serverless analogy, if you like. The cost model is a little different because you're giving stuff back when you don't want it anymore; that's the cloud analogy, right? With a regular cloud service, I can just scale things down.

Chris Adams: Mm. Going back to something else you mentioned: I was talking to a CIO once, and he was very annoyed because he'd only just found out that he could turn off all his test infrastructure at the weekends and overnight. He'd been running this stuff for two years, and when he finally realized, three quarters of the cost of his test environment just went away. He was happy that had happened, but he was annoyed that it took two years for somebody to mention to him that this was possible and for him to tell them to do it.

Adrian Cockcroft: Right. Yeah, any test infrastructure, anything that's driven off people, should absolutely be shut down. There are ways to do it: a bunch of cloud instances can just be shut down and frozen, and come back again later.
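
A minimal sketch of that in practice with boto3, stopping EC2 instances tagged as test environments; you'd run it from a nightly schedule and reverse it each morning. The tag key and value are assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
if ids:
    ec2.stop_instances(InstanceIds=ids)  # start_instances() again each morning
    print(f"stopped {len(ids)} test instances")
```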

Chris Adams: So this is something I might come back to, because in some ways, if you look at, say, cloud computing, each individual server is probably quite a bit more efficient than a corresponding server you might have bought from Dell a few years ago, because it's in a very optimized state.

But because it's so easy to turn on, this is one of the challenges we consistently have. And in many ways it's in the interest of the people running very efficient servers to have people paying for capacity which they're not using, because it makes it easier to then resell that. I guess this is one of the things the shift to serverless is supposed to address; in theory it aligns things somewhat better, in terms of reducing usage when you're not actually using it rather than leaving things running, like you're saying.

Adrian Cockcroft: Yeah, you don't have to remember to turn it off. With serverless, it's off by default, and it comes on and is essentially a hundred percent utilized while you're running, and then it turns off again. So in that sense it's much more like a rental car that returns itself after 15 minutes, or whatever your timeout is, or when you're done with it. Maybe it's more like a taxi, right? Going one level beyond the rental car, you have the taxi, which you just use to get there and you're done. So serverless is maybe more like a taxi service, and then a daily rental is more like an EC2 instance or something like that.

There are all these different things, and we're used to dealing with them. You wouldn't have a taxi sitting outside your house 24 hours a day just waiting for you to want to go somewhere, right? People say, well, serverless is expensive. Yes, if you used it in that very stupid way, right?

Chris Adams: You wouldn't; you'd either lease a car or you'd buy a car if...

Adrian Cockcroft: Yeah. If it's being used continuously, if you've got enough traffic that the thing is a hundred percent active, sure, you should put it in a container and just have the thing there, rather than having it woken up all the time.

Chris Adams: Ah, I never really thought to make the comparison to cars, to be honest, because I wrote a piece a while back called A Demand Curve for Compute, which compares this to energy. If you do something all the time, then you have something running all the time; it's a bit like a nuclear power station. It's expensive to buy, but per unit it makes a load of sense.

And then you work your way up from there. At the other end, like serverless, there are things like peaker plants, which are only on for a little bit of time and are really expensive for that short period. But because they can charge so much, you only need to have them running maybe five to 15% of the year.

And that's how people design grids. This idea of demand curves seems quite applicable to how we think about computing, and how we might use different kinds of computing to solve different kinds of problems, for example.

Adrian Cockcroft: Yeah. Well, that brings up another current topic. What's actually happening now is that the peaker plants are running flat out supplying AI data center load, and the peaking is moving to batteries, which are now getting sufficiently cheap and high capacity that peaker capacity is being provided by batteries, which respond much more quickly to load.

Some of the instabilities we've seen in the grids can be fixed by having enough battery capacity to handle a cloudy day or whatever, the effects you get from sudden surges in power demand or supply. Once you get enough battery capacity, that problem is soluble; the problem historically has been that batteries were too expensive, but they're getting cheaper very quickly.

There are a few cost curves I've seen recently showing that the cheapest thing to do for power now is solar and batteries; just put that in. And with the batteries they're now getting, originally they were saying you can get a few hours' worth of battery cost effectively; I think they're now up to six to eight hours being cost effective, and we're getting close to the sort of 12 to 18 hours, which means you can go through a winter night on batteries, and it's cost effective to deploy batteries to do that.

Beyond a certain amount of capacity you still need some base load, and I think geothermal is particularly interesting for that, as one of the cleaner technologies; a company called Fervo is building a station that Google are using for some of their energy. I've spent some time looking at alternative energy.

But yeah, those peaker plants were sitting there mostly idle, and then all this extra demand from these big AI data centers suddenly appeared that wasn't in the plan, and they're hoovering up all that capacity. So people are desperately trying to figure out how to add additional capacity to take that on.

Chris Adams: We will come to that in a bit more detail later, but thank you. Maybe we can talk a little about observability, and being able to track some of this stuff, because one thing I've seen you present before is this idea of carbon being just another metric.

We'll share a link in the show notes to a YouTube video called Monitoring Carbon; I think you presented it at Monitorama in Portland in 2022. It does talk a little about the state of the art in 2022, but one of the key things you were saying was that, as developers, we're going to have to learn to track carbon because it's just going to be another thing we have to track.

Just like space left on a disk, requests, and things like that. So maybe you could talk a little bit about that, and tell me if you think that's still the direction we're going in.
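
A sketch of what "carbon as just another metric" can look like: derive grams of CO2e from a measured energy figure and a grid intensity, then expose it like any other gauge. The intensity constant and the per-minute energy figure are placeholders; in practice they'd come from RAPL, a cloud API, or an estimate:

```python
from prometheus_client import Gauge, start_http_server
import random
import time

carbon_g = Gauge("app_carbon_grams_estimate",
                 "Estimated operational CO2e of this process")

GRID_G_PER_KWH = 350  # assumed grid carbon intensity, gCO2e/kWh

start_http_server(9100)  # scrape target, like any other exporter
while True:
    kwh_this_minute = random.uniform(0.001, 0.003)  # stand-in for a real meter
    carbon_g.inc(kwh_this_minute * GRID_G_PER_KWH)
    time.sleep(60)
```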

Adrian Cockcroft: Yeah, so that was the first talk I gave after I left AWS. I'd already agreed to present there, and then I left AWS just a few weeks before that event. So it was kind of an interesting thing: hey, by the way, I quit my job and I'm sort of retired now, but this is the thing I was working on.

The last job I had at AWS, I was a VP in the sustainability organization, which is an Amazon-wide organization, but I was focused on the AWS part of the problem, in particular how to get all of AWS on the same page. There were lots of individual things popping up, and lots of people writing their own little presentations about what they thought AWS was doing.

So we basically created a master, PR-approved deck that everyone agreed was what we could say and should say. It was a high-quality deck, and we got everyone to be saying the same thing externally. Now, part of the problem was that with the various constraints we had at Amazon, we couldn't really talk about a lot of the things we were doing, for all kinds of reasons.

The story of Amazon, I think, is better than most people think, but the way it's told is really poor, and it's very difficult to get things out of Amazon that actually cover what they've been up to. So that was what I was working on. And along the way I thought, you know, Monitorama is a monitoring and observability conference I've been to many times, and I have a long history in monitoring tools in particular. I thought, yeah, I should be trying to get everybody to add carbon as some kind of metric. And the problem is, where do you get that metric from? That wasn't very obvious at the time.

I think there are two things that have happened since 2022. One is that we actually haven't made much progress in terms of getting carbon as a metric in most areas; there are a couple of exceptions that we'll get to, but we haven't made as much progress as I hoped we would. And the other is that the standards bodies and government regulations that were on the horizon then have mostly been stalled, slowed down, or delayed, so the requirement from the business side has generally reduced. Which is disappointing, because now we're seeing even more climate change impacts, and the globe doesn't care about your corporate profitability, or what you're trying to do, or the reasons why you aren't doing it.

So we're just going to get more and more cost from dealing with various types of climate disasters, and we're seeing those happen all around us all the time. I think in some sense it's got to get much worse before people pay attention. There's a big battle going on to try and keep it in focus, and certainly Europe is doing a much better job of that right now, though even the European regulations are a little watered down. And I know you're all over that; it's really your specialist area, and you know far more than I do about what's going on there.

Chris Adams: But yeah.

Adrian Cockcroft: It's a big topic, but I think in 2022, I thought that we would be having more regulations sooner, and that would be pushing more activity.

And then I wanted to basically, by talking about this, at that event, I wanted to get some of the tools, vendors to basically I would, for me to talk to them about how to do this. I ended up doing a little bit of advisory work for a few people, as a result, but not really that substantial. So that's kind of where I was then.

And then over the next year or so, I did some more talks, saying it's basically I just tried to figure out what was available from the different cloud providers. Did a talk about that, and then, wrote a. A-P-R-F-A-Q or a, proposal for a project for DSF saying, well, we should fix this. And it would be really nice if we did actually have a, this is what people would like to see, and then went and tried to see what we could get done.

Chris Adams: Okay, that's useful to bring us up to this point. One thing I've appreciated about being on the Real Time Cloud project is that it's very easy to call for transparency, and there are absolutely reasons why a company might not want to share their data, which get characterized as, I don't know, wrong reasons, or greedy reasons.

I used to work at a company called AMEE, which stood for Avoid Mass Extinction Engine, and one thing we did was raise something in the region of 20 million US dollars to find out all the ways you can't sell a carbon API in the early 2010s. Pivoting like a turntable; it was a bit embarrassing at times.

One of the potential routes people went down was: we're going to work with large buyers to get people in their supply chain to share their emissions information, with the idea being that this would highlight what they referred to as supply chain engagement.

That sounds great: we'll lend you some money so you can buy more efficient fridges and do stuff like that. But there was a flip side. When you're working with large enough buyers, one of the things they would say is that they could use this information to ask, well, who are the least efficient suppliers? Who am I going to hit with my cost-cutting stick first? For this reason, I can totally understand why organizations might not want to expose some of their cost structure. But at the same time, there is an imperative coming from, well, like you said, the planet, and from the science.

And I feel this has been a real blocker. Companies are basically saying, we can't share this information because we'll end up revealing how many times we resell the same server, for example. You can see why people might not want to disclose that, because it can be considered commercially sensitive. But there is also the imperative elsewhere. So I wanted to ask you: faced with that, how do we navigate it? Are there things you think we can be pushing for? Because I think this disclosure conundrum is a really difficult one to get around.

And I figured, since you're on the call and you've been on both sides, maybe you have some perspectives that might shed some light here, rather than it just being "you should be transparent," "no, we're not going to destroy our business." There's got to be a third way, or a more useful way to talk about this.

Adrian Cockcroft: Yeah. I mean, there are three primary cloud providers that we've been working with, or attempting to work with, and they're all different. Google have generally been the most transparent: they produce data that's easy to find and in a useful format. They came out with their annual sustainability report recently, and there's a table of data in it which is pretty much what we've been adopting as the useful data.

So that's one. But still, they don't disclose some things, because they don't have the right to disclose them. For example, if you want to know the power usage effectiveness, the PUE, they don't have it for all of their data centers. When you dig into that, you find that some of their regions are hosted in data centers they don't own, right?

Somewhere in the world there's a big colo facility owned by Equinix or somebody, and Google needed to drop a small region in that area, so they leased some capacity in another data center. Now, the PUE for that data center is not theirs, because they're not the only tenant. It's actually hard to calculate, but also the owner doesn't necessarily want to disclose the PUE, right?

So the number isn't really obtainable. You could come up with a number, but the third party would have to approve it. That's a valid reason for not supplying a number. It's very annoying, because you have PUE for some data centers and not others, and that applies to all the cloud providers.
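
For reference, PUE itself is just a ratio, total facility power over IT power; the numbers here are illustrative:

```python
it_load_kw = 1_000
facility_overhead_kw = 120  # cooling, power distribution losses, lighting

pue = (it_load_kw + facility_overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")   # 1.12; a colo tenant often can't see this split
```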

So that's a valid, if annoying, reason for not providing a number. That's one level. And Google are pretty good at providing all the numbers, and they've been engaged with the project: they've had a few people turn up at the meetings, and they've fixed a few things where something wasn't quite right.

There was some missing data, or something that didn't make sense, and they just went and fixed it. There was also a mapping we needed from the Google data centers, which support things like Gmail and Google Search, to the Google Cloud regions, which are a subset of them. They actually went and figured out that mapping for us and gave us a little table, so we could look up the PUE for a data center and say, okay, this cloud region is in that data center.

They've worked well with it, and that's what I'd like to see from the other cloud providers. I like to see existence proofs: well, they did it, why can't you do that? That's what I'd expect to see from everybody. Microsoft were involved in setting up the GSF and were very enthusiastic for a while, particularly when Asim was there and driving it. Since he's moved on and is now working directly for the GSF, I think the leadership at Microsoft is off worrying about the next shiny object, which is AI, whatever. There's less support for sustainability, and we've found it hard to get engagement from Microsoft, to get data out of them.

They issued their new report for the year, and they had total numbers for carbon, but they didn't release their individual region updates. So they released overall carbon data for 2024, but we haven't got anything updated, nothing that I can find anyway, on the individual regions, which is what we've been producing as our data set.

Chris Adams: Ah, okay. So basically, as the moonshot has got further away, as they say, it's also got harder to see. We still have this issue that it's less clear, and we have less transparency. That's a bit depressing, when early on they were real leaders.

I was really glad to have them inside, because they shared this stuff before Google did, so we had, okay, great, two of the big three starting to disclose this. Maybe we could use that to win concessions from the largest provider to share it too.

Because if you are a consumer of cloud, then you have legal obligations that you still need to meet, and this is not making it easy. For the most part, it feels like if you don't have this, you end up having to reach for a third party; you might use something like Greenpixie, for example, and that's totally okay, but you're going via a third party, and that's secondary data at best.

It feels like it's something you should be able to get from your supplier.

Adrian Cockcroft: Yeah. Just to clarify, I think there are several different types of sustainability data, or sustainability-related data, that you get from a cloud provider. One of them is: I'm a customer, I have my account, I pay so much money, and how much carbon is associated with the things I've used, right?

They all provide something along those lines, to a greater or lesser degree.

Chris Adams: Mm.

Adrian Cockcroft: but you can get, an estimate for the carbon footprint of an account, right? typically delayed by several months, two to three months, and it's a fairly, and it's pretty high level. So, and it gets, there's more detail available on, Google and Microsoft, and there's fairly high level data from AWS, but that's, one source.

The other source that we're interested in is, let's say I. I'm trying to decide where should I put a workload? And it could be I have flexibility, I can put it pretty much anywhere in the world or I can choose between different cloud providers in a particular country. what's the, and I want to know what the carbon footprint of that would be.

Right? So to do that, you need to be able to compare regions, and that's the data set that we've produced and standardized so that it lists every cloud region for the three main cloud providers. And for each of them we've got whatever information we can get about that region. And back in 2022, we have a fairly complete data set and 2023, it's missing.
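
A hedged sketch of how such a per-region dataset gets used: load it and rank candidate regions by their carbon attributes. The file name and column names (region, provider, pue, cfe_percent) are guesses at a plausible schema, not the project's actual field names:

```python
import csv

with open("cloud_regions.csv") as f:
    rows = [r for r in csv.DictReader(f) if r["provider"] == "gcp"]

# Prefer regions with high carbon-free energy and low PUE.
rows.sort(key=lambda r: (-float(r["cfe_percent"]), float(r["pue"])))
for r in rows[:3]:
    print(r["region"], r["cfe_percent"], r["pue"])
```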

For 2023, Microsoft provided less data than in 2022. For 2024, we currently have Google data; Microsoft have released their report but haven't given us any new data; and AWS are probably releasing their data in the next few days. Last year it was on July the ninth, and I just checked this morning and it hasn't been released yet, so it's probably coming next week.

It's sometime in July. So we'll see what information we get from AWS. Every year I write a blog post where I say, okay, the three reports are out, this is what happened, this is the trend year on year, and I'm working on an update of that post.

So by the time this podcast airs, I'm hoping that blog post will

Chris Adams: Be out there.

Adrian Cockcroft: I should have got it out; I've written as much as I can right now, but I'm waiting for the AWS numbers. So, we've discussed Google, who have been pretty good corporate citizens, disclosing whatever they can and engaging with the project.

Microsoft had that early enthusiasm. In their latest report they actually mentioned the GSF, they mentioned they founded it, and they mentioned that they support the Real Time Cloud project, but they're not actually providing us any data, and we're still trying to find the right people at Microsoft to escalate this to, to say, well, give me the data.

And then AWS have some different issues going on. The way they run their systems, one of the things they've found is that if they disclose something about how they work, people will start leveraging it. You get this sort of gamifying thing: if there's an interface or a disclosed piece of information, people will optimize around it and start building on it.

You see this a lot with eBay. One of the reasons eBay's interface hasn't changed much over the years is that there are sellers who optimize around some weird feature of eBay and build a business around it, and every time eBay plans to change it, some seller is going to lose their business, right?

So if you over-expose the details of how you work, there's an arbitrage opportunity where somebody will build something on that, and if you change it, they get upset. That's one of the reasons AWS doesn't like saying how it works, because it would cause people to optimize,

Chris Adams: Yeah.

Adrian Cockcroft: optimize for the wrong things.

One example is the tape archive capability that AWS has. If you're thinking, I have lots of data sitting on disk, I should move it to tape, because that's a much lower carbon footprint, it is, except if you're in a tiny region that AWS has just set up, they haven't actually got tape there yet. They offer the same services, but they're actually just storing it to disk until there's enough volume for them to put in a tape unit and transfer it to tape.

They want the same interface, but the implementation is different. Now, if they exposed which regions this actually goes to disk in, you'd say, well, this is a high-carbon region, so I shouldn't store my data there. Which means it would never get enough volume to actually justify installing the tape.

So you get a negative feedback loop that's actually counterproductive. That's an example of why they don't want to tell you how much carbon every different service has: it could cause you to optimize for things that make you do the opposite of what's ultimately the right thing.

Chris Adams: Okay. So that's one of the arguments we see used for not disclosing at a per-service and per-region level. Because when you use, say, Amazon's carbon calculator, you get a display which broadly incentivizes you to change basically nothing; that's one thing we actually see, and that's different to, say, Google and Microsoft, who do provide service-level and region-level data. So one of the reasons they're trying to keep some of that information back is that there are all these second-order effects they're trying to avoid. That's one of the arguments people are using,

Adrian Cockcroft: That's the argument that they have, and it's something that's pervasive — it's not just related to carbon. This is something they've seen across lots of services: people will depend on an implementation, and they change the implementation frequently. Like, we're on, I dunno, the eighth or the ninth version of S3 — a total rewrite from scratch.

When I was there, I think they were up to the seventh or eighth version, and I knew somebody on the team that was building the next version. And this is tens of exabytes of storage that is migrated to a completely new underlying architecture every few years. If you depend upon the way it used to work, then you end up being suboptimal.

So there's some truth in that. However — and this is the example we were pointing at when I was at AWS — Microsoft and Google are releasing this data and we weren't, and there's no evidence of bad things

Chris Adams: Yeah. The sky hasn't fallen when they

Adrian Cockcroft: Yeah. So I think it would be just fine too. And they are gradually increasing the resolution.

When they first released the account-level information when I was there — we'd managed to get this thing out in 2021, 2022 — you had regions being continents, right? You just had Europe, Asia, and the Americas.

And you had S3, EC2, and "other",

Chris Adams: yeah.

Adrian Cockcroft: and you had it to the nearest tenth of a ton — a hundred kilograms.

So a bunch of people in Europe just got zero for everything and went, well, this is stupid. But actually, because of the way the model works — they were generating lots of energy to offset the carbon — it probably is zero, at least for scope 2.

Scope 2, for the market-based model.

Chris Adams: Where you count the green energy you've bought to offset the actual figure. Alright.

Adrian Cockcroft: Yeah. So what they've done in the last couple of years is they finally got a team working on it. There's a manager called Alexis Bateman, who I used to work with in the sustainability team, who's now managing this, and she's cranking stuff out — they've finally started releasing things. So the very latest release from AWS now goes down to per-region data.

And location-based figures have just been added alongside the market-based ones. So we finally have that.

Chris Adams: okay. Yeah.

Adrian Cockcroft: So this happened a few weeks ago. And they've added, I think, CloudFront — because it's a global CDN, it doesn't really live in a region, so they've separated CloudFront out. And they also changed the model, as I mentioned earlier, so that the carbon model now includes the supporting services that are required for you to use the thing.

So your Lambda functions: even if they're not running, you've still got a carbon footprint, because you need to have the Lambda control planes there, ready to run you. So you pay for a share of that. And then the question is, how do you calculate those shares? And it's probably, you know, dollar-based or something like that.

Some kind of usage-based thing.
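To make the allocation idea concrete, here is a purely illustrative sketch — this is not AWS's published method, and the spend figures are invented — of splitting a shared control plane's footprint across customers in proportion to what they spend:

```python
# Purely illustrative: split a shared service's carbon footprint across
# customers in proportion to spend. NOT AWS's published method; just a
# sketch of the kind of usage-based allocation described above.

def allocate_shared_footprint(total_kgco2e: float, spend: dict) -> dict:
    """Return each customer's share of a shared footprint, weighted by spend."""
    total_spend = sum(spend.values())
    return {name: total_kgco2e * s / total_spend for name, s in spend.items()}

# Hypothetical numbers: 1,000 kgCO2e of Lambda control-plane overhead,
# split across three customers by their monthly spend on the service.
print(allocate_shared_footprint(1000.0, {"a": 50.0, "b": 30.0, "c": 20.0}))
# -> {'a': 500.0, 'b': 300.0, 'c': 200.0}
```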

Chris Adams: Okay. Alright. I hadn't realized the location-based information was out there as well, actually.

Adrian Cockcroft: The location data and the model were the new thing, and they've now got this cadence where every few months they're getting something new out. They've clearly said they're going to do scope 3 — I know they're trying to do a real scope 3 calculation rather than a financially allocated scope 3.

So we could talk about that if you want, depending how much you wanna get into the weeds of this stuff. But anyway.

So what we ended up with in the Real-Time Cloud project was we figured out it wasn't really possible to get real-time energy statistics out of cloud providers, because the numbers just didn't exist.

It turns out the only place you can get real-time numbers is on things that are not virtualized. And the thing that people don't generally virtualize is the GPUs. So if you're using an Nvidia GPU, you can get a number out of it, which is the energy consumption of that GPU. So if you're working on AI-based workloads, the dominant energy usage is available to you, right at the source.
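For anyone who wants to try what Adrian describes: NVIDIA's NVML library — exposed in Python via the pynvml package — reports instantaneous power draw, which you can integrate into an energy figure. A minimal sketch, assuming a machine with an NVIDIA GPU and driver; the ten-second window and one-second interval are arbitrary choices:

```python
# Minimal sketch: sample GPU power via NVML and integrate into an energy
# estimate. Requires an NVIDIA GPU and driver; install with: pip install pynvml
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU on the machine

interval_s = 1.0
joules = 0.0
for _ in range(10):  # sample for roughly ten seconds
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML returns milliwatts
    joules += power_w * interval_s  # crude rectangle-rule integration
    time.sleep(interval_s)

print(f"~{joules:.0f} J ({joules / 3.6e6:.6f} kWh) over the sample window")
pynvml.nvmlShutdown()
```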

But the CPUs — because of the way virtualization works, the provider can't give you that information unless you're using what they call a bare-metal instance in the cloud, where you get access to the whole machine. So we gave up a bit on having real-time energy data. And also, the CNCF came up with a project called Kepler, which does good estimates, and it does a workload analysis for people running on Kubernetes.
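Kepler publishes its estimates as Prometheus metrics, so reading them back is an ordinary Prometheus query. A sketch, assuming a Prometheus server that is already scraping Kepler at the address below; both the URL and the metric name are assumptions to check against your own deployment, since Kepler's metric names have shifted between versions:

```python
# Sketch: read Kepler's per-container energy counters out of Prometheus.
# The endpoint and the metric name are assumptions — verify both against
# your own Kepler/Prometheus setup.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"  # hypothetical endpoint
# Joules used per container over the last hour:
query = "sum by (container_name) (increase(kepler_container_joules_total[1h]))"

resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    name = result["metric"].get("container_name", "unknown")
    joules = float(result["value"][1])
    print(f"{name}: {joules / 3.6e6:.4f} kWh")
```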

So we just did a big pointer over at that: if you want workload-level energy estimates, use Kepler. And we focused instead on trying to gather and normalize the metadata available on a region, so that you could make region-level decisions about where you want to deploy things, and understand why certain regions were probably more efficient than others — in terms of PUE, water usage, carbon, and the carbon-free energy percentage the cloud provider had, meaning how much local generation they had in that region.
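As an illustration of the region-level decision that table enables: once PUE, grid intensity, and carbon-free energy percentage are normalized per region, ranking candidate regions is a few lines of code. All the numbers below are invented — the real figures live in the project's published dataset — and the weighting is a deliberately crude assumption:

```python
# Sketch: rank candidate regions using the kind of normalized metadata the
# Real-Time Cloud project publishes. All numbers here are made up.
regions = [
    # (region, PUE, grid gCO2e/kWh, provider carbon-free energy %)
    ("region-north", 1.10, 50, 90),
    ("region-east", 1.25, 400, 60),
    ("region-south", 1.50, 700, 30),
]

def effective_intensity(pue: float, grid: float, cfe_pct: float) -> float:
    """Crude effective gCO2e/kWh: grid intensity scaled up by PUE, scaled
    down by the share of load the provider matches with carbon-free energy."""
    return grid * pue * (1 - cfe_pct / 100)

for name, pue, grid, cfe in sorted(regions, key=lambda r: effective_intensity(*r[1:])):
    print(f"{name}: ~{effective_intensity(pue, grid, cfe):.0f} gCO2e/kWh effective")
```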

So that was the table of data that we've produced and standardized, and we've put a 1.0 standard on it. The current activity there is to rewrite the doc to be basically standards-compliant, so that we can propose an ISO standard around it. And the other thing we're doing is talking to the FinOps Foundation, who come at this from the point of view of standardizing the way billing is done across cloud providers. They have all the cloud providers as members, all working on billing, and they're trying to extend that billing to include the carbon aspects of what's produced.

Chris Adams: So, we've done an interview with someone from FOCUS already. Like you mentioned before — the idea that Microsoft and Google have shared this kind of per-service-level information and the sky hasn't fallen — they've created something a bit like that, to almost list these different kinds of services. If I understand it, the GSF's Real-Time Cloud work might be like a carbon extension for some of that, because right now the FOCUS stuff doesn't have that much detail about what carbon is, or about the subtleties related to the non-cash things you might want to associate with the way you purchase cloud, for example.

Adrian Cockcroft: Yeah, so FOCUS is the name of the standard they've produced, and all the cloud providers have signed up to it. If you go to AWS's billing page, it talks about FOCUS and has a FOCUS-conformant schema. So the idea was that all the cloud providers would have the same schema for their billing — a great, obvious thing to do — and all the cloud providers have joined up to do that, which is fine.

Now, FOCUS has some proposals for sustainability data, but they are just proposals, for maybe the next version. They had a working group that looked at it, and the problem they run into — and we've looked into this deeply in our group, so we know why you can't do it — is this. What you'd really like is a billing record that says: you just used, you know, 10 hours of this instance type, and this is the carbon footprint of it.

And the problem is that number cannot be calculated at that point. That's what you'd intuitively like to have, but the carbon is not known at that time. You can generate the bill, because you know you've used 10 hours of the thing, but you can't know the energy consumption and the carbon intensity — those two numbers are not known for a while.

So you typically get the data a month or two later, and then you have to go back to your billing data. You could put a guess in there — things like the Cloud Carbon Footprint tool and other tools out there will just generate guesses for you — but they are guesses. And when you then go and get the real data from your cloud provider, the numbers will definitely be different, sometimes radically different.

So the question is: do you want an early guess, or do you want a real number, and what are you doing with that number? If what you're doing is rolling it up into an audit report for your CFO to go and buy some carbon credits at the end of the year, that's what the monthly reports are for.

Right? If you're a developer trying to tune a workload, that is useless information to you. That's what the Real-Time Cloud group was really trying to address: if you're a developer trying to make a decision about what you should be doing — calculating an SCI number, or understanding which cloud provider and which region has what impact —

that's the information you need to make a decision in real time about something. So the real-time aspect is not about, like, within milliseconds I need to know the carbon. It's: I need to know now, because I need to make a decision now.

Chris Adams: to make a forward looking decision

Adrian Cockcroft: Yeah. It's like I need to make a decision now, so what information do I have now?

Which is why we take the historical metadata they have for the regions and we project it into the current year — just trending and filling in the gaps — to say, this is our best guess for where you'd be if you needed to make a decision this year. And we've got some little code that automatically generates the estimate.
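That trending-and-gap-filling step is essentially a linear extrapolation over each region's yearly figures. A minimal sketch of the idea, with an invented PUE history — the project's actual code lives in its GitHub repo and may do something more sophisticated:

```python
# Sketch: project a region's metric (here PUE) into a future year by
# fitting a least-squares line to the published history. Numbers invented.
def project_linear(history: dict, target_year: int) -> float:
    """Fit a line through (year, value) points and evaluate it at target_year."""
    n = len(history)
    xs, ys = list(history), list(history.values())
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return y_mean + slope * (target_year - x_mean)

pue_history = {2020: 1.20, 2021: 1.18, 2022: 1.15, 2023: 1.13}  # invented
print(f"Projected 2025 PUE: {project_linear(pue_history, 2025):.3f}")
```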

Chris Adams: So that's at least useful for people to have an idea of what you might be using these two different kinds of data for. Maybe we could just unpack one last thing before we move on. One of the reasons you have this delay is basically because companies don't get the billing data themselves straight away, and they then need to go out and buy credits — this is for the market-based calculations.

So what you've said here is basically about carbon based on a market-based figure. But if we were to separate that out and look at something like location-based figures for electricity — which represent what's happening physically on the grid —

you plausibly could look at some of this stuff sooner. Is that the way you see it? Because I feel we're now at a point where there's a figure for the grid, but that's not necessarily the only figure you look at these days, because we're increasingly seeing people having different kinds of generation in the facility.

If you've got batteries, you might have charged them up when the energy's green or clean, and then you use that at a certain time. So there's another layer that you might need to take into account, right?

Adrian Cockcroft: Yeah, so there's a couple of different reasons why the data is delayed. You know, you're in Germany — I'm sure, with Germanic efficiency, you know exactly when you are going to get the information from your energy provider,

Chris Adams: They fax it to us. Yep. Yeah.

Adrian Cockcroft: and it will be nice and precise and high quality. Now, if you're operating a region in a developing nation —

not so much, right? There are bits of paper moving around, probably. There are random things happening; you don't quite know when. So if you are trying to produce a service that is a global summary across all regions, you are limited to the slowest region that you operate in, right? You take this distribution of how quickly you find out about the carbon intensity and the power usage of the energy supply for each region,

and whoever is slowest will determine it. And AWS operates regions in India and Indonesia and places like that, where, I don't know, maybe they're efficient, maybe they aren't. AWS has regions in more different countries — particularly in Asia — than Azure and Google have. But fundamentally, it's gonna take you a few months to gather your billing and carbon data to the point where it's accurate and not gonna change.

So then on top of that, you can say, I'm gonna buy some credits to offset that. And there are two different ways of doing credits. You can buy green energy — procure your energy from a supplier that says, okay, for this energy that we already generated, you can buy the credits later. So you can basically post-allocate it, and you can do that within the rules for up to a year afterwards.

So it comes to the end of December: okay, how much energy did we use, how much wasn't offset? I can buy energy credits from my energy suppliers to offset that. And the first thing you do is try and do it in-region, so that the generation is happening in the same grid your consumption was in. And then you get to Singapore and go, okay, we all give up on Singapore:

there isn't enough local energy that's green, so we're going to buy green energy anywhere we can, somewhere else, and do a global offset with it. Google's been doing that since 2017, I think — or whenever they said they were a hundred percent green, back in the day, a long time ago.

AWS has been a hundred percent offset since 2023. That's the mechanism they use, and it's documented in their disclosures: they do it on a region-by-region basis, and then they use global offsetting just to mop up whatever's left over at the end. And then there's the other kind — AWS does less of this, but is starting to do more — which is carbon offsetting, where you go and pay for a forest to not be cut down, or you grow some trees, or you sequester some carbon.

That's a little bit on the end that people are investing in to try and develop those markets, but most of it is buying green energy. Like, for the house here, I have an option to just subscribe to a different energy provider — it's called Central Coast Community Energy. I pay them slightly more, you know, an extra cent or so per kilowatt-hour, and I have a hundred percent green energy. And by the market method, I'm completely green here, right? So that's fine. But it's the same thing going on, because what I'm paying for is the green energy; I'm not paying for carbon.

I'm probably emitting carbon at night, certainly, but I'm generating more during the day, 'cause I've got some solar panels here. So it's that mechanism that's being developed, basically.
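The two accounting methods Adrian is contrasting come from the GHG Protocol's Scope 2 guidance, and the arithmetic is simple enough to sketch. All numbers below are invented: location-based multiplies consumption by the physical grid's average intensity, while market-based applies the residual-mix intensity only to whatever load isn't covered by contractual green supply such as a green tariff or RECs:

```python
# Sketch of GHG Protocol Scope 2 arithmetic. All numbers are invented.
consumption_kwh = 10_000.0
grid_intensity = 350.0          # gCO2e/kWh, physical local grid average
residual_mix_intensity = 420.0  # gCO2e/kWh, grid average minus claimed green supply
rec_covered_kwh = 9_000.0       # kWh matched by green-energy certificates

location_based_kg = consumption_kwh * grid_intensity / 1000
market_based_kg = (consumption_kwh - rec_covered_kwh) * residual_mix_intensity / 1000

print(f"Location-based: {location_based_kg:.0f} kgCO2e")  # 3500 — what the grid saw
print(f"Market-based:   {market_based_kg:.0f} kgCO2e")    # 420  — after certificates
```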

Chris Adams: Okay, thank you for that. Alright, Adrian, I realize we're coming up to time. I did have a bunch of questions about what's making this stuff harder to track — because we're now moving to a world of grid-responsive data centers, for example, and we're seeing cases like the Colossus data center in Memphis running primarily on gas turbines right now, which massively complicates some of this. But we won't have time to do those subjects justice.

We did say at the beginning of this podcast that there was some stuff going on in the house, and I do wanna come back to that. In addition to doing this work with the Green Software Foundation, you've been exploring and playing around with some of these tools and technologies, finding out if there's a there there. When I was doing some research for this podcast, I read about this thing called the House Consciousness System. You've been working as a technologist for, you know, at least 40 years now, and I see you messing around with things like generative AI, doing things that I'm not expecting people to do.

So maybe you could talk a little bit about this HCS, or whatever this project is, because I found it quite interesting and I'd like to hear your take on it, basically.

Adrian Cockcroft: Yeah. So the history of this goes back a while. Every now and again I do some programming, and a year or two ago I wanted to do some programming in the R language, which is a statistical language I use occasionally and keep forgetting the syntax of. So I thought, well, maybe I can ask the AI to remind me of the syntax.

And the AI just started writing code that worked. So I went, oh, this is cool — I can just tell it what I want and it'll write the code. And this was very early days, when most people weren't doing that kind of thing. And then, more recently, I wanted to write some code in Python.

I never really wrote code in Python — I can sort of read it, but I can't write it. So I started telling it what I wanted it to do, and it wrote the code for me and got it working pretty well. So I was using it to generate code there. And this is mostly just because I don't have the patience and time to be a full-on developer, but these are the things that it's good at.

So one of the things I tend to do when there's a new tool around is figure out what it can do and what it can't do. Think of the early days of cloud. We built Netflix on an extremely rudimentary set of cloud services, and it was just about possible to make it work given the services we had.

Most people today would look at that and say, we wouldn't even start trying to build anything on that, right? But we made it work, and we made it work reliably. And that became like a template, and it caused other people to try and figure out how to do it. There's a lot more capability there now. So we're in that early-stage period where a bunch of people say, well, this'll never work,

and there are people figuring out how to make it work. So — where shall I go next? Let's talk about the idea. Years ago — I mean, I have lots of IoT devices around my house. I like buying random automation gadgets, and some of them talk to each other, but I have too much random stuff, and I have an iPad with lots of icons on it, and I have to know which one does what.

Right? And it's annoying. If I'm not home and my wife's trying to do something, she can't figure out which one — she knows some of them, but she doesn't know how this stuff works. Other visitors to the house don't know how to do things. And a lot of people are in that environment.

I was thinking about this a few years ago when I was at Amazon, and I was talking to the Alexa team, because they have house automation kind of stuff: why don't you build something more general that knows what's going on in the house? It's sort of like a central consciousness — because the real thing about consciousness is that it's an observability system.

I regard consciousness as human observability. And part of the definition of consciousness, for me, is that you have to understand what unconscious means, right? If your definition of consciousness doesn't include unconsciousness, then you haven't picked the right thing. So it's the thing that goes away when you're asleep,

right?

'cause you're unconscious when you're asleep. So anything that goes away when you're asleep is consciousness, to me. And this isn't the standard definition — people have big arguments about it — but it's a good working definition for me, because what it means is that it's the thing you can talk to, to discover its state.

So if somebody is conscious, you can ask them questions and find out what's going on with them. In that sense, what I want is for my house to have a memory of all the things that have happened to it. I want it to look at the weather and remind me there's a storm coming — have you tidied up outside so things don't blow away? — all that stuff, right?

But right now, the IoT devices live in the moment — let's see, your temperature is 73 degrees — and they sort of have a schedule for changing stuff, but they don't really have a memory, and they aren't talking to each other. So I had this idea: hey, why doesn't the Alexa team build something like this?

I found somebody in that team, and they never built it and weren't interested in it. So I had this idea kicking around. And then a few weeks ago I saw that Reuven Cohen — R-E-U-V-E-N, C-O-H-E-N — on LinkedIn is just posting and posting about his agent swarm work.

He's building amazing stuff, and I thought, does this really work? So I wanted to play around with it, and I decided I needed a new idea to try and build — something fairly aggressive. So I wrote up a rough idea of what this house consciousness would be.

And then I got together with Reuven. He showed me how to start an agent swarm, with the CLI and Anthropic's Claude Code service, to just go build it. And it wrote about 150,000 lines of code in a day or so.

Chris Adams: So when you talk about a swarm of agents, it's basically like a model in a loop that's writing some code, and there are lots of them working together — that's what a swarm of agents is in this case, yeah?

Adrian Cockcroft: Yeah, but they aren't all writing code. In the latest version — he calls it a hive — there's sort of a queen bee who is just managing the hive. It's basically like there's an AI acting as a development manager,

a dev manager. And the dev manager picks the specializations it wants, so it starts a selection of agents: a QA agent, a DevOps agent for deployment, a spec-reading agent, a researching agent.

They basically specialize. What happens with AI coding is that if you just use one AI and tell it all the things you want to do, it sort of gets confused, because you've asked it to do lots of things at once. By giving each of them a one-track-mind specialization and an ability to communicate, you get dramatically better results out of it.

So that's the aha moment, if you like. But what it means is that to manage this swarm of agents, you need basically product manager and line manager skills, not developer

skills, right? You need a bit of developer skill to read the code and see if it works. I've switched from writing in Python —

the first thing I tried building was more just a "can it build anything at all?" test, and it built a thing, and it ran, but it didn't really work, because I didn't specify what I wanted well enough. I'm now building an iPhone app in Swift, and I absolutely cannot write a single line of Swift.

I have no idea — and it's writing the code. I'm telling it to do code reviews and run tests and things. So it's actually coding and testing and building it itself, and building a UI design and a plan. And I've now got a little obsessed with building myself this thing.

And you basically need a Max plan to do this, which is about a hundred dollars a month. Once I finish building this, I'll wind that back down to the usual $20-a-month kind of level. And yeah, from my point of view, you can use AI to do a bunch of bad things — generate fake news and adverts and things — but I'm actually using it to develop something that I always wanted to have. There's no real business model for this thing, other than I want it to exist.

I don't need a business model. I can spend a few hundred dollars — which is sort of going-out-to-dinner money — on getting it to build something, which I'm sharing on GitHub. You can go and have a look at the repo. The original Python repo is there, and it runs, but it doesn't really do what I intended — it doesn't really work, because I hadn't thought it through. I'm now working front-end backwards:

I'm doing as much functionality as I can in this iOS app, and then I'll build the service to go behind it. I'll revisit the Python repo when I get to it. So that's what I'm doing — just happily using this new tool to do something that will make me happy, and potentially be useful for other people if they feel like it. But I don't care whether anyone else uses it.

That's sort of my approach to figuring out new tools and finding where they work and where they don't work.

Chris Adams: Okay, so let me paraphrase some of that, if I can. I'm not using AI all that much at the moment, but I am dabbling — I've been messing around with Claude and stuff like that to ask questions. Okay, I'm in Europe, so we use Mistral, 'cause it's the French equivalent, for example.

But one thing you said that was significant: rather than me using one thing in serial, it's happening concurrently. So there are lots of different things all burning away,

Adrian Cockcroft: They typically run like five to eight of these agents in parallel, and they're coordinating and communicating. They make to-do lists, they have different specializations — it's basically like managing a team of developers.
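None of the specific agent-swarm tooling discussed here is shown below, but the core pattern Adrian describes — several single-purpose agents, each with its own isolated context, dispatched by a manager — can be sketched generically. The call_model function is a stand-in for whatever LLM API you would actually use:

```python
# Toy sketch of the "hive" pattern: specialized agents with isolated
# contexts, routed by a coordinator. call_model() is a placeholder for a
# real LLM call (Claude, Gemini, etc.); here it just echoes the task.
def call_model(system_prompt: str, history: list, task: str) -> str:
    return f"[{system_prompt.split(',')[0]}] handled: {task}"  # stub

class Agent:
    def __init__(self, role: str):
        self.system_prompt = f"You are the {role} agent, focus only on {role} work."
        self.history = []  # context is isolated per agent: one track per mind

    def run(self, task: str) -> str:
        result = call_model(self.system_prompt, self.history, task)
        self.history.append(result)
        return result

# The "queen bee": route each task to the one agent specialized for it.
hive = {role: Agent(role) for role in ["spec-reader", "coder", "qa", "devops"]}
plan = [("spec-reader", "summarize requirements"), ("coder", "implement feature"),
        ("qa", "write and run tests"), ("devops", "deploy to staging")]
for role, task in plan:
    print(hive[role].run(task))
```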

Chris Adams: Ah, and that might be why you have people saying, what the hell are these vibe coders doing? Because in many cases it's a new set of skills. It's not necessarily just "can you read the code" — I mean, it helps to be able to read Python, in the same way that if you're reading the output from a chatbot, you'll probably tweak it to make it sound like a human rather than an AI model.

But there's also a bunch of other skills you need, like spec writing and all these other things that might typically live with a product manager rather than a developer, for example — or with someone in a different role.

Adrian Cockcroft: It's much more product management than anything else. You have to have a clear idea of what you want, figure out what the user flows through it are, what functionality you want, and how you want it arranged. Then it will build whatever you tell it to build — and it will add on things you didn't ask for.

So the other thing is to do it very incrementally and check every time to see what it did build. You ask for this, and it does like two or three times as much, and you go, I want to keep these things — that was a good idea — no, I don't want that. And if you're working with a team of engineers and you say, I want them to build a thing, they'll come back with extra stuff that they thought you might want.

So it's actually normal. I mean, this is how you manage a team of engineers to go build a new thing, right? In that sense, for anybody that's managed a development team, this actually feels very familiar. If you're a developer on one of those teams, it doesn't feel very familiar. So there's this sort of weird

thing where we've brought it up a level into management, but you still kind of need specialist experts. Like, I'm stuck in a whole bunch of Apple stuff to do with iOS that is nothing to do with the AI — it's just stuff that Apple makes difficult

Chris Adams: That would explain what you've been posting! Okay, that adds context, because I've seen you posting things like, how do I know what to ask Apple now? And I was like, why is he asking that? This is putting two and two together for me now, Adrian. Thank you. So there was one last thing I was gonna ask before we wrap up.

You mentioned that you're doing this all locally on your own computer — or are you running it in an environment in the cloud, like a codespace with GitHub? Where is that happening?

Adrian Cockcroft: Yeah. So when you have an agent swarm, it can do anything — you basically let it free-run — so you need to put it in a box, so that it doesn't delete your computer or do random things. So you go to GitHub, and it's free for up to some amount, which is plenty for me,

having been doing this for a few weeks so far. You create a codespace, which is basically an instance, I guess running on Azure, which is like a little container. It shuts down when it's been idle for a few minutes. But basically, the only thing it can really do is run against the repo

you opened it on. So the AI can sit there and do anything it wants in a copy of that repo, and then it can push back to the repo. I usually tell it: when you finish this work, just push it back to the main repo on GitHub — because when I'm working on iOS, I have to pull it back out into Xcode on my machine to build it.

So it's a safe box. And I wouldn't run an agent swarm today on my own machine. You could, but it's sort of dangerous if it gets a bit carried away.

Chris Adams: That's what I was wondering about. The idea of having one rogue agent at work on my own personal laptop feels a bit weird, but if I've got a whole bunch of them, that's many times more terrifying, Adrian.

Adrian Cockcroft: The option on Claude is --dangerously-skip-permissions, or something like that.

Chris Adams: Okay. All right.

Adrian Cockcroft: So that is the mode we're running Claude in. And on Google Gemini, I think it's called YOLO mode.

Chris Adams: Yeah.

Adrian Cockcroft: Right. So when you're running a swarm in that mode, you can't just sit there and say, yes, it's okay to do this,

yes, it's okay to delete that file, whatever — because it's tidying up, it's moving things around, it's writing code fragments and running them and deleting them again. It's doing all these things. But you don't want it to just do, like, an rm -rf.

Chris Adams: to just like hose your

Adrian Cockcroft: I once had somebody call me up: what does "cannot stat cwd" mean?

Oh, it means you're in a directory that doesn't exist. "Now I'm at my home directory. What did you just type?" "Well, I was cleaning up some dot files, so I did rm -r .*"

Chris Adams: Oh dear.

Adrian Cockcroft: Yeah — and .* matches the parent directory, .., so he recursively deleted his parent directory.

Chris Adams: Oh geez. He just like wiped his entire machine.

Adrian Cockcroft: Yeah. At the end of the day, tidying up before he went home. And I said, well, you've just lost your day's work —

luckily, we have a backup. I was the guy that ran the backups. This was back when I was,

Chris Adams: Okay. I'm glad you mentioned this final story, because it sounds like you're using a codespace almost... whereas typically you might use it for convenience, you're using it for safety — like, I want to minimize the blast radius of these agents running amok inside my system. And also, I guess, conveniently, surely this should be something where you can work out the environmental footprint. Because if it's a billable tool — if GitHub knows to bill me for all those minutes — they should be able to tell me the carbon footprint of this as well, right?

I mean, there could be some complexity with using Azure, but I should have something indicative at least.

Adrian Cockcroft: There's a little console that tells you how many hours you've used, and if you use more than a certain amount, they start billing you for it. But you get some number of CPU hours per month, which is enough for me. So far I haven't hit that limit, and I've been playing around fairly intensively for a few weeks.

But then there's Claude itself. So, one last thing. We've got the three main cloud providers, and we know roughly what they're doing. We now have Oracle, where I'm trying to find somebody to talk to, to tell me what their carbon footprint is. CoreWeave is probably up-and-coming as a new big one.

And then you have all the stuff that Anthropic or OpenAI or whoever are running — where are they running? So we've now got some very large sources of carbon that we need to get accounting data for, and as far as I can tell, they are not publishing that data currently. I'd say that's the next phase.

We can measure an individual GPU pretty easily, but the GPU services we're using are not being allocated — we're not getting data from them. So that's probably where I should wrap up. That's where we are now: I need to know how much carbon I'm actually generating by telling Claude to build this thing for my house.

I dunno. No clue.

Chris Adams: I think that feels like a very useful rallying cry for anyone listening to this podcast who wants to see if it's possible to instrument any of these tools. Adrian, thank you so much for giving us the time to chat. I really enjoyed nerding out with you and going really deep into the weeds.

If people are curious about any of the projects you've mentioned, we're gonna share links in the show notes, but where would you direct people's attention? Is there a URL or a website you'd point people to?

Adrian Cockcroft: Well, there's the Green Software Foundation's Real-Time Cloud website and GitHub site, and my own: adrianco on GitHub — you don't have to spell Cockcroft, you just have to get the first two letters. That's where you can follow along with my random rants and things like that. And I have a Medium account as well, @adrianco,

where I have my blog. And if you want to chat to an AI copy of me, there's a service called Soopra — we'll put a link to that, soopra.ai, and I think it's soopra.ai/cockcroft. I uploaded all my blog posts into a persona, so you can go ask it, and it comes up with sometimes-reasonable answers to questions about microservices and Netflix and things like that.

Chris Adams: I wanna ask whether you ever use that to fob off people who email you, but I think I'll have to save that for another day.

Adrian Cockcroft: Mostly, when somebody sends me a list of questions for a podcast, if I have time, I feed them into it. I didn't have time.

Chris Adams: Okay. Alright. Alright, Adrian, thank you so much for this and I hope you have a lovely weekend. All right, take care mate.

Adrian Cockcroft: All right. Cheers. Thank you.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.



Hosted on Acast. See acast.com/privacy for more information.

3 months ago
1 hour 18 minutes 37 seconds

Environment Variables
Backstage: Software Standards Working Group SCI
In this Backstage episode of Environment Variables, podcast producer Chris Skipper highlights the Green Software Foundation’s Software Standards Working Group—chaired by Henry Richardson (WattTime) and Navveen Balani (Accenture). This group is central to shaping global benchmarks for sustainable software. Key initiatives discussed include the Software Carbon Intensity (SCI) Specification, its extensions for AI and the web, the Real-Time Energy and Carbon Standard for cloud providers, the SCI Guide, and the TOSS framework. Together, these tools aim to drive emissions reduction through interoperable, transparent, and globally applicable standards. 

Learn more about our people:
  • Chris Skipper: LinkedIn | Website
  • Navveen Balani: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Software Standards Working Group [00:18]
  • GSF Directory | Projects [01:06]
  • https://wiki.greensoftware.foundation/proj-mycelium [03:57]
  • Software Carbon Intensity (SCI) Specification | GSF [04:18] 
  • Impact Framework [08:09]
  • Carbon Aware SDK [09:11]
  • Green Software Patterns [09:32]
  • Awesome Green Software | GitHub [10:11]
  • Software Carbon Intensity for AI [10:58]
  • Software Carbon Intensity for Web [12:24]

Events:
  • Developer Week 2025 (July 3 · Mannheim) [13:20]
  • Green IO Munich (July 3-4) [13:35]
  • EVOLVE [25]: Shaping Tomorrow (July 4 · Brighton) [13:51]
  • Grid-Aware Websites (July 6 at 7:00 pm CEST · Amsterdam) [14:03]
  • Master JobRunr v8: A Live-Coding Webinar (July 6 · Virtual) [14:20]
  • Blue Angel for Software / Carbon Aware Computing (July 9 at 6:30 pm CEST · Berlin) [14:30]
  • Shaping Progress Responsibly—AI and Sustainability (July 10 at 6:00 pm CEST · Frankfurt am Main) [14:41]
  • Green Data Center for Green Software (July 15 at 6:30 pm CEST · Hybrid · Karlsruhe) [14:52]


If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Chris Skipper: Welcome to Backstage, the behind the scenes series from Environment Variables, where we take a look at the Green Software Foundation's key initiatives and working groups. I'm the producer and host, Chris Skipper. Today we are shining a spotlight on the Green Software Foundation's Software Standards working group. This group plays a critical role in shaping the specifications and benchmarks that guide the development of green software.

Chaired by Henry Richardson, a senior analyst at WattTime, and Navveen Balani, Managing Director and Chief Technologist for Technology Sustainable Innovation at Accenture, the group's mission is to build baseline specifications that can be used across the world, whether you're running systems in a cloud environment in Europe or on the ground in a developing country.

In other words, the Software Standards Working Group is all about creating interoperable, reliable standards, tools that allow us to measure, compare, and improve the sustainability of software in a meaningful way.

Some of the major projects they lead at the Green Software Foundation include the Software Carbon Intensity Specification, or SCI, which defines how to calculate the carbon emissions of software; the SCI for Artificial Intelligence, which extends this framework to cover the unique challenges of measuring emissions from AI workloads; the SCI for Web, which focuses on emissions from websites and front end systems;

the Realtime Energy and Carbon Standard for Cloud Providers, which aims to establish benchmarks for emissions data and cloud platforms;

the SCI Guide, which helps organizations navigate energy, carbon intensity, and embodied emissions methodologies,

and the Transforming Organizations for Sustainable Software, or TOSS framework, which offers a broader blueprint for integrating sustainability across business and development processes.

Together these initiatives support the foundation's broader mission to reduce the total change in global carbon emissions associated with software by prioritizing abatement over offsetting, and building trust through open, transparent, and inclusive standards. Now for some recent updates from the working group.

Earlier this year, the group made a big move by bringing the SCI for AI project directly into its core focus. As the world turns more and more to artificial intelligence, figuring out how to measure AI's energy use and emissions footprint is becoming a priority. That's why they've committed to developing a baseline SCI specification for AI over the next few months, drawing on insights from a recent Green AI committee workshop and collaborating closely with experts across the space.

There's also growing interest in extending the SCI framework beyond carbon. In a recent meeting, the group discussed the potential for creating a software water intensity metric, a way to track water usage associated with digital infrastructure, especially data centers. While that comes with some challenges, including limited data access from cloud providers, it reflects the working group's commitment to looking at sustainability from multiple environmental angles.

To help shape these priorities, they've also launched a survey across the foundation, which collected feedback from members. Should the group focus more on Web and mobile technologies, which represent a huge slice of the developer ecosystem? Should they start exploring procurement and circularity? What about real-time cloud data or hardware-software integration?

The survey aims to get clear answers and direct the group's resources more effectively. The group also saw new projects take shape, like the Immersion Cooling Specifications, designed to optimize cooling systems for data centers, and the Mycelium project, which is creating a standard data model to allow software and infrastructure to better talk to each other, enabling smarter energy aware decisions at runtime.

So that's a brief overview of the software standards working group. A powerhouse behind the standards and specs that are quietly transforming how the world builds software. Now let's explore more of the work that the Software Standards Working Group is doing with the software Carbon Intensity Specification, the SCI. A groundbreaking framework designed to help developers and organizations calculate, understand, and reduce the environmental impact of their software.

The SCI specification offers a standardized methodology for measuring carbon intensity, empowering the tech industry to make more informed decisions in designing and deploying greener software systems. For this part of the podcast, we aimed some questions at Navveen Balani from Accenture, one of the co-chairs of the Software Standards Working Group.

Navveen rather graciously provided us with some sound bites as answers. 

 

Chris Skipper: My first question for Navveen was about the SCI specification and its unique methodology.

The SCI specification introduces a unique methodology for calculating carbon intensity using the factors of energy efficiency, hardware efficiency, and carbon awareness. Can you share more about how this methodology was developed and its potential to drive innovation in software development?

Navveen Balani: Thank you, Chris. The Software Carbon Intensity specification was developed to provide a standardized, actionable way to measure the environmental impact of software. What makes it unique is its focus on three core levels: energy efficiency, hardware efficiency, and carbon awareness. Energy efficiency looks at how much electricity a piece of software consumes to perform a task — so writing optimized code, minimizing unnecessary processing, and improving performance all contribute. Hardware efficiency considers how effectively the software uses the infrastructure it runs on — getting more done with fewer resources. And carbon awareness adds a critical layer by factoring in when and where software runs.

By understanding the carbon intensity of electricity grids, applications can shift workloads to cleaner energy regions or time windows. The methodology was shaped through deep collaboration within the Green Software Foundation, involving practitioners, academics, and industry leaders from member organizations. It was designed to be not only scientifically grounded, but also practical, measurable, and adaptable across different environments.

What truly sets SCI apart and drives innovation is its focus on reduction rather than offsets. The specification emphasizes direct actions that teams can take to lower emissions, like optimizing compute usage, improving code efficiency, or adopting carbon-aware scheduling. These aren't theoretical ideas — they're concrete, easy-to-implement practices that can be embedded into the existing development lifecycle. So SCI is more than just a carbon metric. It's a practical framework that empowers developers and organizations to build software that's efficient, high performing, and environmentally responsible by design.
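For reference, the SCI specification expresses this as a rate: SCI = ((E × I) + M) per R, where E is energy consumed, I is the carbon intensity of that energy, M is the embodied emissions amortized to the software's share of the hardware, and R is the functional unit (per user, per API call, and so on). A minimal sketch with invented numbers:

```python
# SCI = ((E * I) + M) / R, per the GSF Software Carbon Intensity spec.
# The input values below are invented, purely for illustration.
def sci(energy_kwh: float, intensity_gco2e_per_kwh: float,
        embodied_gco2e: float, functional_units: float) -> float:
    """Software Carbon Intensity in gCO2e per functional unit."""
    return (energy_kwh * intensity_gco2e_per_kwh + embodied_gco2e) / functional_units

# 120 kWh at 400 gCO2e/kWh, plus 5,000 g of amortized embodied carbon,
# spread over 1,000,000 API calls -> gCO2e per API call.
print(f"{sci(120, 400, 5000, 1_000_000):.3f} gCO2e per call")  # 0.053
```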

Chris Skipper: The SCI encourages developers to use granular, real world data where possible. Are there any tools or technologies you'd recommend to developers and teams to better align with the SCI methodology and promote carbon aware software design?

Navveen Balani: Absolutely.

One of the most powerful aspects of the SCI specification is its encouragement to use real-world, granular data to inform decisions, and there are already a number of tools available to help developers and teams put this into practice. A great example is the Impact Framework, which is designed to make the environmental impact of software easier to calculate and share. What's powerful about it is that it doesn't require complex setup or custom code. Developers simply define their system using a lightweight manifest file, and the framework takes care of the rest — calculating metrics like carbon emissions in a standardized, transparent way. This makes it easier for teams to align with the SCI methodology and track how their software contributes to environmental impact over time.

Then there's the Carbon Aware SDK, which enables applications to make smarter decisions about when and where to run based on the carbon intensity of the electricity grid. This kind of dynamic scheduling can make a significant difference, especially at scale.

There's also a growing body of green software patterns available to guide design decisions. The Green Software Foundation has published a collection of these patterns, offering developers practical approaches to reduce emissions by design. In addition, cloud providers like AWS, Microsoft Azure, and Google Cloud are increasingly offering their own sustainability-focused patterns and best practices, helping teams make cloud-native applications more energy efficient and carbon aware. And for those looking to explore even more, the Awesome Green Software repository on GitHub is a fantastic curated list of tools, frameworks, and research. It's a great place to discover new ways to build software that's not only efficient, but also environmentally conscious.

So whether you're just starting or already deep into green software practices, there's a growing ecosystem of tools and resources to support the journey. And the SCI specification provides the foundation to tie it all together.
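The "when and where" shifting Navveen describes reduces to comparing carbon-intensity forecasts and deferring flexible work to the cleanest window. This sketch hand-rolls that selection with invented forecast data; in practice the forecast would come from the Carbon Aware SDK or a grid-data API such as WattTime or Electricity Maps:

```python
# Sketch: pick the cleanest hour within a deadline to run a deferrable job.
# Forecast values are invented; real ones would come from the Carbon Aware
# SDK or a grid-data provider.
from datetime import datetime, timedelta

now = datetime(2025, 7, 3, 12, 0)
forecast = [  # (window start, gCO2e/kWh) — invented values
    (now + timedelta(hours=h), intensity)
    for h, intensity in enumerate([420, 390, 310, 180, 150, 210, 340, 400])
]

deadline = now + timedelta(hours=8)
candidates = [(t, i) for t, i in forecast if t <= deadline]
best_start, best_intensity = min(candidates, key=lambda c: c[1])
print(f"Run job at {best_start:%H:%M} (~{best_intensity} gCO2e/kWh)")
```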

Chris Skipper: Looking ahead, what are the next steps for the software standards working group and the SCI specification? Are there plans to expand the scope or functionality of the specification to address emerging challenges in green software?

Navveen Balani: Looking ahead, the Software Standards working group is continuing to evolve the SCI specification to keep pace with the rapidly changing software landscape. And one of the most exciting developments is the work underway on SCI for AI. While the existing SCI specification provides a solid foundation for measuring software carbon intensity, AI introduces new complexities.

Especially when it comes to defining what constitutes the software boundary, identifying appropriate functional units and establishing meaningful measurements for different types of AI systems. This includes everything from classical machine learning models to generative AI and emerging AI agent-based workloads.

To address these challenges, the SCI for AI initiative was launched. It's a focused effort hosted through open workshops and collaborative working groups to adapt and extend the SCI methodology specifically for AI systems. The goal is to create a standardized, transparent way to measure the carbon intensity of AI workloads while remaining grounded in the same core principles of energy efficiency, hardware efficiency, and carbon awareness.

Beyond AI, there are also efforts to extend the SCI framework to other domains, such as SCI for Web, which focuses on defining practical measurement boundaries and metrics for Web applications and user-facing systems. The broader aim is to ensure that whether you're building an AI model, a backend service, or a web-based interface, there's a consistent and actionable way to assess and reduce its environmental impact. So the SCI specification is evolving not just in scope, but in its ability to address the unique challenges of emerging technologies. It's helping to create a more unified, measurable, and responsible approach to software sustainability across the board.

 

Chris Skipper: Thanks to Navveen for those insightful answers. Next, we have some events coming up in the next few weeks.

First, starting today, July 3rd, in Mannheim, we have Developer Week 2025: get sustainability-focused talks during one of the largest software developer conferences in Europe. Next we have Green IO Munich, a conference powered by Apidays, happening on the 3rd and 4th of July: get the latest insights from thought leaders in tech sustainability and hands-on feedback from practitioners scaling green IT.

In the UK, in Brighton, we have EVOLVE [25]: Shaping Tomorrow, which is happening on July the 4th. Explore how technology can drive progress and a more sustainable digital future.

Next up, on July the 8th from 7:00 to 9:00 PM CEST in Amsterdam, we have Grid-aware Websites, a new dimension in sustainable web development, hosted by the Green Web Foundation, where Fershad Irani will talk about the Green Web Foundation's latest initiative, Grid Aware.

Then next Wednesday, July the 9th, there's a completely virtual event: Master JobRunr v8, a live-coding webinar. Sign up via the link below.

Then also on Wednesday, on the 9th of July in Berlin, we have the Green Coding Meetup, Blauer Engel, for software/carbon aware computing, happening from 6:30 PM.

Then on Thursday, July the 10th, from 6:00 PM to 8:00 PM CEST, we have Shaping Progress Responsibly — AI and Sustainability, in Frankfurt.

Then finally on Tuesday, July the 15th, we have a hybrid event hosted by Green Software Development, Karlsruhe in Karlsruhe, Germany, which is entitled Green Data Center for Green Software, Green Software for Green Data Center.

Sign up via the link below.

So we've reached the end of this special backstage episode on the Software Standards Working Group and the SCI Project at the GSF. I hope you enjoyed the podcast. As always, all the resources and links mentioned in today's episode can be found in the show notes below. If you are a developer, engineer, policy lead, or sustainability advocate, and you want to contribute to these efforts, this group is always looking for new voices.

Check out the Green Software Foundation website to find out how to join the conversation. And to listen to more episodes about green software, please visit podcast.greensoftware.foundation and we'll see you on the next episode. Bye for now.




Hosted on Acast. See acast.com/privacy for more information.

4 months ago
16 minutes 6 seconds

Environment Variables
Environment Variables Year Three Roundup
It’s been three years of Environment Variables! What a landmark year for the Green Software Foundation. From launching behind-the-scenes Backstage episodes, to covering the explosive impact of AI on software emissions, to broadening our audience through beginner-friendly conversations; this retrospective showcases our mission to create a trusted ecosystem for sustainable software. Here’s to many more years of EV!

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Anne Currie: LinkedIn
  • Chris Skipper: LinkedIn
  • Pindy Bhullar: LinkedIn
  • Liya Mathew: LinkedIn
  • Asim Hussain: LinkedIn
  • Holly Cummins: LinkedIn
  • Charles Tripp: LinkedIn
  • Dawn Nafus: LinkedIn
  • Max Schulze: LinkedIn
  • Killian Daly: LinkedIn
  • James Martin: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:

  • Backstage: TOSS Project (02:26)
  • Backstage: Green Software Patterns (04:51)
  • The Week in Green Software: Obscuring AI’s Real Carbon Output (07:41)
  • The Week in Green Software: Sustainable AI Progress (09:51)
  • AI Energy Measurement for Beginners (12:57)
  • The Economics of AI (15:22)
  • How to Tell When Energy Is Green with Killian Daly (17:47)
  • How to Explain Software to Normal People with James Martin (20:29)

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:
Chris Skipper: Welcome to Environment Variables from the Green Software Foundation. The podcast that brings you the latest in sustainable software development has now been running for three years.

So that's three years of the latest news in green software, talking about everything from AI energy through to the cloud, and its effect on our environment and how we as a software community can make things better for everybody else.

This past year Environment Variables has truly embodied the mission of the Green Software Foundation, and that's to create a trusted ecosystem of people, standards, tools, and best practices for creating and building green software. Now this episode's gonna feature some of the more key episodes that we did over the last year.

We're gonna be looking at a wide variety of topics, and it's going to be, hopefully, a nice journey back through both the timeline of the podcast and the landscape of green software over the last year — how it has dramatically changed, not only due to the dramatic rise in the use of AI, amongst other things, but also due to the fantastic ideas that people have brought to the table in order to try and solve the problem of decarbonizing software. So without further ado, let's dive in to the first topic.


Chris Skipper: First, we brought about a new change in the way the podcast was structured. A new type of episode called Backstage.

Backstage is basically a behind-the-scenes look at the Green Software Foundation's internal projects and working groups. It's a space for our community to hear directly from project leaders, share the wins and lessons learned, and reinforce trust and transparency, which is one of the core tenets of the Green Software Foundation Manifesto.

Now, there were a bunch of great projects that were featured over the last year. We're gonna look at two specifically.

In our first backstage episode, we introduced the TOSS project. TOSS stands for Transforming Organizations for Sustainable Software, and it's led by the fantastic Pindy Bhullar. This project aims to embed sustainability into business strategy and operations through a four pillar framework.

It's a perfect example of how the foundation operationalizes its mission to minimize emissions by supporting organizations on their sustainability journey.

Let's hear the snippet from Pindy explaining these four pillars.

Pindy Bhullar: Transforming Organizations for Sustainable Software is what the acronym TOSS stands for. Businesses will be able to utilize the TOSS framework as a guide to lay the groundwork for managing change and also improving software operations in the future. Software practices within organizations can be integrated with sustainability in a cohesive and agile manner, rather than addressing green software practices in an isolated approach.

For a company to fully benefit from sustainable transformation of their software development processes, we need to review all aspects of technology. The TOSS framework is designed to be embedded across multiple aspects of a business's operations. Dividing the TOSS framework along four pillars has allowed for simultaneous top-down and bottom-up reinforcement of sustainable practices, as well as the integration of new tools, processes, and regulations that emerge over time.

The four pillars aim to foster a dynamic foundation for companies to understand where to act now, to adjust later, and to expand within an organization's sustainable software transformation. The four pillars are strategy, implementation, operations, and compliance and regulations, and within each of the pillars we have designed a decision tree that will guide organizations in transforming their software journey.

Chris Skipper: Some fantastic insights from Pindy there, and I'm sure you'll agree the TOSS project has applicability beyond just software development. It's one of those projects that's really gonna grow exponentially in the next few years. Next up, we have Green Software Patterns. The Green Software Patterns project is an open source initiative designed to help software practitioners reduce emissions by applying vendor neutral best practices. Guests Franziska Warncke and Liya Mathew, project leads for the initiative, discussed how organizations like Aviva and MasterCard have successfully integrated these patterns to enhance software sustainability. They also explored the rigorous review process for new patterns, upcoming advancements such as persona based approaches, and how developers and researchers can contribute to the project.

One thing Backstage really highlights is that there are so many projects going on at the GSF, and we actually need more people to get involved. So if you are interested in getting involved, please visit greensoftware.foundation to find out more. Let's hear now from Liya Mathew about the Green Software Patterns project.

Liya Mathew: One of the core and most useful features of patterns is the ability to correlate with the Software Carbon Intensity specification. Think of it as a bridge that connects learning and measurement. When we look through the existing catalog of patterns, one essential thing that stands out is their adaptability.

Many of these patterns not only align with sustainability, but also coincide with security and reliability best practices. The beauty of this approach is that we don't need to completely rewrite a software architecture to make it more sustainable. Small actions like caching static data or providing a dark mode can make a significant difference.

These are simple, yet effective steps that can take us a long way towards sustainability. Also, we are nearing the graduation of Patterns V1. This milestone marks a significant achievement, and we are already looking ahead to the next exciting phase, Patterns V2. In Patterns V2, we are focusing on persona based and behavioral patterns, which will bring even more tailored and impactful solutions to our community.

These new patterns will help address specific needs and behaviors, making our tools even more adaptable and effective.
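To make Liya's point about caching static data concrete, here's a minimal sketch of what that pattern can look like in a Django-based stack. The view name, payload, and cache lifetime are illustrative assumptions, not details from the episode:

```python
# A minimal sketch of the "cache static data" pattern, using Django's
# cache_control decorator. The view and values here are hypothetical.
from django.http import JsonResponse
from django.views.decorators.cache import cache_control

@cache_control(max_age=86400, public=True)  # let browsers and CDNs reuse for 24h
def site_settings(request):
    # Static-ish data that rarely changes: serving it with long-lived cache
    # headers avoids repeated server work and repeated network transfer.
    return JsonResponse({"theme": "dark", "locale": "en"})
```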

Chris Skipper: Moving on, we also kept our regular episode format, The Week in Green Software, also known affectionately as TWiGS. TWiGS was originally hosted by Chris Adams and is now occasionally hosted by the fabulous Anne Currie as well.

It offers quick, actionable updates in the green software space. With a rising tide of sustainability and AI developments, this format helps listeners stay current. I can tell you now that in the last year, the number of news topics has just exploded when it comes to anything to do with AI and the impact it's having on the environment.

And I think part of that is due to the work of the GSF and its community members. We used to really struggle to find news topics when this podcast first started back in 2022, but now, in 2025, every week, I would say nearly every hour, there's a new topic coming out about how software is affecting the environment.

So The Week in Green Software is your one stop place for finding all that information distilled into one place. You can also sign up to the GSF newsletter via the link below, which will give you a rundown of the week's latest news topics. So let's look at a couple of episodes of TWiGS from the previous year.

The first one is an episode with the executive director of the GSF, Asim Hussain. Asim really embodies the mission of the GSF in so many ways, and is always passionate about the effect that software is having on the environment. In this episode, which was subtitled Obscuring AI's Real Carbon Output, Asim joined Chris to unpack the complexities of AI's carbon emissions, renewable energy credits, and regulatory developments.

This episode emphasized the need for better carbon accounting practices, work the foundation is helping to advance. Let's hear this little snippet from Asim now.

Asim Hussain: You can plant a tree, right? And once you've planted the tree, that tree will grow and it'll suck carbon from the atmosphere, and you can say that's a carbon credit: planting a tree. Or there are carbon avoidance offsets, and there are many variants, and some are actually very good variants of carbon avoidance offsets.

But there is a variant of a carbon avoidance offset where I've got a tree and you pay me not to cut it down. And so where is the additionality? If I'm actually planting a tree, I'm adding additional capacity in carbon removal. And the renewable energy markets are exactly the same.

You can have renewable energy which, if you buy it, means a renewable power plant is gonna get built, and you can have renewable energy which is just kind of sold, and whether you buy it or you don't buy it, there's no change. Nothing's gonna happen. No new renewable plant is gonna get built. Only one of them has that additionality component.

And so therefore, only one of them should really be used in any kind of renewable energy claims. But both of them are allowed in terms of renewable energy claims.

Chris Skipper: One of the things I love about the way Asim talks about software is how he uses analogies, like that planting of a tree, to explain a really complex topic and make it more palatable for a wider audience, which is something that we're gonna explore later on in this episode as well. But before we do that, let's move on to another episode of The Week in Green Software, which was subtitled Sustainable AI Progress.

I think you can see a theme that's been going on here. This was our hundredth episode, which was a massive milestone in its own right, and the fantastic Anne Currie hosted Holly Cummins to explore LightSwitchOps, zombie servers, and sustainable cloud architecture. This conversation perfectly aligns with the foundation's mission to minimize emissions through smarter, more efficient systems, and having the really knowledgeable, brilliant

Holly Cummins on to talk about LightSwitchOps was just fantastic. Let's listen to this next clip from her talking about LightSwitchOps.

Holly Cummins: We have a great deal of confidence that it's reliable to turn a light off and on, and that it's low friction to do it. And so we need to get to that point with our computer systems. And you can sort of roll with the analogy a bit more as well, which is, in our houses it tends to be quite a manual thing, turning the lights off and on.

You know, I turn the light on when I need it. In institutional buildings, it's usually not a manual process to turn the lights off and on. Instead, what we end up with is some kind of automation. So often there's a motion sensor. I used to find that if I stayed in our office late at night,

at some point, if you sat too still because you were coding and deep in thought, the lights around you would go off, and then you'd have to wave your arms to make the lights go back on. And it's that sort of idea: we can detect the traffic, we can detect the activity, and not waste the energy.

And again, we can do exactly this with our computer systems, so we can make it really easy to turn them off and on. And then we can go one step further and automate it, and we can say, let's script things to turn off at 5:00 PM, because we're only in one geo.
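As a rough illustration of Holly's point, a LightSwitchOps script can be as simple as a scheduled job that stops opted-in machines out of hours. This sketch uses boto3 against AWS EC2; the `lightswitch` tag, region, and schedule are assumptions for the example, and you'd run it from cron or a cloud scheduler at, say, 5:00 PM:

```python
# A minimal LightSwitchOps-style sketch: stop idle dev/test EC2 instances out
# of hours. Assumes boto3 credentials are configured; the tag name is made up.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def find_switchable_instances():
    # Only touch instances that have explicitly opted in via a tag.
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:lightswitch", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

if __name__ == "__main__":
    instance_ids = find_switchable_instances()
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)  # turn the lights off
        print(f"Stopped: {instance_ids}")
```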

Chris Skipper: So as you can see, there's always been this theme of the rise of AI, and I think everybody who's involved in this community, and even people outside of it, are really quite frightened of the impact that AI is having on the environment. But one thing that the GSF brings is this anchoring, this hope that there is actually change for the better,

and there are people who are actively working against that impact within the software industry. There's actually gonna be a lot of change coming in the next year, which will make things a lot more hopeful for the carbon output of the software industry. So, between 2024 and 2025, AI's impact on the environment became one of the most discussed topics in our industry, and obviously on this podcast.

In 2023 alone, data center electricity consumption for AI workloads was estimated to grow by more than 20%, with foundation models like GPT-4 using hundreds of megawatt hours per training run.

Obviously there are a lot of statistics out there that are quite frightening, but hopefully Environment Variables brings you some peace of mind. And with that, we wanted to expand our audience to a wider group of people, not just software developers, to make things more palatable for your everyday computer user, for example.

So one of the episodes we're gonna feature from that move to grow our audience is an episode called AI Energy Measurement for Beginners, where Charles Tripp and Dawn Nafus helped us break down how AI's energy use is measured and why it's often misunderstood.

Their beginner friendly approach supports one of the GSF's key goals, which is making green practices more accessible and inclusive. Here is Charles talking about one of those points in this next snippet.

Charles Tripp: I think there's like a historical bias towards number of operations, because in old computers, without much caching or anything like this, right? I restore old computers, and on, like, an old 386 or IBM XT, it has registers in the CPU and then it has main memory, and basically, how many operations I'm doing is going to closely correlate with how fast the thing runs and probably how much energy it uses, because most of the energy consumption on those systems is basically constant no matter what I'm doing. It doesn't idle down the processor while it's not working. So there's a historical bias that's built up over time that was focused on operations, and it's also at the programmer level.

Like, I'm thinking about, what is the computer doing? What do I have control over? But it's only through actually measuring it that you gain a clearer picture of what is actually using energy. And I think if you get that picture, then you'll gain more of an understanding of

how I can make this software, or the data center, or anything in between, like job allocation, more energy efficient. But it's only through actually measuring that we can get that clear picture. Because if we guess, especially using our biases from how we learned to use computers and how we learned about how computers work, we're actually very likely to get an incorrect understanding, an incorrect picture, of what's driving the energy consumption. It's much less intuitive than people think.
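Charles's "only through actually measuring" point is easy to try for yourself on Linux, where many Intel machines expose a running energy counter via RAPL. This sketch is illustrative, not from the episode: the sysfs path exists on many but not all systems, may need elevated permissions on recent kernels, and it ignores counter wrap-around:

```python
# A rough sketch of measuring the energy cost of a function via the Intel
# RAPL counter exposed on Linux. Path and availability vary by machine.
import time

RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-level counter

def read_energy_uj() -> int:
    with open(RAPL_PATH) as f:
        return int(f.read())

def measure(fn, *args, **kwargs):
    energy_before = read_energy_uj()
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start
    joules = (read_energy_uj() - energy_before) / 1_000_000  # microjoules -> joules
    print(f"{fn.__name__}: {joules:.2f} J over {elapsed:.2f} s")
    return result

# Compare implementations by measured energy rather than by guessing from
# operation counts, which, as Charles says, is much less intuitive.
measure(sum, range(10_000_000))
```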

Chris Skipper: Thanks to Charles for breaking it down in really simple terms, and for his contribution to the podcast. Another episode that tried to simplify the world of AI and the impact it's having on the environment is called The Economics of AI, which we did with Max Schulze.

He joined us to talk about the economics of cloud infrastructure and AI. He challenged the idea that AI must be resource intensive, arguing instead for clearer data, stronger public policy, and greater transparency, all values that the GSF holds dear. Let's listen to that clip of Max talking about those principles.

Max Schulze: I think when, as a developer, you hear transparency and, okay, they have to report data, what you're thinking is, oh, they're gonna have an API where I can pull this information, let's say from the inside of the data center. Now in Germany, and this is funny for everybody listening, one way to fulfill that, because the law was not specific:

data centers are now hanging a piece of paper, I'm not kidding, on their fence with this information, right? So this is them reporting this. And of course, I'm also a software engineer, so what we as technical people need is for the data center to have an API that basically assigns the environmental impact of the entire data center to something.

And that something has always bothered me, that we say, oh, it's the server, or it's the, I don't know, the rack or the cluster. But really, what does software consume? Software consumes basically three things. We call it compute, network, and storage, but in more philosophical terms, it's the ability to store, process, and transfer data.

And that is the resource that software consumes. Software does not consume a data center or a server. It consumes these three things. And a server makes those things: it turns energy and a lot of raw materials into digital resources. Then the data center in turn provides the shell in which the server can do that function, right?

The factory building is the data center, the machine that makes the T-shirts is the server, and the T-shirt is what people wear.

Chris Skipper: Again, it's those analogies that make it easier for people to understand the world of software and the impact it's having on the environment. With that same idea of reaching a broader audience, we also try to talk about the energy grid as well as software development, as those two things are intrinsically linked. So one of the episodes we want to feature now is called How To Tell When Energy Is Green with Killian Daly.

Killian explained how EnergyTag is creating a standard for time and location based energy tracking, two topics that we've covered a lot on this podcast. This work enables companies to make verifiable clean energy claims, helping build trust across industries. Let's listen to this clip from Killian.

Killian Daly: Interestingly, actually, on the 14th of January, just before the inauguration of Donald Trump as US president, the Biden administration issued an executive order, which hasn't yet been rescinded, basically on data centers on federal lands. And in that, they do require these three pillars.

So they do have a three pillar requirement on electricity sourcing, which is very interesting, right? I think that's quite a good template. And I think, you know, we definitely need to think about, okay, if you're gonna start building loads of data centers in Ireland, for example. In Ireland, 20 to 25% of electricity consumption is from data centers.

That's way more than anywhere else in the world in relative terms. There's a big conversation at the moment in Ireland about, okay, well, how do we make sure this is clean? How do we think about procurement requirements for building a new data center? That's a piece of legislation that's being written at the moment.

And how do we also require these data centers to report their emissions once they're operational? So the Irish government is also putting together a reporting framework for data centers, and the energy agency, the Sustainable Energy Authority of Ireland, SEAI, published a report a couple of weeks ago saying, yeah, they need to do this hourly reporting based on contracts bought in Ireland.

So I think we're already seeing promising signs of legislation coming down the road in other sectors outside of hydrogen, and I think data centers is probably an obvious one.
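The hourly reporting Killian describes is at the core of EnergyTag-style claims: consumption is matched against certified clean generation hour by hour, rather than netted off over a whole year. A toy sketch, with made-up numbers, shows why the distinction matters:

```python
# Toy illustration of hourly matching vs annual matching. Numbers invented.
consumption_kwh = [100, 120, 90, 110]     # demand, one value per hour
certified_clean_kwh = [50, 200, 60, 110]  # certified clean supply per hour

hourly_matched = sum(
    min(used, clean)
    for used, clean in zip(consumption_kwh, certified_clean_kwh)
)
total = sum(consumption_kwh)

print(f"Hourly matched share: {hourly_matched / total:.0%}")
# Annual totals (420 kWh of certificates vs 420 kWh of demand) would suggest
# 100% clean energy, but hourly matching credits only 81%, because a surplus
# in one hour can't cover a deficit in another.
```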

Chris Skipper: A fantastic clip there from Killian. It also highlights how the work of the GSF is having an impact on the political landscape, in terms of public policy and the discussions happening at the highest levels of government.

Moving on, the final episode we want to highlight from the last year is How to Explain Software to Normal People with James Martin. We ended the year with this episode, in which James talked about strategies for communicating digital sustainability to non-technical audiences, which is something that we try to do here at Environment Variables too. From frugal AI to policy advocacy, this episode reinforced the power of inclusive storytelling. Let's listen to this clip from James Martin.

James Martin: A few years ago, the French Environment Minister said people should stop trying to send so many funny email attachments, you know? Like when you send a jokey video to all your colleagues, you should stop doing that because it's not good for the planet. Honestly, that a minister could say something that misguided... because that's not where the impact is.

You and I know that's not where the impact is. The impact is in the cloud. The impact is in hardware. Communication is repetition, and I always start with: digital is 4% of global emissions. 1% of that is data centers, 3% of that is hardware, and software is sort of all over the place. So that's the figure I use the most to get things started. And I think the number one misconception that people need to get their heads around is that people tend to think tech is immaterial. It's because of expressions like "the cloud."

It just sounds like this floaty, ethereal thing rather than a massive industry. We need to make it more physical. I can't remember who said that if data centers could fly, it would make our job a lot easier. But no, that's why you need to always come back to the figures.

4% is double the emissions of planes. And yet the airline industry gets tens or hundreds of times more hassle than the tech industry in terms of trying to keep control of their emissions. So what you need is a lot more tangible examples, and you need people to explain this impact over time.

So you need to move away from bad examples like funny email attachments, or the thing we keep hearing in AI, that one ChatGPT prompt is 10 times more energy than a Google search. That may or may not be true, but again, it's the wrong example because it doesn't focus on the bigger picture, and it can make people think, if I just reduce my usage of this, then I'm gonna have ten times the impact.

That feels a bit like individualizing the problem. It's putting the onus on the users, whereas, once again, it's not their fault. You need to see the bigger picture. And this is what I've been repeating since I wrote that white paper, actually: you can't say you have a green IT approach if you're only focusing on data centers, hardware, or software.

You've got to focus on all three. Otherwise... Yeah, exactly, holistically.

Chris Skipper: With that, we've come to the end of this episode. What a year it's been for Environment Variables. Let's just take a look at some of the statistics,

just to blow our own horn here a little bit. We've reached over 350,000 plays. Engagement with, and follows of, the podcast have gone up by 30%, which indicates to us that Environment Variables really matters to the people who listen to it, and that it's raising awareness of the need to decarbonize the software industry.

Looking ahead, we remain committed to the foundation's vision of changing the culture of software development,

so that sustainability is as fundamental as security or performance. Year four will bring new stories, new tools, new opportunities, and, hopefully, new people, all in an effort to reduce emissions together. So thank you for being part of our mission, and here's to another year of action, advocacy, and green software innovation.

And now to play us out is the new and improved Environment Variables podcast theme.

 Hey everybody. Thanks for listening. This is Chris, the producer again, just reaching out to say thank you for being a part of this community and a bigger part of the GSF as a whole. If you wanna listen to more episodes of Environment Variables, please head to podcast.greensoftware.foundation to listen to more,

or click the link below to discover more about the Green Software Foundation and how to be part of the podcast as well.

And if you're listening to this on a platform, please click follow or subscribe to hear more episodes of Environment Variables.

We'll catch you on the next one. Bye for now.

Hosted on Acast. See acast.com/privacy for more information.

Show more...
4 months ago
25 minutes 20 seconds

Environment Variables
Open Source Carbon Footprints
Chris Adams is joined by Thibaud Colas, product lead at Torchbox, president of the Django Software Foundation, and lead on Wagtail CMS. They explore the role of open source projects in tackling digital carbon emissions and discuss Wagtail's pioneering carbon footprint reporting, sustainable default settings, and grid-aware website features, all enabled through initiatives like Google Summer of Code. Thibaud shares how transparency, contributor motivation, and clear governance can drive impactful sustainability efforts in web development, and why measuring and reducing emissions in the Python ecosystem matters now more than ever.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Thibaud Colas: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Wagtail CMS [01:46]
  • Web Almanac | HTTP Archive [08:03]
  • Google Summer of Code [11:07] 
  • Wagtail RFCs [19:51] 
  • A Gift from Hugging Face on Earth Day: ChatUI-Energy [27:55]
  • PyCon US [36:07]
  • Grid-aware websites - Green Web Foundation [39:22] 
  • Climate Action Tech [41:07] 
  • Agent Bedlam: A Future of Endless AI Energy Consumption?
  • Here's how re/insurers can curb GenAI emissions | Reinsurance Business 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Thibaud Colas: If you get your contributors to work on high value and high impact things, that's the best way to motivate them. So that's kind of the idea here: formalize that we have a goal to reduce our footprint, and by virtue of this, make it a more impactful thing for people to work on, by having those numbers, by communicating, for this specific change to images, here is the potential for it to reduce emissions.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

 Hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. If you want the way we build software to be more sustainable and more inclusive, one way to improve the chances of this happening is to make it easier to build it that way,

so that building greener software goes with the grain of the software framework you're using. And one way to do that is to update the defaults, so they prioritize accessibility and sustainability in the framework itself. One of the people I've seen who really exemplifies this idea and this approach is my guest today, Thibaud Colas,

a lead developer at the software agency Torchbox, the current president of the Django Software Foundation, and the product lead of the popular Wagtail content management system, which is built on top of Django. The Wagtail CMS powers sites like the NASA Jet Propulsion Laboratory website, the University of Pennsylvania's website, the Tate Gallery's website, and even the main NHS website in the UK.

So while it might not have the same coverage as WordPress, which powers more than a third of the internet, it still powers a large number of sites, and a number of large sites, and changes made in this framework can have a decent reach. So changes made here are worth discussing, because the Wagtail CMS docs, in my view, are probably the most advanced in talking about sustainability of any open source CMS right now.

And there's a clear link between sustainability and the embodied emissions of the hardware that people need to use to access your websites too. And with that in mind, you can see it's got some of the most developed accessibility features as well. But we're getting ahead of ourselves, and Thibaud is in a much better place to talk about this than me.

So Thibaud, thank you so much for joining us. Can I give you the floor to introduce yourself for our listeners?

Thibaud Colas: Hi. It's my pleasure, Chris. Thank you for having me. I'm Thibaud, my pronouns are he/him. And, yeah, I'm the product lead for the Wagtail CMS at Torchbox. Wagtail is an open source project and product, and at Torchbox we are the original creators of the project and the main contributors. And, yeah, as product lead I help shape the work of Torchbox on Wagtail, and of other contributors as well. And, as president of the Django Software Foundation, I have similar responsibilities for the Django project, Django being a big Web framework, one of the biggest in Python. Just to give you a sense of scale, Wagtail, that's on the order of 10 to 20,000 sites out there. And Django, we're talking half a million to a million projects.

Chris Adams: Cool. Thank you, Thibaud. And, Thibaud, where are you calling me from today? Because, I,

Thibaud Colas: I'm in Cambridge, UK. I got started on Wagtail way back in New Zealand,

but travels took me back to Europe and the UK. I'm from France originally.

Chris Adams: Oh, cool. Alright, thank you for that. So I'm Chris Adams. I am the co-chair of the Green Software Foundation Policy Working Group. We also have show notes for this episode,

so all the projects and links that we discuss will be available, in your quest to develop better sustainable software engineering skills. Look up podcast.greensoftware.foundation to find them. Alright, Thibaud, we've got a bunch of questions to get through.

Shall we start?

Thibaud Colas: Yeah, sure.

Chris Adams: Okay. Alright. So one thing that really came up on my radar a few years ago was that Wagtail was one of the only projects I've seen so far that actually tried to put together a kind of carbon footprint inventory of all of the websites that it's responsible for.

And I remember the posts, and we'll share a link to this, explaining some of this and some figures for this. Like, "we reckon that Wagtail was kind of responsible for more than 8,000 tons of CO2 per year from all the sites that we run." Could you maybe talk a little bit about the approach you took for that, and why you even did that?

'Cause there were probably a few discussions about decisions you had to make, and trade-offs to choose between model choices and coming up with numbers, and all that. But maybe we go from the beginning, and then we can dive into some of the details.

Thibaud Colas: Yeah. Yeah. Well, simply enough, you know, when you start to think about the impact of the technology we build, as developers at least, we love to try and quantify that impact, you know, put some figures on there. And the carbon footprint of websites, well, when you think of the sites, there are lots of components.

There's things that happen in the browser, things that happen server side. And when I say server side, these days, you know, the infrastructure is quite varied and somewhat opaque as well. And when it comes to Wagtail, with it being an open source project, it's quite interoperable with all sorts of databases and file storage, and web browsers obviously. So it becomes quite tricky to

actually put a number on the emissions of one site. And I guess that's where we started at Torchbox specifically, trying to quantify the emissions of our clients' 50 to a hundred websites. And from there, you know, you realize that it makes lots of sense to try and do it for the whole Wagtail ecosystem, so that you can hopefully make decisions for the whole ecosystem based on sites out there. So yeah,

I think it was back in 2023 that we did this first, and there were definitely lots of ways back then to quantify sites' emissions. We didn't necessarily reinvent any, but we tried to understand, okay, when we have little knowledge of those thousands of websites out there, which methodologies should we be referring to when we try and put those figures together? I say methodologies specifically because I think that's one of the potential pitfalls for developers starting out here. They assume that, somewhat like performance, you can

have quite finite, reproducible numbers, but we're just not there yet with the carbon footprint of websites.

So I think it's really important that you combine approaches. In our case, you combine the Web Sustainability Guidelines related methodologies, the Sustainable Web Design model, and you also combine things that look more closely at the servers, you know, CPU and resource use,

and also other aspects in the browser.
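For readers who want to see what the Sustainable Web Design model Thibaud mentions actually computes, here's a simplified sketch. The coefficients are the commonly cited SWD defaults, and the model is an estimate from data transferred rather than a measurement, so treat every number here as an assumption:

```python
# Simplified per-page-view estimate in the spirit of the Sustainable Web
# Design model: bytes transferred -> energy -> carbon. Coefficients are the
# commonly cited SWD defaults and are assumptions, not measured values.
KWH_PER_GB = 0.81             # energy per GB transferred
GRID_G_CO2_PER_KWH = 442      # global average grid intensity, gCO2e/kWh
RETURNING_SHARE = 0.25        # assume 25% of visits are returning visitors
CACHED_RELOAD_SHARE = 0.02    # returning visitors re-download ~2% of the data

def grams_co2_per_view(page_bytes: int) -> float:
    gb = page_bytes / 1e9
    new_visits = (1 - RETURNING_SHARE) * gb
    return_visits = RETURNING_SHARE * gb * CACHED_RELOAD_SHARE
    return (new_visits + return_visits) * KWH_PER_GB * GRID_G_CO2_PER_KWH

# e.g. a 2 MB page
print(f"{grams_co2_per_view(2_000_000):.3f} gCO2e per view")
```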

Chris Adams: Okay, cool. And one thing that I quite appreciated when you did this, or when the team you are part of did this, was that you shared all these numbers, but you also shared the underlying spreadsheets and the workings, so that other folks who might be running projects themselves can use them as a starting point, or even possibly challenge them and propose improvements as we learn more about this. Because we know that

it's a difficult field to navigate right now, but it is getting a bit easier, and as we learn more, we're able to incorporate that into the way we model and think about some of the interventions we might make to reduce the environmental footprint, or improve it, basically.

Thibaud Colas: Yeah. You might actually be aware of a project, Chris, the HTTP Archive's Web Almanac. They review the whole of the Web, on the order of 20 million websites, every year, and they produce numbers based on this data set of websites. So that's, I suppose, what I tried to follow with this methodology as well, sharing our results to the fullest extent so that other people can verify the numbers and potentially also put the same numbers together

for their own individual sites, or site ecosystems. So, you know, Wagtail, it's a CMS among many other CMSs. There's lots of competitors in that

space, and nothing would make me happier than seeing other CMSs do the same and hopefully reuse

what we've spent time putting together. And, yeah, obviously once we've done this for Wagtail, we can try and do it also for Django.

So there are also these benefits, across the whole tech stack, of having that kind of methodology more nailed down for people who make those decisions, you know, like product level decisions.

Chris Adams: Oh, okay, cool. And just like we have release cycles for new websites or new CMSs and everything like that, as we learn more, we might be able to improve the precision and the accuracy of some of this, to refine the assumptions, right? And, you know, many eyes make bugs shallow. So Drupal folks, if you're hearing this, or WordPress folks, yeah,

over to you, basically.

Thibaud Colas: Exactly. And, you know, the methodologies definitely evolve over time. So one of the recent ones I really like is how, with the Firefox browser, you can measure the CPU usage to render a single page.

And just that is becoming so much more accessible these days that we could potentially do it on every release of the CMS.

Chris Adams: Cool. Well, let's come back to that, because this is one thing that I found quite interesting about the work that you folks have been doing: not only were you starting to measure this, but you're looking at options you can take to set new defaults, or improve some of this stuff.

And, as I understand it, with Wagtail you've had some luck actually finding funding, finding ways to cover the cost of people to work on this stuff, via things like the Google Summer of Code. Maybe you could talk a little bit about some of that, because as I understand it, you're in year three of successfully saying, "Hey, this is important.

Can someone fund it?" and then actually getting some funding to pay people to work on this stuff.

Thibaud Colas: Yeah. Yeah. Well, yeah. So, once you have those numbers in place as to, you know, how much emissions the sites out there produce, you try and refine it down to a few specific aspects of the sites. You know, you go through

the quick wins, you figure out what you have the most leverage over, and then you realize there are this and that concepts that are potentially quite fundable if you

know just how to frame it and who to talk to. And we, as Torchbox, we have quite a few clients that care about the footprint of their websites,

but it's definitely also a good avenue, the Google Summer of Code program you mentioned. It's about getting new people excited about open source, as new contributors in the open source space.

It's entirely funded by Google. And essentially Google, they trust projects like Wagtail and Django to come up with ideas that are, you know, impactful, and also sensible avenues for people to get up to speed with open source. And so, yeah, it's been three years now that we've done this with a sustainability focus, where we try every year to have an idea in this space.

And I think it's quite interesting as an option because few people that come to open source, you know, early in their

career are aware of sustainability. It's quite a good opportunity for them to learn something very new, even if they might already know a technology like Django and Wagtail. And for us, it allows us to work on those concepts where, you know, we saw the data, we saw the promise. So the first year we did this, 2023, we looked at image optimization.

It's actually quite a big thing for a CMS, in my opinion at least, that, you know, people wanna add lovely visuals to all of their pages. And maybe sometimes there is a case for fewer images if you want to lower the footprint, but it's definitely also a case where, when you have to have images, you want them to be

as low footprint as possible. So for that specific project, we were joined by two contributors who helped us. One worked on the AVIF support in the CMS, AVIF being one of the newer image formats that promises a lower file size

than the alternatives. And the other one helped us make the APIs we have in Wagtail to generate multiple image renditions more ergonomic, so you'd be able to generate, say, three different variations of an image and then only send to the user the one that fits best for how the image is displayed,

so that hopefully it's smaller.

So it's this responsive images concept.

Chris Adams: Oh, I see. So it may be that the server needs to generate several of these images, 'cause you don't have control over who's accessing your website. But when someone's accessing it with, say, a small touch device or something, rather than send this massive thing over the wire, you can send something which is appropriately smaller.

So it might take up less space inside the memory and the DOM, and less over the wire as well, right?

Thibaud Colas: Exactly. You were talking about the grain of Wagtail. Wagtail has very few opinions as far as how you create the pages, but we definitely try and leverage the grain of HTML, and this responsive images pattern is quite well put together in HTML and Web standards. And, yeah, really happy with the results.

Honestly, I think for the specific trial sites we rolled it out on, it was on the order of 30% lower page weight. And for the Wagtail web at large, every year we see the improvements in those audits, about how much usage there is of modern image formats, how much usage there is of responsive images.

We see the figures improve. So, really cool.

Chris Adams: Cool. We should actually share links to some of these things as well, 'cause one of the wonderful things about working with an open source project is you can say, well, if you want this to be a norm, then here is the PR you could copy, basically, right?

Thibaud Colas: Yeah. And something like AVIF support, I'm sure we'll talk about it at some point. Definitely, you know, we couldn't create the AVIF decoders and so on ourselves, so we've been relying on the Python ecosystem at large. And, yeah, now those things are in a place where lots of projects have those decoders available.

Chris Adams: Cool. So that was year one, and this is year three. And one thing I can share with you is that we're a nonprofit organization, and we publish a library called CO2.js. We managed to get some funding from Google once, for the Google Season of Docs, not Google Summer of Code, where they actually funded us to make some of this library a bit easier for other people to use. And we found that quite helpful, because that's been one of the ways people come to this for the first time: they use a library called CO2.js. And that wasn't something we could prioritize otherwise. So it's kind of nice.

It would be nice if there were more organizations funding this kind of work, rather than just one Web giant. Like, it's nice that Google is doing this, but if you too work in a large tech company and you wanna actually fund this stuff, or make it easier for your engineers to do this, then,

yeah, it's right there, folks. Okay. So maybe I can ask about some of the other years that you have running. Is there anything else that you'd like to talk about, or draw people's attention to, for

some of the other ones?

Thibaud Colas: Google Summer of Code is a three month program, but lots of those things, to be honest, keep chugging along in the background

for quite a while, making improvements. So, year two of this, we worked on the starter project for Wagtail, a starter website where, just as you mentioned earlier with the defaults, we're trying to make sure that it's easier for people to get a site up and running that has all of the right things in place to be low impact.

That time, a contributor, Aman Pandey, helped us with the designs

as well as the coding of these templates. And, just from the get go, the idea was, let's measure the designs even before they touch a Web browser. Let's make sure that we understand all of the, you know, newer standards, like the Web Sustainability Guidelines, and that those designs have that baked in, so that when you generate the

sites, you are guaranteed better results. So this project template is still in progress, but the designs at least are super promising. And year three,

starting as of this week, just to be clear, is grid awareness.

So grid awareness is a big term. Essentially it means looking at ways that, as the website loads in your browser, it'll be optimized based on the carbon intensity of the electricity powering your computer and your local grid. So what that means is, if it would produce lots of emissions for the site to load in your browser, we try and optimize the website so the emissions are lower. And, yeah, our contributor for this, Rohit, he's been around the Wagtail community for a bit and has this interest in sustainability.

And again, I think it's a great example of something that will tangibly help us reduce the impact of Wagtail websites out there, and also make more developers aware of those patterns and, you know, the underlying need for those patterns.
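As a hedged sketch of the grid awareness idea, a server (or edge worker) can ask a carbon intensity API how clean the grid currently is and serve a lighter page variant when it's dirty. This example uses the UK's public carbon intensity API and hardcodes the region, which a real implementation would resolve from the visitor instead; it is not the actual Wagtail implementation:

```python
# Choose a page variant based on current grid carbon intensity.
# Uses the public UK API at carbonintensity.org.uk as an example data source.
import requests

def current_uk_intensity_index() -> str:
    resp = requests.get(
        "https://api.carbonintensity.org.uk/intensity", timeout=5
    )
    resp.raise_for_status()
    # The index is one of: very low, low, moderate, high, very high.
    return resp.json()["data"][0]["intensity"]["index"]

def choose_page_variant() -> str:
    try:
        index = current_uk_intensity_index()
    except requests.RequestException:
        return "default"  # fail open: never break the page over this
    return "low-carbon" if index in ("high", "very high") else "default"

print(choose_page_variant())
```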

Chris Adams: I'm glad you mentioned the names, actually, 'cause in that initial year I was working with Aman Pandey, and I think one of your colleagues might be working with Paarth. So, hi Paarth, hi Aman, I hope you're listening. It was really nice to see this, 'cause these were people who were, like you said, early career, and hadn't had that much exposure, but honestly, compared to the industry, they're relative experts now. And that might say more about the state of the industry right now, but it was something I found quite nice, working with someone relatively young who was really keen and, honestly, worryingly productive. It did make me worry a bit about my own job going forward.

But yeah, that was one thing that was really cool from that, actually.

Thibaud Colas: Paarth and Aman are two of the mentors working with me on this Wagtail websites

project this year. So this is also the other goal of the Google Summer of Code program: retaining those people in the open source world. And, yeah, definitely, you know, we are at a point now where we have more and more of those people coming to open source with that realization. There's way more room for this to happen on other projects like Wagtail, but, baby steps.

Chris Adams: Oh wow. I didn't realize there was a kind of program to, I guess, invest in and provide some of that leadership, so people who prioritize this are able to have a bit more influence inside the project, for example.

Thibaud Colas: Yeah, exactly. Well, you know, in open source we have very different incentives compared to the corporate, for profit world. So we don't necessarily have super clear ways to retain people, but definitely, for people who are interested and have the drive, we try and retain them by having them move from first time contributors to repeat maintainers, mentors, and so on.

Chris Adams: Oh, cool. Alright, so that is a nice segue to talk a little bit about, I guess, taking ownership of carbon emissions, and the strategies that you have there. Because one thing we should add to this list is that there's actually a roadmap for Wagtail specifically.

I think it's RFC 90? Or is there a particular term, like a request for comments or something, that you folks use to talk about governance and about what you prioritize in this?

Thibaud Colas: It's a bit of a running joke. In Python they have the PEP

proposals, Python Enhancement Proposals, and in Django they have the DEPs. People have been wondering if Wagtail should have WEPs,

but right now we just have RFCs, requests for comments.

And, yeah, it's just a super simple way for us to,

rather than create those governance or technical architecture decision documents in private chats, put them in public and then invite feedback from others. So, you know, we've had this RFC for a couple of years now, I believe. I got some good support from one of the experts out there on open source governance, Lauri Apple.

She coached me through, you know, trying to build up community momentum, and also trying to find ways to make this reusable beyond Wagtail. And, yeah, so this RFC, if you're deep in this space, it's nothing super special.

It's about building awareness, finding opportunities for fundraising,

working on the right concepts. But I think it's quite unique for open source projects to have that kind of clear direction for those things. Open source projects don't even necessarily have a roadmap of any kind to start with, and one on a specific topic like this, I think, is really important. There's something Lauri says often, which is,

if you get your contributors to work on high value and high impact things, that's the best way to motivate them. So that's kind of the idea here: formalize that we have a goal to reduce our footprint, and by virtue of this, you know, make it a more impactful thing for people to work on,

by having those numbers, by communicating, for this specific change to images, here is the potential for it to reduce emissions.

Chris Adams: Oh, I see. Okay. So I've just followed the link to the RFC that you have here, and there's actually quite a lot of stuff in it. I can see a link to the free Green Software for Practitioners course, for people who don't know that much about it. I can see that Wagtail itself has a sustainability statement,

so there's an immediate, explicit statement that this is something you care about. And then, as I understand it, there are references to other things: the prior work with GSoC, Google Summer of Code, references to the W3C Web Sustainability Guidelines, and a bunch of stuff like that.

We'll share a link to this, because I think it's a really good example for other people to see: this is what a relatively large, mature project does, and this is what it looks like when they start prioritizing this. Because there are some organizations that are doing this quite well.

I know there's a .NET based CMS whose name I'd totally forgotten, Umbraco, that is also quite advanced in this, and they're another good example. But when you talk about prioritizing this, and about responsibility, there's a whole question about, okay, well,

whose job is it, or who's responsible for this? Because you are building a piece of software, and you might not get that much control over who adopts the software, for example. Like, I think when you shared this breakdown, you mentioned there was one Vietnamese website, Vinmec, that was making up like a third of the reported emissions.

Thibaud Colas: Put me in touch.

Chris Adams: Yeah.

Thibaud Colas: Yes. So, with the caveat that carbon accounting

isn't my expertise: you know, in the corporate world, we have the very clear greenhouse gas GHG Protocol and scope 1, scope 2, scope 3 standards. And in that corporate world, there's this, I think, scope 3, category 11, use of

Chris Adams: use of products. 

Thibaud Colas: The use of, it's worse than that. It's use of sold products.

Chris Adams: That's it. Yeah. Sold. Yeah.

Thibaud Colas: So if you're not a corporation, and we're not a corporation, Wagtail, we have about 20 contributors on the

core team. And if you don't sell your product, which standards are you meant to be using, then, to decide essentially which emissions we should be reporting on?

So this figure of the carbon footprint of Wagtail, on the order of five to ten thousand tons a year, that's assuming, you know, we take some ownership for this usage of Wagtail and of the websites built with it. And it's actually, I think, quite tricky to navigate in the open source world.

Understanding which standards of reporting are helpful, because, you know, in some respects, people who shop for a website builder or a CMS, or any tech really, kind of expect specific standards to be met. You know, you mentioned having a sustainability statement. No one's expecting that just yet in the open source world, but we definitely want things to move that way. And we have to, you know, make sure that when we create those figures, they are somewhat comparable to other projects. So, yeah, I guess for Wagtail, you know, there's the fact that you don't control who uses

it, and you don't control how they use it either. So, if someone wants to, you know, make a site that's pretty big

and pretty popular in some country, maybe

Chris Adams: Yeah. 

Thibaud Colas: adult entertainment websites

that don't have any...

Chris Adams: Does PornHub go on WordPress' ledger, right? Is it on their accounts? Yeah.

Thibaud Colas: Exactly. We have a few like this in the Wagtail and Django world, and, you know, the technology, it's open source licensed. We have no interest in taking any kind of control, or having a more contractual relationship with those projects, but we still need to navigate how to account for their use, essentially.

What actually got me started on this, Chris,

I think it's worth mentioning, is the work of Mozilla

and the Mozilla Foundation. They were the first ones I saw, I think back in 2020, reporting the use of the Firefox browser

as part of the emissions of Mozilla. And I think it was 98% of the emissions of Mozilla that came from Firefox.

And it just got me thinking, you know, for Wagtail and Django, obviously it's a similar type of scale.

Chris Adams: Also with Firefox, the browser, you don't necessarily pay Firefox to use it, but you may be paying via your attention, and the fact that when you click on an ad in Google, one of the search services, Firefox is being paid that way.

So you're not giving them money directly, but there is payment taking place and changing hands. And this is one thing that is actually quite difficult to figure out: okay, how do you represent that stuff? Because, like you said, it's not sold per se, or you're not paying in money, but you may be paying in something else, for example.

And I guess it's a good thing that you do see some of these protocols for reporting being updated, because they're not necessarily a good fit for how lots of new business models on the internet actually work, for example.

Thibaud Colas: Yeah. And it's really important for us to get into this space as open source technologists, I believe. Because I mentioned procurement: definitely the expectations are rising in Europe, in the EU in particular, on the carbon impact of technology. And I think it's quite a good opportunity for open source.

You know, we have very high transparency standards, so we can meet those requirements, not necessarily to lower the emissions dramatically, but at least to be transparent on the impact of the software.

Chris Adams: Yeah, I mean, you've touched on quite an interesting thing, which is both a link to some of the Mozilla work, but also to the kind of AI world, which is adjacent to us as webby people. I know that Mozilla provided a bit of funding to Code Carbon, which is an open source Python library for people to understand the environmental footprint of AI training, and I think these days some inference as well, via the AI Energy Score

project that they have with Hugging Face, for example. So, you know, one of the reasons you have that is because, oh God, I'm gonna murder the name, there's a French supercomputer, Jean Paul? Jean...? Oh, do you know the one I'm talking about?

Thibaud Colas: No, I don't actually. 

Chris Adams: Okay. So maybe the thing I'll do is I'll give you a chance to respond to this

while I look it up. But I do know that one of the reasons we have any numbers at all for the environmental footprint of AI is because there was a, you know, publicly funded supercomputer, with some work by some people at Hugging Face, I forget, the Bloom model.

Thibaud Colas: Oh, the BLOOM model.

Chris Adams: Yeah, exactly. We have these numbers, and there was a paper produced to actually highlight this,

because the people who were running the supercomputer were able to share some of these numbers, where traditionally we've had a real challenge getting those numbers. So that's one place where having some open examples at least gives us something like a proxy, in the face of the near deafening silence from groups like OpenAI and Anthropic; we're not seeing that much in the way of numbers from them. And given that we're seeing this massive build out, I'm very glad there are open source organizations, open source projects, and some other ways of funding this, to at least create some data for us to have a data informed discussion about some of this.

Thibaud Colas: A hundred percent. Yeah. This BLOOM large language model, I think it's really essential for us to see this research being done, because then, when people talk about adding AI in a CMS or in their Django projects, we can point them to understanding, you know, what the potential increase in the carbon footprint of the project is. And, yeah, in the AI world, there's this whole debate about what open source means for AI models.

Definitely there are lots of gray areas there, but if you wanna reuse their research, it's much easier if there's just an underlying philosophy of open source and open data in those organizations.

Chris Adams: Jean Zay, that's the name of the supercomputer in France. There are actually ones in Boston as well, and in the US the NREL, National Renewable Energy Laboratory folks, have shared a bunch of information about this too, when they've had access to it. And this is actually providing a bit of light to a discussion which so far seems to be mostly about heat. You've just made me realize that later on this year, this might be one of the angles where we see people talking about the use of AI for actually drilling for oil and gas, and other stuff which is not great for the climate. Because Eni, which is a state owned energy company in Italy,

they're one of the few people who actually have a publicly owned supercomputer. And because Italy is one of the countries that signed the Paris Agreement, there's currently a whole court case about essentially suing Eni to say, well, if you are state owned and you've signed this, why the hell are you now using AI to drill for oil and gas, for example? And this might be one of the ways we actually see some numbers coming out of this, 'cause since 2019, we know that there are companies which are doing things with this.

For example, we know that companies like Microsoft are involved in helping use these tools to get oil and gas and fossil fuels out of the ground, but there's not much visible out there right now since the press releases stopped in 2019, and it feels like a real gap we have when we talk about sustainability and technology, and particularly AI, I suppose.

Thibaud Colas: Yeah, that's really interesting for us to consider for Wagtail as well, because, you know, we talk about the carbon footprint of the websites, but it's also important to consider what the website might be enabling, you know, in positives and negatives. And even beyond websites, when I've tried to take my work from Wagtail to Django and even the Python ecosystem at large: with Python, you have to reckon with the footprint of web services built with Python, but also of all the data science that supports this same, you know, oil and gas industry. So it's tricky.

Chris Adams: Yeah, I mean, we're speaking on the 4th of June, and two days ago we saw a massive drone attack wiping out like a third of Russia's bomber fleet, right? And ArduPilot, open source drone autopilot software, was basically one of the key pieces that was used inside that.

And it's not necessarily the open source developers; they didn't build it for that. But we are now seeing all this stuff show up, and we haven't figured out ways to talk about where the responsibility lies, or how you even think about some of this stuff.

We might have words like dual use for talking about these kinds of technologies, but in a world of open source it becomes much, much harder to figure out where some of these boundaries lie and how you actually, well, I guess, set some norms. Maybe this is actually one thing. Yeah, I'll leave you some space then; I wanna ask you a little bit about the wider Python ecosystem,

'cause I know that's something you've actually been having some conversations about as well.

Thibaud Colas: Yeah, well, connecting the dots, you know, it's also the usage, but also, as contributors, you have to consider that maybe there are only so many people in the Wagtail or Django world

that are responsible for how the tech is put together. So maybe in some sense you do share some kind of responsibility personally for the tech you produce out there, even though you don't control how people will use it. Which is, you know, a whole dimension of how much you take ownership of that. And yeah, in the Python world more widely, you know, Python is the most popular language out there. Even if it might not be the most performant, even if there might be simpler languages that help you get more optimized, lower emissions software, people are gonna use Python in all sorts of ways.

And some of them you agree with, some you don't. I think that's one of the, you know, realities of open source contribution you have to be aware of.

Chris Adams: You've actually said something quite interesting there: okay, there's a limited number of people inside the Wagtail community, and you've been able to have some success in helping set some norms or directions there. And there's maybe a slightly larger group in Django land, with you now acting as president; I know there's some interest you have there, and there's groups that I've been involved with, right? But you also mentioned that there's a kind of wider Python ecosystem of people talking about this. I mean, if someone is coding in Python and they wanna find out who else is doing this, is there someone you'd point people to?

Or are there any conversations you're aware of going on right now?

 

Thibaud Colas: The Python ecosystem is big.

So one of the big challenges to get started with is just putting enough people together to have those discussions. 

I have tried on the Python discussion forum; I think the thread I put together is "Who's working on sustainability in Python?". And I guess, to me, what's important at this point is just getting tech people aware of the fact that we have this climate change challenge and that they can do something about it. And then, you know, realizing that open source has a role to play, and that as open source contributors we can very much move the needle. So in the Python world, it being so big and the uses being so different, there are lots of ways to help by working on the performance of Python itself, but there are also lots of ways to help outside that. Even something as simple, you know, as the Python Software Foundation trying to quantify their own organizational footprint, or the footprint of a conference like PyCon US, can go quite a long way, I think.

Chris Adams: Cool. Thank you for that. Actually, I'm really glad you mentioned PyCon US, because there were a number of talks there that I heard people on other podcasts talking about; they were really pleased to see them. So there seemed to be some latent interest there. And what we could do is share a link to some of the videos that were up there, because yeah, I was pleasantly surprised when I saw PyCon's videos come up on YouTube.

Wow, it came up really fast. There are, you know, really nice things about design pressure and how to think about your code. But yeah, beyond the existing green software field, there are people who seem to be quite new to it talking about this.

So that's encouraging.

Thibaud Colas: Yeah. And in some ways, you know, the whole negative impact of AI poses a problem for our whole industry, but with LLMs being so costly, so energy intensive, to train, in some ways it also helps people understand the implications better. Yeah, exactly.

And just build up awareness. So I think what you're referring to at PyCon US is the work on

Chris Adams: Yes. 

Yeah. Thank you.

Thibaud Colas: Yeah, machine learning in Python, quantifying the energy impact of that, and LLMs specifically. And, yeah, people like him, you know, he's involved with Google Summer of Code for Django, so

he's definitely in a position to help. Yeah. And I think it's just a matter, for us as open source people, of nurturing, you know,

those areas of expertise. Making sure we have those people having the conversations and, yeah, also sharing them in the wider sphere of the industry at large.

Chris Adams: And I suppose, I mean, one of the other things is that pretty much all the people you've mentioned who are going through the Summer of Code stuff are people who are in one of the regions seeing 50 degree Celsius heat waves and stuff like that. There's a kind of moral weight that comes from someone saying, "Hey, I'm experiencing this stuff and I'm in an area which is very much exposed to this," in a way that, if you are somewhat insulated from a lot of these problems, might not carry the same weight actually.

Wow. Thank you. I hadn't realized that.

Thibaud Colas: I really liked this parallel that one of my colleagues at Torchbox drew between our work in accessibility

and the war in Ukraine, talking about other big topics, where, you know, practically speaking, there is a war, it's horrendous, people are getting maimed

and they don't necessarily have the same life after.

And if you invest in accessibility, it means being better able to support people who go through the conflict with major harm. And, yeah, I think it's quite important for us in open source, when Lauri talks about high impact contributions, to hark back to those values you might have about helping others and realize the connection. Even though, you know, there are quite a few layers between me, a human contributor, and a Wagtail website, we can have that impact.

Chris Adams: Well, I guess that's the whole point of the web, right? "This is for everyone," like Tim Berners-Lee's message at the London Olympics, his massive thing. Okay, we're getting a bit teary and a bit carried away ourselves, and we're running short on time, so I should probably wrap this up before we go too far down that rabbit hole. Thank you so much for coming on for this. As we wrap up, are there any projects or things you've found particularly exciting of late that you would like to draw people's attention to?

Thibaud Colas: Definitely. I'd recommend people check out this Grid-aware Websites work that the Green Web Foundation puts together.

Chris Adams: I did not tell him to say that. Okay. Yeah. Thank you. 

Thibaud Colas: He did not. But, you know, it is actually really impactful in my mind to put together multiple CMS partners through this project. And on a personal basis, with this type of project,

I was really skeptical of the benefits at the beginning, and it's really interesting to get your thought process started on tangible ways to move the needle on new sites, but also existing ones. Specifically, there's the work we're doing for this project through Google Summer of Code; I think we'll have results to show for it in about a month's time, and hopefully it's reusable for other people.

Chris Adams: Yeah, there's actually, okay, now that you mentioned this, I've just gotta touch on it. There is actually a grid aware SDK which is currently out there, and you can think of grid aware as being quite comparable to carbon aware, basically, but with a few extra nuances.

The thing I should probably share is that this is actually work that has had a degree of funding from SIDN, which is a

Dutch foundation that has been trying to figure out what to do about greening the internet. So there are pockets of interest if you know who to speak to, and hopefully we should see more of these things bearing fruit over the coming months.
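
To make the grid aware idea concrete, here is a minimal sketch of what a grid aware response might look like. Everything in it is illustrative: the zone codes, intensity figures and function names are made up for the example, and real projects such as the Grid-aware Websites work query live grid data APIs with their own interfaces.

```python
# Hypothetical grid-aware page serving, sketched in Python.
def grid_intensity(zone: str) -> int:
    """Return grams of CO2e per kWh for a grid zone (stubbed with made-up values)."""
    return {"DE": 380, "FR": 55, "IN-WE": 630}.get(zone, 450)

def render_page(zone: str) -> str:
    """Serve a lighter page variant when the visitor's grid is carbon-intensive."""
    if grid_intensity(zone) > 300:
        return "low-carbon variant: no autoplay video, smaller images"
    return "full experience"

print(render_page("FR"))     # cleaner grid -> full experience
print(render_page("IN-WE"))  # dirtier grid -> lighter page
```

The nuance compared with plain carbon aware computing is that the decision is made per request, based on the visitor's own grid, rather than only shifting a workload in time or space.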

Alright, I don't wanna spend too much time talking about that, because we're coming up to time. Thibaud, thank you so much for giving us your attention and time, and for sharing your learnings from the worlds of Django, Python and Wagtail. If people are curious about what you're up to, where should people look?

Thibaud Colas: Simply enough, I'd love for them to join yet another thing you didn't ask me to mention, which is the ClimateAction.tech Slack. This is my favorite place to, you know, have this tight-knit community of tech people working on this stuff. Just DM me there, and I'll be very happy to answer any questions about any of this or just get you started with your own projects. For me specifically, otherwise, in the Wagtail world, the Wagtail newsletter is a good place to have this work come to you on a weekly basis. And, yeah, just LinkedIn otherwise.

Chris Adams: Brilliant. Thank you so much for this. I hope you have a lovely day and yeah. Hopefully we'll cross paths again soon. All right. Take care of yourself.

Thibaud Colas: Pleasure, Chris.

Chris Adams: Cheers. Okay, bye. 

 Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

4 months ago
42 minutes 35 seconds

Environment Variables
Cloud Native Attitude
Environment Variables host Anne Currie welcomes Jamie Dobson, co-founder of Container Solutions and author of the upcoming book Visionaries, Rebels and Machines. Together, they explore the history and future of cloud computing through the lens of sustainability, efficiency, and resilience. Drawing on insights from their past work, including The Cloud Native Attitude and Building Green Software, they discuss how cloud-native principles can support the transition to renewable energy, the potential and pitfalls of AI, and why behavioral change, regulation, and smart incentives are key to a greener digital future.

Learn more about our people:
  • Anne Currie: LinkedIn | Website
  • Jamie Dobson: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • The Cloud Native Attitude: Amazon.co.uk | Anne Currie, Jamie Dobson [01:21]
  • Building Green Software: O'Reilly | Anne Currie, Sarah Hsu, Sara Bergman [01:38]
  • Visionaries, Rebels and Machines: Amazon.com | Jamie Dobson [03:28]
  • Jevons paradox - Wikipedia [11:41]  

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


Jamie Dobson: We've loaded up all these data centers, we're increasing data sets, but ultimately, no matter how much compute and data you throw at an artificial neural network, I think it would never fully replace what a human does.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Anne Currie: Hello and welcome to the Environment Variables podcast, where we give you the latest news and updates from the world of sustainable software development. This week I am your guest host, Anne Currie; you don't have the dulcet tones of Chris Adams, you're left with me this week. So we're gonna do something a little bit different this week.

I have got an old friend, colleague and co-author, Jamie Dobson, in to talk about it. So Jamie is the co-founder and CEO of a company called Container Solutions, and he's the author of the soon to be released book Visionaries, Rebels and Machines, which I've read, and that's what we'll be talking a lot about.

And he's also one of my co-authors of a book I wrote nearly 10 years ago called The Cloud Native Attitude, which is about the principles of moving into the cloud. There's an awful lot in there about sustainability, and a lot we need to talk about around that. And it was actually, for me, the precursor to the book that came out with O'Reilly last year, with co-authors Sarah Hsu and Sara Bergman, Building Green Software, which, as I always say every week,

everybody listening to this podcast should read, because you'll find it very interesting and it couldn't be more germane. So today we're gonna talk about those three books, really, and the thematic links between them all, which are really about resource efficiency, building at scale without it costing a ridiculous amount of money or using a ridiculous amount of resources,

and also resilience, which is something we're gonna really have to focus on when it comes to running on renewable power. So, let me let Jamie introduce himself and maybe tell us a little bit about his new book, Visionaries, Rebels, I can never remember whether it's Rebels, Visionaries and Machines.

It's Visionaries, Rebels and Machines. Go for it, Jamie.

Jamie Dobson: Visionaries, Rebels and Machines. That's correct. Hello, Anne. Thanks for having me on the podcast, and hello to all your listeners who tune in every week. Yeah, so my name is Jamie. I am indeed the co-founder of a company called Container Solutions, but I'm no longer, I should say, the chief exec,

'cause I handed that role over about a year ago, which probably explains why I could find the time to finish writing this damn book. So Container Solutions is a company that specializes in cloud transformation: helping customers, you know, get off whatever infrastructure they're running on now and get onto efficient cloud infrastructure.

And if we do that right, then it's kind of green and sustainable infrastructure, but it's hard to get right, which I'm sure we're gonna discuss today.

Anne Currie: Indeed. Yes. So you've got a book that's about to come out, which I have read, but it's not yet available in the stores; it will be available in all good bookstores: Visionaries, Rebels and Machines. And the reason why I asked you to come on is because I think there are a lot of ideas in there that we need to be talking about and thinking about.

So tell us a little bit about Visionaries, Rebels and Machines, and then I'll tell you why I think it's interesting.

Jamie Dobson: Absolutely. Yeah. So, with Visionaries, Rebels and Machines, we have to start at a point in time, and that point in time is about four or five years ago, when I was asked the question, "What's the cloud?" The person asking me was a junior colleague, new to Container Solutions. And, you know, I started to answer, or at least I opened my mouth,

and of course I can answer that question, but I can't answer it necessarily succinctly. So I was asked the question, I think, probably around about June, so maybe about five years ago today, actually. And over the summer period I was thinking, "God, how do you answer that question? What is the cloud?" And so I started to creep backwards in time.

Well, the cloud is, you know, a bunch of computers in a warehouse somewhere. But what's a computer? And then, once I asked that question: well, computers are things made up of transistors. Well, what's a transistor? And the conclusion I came to over the summer was the following:

the cloud can only really be understood in its own historical context. And so, interestingly, once we got to the point of answering the question "What is the cloud?", the arrow was already flying. You know, an arrow was shot round about late Victorian times at Thomas Edison's Menlo Park facility in New Jersey, and that arrow flew all the way through the last century, through the web, through cloud computing, and it continues to fly with the rise of artificial intelligence. And so the last part of the book is: okay, now we know what the cloud is and what it does, where might it take us next in regards to artificial neural networks and all of that stuff? So that was the book. The visionaries and the rebels are the people who built teams, teams that were innovative. All of them had psychological safety, even though that concept wasn't known at the time. And so these historical figures are not just ancient history, not just Thomas Edison, but also the Jeff Bezoses of the world, the Reed Hastingses, the modern figures of cloud computing. The visionaries and the rebels can teach the rest of us what to do with our machines, including how to make 'em sustainable.

Anne Currie: And that is the interesting thing there. I enjoyed the book; it is a readable romp. And I very much connect with your initial motivation of trying to explain something that sounds simple, but then you realize, oh gosh, I'm gonna have to write an entire book to even get my own head around this. 'Cause that was true for The Cloud Native Attitude, the book we wrote together, which started 10 years ago and was kicked off in pretty much the same way. We were saying, well, what is cloud native? What are people doing it for, and why are they doing it this way? And Building Green Software,

the O'Reilly book, which is really germane to this podcast, was again the same thing. What does the future look like for systems to be sustainable? How do we get there? And that's always rooted in the past. What has been successful?

How did we get here?

Jamie Dobson: Absolutely. So you can't move into the future unless you understand your past. And I think the similarity between The Cloud Native Attitude and Visionaries and Rebels is the tone. So my book deals with horrible things, child poverty, the exploitation of people, and the truth is that a reader will put up with that for maybe one paragraph.

So if you want to teach computing, and how it can enslave the human race or not, or how it can liberate them, and touch all of these really difficult themes, you've got to do it in a pretty lighthearted manner. And the reason people are saying, "Oh, it's a page turner, it's entertaining, it's a bit of a romp,"

is because we focus on the characters and all the things that happen to them. And I think that started with The Cloud Native Attitude, because unless you can speak quite lightheartedly, you so quickly get bogged down in concepts that, even for people like us who work in computing and are passionate about computing, are just extremely boring. And there are some fantastic books out there right now about artificial intelligence, but they're so dry that the message fails to land. And I think I was trying to avoid that.

Anne Currie: And I know, 'cause we wrote The Cloud Native Attitude together. Books are ideally a form of leadership. When you write a book, you are kind of saying, look, this is what I want to happen in the future.

You're trying to lead people and explain and reason and inspire. But you have to inspire. If it's boring, you're not gonna lead anyone. No one wants to follow you to the boring location; they want to follow you to the exciting location.

Jamie Dobson: No, exactly. And I think the problem is computer people, most of us have been to university, so we're on the academic path. And what happens is you forget to tell stories. So everything becomes about what the research says, "research indicates." So it's all exposition and no narrative. And the problem with that is people switch off very quickly, and the paradox is that you don't make your point, because you've bored your reader to death.

Anne Currie: Yeah. And this is something that comes up for me over and over again in the green software movement: we quite often tell the story as if everything is very sad. And everybody goes, "Well, I don't wanna be there in that sad world." But it's not a sad story. I mean, climate change is a really sad story.

It's terrible. It's something we need to avoid. We're running away from something, but we're also running towards something. Because there's something amazing here, which is that renewables are so cheap. If we can build systems that run on solar and wind and a little bit of storage, but much less storage than we currently expect,

then we have a world in which there's really loads more power. We can do so much more than we do now, and it's just a matter of choosing what we do with it. We are not just running away from something. We're running towards something, which is amazing. And so, yeah, we tried to keep that tone.

And Building Green Software is designed to be funny. One of my reviewers says it's the only O'Reilly book where you actually laugh out loud whilst reading it. You could read it on the beach.

Jamie Dobson: This is exactly why we created a conference at Container Solutions called WTF: What The F is Cloud Native? It's basically because if you cannot entertain, you'll never get your message across. I've got a question for you, Anne, about this wonderful future that we're heading towards. I see it as well. But

in the research for Visionaries and Rebels, there was a big chapter I had on Henry Ford, and in the end it didn't quite make it into the book. Basically, once Edison had created electricity, then all of a sudden you had elevators for the first time. So the New York skyline did not become a thing till we had electricity, because there was a limit on how big the buildings could be. At that exact moment, Henry Ford came in with the motor car, and he was so successful in getting it off the production line cheaply that the beautiful boulevards of American cities, New York, St Louis, and places like that, ended, because basically people said, "Well, we don't need to be in the city.

We can drive to the suburbs." And a lot of historians were saying, if Henry Ford had just gone a bit slower, we would've adapted to the motor car quicker, and therefore the cities of today would look very different. And one of my concerns with green software is that

the speed at which we're moving with data centers and AI is so quick.

I wonder if we're having another motor car moment. The future's within grasp, but if we go too quick, might we screw it up on the way?

Anne Currie: So I think what you are circling around here is an idea that comes up quite often, which is Jevons Paradox: the idea that as you get better at using something and it becomes cheaper, you use more of it, because there's untapped demand.

So people were going, "Gee, you know, I really want to live in a high rise city, because then everybody can live together and it will be vastly better for us and we'll prefer it." And therefore we take more elevators and we go up, because we've got elevators.

And people really want cars. I mean, I don't drive, but everybody loves to drive. There's no point in tying green to nobody driving, because they love to drive. There was untapped demand for it, and therefore it was met. And remember, back then we didn't consider there to be any problem with using more petrol. We didn't consider there to be any problem with using fossil fuels, and everybody went, "Yeah, hooray! Let's use more and more of it."

But it did massively improve our quality of life. So in all green messages we have to say, well, we want the improvement in quality of life, but we also want a planet, and we have to optimize both of those in parallel.

We can't say that you're trading one off against the other. And I know that people have a tendency to look down on efficiency improvements, but efficiency improvements are what has driven humanity up until now. And efficiency improvements are so much more powerful than we think. We just don't understand how much more efficient things can get.

Jamie Dobson: Yeah. 

Anne Currie: And therefore we go, oh, well, you know, if people have 10 times as many cars or whatever, probably not 10 times as many. Well, compared to Henry Ford's days, we've got a lot more cars. We've got a lot more mobility. There is an almost seemingly limitless demand for cars. But there are plenty of other areas of life where efficiency has outstripped the demand.

So in terms of household electricity use in the West in the past 20, 30 years: despite the fact that everybody has automated their houses, everybody's got washing machines and dishwashers and tumble dryers and TVs, electricity use has still gone down.

And the reason why it's gone down is because all of those devices appeared, but then became more and more efficient. Efficiency improvements really are extraordinarily powerful, much more than people realize. And it's not free, it requires an enormous amount of work, but if people are motivated and incentivized to make those efficiency improvements, we can do an awful lot.

Jamie Dobson: My suspicion is the world will change. So, not many people realize that the car was actually very good for the environment. All around London, my children ask me, "What's that thing outside the house?" It's a scraper for your feet, for your boots. And that's because all the streets of London were caked in horse manure two inches deep,

and at the end of every single street it was piled high. So the public health issues with horses were an absolute nightmare, not to mention the fact that people used to get kicked in the head or pulled into ditches. Fatalities from horses were, you know, a weekly occurrence in New York City. But then it changed. Once we got electricity, we got the lifts, and the horses went away.

 My suspicion is right now we cannot run a sustainable culture or city without radically changing things.

 So, for example, did you ever stop to wonder why your power pack is warm? You know, when you charge your phone or your laptop, why does it get warm? Do you know what the answer to that question is?

Anne Currie: No, I don't actually. That's a very good question.

Jamie Dobson: There you go. So who won? Who won the battle, Tesla or Edison?

Anne Currie: Tesla. 

Jamie Dobson: Tesla did win. So it's basically AC versus DC: what's the best system to have? Well, DC, direct current, kills you; if you touch direct current by accident and the voltage is right, you die. But what you feel on the back of your charger is heat, which is a side effect of converting AC back to DC, because computer devices don't work on AC; the current has to go round and round, like water in a fountain, because that's the only way transistorized things work. So now people are saying, well, actually, arguably we should have a DC grid, because globally we are wasting so much electricity through this excess heat that is produced when we go from AC back to DC. And do you remember, when we were kids, if you put your washing on at three in the morning, you got cheaper electricity?
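
As a rough illustration of Jamie's point about conversion losses, here is a back-of-envelope calculation. The efficiency figure is an assumption for the sketch; real AC-to-DC adapters vary, often somewhere in the region of 85 to 95 percent.

```python
# Back-of-envelope: heat dissipated by AC-to-DC conversion in a charger.
laptop_draw_watts = 60        # what the device actually consumes (DC side)
adapter_efficiency = 0.90     # assumed conversion efficiency (illustrative)

wall_draw = laptop_draw_watts / adapter_efficiency
waste_heat = wall_draw - laptop_draw_watts
print(f"~{waste_heat:.1f} W lost as heat per charger")  # ~6.7 W
```

Multiplied across billions of chargers and power supplies, that per-device waste is the scale of loss behind the DC grid argument.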

I cannot help but think it's not just about renewable energy; it's also about changing the way we consume energy to make it more effective.

Anne Currie: Yeah.

Jamie Dobson: And I think if that doesn't change... I basically think, when Edison arrived, society as we knew it absolutely changed. We had no refrigerators, and that changed our behaviors.

Now, some people would say, well, you became a slave to the machine. I think that's a little bit too far, but we certainly went into some sort of analog-digital relationship with the machines we work with, all of which drive efficiencies. I think the next chapter for sustainable energy and computing will be a change in our habits, but I don't know exactly what they're gonna be.

Anne Currie: Oh, that's definitely a thing. It's something I've talked about on the podcast before: the mind shift from fossil fuels, which are kind of always on, easy to dispatch, so easy to turn on and easy to turn off, to solar and wind, which is really expensive to store,

and really cheap if you use it as it is generated. But grids were designed, and in many ways this is the same kind of thing that you talk about in your book, grids were originally designed specifically to provide power that was easily dispatchable, you know, fossil fuels.

And that means that the whole philosophy of the grid is about something called supply side response. And all that is basically saying is, "You know, users, you don't need to worry." Flick a switch and the electricity will always be there, and it's the responsibility of the providers of the electricity, of the grids, to make sure that the electricity is always there to meet your demand.

You never have to think about it. But for renewables, it's generally agreed that what we're gonna have to do is move to something called demand side response, where users are incentivized to change their use to match when the sun is shining and the wind is blowing. As you say, when we were kids in the UK, we used to have something called Economy 7.

You had seven hours a day of cheap electricity, usually at night, because back then, I'm guessing, it was coal fired power stations, and coal fired power stations were not so easy to turn off and on again, the way gas is. So we don't have it anymore. But in those days, the coal fired power station was running during the night and nobody was using the power.

So we wanted to get people to try and use the power during the night, and we used, effectively, what are now called time of use tariffs to incentivize people to use spare power, which in the UK was during the night.

Jamie Dobson: It sounds like a huge dislocation to life, but when I first came to London, the London Mayor or the authorities made an announcement something like this: "Oh, air pollution's really bad today. Don't go out running, close your windows. Old people, don't go out, don't do any exercise."

And I remember thinking, "This can't be real. Is this some sort of prank?" But this is a thing in London. And I remember thinking, at no point would the Mayor of London say, "Okay, the air pollution's bad, you're not allowed to drive your car today," right? And it showed where the priorities lie. But it wasn't that difficult.

So everybody just shrugs their shoulders and says, "Oh, well, okay, I just won't do any outdoor activities today." So I think that demand side response is possible. I do wonder what happens, though. Obviously, when the sun's shining, that's the time you should run your data centers. What happens when the sun's not shining?

Are the cloud providers gonna be happy to have an asset sat there doing nothing when it's dark, for example, or when the wind's not blowing?

Anne Currie: Well, it's interesting. I think it's all about the level of difference in electricity cost between the time when the sun is shining or the wind's blowing and the time when it isn't. I'm massively impressed by work that India is doing at the moment on time of use tariffs, because they know what they're looking at going forward. India is one of the fastest growing countries in the world for rolling out solar power, unsurprisingly, 'cause it's pretty sunny in India. So they're looking forward and they're thinking, well, hang on a minute.

You know, we are gonna have this amazing amount of solar power in the future, but we are going to have to change people's behaviors to make sure that they run on it, not the other thing. And as you say, people will change behavior; they just need a little bit of a push and some incentives.

The incentive they're using is time of use tariffs. India is pushing all of the states to introduce time of use tariffs which reflect the actual cost of electricity and push people towards the times of day when it's cheap. It is a gradual process, but you can see that it will roll on and on, and they're looking at a tenfold difference: what they're saying is that the price difference should be tenfold between when your electricity is generated from the sun and when it isn't.

And a tenfold difference in price does justify a lot of behavioral change. You might not want to turn off your data center during the night, but some people will go, well, hang on a minute. For most data centers, the main cost is electricity. If there's literally a tenfold difference in electricity cost between the day and the night, then they'll start to adapt, start to do less and start to turn things down.

Necessity is the mother of invention. If you give flat tariffs to everybody, they're not gonna make any changes. But if you start to actually incentivize demand side response, it will happen.
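
To sketch what that data center adaptation might look like in practice, here is a minimal, illustrative Python scheduler that defers non-urgent batch work into cheap tariff windows. The tariff table, threshold and job name are all made up for the example; a real system would pull live prices from its supplier.

```python
from datetime import datetime, timedelta

# Hypothetical time-of-use tariff: price per kWh by hour of day, with a
# tenfold gap between cheap solar hours and the rest, as Anne describes.
TARIFF = {hour: 0.02 if 10 <= hour < 16 else 0.20 for hour in range(24)}

def next_cheap_window(now: datetime, threshold: float = 0.05) -> datetime:
    """Return the next time the tariff drops below the price threshold."""
    candidate = now.replace(minute=0, second=0, microsecond=0)
    for _ in range(24):
        if TARIFF[candidate.hour] < threshold:
            return max(candidate, now)
        candidate += timedelta(hours=1)
    return now  # no cheap hour found within a day; run anyway

def schedule_batch_job(job_name: str, now: datetime | None = None) -> datetime:
    """Defer a non-urgent job (e.g. a transcode queue) to cheap hours."""
    now = now or datetime.now()
    run_at = next_cheap_window(now)
    print(f"Deferring {job_name!r} until {run_at:%H:%M} at {TARIFF[run_at.hour]:.2f}/kWh")
    return run_at

schedule_batch_job("video-transcode-queue")
```

The point of the sketch is only that once the price spread is wide enough, this kind of deferral logic becomes worth writing.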

Jamie Dobson: Then of course that comes back to regulation, doesn't it? Because one of the things that Edison, well, actually it was his colleague Samuel Insull, realized is that it makes no sense to run the grid unless it's some sort of public utility or a natural monopoly. And you can only really fairly run a natural monopoly if the price is negotiated and set in public and the whole industry is regulated. So do you think these time of use tariffs will become part of the regulatory framework of governments?

Anne Currie: Oh, yeah, I mean, it already is. As I was saying, in India it's a regulatory thing. It is part of the industrial strategy of India.

Jamie Dobson: Then indirectly the cloud providers will be regulated, because they'll be regulated through the supply of electricity.

Anne Currie: Indeed. Yeah. It's interesting; there's a battle in some European countries happening at the moment. I think Spain already has time of use tariffs; there are other countries that have them, and they change behavior. And in the UK there is a battle at the moment between suppliers about whether time of use tariffs are introduced.

That battle is being spearheaded by the CEO of Octopus Energy in the UK, Greg Jackson, I think, who is really saying, "Look, this is what we need to do." Because, in the UK, it is ridiculous: the government really doesn't want them; they fear that everybody will panic and not be able to handle a time of use tariff,

even though we used to have them not very long ago.

Jamie Dobson: It's ridiculous. People always panic about public sentiment, but you just need to look at COVID to see how flexible people can be when they understand the need for it. That's number one. And number two, when I was a kid, and that's only 40 years ago, we used to turn the lights off 'cause it was too expensive.

So we did have different behavior in the evening, when we needed more electricity, than in the daytime, when we didn't. It's not that difficult to imagine. Do you know what made me laugh? The average serving of meat in the 1970s was, I think, 200 grams. And if you look at 200 grams, it's actually quite tiny.

It sits on your plate like a little sliver of lamb. I was like, "Oh my God, that's not enough food." But then you realize that is what we all used to eat, only 30 or 40 years ago. So everything's slowly been supersized, including what we expect from the electricity companies. I think with a gradual shifting back, you'd barely notice it. And that's exactly how the government took salt out of our diet: they just slowly regulated how much salt could be in processed food until it had gone altogether.

Anne Currie: Yeah, but I think you have to be careful about how you pitch this. One of the issues with green is that it's pitched as a reduction in meat, a reduction in this, a reduction in that. I don't think it has to be a sad story. It has to be a good story,

a hill that we want to take because it's worth taking, not just something that we're running away from. I like the time of use tariff approach in India because it's saying, if you do this, you'll get electricity which is a tenth the price. It's a win.

It's not just run away from the bad thing; it is run towards the good thing. And you're not saying, "Change your behavior because we're ordering you to do it," or because we're going to make electricity much more expensive. Although, inevitably, fossil fuel electricity will become more expensive, because it is naturally more expensive these days.

Renewables have become so cheap.

Jamie Dobson: Could cloud computing become a forcing function for cheaper electricity? Because the cloud providers need so much electricity, could this possibly accelerate the race to green energy?

Anne Currie: Well, it definitely can, and it has done in the past. Until maybe five years ago or so, the biggest non-governmental purchaser of renewable power in the world was Google. And they were buying and bankrolling renewable power for their data centers.

They're not the biggest non-governmental purchaser of renewables anymore, because it is now Amazon, to power their data centers, because they got a long way behind and we all made a giant fuss about it and said, well, why aren't your data centers green? And so they put a whole load of money into renewables.

A lot of the reason why there's an enormous amount of renewables these days, and an enormous amount of investment has gone into them, was because of the cloud vendors. Now, that is not because the cloud vendors are all secretly social justice warriors. I mean, they did it for their own benefit. But they did do it.

Jamie Dobson: That's another pattern that recurs. At the turn of the last century, so many entrepreneurs were sat on so much money that class unrest was really bubbling. So all of a sudden you got the subway in New York, the subway in Paris, the municipal control of transportation, all kinds of stuff.

And then you're left thinking, "Oh, were they all do-gooders? Was that the reason they did that?"

Some of them may have been, but mainly they were trying to avoid class unrest. And so it's interesting that a good outcome can come on the back of self-interest. That is true, isn't it?

Anne Currie: Yeah, it is true. And it's very hard to know what the unintended consequences, positive and negative, of all behaviors are. A lot of investment in early stuff becomes wasted later. You mentioned subways; take railways in the UK and worldwide:

lots of early investment in railways resulted in loads of overprovisioning of railways. And then, as things got a bit more efficient, everybody goes, well, actually you only need one train to go between London and Edinburgh, and not 16 different trains on different lines. You get some kind of consolidation and improvements in efficiency, and that's how things actually become cost effective, because overprovisioning is very cost ineffective.

Jamie Dobson: Well, that's true, but that is a very cheeky way to transfer money from rich people to poor people, because obviously what happened is rich people invested in the railways, railways were overprovisioned, those people never got a return, and the rest of us were left with cheap railway infrastructure. Exactly the same happened with the internet. Everyone's like, right, we've gotta wrap the world up in optic fibers. Private companies came in, private investors came in, paid for all of that. Then we had way too many optic fiber cables, and now we've all got practically free internet access. So occasionally it goes either way.

Anne Currie: Yeah, and I have to say, I see the same thing with AI. AI is interesting, 'cause on the one hand I rail against how unbelievably inefficient AI is at the moment. There's an awful lot of talk about, oh, we'll have to build nuclear because we need it for AI, and all that kind of stuff, and hopefully we'll steer people towards doing it with nuclear and with solar and wind rather than fossil fuels. But in the end, there's so much wasted, inefficient code in AI that AI is going to need a fraction of the power that we initially build for it. At the moment I'm talking to people who are doing measurements of the differences between different AI models that do an equivalent amount of stuff.

The ones that are optimized: 10,000 times more efficient, 600,000 times more efficient. I've even heard a million times more efficient. There's so much waste in AI at the moment.

Jamie Dobson: Absolutely, and I think people are not focused particularly on theoretical breakthroughs. So Geoffrey Hinton came up with the back propagation of errors in neural networks, I think it was about 1983. That's in the book, by the way. And that was a breakthrough. That theoretical breakthrough's got nothing to do with computing power or anything. It's a theoretical breakthrough. Right now we're desperate for something like that. So

we've loaded up all these data centers, we're increasing data sets, but ultimately, no matter how much compute and data you throw at an artificial neural network, I think it would never fully replace what a human does.

So I think it's nice to know that as we lay down this computing infrastructure, fingers crossed all of it powered by renewable energy, in the background researchers will be chipping away at the next theoretical breakthrough. And I think those breakthroughs have to come for artificial intelligence, because I think there will be limits to what you can do with generative AI.

And I think we're probably reaching those limits right now.

Anne Currie: Well, improving AI efficiency does not require massive theoretical breakthroughs. It can be done using the same techniques that we've used for 30 years to improve the efficiency of software. It is just software. I mean, if you look at DeepSeek, for example: DeepSeek had to make their AI more efficient because the Biden administration said they couldn't have the fancy chips.

So they just went, "Oh, we can't have the fancy chips, so we're just gonna make some software changes." And they did it like that, effectively. They're a tiny company, and they increased the efficiency tenfold pretty much instantly. And they used three different methods, one of which is probably Max House, and that was probably most of the 10x.

As for the others, there's still so much room for additional efficiency improvement. They got rid of overprovisioning: they moved from 32 bits of precision to 8 bit precision, 'cause they didn't need the 32 bit. That was a classic case of overprovisioning. So they've removed the overprovisioning, and that's been known about for years.

That's not new. AI engineers have known for years that 32 bit is over-egging it and that they could run on 8 bit. So they didn't do anything new. They didn't have to do any new research. All they had to do was implement something that was well known, but people just couldn't be assed doing.
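
For readers who want to see what the 32-bit to 8-bit move means mechanically, here is a minimal sketch of linear weight quantization in Python with NumPy. It is illustrative only, not DeepSeek's actual method; the matrix shape and the symmetric scheme are assumptions for the example.

```python
import numpy as np

# Toy weight matrix stored at full 32-bit precision.
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)

# Symmetric linear quantization: keep int8 values plus one float scale,
# cutting memory and bandwidth for these weights by roughly 4x.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# Dequantize on the fly when the weights are needed for a computation.
weights_restored = weights_int8.astype(np.float32) * scale

print(f"fp32: {weights_fp32.nbytes / 1e6:.1f} MB, int8: {weights_int8.nbytes / 1e6:.1f} MB")
print(f"max quantization error: {np.abs(weights_fp32 - weights_restored).max():.4f}")
```

The saved bits translate directly into less memory traffic and less energy per inference, which is why the technique was well known long before anyone was forced to use it.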

Jamie Dobson: Yeah, all of this noise will soon die down, and people behind the scenes, away from the attention grabbing headlines, will continue to crack on with these things. And so my prediction is that everyone's gonna be pissed off in the next six to 12 months: "AI failed to deliver."

But in the background, use cases will get pieced together.

People will find these optimizations, they'll make it cheaper. And I do reckon, ultimately, generative AI will sink into the background, in just the same way that nobody really talks about the internet, right? It's the web, or it's mobile phone applications that do something, sat on top of the computer network infrastructure. I think that's probably what's gonna happen.

Anne Currie: I suspect that generative AI is not going to entirely disappear, because, many years ago, I worked in the fashion industry. I worked for a company that was one of the first pure play internet e-commerce companies. And because it was fashion, we used a lot of photography.

An awful lot of photography, and we had a whole team of editors. A surprisingly large number of people in the world edit photographs, so you know there's a huge demand for making that easier.

The downside is that, even now, the photographs you see online represent people who do not exist. All the models you see, the model is kind of based on a person, but...

Jamie Dobson: Lots of people, isn't it? So I think that generative AI stuff will remain, but I think it will become specific. So, for example, I saw yesterday that the government are piecing together a number of different tools, let's call that the substrate, but on top of that, it's to give civil servants a conversational interface about past policies:

can you summarize this for me, can you suggest a new policy? Which is dangerous, because any decision based on past data is a reflection of what was, and not necessarily a vision of what could be. So I think that's probably what's gonna happen, but I could be wrong, because the truth is none of us actually know.

It's all speculation at this point.

Anne Currie: Yeah, so before we go, well, actually we've still got a bit of time, but before we go, I want to focus a little bit on what I see as the themes that run through the creation of the internet and of modern technology in Visionaries, Rebels and Machines, The Cloud Native Attitude, and Building Green Software.

I think a lot of the themes there are about trying to deliver your results, the thing that you want, the thing that's gonna improve your life, or the thing that people think is gonna improve their life, with fewer resources, because that's the only way it scales. The cloud was essentially all about that. It came out of Google, which was the first hyperscaler, and Google was saying, well, actually, we really need to deliver our services at incredible scale, but there's a limit to how much money we can spend on doing it.

So we have to do it using operational efficiency and code efficiency, so that we deliver this service on fewer resources, and also resilience, you know, because things fail at scale and therefore we need to be resilient to failure. But that efficiency and that ability to respond to changing circumstances is exactly what we need to handle the energy transition.

Jamie Dobson: So I think the common theme that goes all the way from Thomas Edison to the teams building systems using AI now is that technologies change, but human nature doesn't. The way those teams were managed has been absolutely consistent. I think one of the great contributions of Visionaries and Rebels is to show people that you don't need to change the way you manage your techies, because these are success stories spanning 150 years. The second theme is that once the foundations are laid, it's not the creators of a technology that dictate its destiny, but the users. So once we had a grid, boof, people started inventing applications. Exactly the same once the internet was there: people started inventing web applications. And once the cloud was there, we had Netflix, and then we had Starling Bank and all the things built on top of the substrate. So I think, for sure, what comes next for sustainable computing will not necessarily be dictated by those building cloud infrastructure. The teams out there, the safe teams, the innovative teams taking risks, I think they will find the use cases. They will dictate what happens next.

Anne Currie: Well, that's interesting, 'cause that instantly reminds me of the approach India are taking, which we already talked about, where you say, well look, we'll incentivize people to stick a whole load of renewable power into the grid. We've got the grid;

the grid just distributes the power. And we introduce those incentivizing time of use tariffs, and we say, look, you know, there's really cheap energy at these times, fill your boots, you decide what you're gonna do with it. And then we just leave the users of the grid, the users of those time of use tariffs, to work out what's gonna happen.

Jamie Dobson: And I think people will look to India. I think everybody looks at other countries that are doing these experiments. So if it works out in India, then of course you could imagine that other countries might say, "Oh, well, that's actually worked out over there. We can copy that as well." But ultimately they're building on existing infrastructure.

You know, they say, well, this is what we've got; what does the interface with our users look like? And by making a change there, they will change user behavior somewhere else.

Anne Currie: Yeah.

Jamie Dobson: It's hard to predict, though. It's hard to predict.

Anne Currie: It is hard to predict. It's interesting; something that comes up in grid discussions quite often is this whole idea that, in some ways, countries that are less developed than America and the UK are in a much better position for the energy transition, because governments can just go, "We'll have time of use tariffs."

It's not that far a leap for people quite used to microgrids; they're quite used to things fluctuating. They haven't got used to everything being available at the flick of a switch and a hundred percent reliable. Reliability, to a certain extent, breeds fragility. It breeds people who've forgotten how to handle change.

Jamie Dobson: Yeah. So of course there are places in the world that have got cell phone infrastructure but never had fixed-line telecommunications infrastructure, because by the time they came around to installing it, cell phones were a thing, so they just completely skipped that whole step in technology. We've still got phone boxes in the UK that nobody knows what to do with. They're on the street corners, growing moss, and that's a legacy, exactly like the mud scrapers outside of people's houses I mentioned earlier. These are a legacy of previous infrastructure: horses in the case of the scraper, and then the telephone boxes in the case of cell phones. So I think it's true that India probably has got places that are either off grid or nowhere near as reliable as what we have, for example, in the UK. So then it makes sense that the government can be more experimental, because the people are not gonna lose anything. There's nothing to lose.

There's only gains.

Anne Currie: Indeed. Yes. And in fact, it is interesting that introducing time of use tariffs in the UK is now controversial, because we have become strategic snowflakes. They fear that we can't change, although I think they're wrong. In fact, time of use tariffs were totally fine 30 years ago,

and nobody died as a result of Economy 7 heating.

Jamie Dobson: There's an absolute relationship between the reliability of a system and how spoiled its users become. When I first went to the Netherlands, the train would be two minutes late and people would literally stamp their feet on the ground in anger, right, and swear in Dutch about the state of the NS. Coming from the UK, it's like, "Well, whatever."

Exactly the same happened when the video store came along. Most people were used to consuming media as and when they chose to, but the video shop only had limited copies of new releases. The frustration that created in users of video stores is exactly what led to Netflix's creation. So the more reliable something is, the more complacent its users become, and the higher their expectations of the system. But I think COVID taught the UK government that we can be way more flexible than they fear we are.

Anne Currie: Yeah, I agree. Except actually I don't think they learned that lesson, because they immediately forgot it again.

Jamie Dobson: Apparently there are loads of lessons they didn't learn, 'cause apparently we're less ready for a pandemic now than we were before COVID.

Anne Currie: Yeah. It is amazing how many lessons we didn't learn. But I think that takes us to a final thing we should discuss, which comes out of what you've just said about resilience, and which we talk about a little bit in all three books: chaos engineering. It's the modern approach to resilience, and the idea is that, ironically, you get more resilient systems by building them on top of systems that you don't expect to be a hundred percent reliable.

The expectation of a hundred percent availability, of supply side response, builds, in the end, more fragile systems.

Jamie Dobson: The fragility has to go somewhere. So the more resilient the system is, the more fragile its users are. And the converse is true: the more a system fails, the more flexible its users become, and the more workarounds they have, because they're not sure if it's gonna be ready. One of the key lessons I took whilst putting Visionaries and Rebels together could be distilled into one sentence: a system that doesn't fail during its development will fail catastrophically in production. And so what you're left with is the electricity grid, the internet, cloud computing; they're so amazingly resilient and reliable, they are literally always there, and you do start to take them for granted. But the paradox is that if you want to create resilient systems, you've got to simulate failure in order to learn how to deal with failure, and therefore avoid it in the future. It's all a little bit circular really.

Anne Currie: Yeah. Yeah. So the irony is that exposing end users to fluctuations in the availability and price of renewable electricity sounds scary, but it will produce, in the end, a more resilient society; a more resilient system on a countrywide scale.
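For readers who want to see the idea in code, here is a minimal, illustrative Python sketch of the failure injection at the heart of chaos engineering. Everything here is hypothetical (the function names, the failure rate, the fallback), and real chaos tooling and retry policies are far more sophisticated; the point is simply that deliberately injected failures force you to write the fallback logic you'll need when a real dependency goes down.

import random

def flaky(call, failure_rate=0.2):
    """Wrap a service call so it fails on purpose some of the time."""
    def wrapped(*args, **kwargs):
        if random.random() < failure_rate:
            raise RuntimeError("chaos: injected failure")
        return call(*args, **kwargs)
    return wrapped

def fetch_report():
    # Stand-in for a real call to a downstream service.
    return "report data"

def fetch_report_resiliently(attempts=3):
    # Because failures now happen during development, we are forced to handle them.
    unreliable_fetch = flaky(fetch_report)
    for _ in range(attempts):
        try:
            return unreliable_fetch()
        except RuntimeError:
            continue  # a real system might back off or serve a cached copy
    return "stale cached report"  # degrade gracefully instead of crashing

print(fetch_report_resiliently())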

Jamie Dobson: And in your opinion, what's the relationship between these types of tariffs, demand side behavior, and cloud computing? Where is the link there?

Anne Currie: Well, I mean, data centers are users of the grid. They are prime users of electricity. If we make a tenfold difference in price, and I don't think anything less than a tenfold difference is going to work, we will start to see behavioral change.

We will start to see data centers go, "do you know, is there a way that we can reduce the number of machines that are running?", because at that point the cost will start to bite. So we need to get to a point where the difference in time of use tariff costs makes it worthwhile to switch operations to when the sun is shining and the wind is blowing.

But that is what we have to do, because we need the demand side response behavior. We need that change in behavior from users, so we have to make it worth their while.
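To make that concrete, here is a minimal Python sketch of the demand side response Anne describes: shifting a deferrable batch job into the cheapest hours of a time of use tariff. The price forecast is invented for the example; a real scheduler would pull prices, or grid carbon intensity, from a supplier or grid API.

def cheapest_window(prices, duration):
    """Return the start index of the cheapest contiguous block of `duration` hours."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - duration + 1):
        cost = sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

# Hypothetical pence-per-kWh forecast for the next 12 hours; the cheap
# stretch in the middle stands in for a windy, sunny afternoon.
forecast = [28.0, 26.5, 24.0, 9.0, 8.5, 8.0, 12.5, 22.0, 30.0, 31.5, 29.0, 27.0]

start_hour = cheapest_window(forecast, duration=3)
print(f"Run the 3-hour batch job starting at hour {start_hour}")  # prints hour 3

The same shape of logic works if you swap price for grid carbon intensity, which is broadly how carbon-aware job schedulers decide when to run.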

Jamie Dobson: You're gonna use economic nudges to make data centers consume green energy, right? So that's the energy side of the equation. What do we do about water supply? I don't know if you realize, but lots of data centers have been refused planning permission

because they would drain fresh water away from people's houses, and governments, you know, are not ready to take that on the chin.

So what are your thoughts on the water issue?

Anne Currie: Well, again, that is a known issue. At the moment, if they don't have to change, they won't. Cooling using water is very cheap and easy, and therefore that's what they do.

That is the default. But there are alternatives. I mean, Intel is a bit of an old fashioned chip these days, and it runs very hot; the Nvidia chips are very hot too. But there are chips coming out that are much more efficient and much cooler, and that are often designed to be air cooled, not water cooled.

So the technology exists for chips that don't get so hot that they require water cooling. The future is chips that can be air cooled. And if they can be air cooled, they're cooled with aircon, and aircon can be powered by solar, because obviously it's when it's hot and sunny that you have the biggest cooling problem; when it's not sunny and warm,

it's less of an issue. So the solution here is better, more efficient hardware that can be air cooled, for most hardware at least. I think that has to be at least a big part of the solution.

Jamie Dobson: Does the future involve huge data centers that fall under government regulation? Because one of the reasons why the electricity grid became a natural monopoly is that it made no sense to put six sets of cables down. There wouldn't have been enough space in the street, and the electricity providers couldn't get economies of scale, and therefore could not pass on cheap electricity to their users, and therefore electricity would never have become widespread.

So is there a similar argument for the cloud providers presently?

Anne Currie: I have to say I'm a huge believer that we just do it through pricing, and that we want data centers to be close to the power. In Scotland, we pay wind farms to turn off. We spend billions and billions of pounds every year paying wind farms to turn off because there is no user for that power within easy reach of the wind farm.

And we're only talking about Scotland. We're not talking about Siberia. 

Jamie Dobson: I think we could build a data center there.

Anne Currie: Why don't we build a data center there?

Jamie Dobson: They've got plenty of wind and water.

Anne Currie: And an extremely well educated workforce. And it's a bit cooler up there as well, so you don't need to do quite so much cooling anyway. But there's no incentive, and while there's no incentive,

people won't act. Once there is a really juicy incentive in place, you know, a 10x difference in price, we will see behavioral change. Humans are very good at changing their behavior, but only if there's a good reason to do so.

Jamie Dobson: Yeah, absolutely. Yeah.

Anne Currie: And actually that kind of brings us to the end of our hour.

And I think we've had a really interesting discussion. I hope the listeners, and potentially, in the future, readers, have enjoyed the discussion. All the links for everything we talked about, all the books, all the comments, will be in the show notes below, so you can go and have a look.

And yeah, actually, you know, you can pre-order Jamie's book, Visionaries, Rebels and Machines, on Amazon or at any good bookshop now. You can also buy The Cloud Native Attitude or Building Green Software, which you can also read for free if you have an O'Reilly subscription. And when I get round to it, I'm eventually going to create a Creative Commons version of Building Green Software.

Everybody should be kicking me all the time to do that, because it's just a bit of work that I need to do. Anyway, so Jamie, thank you so much for being on the podcast. I've really enjoyed our chat. Is there anything final you wanna say before we disappear off?

Jamie Dobson: Nothing final for me. On the book launch, there'll be a launch party in London at some point, and it's available on Kindle. But for now, I'm just happy to get, you know, feedback. It's been great to talk to you today, Anne, and I really hope your listeners took something away from this.

Anne Currie: So I hope people enjoyed the conversation. It was a little bit of an authors' book club, so a bit different to normal. But I hope you enjoyed it, and let us know if you want to hear more of this kind of discussion. Thank you very much and, until we meet again, goodbye from me.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

Show more...
5 months ago
52 minutes 7 seconds

Environment Variables
How to Explain Green Software to Normal People
Host Chris Adams speaks with James Martin about how to communicate the environmental impact of software to a general audience. Drawing on his background in journalism and sustainability communications, James shares strategies for translating complex digital sustainability issues into accessible narratives, explains why AI's growing resource demands require scrutiny, and highlights France’s leadership in frugal AI policy and standards. From impact calculators to debunking greenwashing, this episode unpacks how informed storytelling can drive responsible tech choices.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • James Martin: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

News:
  • Environmental Footprint Calculator | Scaleway [14:19]
  • AI on a diet: how to apply frugal AI standards? - Schneider Electric Blog [26:03] 
  • Frugal AI Challenge | Hugging Face [33:33]
  • Greening digital companies: Monitoring emissions and climate commitments 

Resources:
  • Why Cloud Zombies Are Destroying the Planet | Holly Cummins [14:47]
  • European Sustainability Reporting Standards (ESRS) [21:22]
  • EcoLogits [21:54]
  • Empire of AI - Wikipedia [29:49]
  • Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI | Sasha Luccioni et al. [30:38] 
  • Sam Altman (@sama) on X [31:58]
  • Référentiel général d'écoconception de services numériques (RGESN) - 2024 [37:06] 
  • Frugal AI 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

James Martin: When I hear the term AI for Good, which we hear a lot of at the moment, I would challenge that, and I would encourage people to challenge it too, by saying "are you sure this AI is for good? Are you sure this tech is for good? Are you sure that the good that it does far outweighs the potential harm that it has?"

Because it's not always the case. A lot of the AI for good examples we see at the moment just can't be backed with scientific data at all.

And that comes back to another of my points. If you can't prove that it's for good, then it's not, and it's probably greenwashing.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Our guest today is James Martin, a content and communications expert who has spent years translating complex tech strategies into compelling narratives that drive change.

From leading communications with a special focus on sustainability at Scaleway, to founding BetterTech.blog, James has been at the forefront of making green tech more actionable and accessible. He's spoken at major climate and tech events, most recently ChangeNOW. He's written a comprehensive white paper on green IT, and played a key role in GenAI Impact, a French NGO working to measure the impact of AI. He's also a Green Software Foundation champion.

So, James, thank you so much for joining the podcast. It's really lovely to see you again after we last crossed paths in Paris, I think. I've tried to introduce you a little bit, but I figure there are maybe some things you might wanna add. So, can I give you the floor to introduce yourself and talk a little bit about yourself?

James Martin: Yeah, thanks very much, Chris. First and foremost, I just wanted to say I'm really happy to be on this podcast with you, because this podcast is one of the things that really got me excited and started me off on my green IT adventure. So thanks to you and Anne for putting out all these amazing episodes.

Basically, I'm speaking today in the name of BetterTech, which is my blog, which I founded in 2018. I've been a journalist for most of my career, and for about 15 years I was writing for a French cultural magazine. I had a page in it every two weeks. I started off writing "here's a new iPhone, here's a new console," and after a while I got a bit bored of saying the same thing every time. So I was drawn towards more responsible topics, like how do you reduce your screen time, how do you protect your data?

And also, of course, what is the impact of technology on the planet? So that started in that magazine, and then I got so into it, I founded my own blog on the topic.

And then that was pretty much when an opportunity came up, in 2020, 2021, to work at Scaleway. I thought that sounded really interesting, because it is a European cloud provider, so not American, and also they were already communicating a lot about the sustainable aspect of what they do. So, yeah, I was very happy to join them and lead their communications from 2021, with this huge focus on sustainability.

Chris Adams: Ah.

James Martin: Yeah, that's basically where it started. At that time, Scaleway had its own data centers, and one of them, called DC5, is one of the most sustainable in Europe, because it doesn't have air conditioning, so it uses a lot less energy; it has adiabatic cooling instead. So we focused a lot of our communication efforts on that. But then, after a year or two, Scaleway decided to sell its data centers, so I had to look at what other ways I could talk about sustainability in the cloud. From digging around into green IT, especially into some Green Software Foundation resources,

I basically understood that it's not just data centers; it's hardware and software too. A pivotal moment was meeting Neil Fryer from the Green Software Foundation at a conference. I got him to come and speak at Scaleway, to people like me who were concerned about the impact of tech. And that led to the white paper you mentioned, which I wrote in 2023, and which is basically about how engineers can reduce the impact of technology. That led to speaking opportunities, and to realizing that, yeah, I'm not a developer, I'm not an engineer; I may be the first non-developer on this podcast. So I can't build green tech, but I can explain how it works, and I think that's an important thing to be able to do. If we want to convince as many people as possible of how important this is, then it needs to be communicated properly.

And, yeah, so that's what I've been doing ever since.

Chris Adams: Okay, thanks. So I appreciate that you're coming here as someone who's not a full-time techie using GitHub on the daily and everything like that, because that means you get a chance to see how normal people see this; people who aren't conversant in object storage or block storage or stuff like that.

So maybe we can talk a little bit about that, because when people start to think about, say, the environmental footprint of digital services, they're often coming from a very low base. People might start thinking about the carbon footprint of their emails as the thing they should be focusing on first.

And if you do have a bit of domain knowledge, you'll often realize that's probably not where you'd start if you had a more developed understanding of the problem. Now, you've spent some of your time being this translator between techies and people who aren't writing code and building applications all day long, for example.

So maybe we could talk a little bit about the misunderstandings people have when they come to this in the first place, and how you might address them, because this seems to be your day job, and it might help other techies realize how they could change the way they talk about this, to make it a bit more accessible and intelligible.

James Martin: Yes. So, thank you for mentioning day jobs first and foremost, because Scaleway was my former day job, and I now have another day job working for another French scale-up. But here I'm very much speaking in the name of my blog. It's because I care so much about these topics that I continue to talk about them and write about them on the side, because it's just something that needs to be done. So this is why today, with my BetterTech hat

Chris Adams: Hat on. Yeah.

James Martin: So yeah, just wanted to make that clear. The first thing I want to say about people misunderstanding this stuff is that it's not their fault. Sometimes they are led down the wrong path. Like, a few years ago, a French environment minister said people should stop sending so many funny email attachments.

Chris Adams: Oh, really? 

James Martin: Like when you send a joking video to all your colleagues, you should stop doing that because it's not good for the planet. Honestly, that a minister could say something that misguided shows the problem, because, as you and I know, that's not where the impact is. The impact is in the cloud.
The impact is in hardware. Communication is about repetition, and I always start with: digital is 4% of global emissions, and 1% of that is data centers, 3% of that is hardware, and software is sort of all over the place. That's the figure I use the most to get things started. I think the number one misconception people need to get their heads around is that people tend to think tech is immaterial. It's because of expressions like "the cloud." It just sounds,

Chris Adams: Like this floaty thing rather than massive industrial concrete things. Yeah.

James Martin: We need to make it more physical. I can't remember who said that if data centers could fly, it would make our job easier. But no, you always need to come back to the figures. 4% is double the emissions of planes, and yet the airline industry gets tens or hundreds of times more hassle than the tech industry in terms of trying to keep control of its emissions. So what you need is a lot more examples, and you need people to explain this impact over time. You need to move away from bad examples, like funny email attachments, or the thing we keep hearing in AI, that one ChatGPT prompt uses 10 times more energy than a Google search. That may or may not be true, but again, it's the wrong example, because it doesn't focus on the bigger picture.

Chris Adams: Yeah. That kind of implies that if I just reduce my usage of this, then I'm gonna have ten times the impact, and that's all I need to do. That feels a bit like individualizing the problem, surely. Right?

James Martin: And it's putting the onus on the users, whereas it's not their fault. You need to see the bigger picture. And this is what I've been repeating since I wrote that white paper, actually: you can't say you have a green IT approach if you're only focusing on data centers, hardware, or software. You've got to focus on, yeah, exactly. Holistically.

That said, you should also encourage people to have greener habits, because me stopping using ChatGPT on my own won't have much impact, but it will if I can convince others. If I can tell my family, if I can tell my friends, if I can talk about it in podcasts and conferences, then the more people question their usage, and maybe the providers of that tech start providing more frugal options. But

Chris Adams: Ah, I see. So that's maybe almost like choice architecture: foregrounding some of the options, making it easier to do the more sustainable thing, rather than making people at the end of the process do all the hard work. It sounds like you're suggesting that, as a professional, part of my role is to put different choices in front of someone using my service, to make it easier for them to do more sustainable things, rather than things which are much more environmentally destructive, for example.

James Martin: Yes. And I would add a final thing, which is super important. There are topics, like electric cars for example, which people get really emotional and angry about, because people are very attached to their cars, and yet cars are the number one source of emissions in most Western countries. The way around the emotion is that I really focus on only using science-based facts. If it's from the IPCC, if it's from the IEA, if it's from really serious scientific studies, then you can use it. If it's just someone speculating on LinkedIn, no. So I always make sure the data I use is fully backed by science, by the GHG protocols, looking at all three scopes, all that sort of thing. Because otherwise it could be greenwashing.

Chris Adams: Okay. Alright, so maybe this is a nice segue to the next question, because when people talk about the environmental footprint here, one of the challenges is getting numbers, getting figures, for any of this stuff. For example, if I'm using a chatbot, it's very hard for me to understand what the footprint might be. In the absence of that, you can see how people end up saying, oh yeah, every query is the same as a bottle of water, for example, simply because there is a dearth of information. And I remember when you presented at Green IO, a conference around green IT, you talked about how you've had quite a lot of firsthand experience with this, particularly when you were working at Scaleway, because there were new calculators published and things like that. We can talk about the AI side in more detail later, but I wanted to ask you about the impact calculators I saw you present.

So are there any principles or approaches that you think are really helpful when you're helping people engage with a topic like this, when they're trying to use a calculator to improve their footprint as a professional?

James Martin: Yes. Well, one of the things that piqued my curiosity when we were looking into the topic at Scaleway is: what percentage of servers or instances are really used?

I was inspired there by the work of Holly Cummins from Red Hat, who famously said that zombie instances possibly represent around 25% of cloud activity. When I asked around various cloud providers whether they generally try to identify that zombie activity and shut it down, the consensus I seemed to get was, well, no, because people are paying for those instances, so why would we flag that sort of thing?

That also shows the sort of pushback an environmental calculator might get. Even though, I mean, you could argue that the fact that there are zombie instances is potentially more the client's fault than the cloud provider's fault. But building a project like that means going up against years of habits where, if you want more resources, you can have them, even if you've already got too many. It's a

Chris Adams: Yeah, I guess the incentives. 

James Martin: Yeah, the cloud has been pretty much an all-you-can-eat service ever since it was invented. So trying to get people to use it more responsibly can be seen a bit as going against the grain. But the good news is, the calculator got lots of really positive feedback from clients, and, while I don't know how it's doing now, I'm sure it's doing some really useful work.

Chris Adams: So you said, so I just wanna check one thing, 'cause we, you, we said, this idea of zombie instances. My, my guess when you say that is, that's basically a running virtual machine or something like that, that's consuming resources, but it doesn't appear to be doing any obviously useful work. Is that what a zombie is in this context?

Right. Okay, cool. And I can kind of see why you might not want to turn people's stuff off, because if you're running a data center, you're incentivized to keep things up, and if you're selling stuff, you're incentivized to make sure there's always stuff available.
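To make the zombie-hunting idea concrete, here is a rough Python sketch that flags running AWS instances whose CPU has stayed near zero for a week, using boto3 and CloudWatch. This assumes AWS purely for illustration; the 2% threshold is arbitrary, and low CPU alone doesn't prove an instance is doing no useful work, so anything it flags is a candidate for human review rather than automatic shutdown.

import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.now(datetime.timezone.utc)

# Look only at instances that are currently running.
running = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in running["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        # Pull one average CPU datapoint per day for the past week.
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - datetime.timedelta(days=7),
            EndTime=now,
            Period=86400,
            Statistics=["Average"],
        )
        daily_averages = [point["Average"] for point in stats["Datapoints"]]
        if daily_averages and max(daily_averages) < 2.0:  # arbitrary threshold
            print(f"Possible zombie instance: {instance_id}")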

But I do kind of see your point: if you're not at least making this visible to people, then how are they able to make responsible choices about, okay, is this really the right size, for example? And if a chunk of your revenue relies on that, that's probably another reason you might not want to do some of that stuff. Alright, so there's a change of incentives that we may need to think about. But I know that one thing I have seen people talking about in France a lot is actually not just looking at energy. Okay, France has quite a clean grid, because there's lots of low carbon energy, like nuclear and stuff like that, but is there something else to that? Is it just because the energy's clean, there's nothing else to do? Or is there a bigger thing you need to be aware of if you are building a calculator or making some of these figures available to people?

Is energy the full picture or is there more to it that we should be thinking about?

James Martin: Exactly. That was really the unique point about Scaleway's calculator: it wasn't just a carbon calculator, so not just energy and emissions, but also the impact of hardware, and the impact of water; how much water is your data center using? That was a really important part of the project. And I remember my colleagues telling me the most challenging part of the project was actually getting the hardware data off the manufacturers, 'cause they don't necessarily declare it.

Nvidia, for example, still gives no lifecycle analysis data on their GPUs, which is incredible, but there it is. So basically, what Scaleway set out to do is the opposite of what AWS does. AWS says, we've bought all this green, renewable energy, we've bought enough carbon credits to cover us for the next seven years; therefore, your cloud is green.

Chris Adams: Nothing to do. No changes. Yeah.

James Martin: Yeah. Which is completely false, because it's ignoring scope 3, which is the biggest share of the emissions. All of that is ignored. I worked out from a report a while ago that nearly 65% of the tech sector's emissions are unaccounted for. We're completely in the dark. And if you consider that only around 11% of tech's impacts are operational emissions, and the rest is hardware,

then the information that we've got so far covers only a portion of the real impact. So that was why it was such a big deal that Scaleway was setting out to cover as much of the real impact as possible. Because

once you have as broad a picture as possible of that impact, then you can make the right decisions. As you were saying, Chris, then you can choose: I'm going to go for data centers in France, because they have this lower carbon intensity; I might try and use this type of product, because it uses less energy. I'd say that is an added value a provider can bring that should attract more clients, I'd have thought, what with things like CSRD and all sorts of other regulations.

Chris Adams: Yeah, it's literally written into the standards that you need to declare scope 3 for cloud services and data centers now. So if getting that number is easier, then yeah, I can see why that would be helpful, actually.

James Martin: Absolutely.

Chris Adams: All right. We'll share a link to that specific part of the European Sustainability Reporting Standards, 'cause it kind of blew my mind when I saw it. I didn't realize it was really that explicit. So, you mentioned Nvidia, and you mentioned there's a somewhat known environmental footprint associated with the hardware itself. And you mentioned GenAI Impact, which is an organization that's been doing some work to make some of these numbers a bit more visible to people. Maybe I could ask you a little bit about that. As I understand it, GenAI Impact is based primarily in France, is that right?

James Martin: Yeah. So my origin story for that was, again, Green IO, so more hats off to Gael. Green IO Paris 2023 ended with a presentation from Théo Alves Da Costa, who is the co-president of Data for Good, an NGO which has something like 6,000 data scientists and engineers all putting their skills to work for good, basically as volunteers. His presentation notably drew on a white paper from Data for Good which said that, although we didn't really know that much at the time, the impact of inference could be anything from 20 to 200 times more than the impact of training.

And he showed it with these bubbles, and I just looked at it and went, oh my God, this goes way beyond any level of cloud impact that we've been used to before. So, yeah, that got me interested. I went to Data for Good's next meeting, which launched GenAI Impact, the project which ended up producing EcoLogits.ai, which is a super handy calculator.

Chris Adams: So this is a tool that plugs into generative AI tools, as I understand it, 'cause we looked through it ourselves. If you're using some Python code to call ChatGPT or Mistral or something, it will give you some of the numbers as you do it, like the energy, the hardware, the water usage and stuff like that.

It gives you some figures, right?

James Martin: Exactly. And the way it does it is pretty clever. It mostly measures open source models, which is easy because you know what their parameters are and all the data is open, and it compares those with closed models. So it can give you an estimation of the impact of closed models like ChatGPT, and you can use it to say, what is the impact of writing a tweet with ChatGPT versus the impact of doing it with Llama, or whatever? And because big tech is so opaque, and this is one of my big bugbears, it means that it gives us a sort of

Chris Adams: That's the best you've got to go on, right? Yeah.

James Martin: very educated guess, which is something that should encourage people to use frugal AI. That's the idea.
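For the curious, this is roughly what using EcoLogits looks like, based on the project's documented quickstart at the time of writing; exact field names and provider support may differ between versions. Once initialized, it instruments supported client libraries so that each response carries impact estimates alongside the model's answer.

from ecologits import EcoLogits
from openai import OpenAI

EcoLogits.init()  # patches supported provider clients, OpenAI included

client = OpenAI(api_key="sk-...")  # placeholder key
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a tweet about frugal AI."}],
)

# EcoLogits attaches an `impacts` object to the response, with estimates
# for the energy use and greenhouse gas emissions of this single request.
print(response.impacts.energy)
print(response.impacts.gwp)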

Chris Adams: Okay. So this is one thing I'm always amazed by when I go to France, because the field seems to be further along there, definitely further than in Germany, for example. France had the AI Action Summit this year, and it's the only country in the world where the government has supported a frugal AI challenge.

You've mentioned this a few times, so I might give you a bit of space to tell people what frugal AI actually is. And maybe we could talk about how a conversation about AI differs in France compared to somewhere else in the world; it does feel different to me, but I'm not quite sure why.

And I figure as someone who's in France, you've probably got a better idea about what's different and what's driving that.

James Martin: Yeah, it really is the place to be. You may have seen that Paris just moved ahead of London as one of the best places for startups to be at the moment, and one of the reasons for that is the very strong AI ecosystem. Everyone thinks of Mistral first and foremost, but there are lots of others. But before I get into that, I wanted to talk about why we need frugal AI, because it's not something people think about on a daily basis, like I was saying before. My wife, for example, is a teacher, and the other day she was using ChatGPT to help prepare her lesson. And I was like, no, don't use that, there are lots of other alternatives. But to her it's just, of course, there, and the same goes for the 800 million people who use it every week. They do it because it's free and because it works really well. What they don't know is that, because of tools like ChatGPT, and we know ChatGPT is amongst the highest impact models, data center energy consumption is going to triple or maybe even quadruple by the end of the decade, and data center water consumption is going to quadruple by the end of the decade. There are lots of very serious studies on this, most of which came out at the end of last year, and they all concur; if you put all of their graphs together, they are very similar. The scariest thing about them, in fact, is that they show that data center energy consumption had been pretty much flat for years, because whilst cloud usage was surging something like 500%, data center operators like Scaleway and lots of other companies were able to optimize their energy usage and keep it flat. The problem is that all of that was based on CPUs. Because AI uses GPUs, which use four times more energy and give off 2.5 times more heat than CPUs, the curve has done a complete dog leg.

The consumption of GPUs is just on such a different scale that the tricks that kept it under control before don't work anymore. So we've really reached a tipping point. And it's partly because people are generating millions of Ghibli images and starter packs. I'm simplifying a lot, but what I'm questioning is: when you look at that graph, how much of this activity is really useful? How much of it is curing cancer or, the greatest joke of all, fixing climate change? When what's actually happening is that it's making it worse. And this dog leg is so sharp that we can't build nuclear power quickly enough to meet the demand. So coal or gas burning generators are being kept open so that we can keep making those images and doing our homework and all that sort of thing. That, in a nutshell, is why we need frugal AI. And we need it also because of the way it has been built.

If you haven't read the book Empire of AI by Karen Hao yet, I strongly recommend it, because one of the things it explains is the genesis of OpenAI: at some point they decided that the bigger your model is, basically the more compute power it uses, the better it will be. And they've just been building on that premise ever since the launch of ChatGPT. Whereas the fact is, the most recent versions of GPT actually hallucinate more than the less powerful versions. So why do we need to throw all that power at it, when, as we see from talking to people like the amazing Sasha Luccioni, you have LLMs that are 30 to 60 times smaller which can do just as good a job? These are the sorts of conversations you can have a lot in France, which really stands out today as a frugal AI pioneer. The fact that over 90% of French electricity is carbon free helps a lot. That's something Mistral in particular leans on a lot: we've got clean energy, therefore we are green. Watch out for the AWS effect there. But it is a very important point, compared with all the ChatGPT and other impact that's happening in America. And because of big tech's opacity, I was very happy to see EcoLogits, which, as you mentioned, is a Python library, very quickly become a global reference, because that's all we had.

Chris Adams: Okay, so when the bar's on the floor, it doesn't need to be very high, right? 

James Martin: Yeah, exactly. My favorite tweet of the year so far is from Sam Altman. I can even share the link to the tweet, because I love slash hate it so much. It was when all those millions of Ghibli images happened, and he joked that GPUs were melting. He shared this completely ridiculous graph, which compared the water impact of one ChatGPT query with the water impact of one burger.

Chris Adams: Yeah.

James Martin: Sam Altman's comment in the tweet was: anti-AI people making up shit about the impact of ChatGPT whilst eating burgers. And I just found it so cynical, because, A, I'm not anti AI, I'm anti waste. And, and this is the other point, the reason people have to make shit up is because the companies don't give access to any of the numbers. If they did, we wouldn't even be having this conversation. We would be able to say, ChatGPT is this, Llama is this, and we'd be able to compare everyone on the same playing field. But

Chris Adams: On their merits. Yeah.

James Martin: Yeah. So coming back to France, because I'm wary of going off on a rant. The French government has been incredible on this topic. Around the time of the AI Action Summit, they supported a frugal AI challenge, whereby people were encouraged to complete AI tasks across audio, text, and image, and you would win the challenge by completing the tasks whilst using many times less energy than the big LLMs. And of the projects that won, one used 60 times less energy than a big LLM, proving that these big LLMs are not necessary.

Chris Adams: And it was solving the same task, 'cause I think, from memory, there were a few challenges, like combating disinformation online, which were considered socially useful problems, but people were free to take any kind of approach. So what you're saying is, you could solve one of them with an LLM, but there were other ways to solve it, and some of the winners were 60 times more efficient, essentially 60 times less consumptive.

Right.

James Martin: Exactly. So, yeah, it's great to have projects like that. The French government has also obtained funding for around a dozen frugal AI projects, which are being run by municipalities all over France. They're using it to optimize energy usage, or to detect garbage in the street, that sort of thing. So that's great. The French government also supports the frugal AI guidelines of AFNOR. AFNOR is France's official standards organization, and what they've basically done is say: for your AI to be frugal, it needs to meet these criteria. The first criterion, which I love, is: can you prove that this problem cannot be solved by anything other than AI? It's pretty strict. There are three first steps, and then it goes into a lot of detail about what is or is not frugal AI. It's such pioneering work that it's on track to become an EU standard, so that's some really great work. But for me, one of the best arguments for why you should bother with frugal AI is, very simply, that the French Ministry for the Environment has said to startups: if you want to work with us, you have to prove that your AI is frugal first. So,

Chris Adams: Oh, okay. So they're creating demand pull, then, essentially. So like, you know, this is your carrot. Your carrot is a fat government contract, but you need to demonstrate that you're actually following these principles in what you do.

James Martin: I love that because it shows that doing things frugally can actually be good for your business.

Chris Adams: Okay. Alright. So, wow. I think we should definitely make sure we've got links for a bunch of that stuff, 'cause I wasn't aware of some of it. I know that France, in the kind of world of W3C, has the RGESN, and I'm not gonna butcher the pronunciation, but it broadly translates to a general policy for ecodesign, and I know that's on a standards track for Europe.

James Martin: Yes. 

Chris Adams: If I can find the actual French words, I might try to share it, or maybe you might be able to help me with that one, because my French is nowhere near good enough to spell it properly. But I'm also aware that France is actually one of the first countries in the world to have a digital sustainability law. There was one in 2020, the REEN law.

James Martin: That's it. Yeah. I was very focused on AI with all those examples, but yes, France is the only country which has a digital responsibility law, called REEN, which basically says, for example, that any municipality with over 50,000 inhabitants has to publish its digital responsibility strategy, even if it's just "we are going to buy older hardware" or "we are going to keep our PCs going for longer," sort of simple stuff like that. The law only demands that municipalities make an effort on these things, and show that they are making an effort. So

Chris Adams: I see.

James Martin: in a sort of a great incentive.

Chris Adams: Ah, okay. Now I understand. So the RGESN, as I understand it, was essentially a set of guidelines for France. Ah, so,

James Martin: Yeah, they're two different things. The RGESN is the set of guidelines for ecodesign; so, how to make your website not only more energy efficient, but also more accessible to people of varying abilities. There's also a law that just came into effect here in France to make websites more accessible, so it's great to see those two things going hand in hand. They also announced at the AI Action Summit that they were going to invest a hundred billion in new data centers for AI by the end of the decade. You win some, you lose some. But maybe it's better to do that here, with lower carbon electricity, than in the States, which generally speaking has 10 times more carbon in its electricity.

Chris Adams: Okay. It sounds like there's a lot happening in France. So not only is there this idea of frugal AI and digital sobriety, which is this other French term that always sounds strange to my ears when translated into English, but there's actually quite a lot of, for want of a better word, policy support behind this stuff, to actually encourage people to work in this way, basically, huh?

James Martin: Absolutely. And again, I would give a, another heads up to Data for Good for that because they were instrumental in that frugal AI challenge along with Sasha Luccioni.

Chris Adams: Okay.

James Martin: By the way, we'll be, we'll be speaking at Viva Tech. So, Viva Tech is France's biggest tech event. It's actually one of the biggest
tech events in Europe. Unfortunately, they had Elon Musk as their keynote last year and the year before. Fortunately they won't this year.

Chris Adams: Yeah.

James Martin: Sasha is going to be one of their keynotes this year, which is also great; I think it's a good sign. And she will also be speaking on a panel, as part of a sustainability summit, with Kate Kallot of Amini AI, and I'll be part of that conversation. So I'm happy these sorts of conversations are happening. Not

Chris Adams: But more mainstream by the sounds of things.

James Martin: Not only between people like you and me, who care and who understand all the tech. It's super important, as I was saying at the beginning, to be having these conversations with as broad an audience as possible, because otherwise nothing's gonna change.

Chris Adams: Okay, so we've spoken about, we've gone quite deeply into, AI and hardware and water and stuff like that. Let's pull back out and talk about how people might engage with this topic in the first place.

If there's one thing you could change about how people talk about sustainability, particularly in technology, what would you change, James?

James Martin: I suppose I'd summarize it as: don't believe the hype. And the hype in tech is usually "bigger is better." What I would like people to really take on board is that bigger isn't always better. As we said before, it's very important to look at the holistic picture of impacts, rather than just the individual ones. It's more important to pressure companies to change, as you see with that French government example, than to make users feel guilty, because, again, it's not their fault. And what I try to do as often as I can, Chris, is bring people back to the gold standard of green IT, which is: only use the right tools for the right needs.

This is why this sort of "bigger is better" thing is just so irritating to me. The way AI is being done right now is a classic in tech: it's using a bazooka to swat a fly. It's not necessary, and not only is it ridiculous, it's also very bad for the planet. If you only need to do this much, you only need a tool that does this much, not that much. And that's one of the reasons why, when I hear the term AI for Good, which we hear a lot of at the moment, I would challenge that, and I would encourage people to challenge it too, by saying, "are you sure this AI is for good? Are you sure this tech is for good? Are you sure that the good that it does far outweighs the potential harm that it has?"

Because it's not always the case. A lot of the AI for good examples we see at the moment just can't be backed with scientific data at all. And that comes back to another of my points: if you can't prove that it's for good, then it's not, and it's probably greenwashing.

Chris Adams: Okay. So show us your receipts then. Basically, yeah.

James Martin: Yeah. 

Chris Adams: Okay. Well, thanks for that, James. We're just coming up to time now. So if people have found this interesting and they want to learn more, about your writing or about where you'll be next, where should they be looking? You mentioned the website, for example; is there anywhere else people should look to keep up with updates from you?

James Martin: The website is BetterTech.blog. So yeah, that's the main place, where you can find a lot more resources about my work on the impact of AI and on other things. I also post frequently on LinkedIn about this sort of thing; the last post was about frugal prompting.

That's my latest discovery. And, yeah, those are the two main sources. And we'll work together to make sure that the links...

Chris Adams: We'll have all the links in the show notes and everything like that.

James Martin: ...for this episode, exactly.

Chris Adams: Brilliant. Well, James, thank you so much for giving me the time, and to everyone listening, thank you for joining us. I hope you enjoy the rest of the day in what appears to be sunny Paris behind you.

James Martin: It's been sunnier, but it's fine.

Chris Adams: Okay.

James Martin: It's still Paris, so I can't grumble. Thanks very much.

Chris Adams: Indeed.

James Martin: Thanks very much, Chris. Like I said, it's been a real honor to be on this podcast, and I hope we've shared something that's useful for people.

Chris Adams: Merci beaucoup, James.

James Martin: Merci as well, Chris. 

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

Show more...
5 months ago
46 minutes 27 seconds

Environment Variables
Why You Need Hardware Standards for Green Software
Chris Adams is joined by Zachary Smith and My Truong, both members of the Hardware Standards Working Group at the GSF. They dive into the challenges of improving hardware efficiency in data centers, the importance of standardization, and how emerging technologies like two-phase liquid cooling systems can reduce emissions, improve energy reuse, and even support power grid stability. They also discuss grid operation and the potential of software-hardware coordination to drastically cut infrastructure impact.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Zachary Smith: LinkedIn | Website
  • My Truong: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Hardware Standards Working Group | GSF [06:19]
  • SSIA / Open19 V2 Specification [12:56]
  • Enabling 1 MW IT racks and liquid cooling at OCP EMEA Summit | Google Cloud Blog [19:14] 
  • Project Mycelium Wiki | GSF [24:06]
  • Green Software Foundation | Mycelium workshop 
  • EcoViser | Weatherford International [43:04]
  • Cooling Environments » Open Compute Project [43:58]
  • Rack & Power » Open Compute Project 
  • Sustainability » Open Compute Project 
  • 7x24 Exchange [44:58]
  • OpenBMC [45:25]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:

Zachary Smith: We've successfully made data centers into cloud computing over the past 20 or 25 years, where most people who use and consume data centers never actually see them or touch them. And so it's out of sight, out of mind in terms of the impacts of the latest and greatest hardware refresh. What happens to a 2-year-old Nvidia server when it goes to die? Does anybody really know?

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Hello and welcome to Environment Variables, the podcast where we explore the latest in sustainable software development. I'm your host, Chris Adams. Since this podcast started in 2022, we've spoken a lot about green software, how to make code more efficient so it consumes fewer resources or runs on a wider range of hardware to avoid needless hardware upgrades, and so on.

We've also covered how to deploy services into data centers where energy is the cleanest, or even when energy is the cleanest, by timing compute jobs to coincide with an abundance of clean energy on the grid. However, for many of these interventions to work, they rely on the next layer down from software,

the hardware layer, to play along. And for that to work at scale, you really need standards. Earlier this year, the SSIA, the Sustainable and Scalable Infrastructure Alliance, joined the Green Software Foundation. So now there's a hardware standards working group for HSWG within the Green Software Foundation too.

Today we're joined by two leaders in the field who are shaping the future of sustainable software. So, oops, sustainable hardware. We've got Zachary Smith formerly of Packet and Equinix, and My Truong from ZutaCore. We'll be discussing hardware efficiency, how it fits into the bigger sustainability picture, the role of the Open19 standard, and the challenges and opportunities of making data centers greener.

So let's get started. So, Zachary Smith, you are alphabetically ahead of My Truong, Mr. Truong. So can I give you the floor first to introduce yourself and tell a little bit about yourself for the listeners?

Zachary Smith: Sure. Thanks so much, Chris. It's a pleasure being here and getting to work with My on this podcast. As you mentioned, my name's Zachary Smith. I've been an entrepreneur, primarily in cloud computing, for, I guess it's about 25 years now. I went to Juilliard, I studied music, and I ended up figuring that wasn't gonna pay my rent here in New York City, so in the early two thousands I joined a Linux-based hosting company. That really gave me this full stack view, because we had to put together the hardware ourselves. We had to build our own computers, we ran data center space, we oftentimes helped build some of the data centers and connect them all with networks, traveling all around the world setting that up for our customers. And so I feel really fortunate, because I got to touch all layers of the stack. My career evolved to touch further into hardware. It became a true passion: where we could connect software and hardware together through automation, through accessible interfaces and other kinds of standardized protocols. That led me to start a company called Packet, where we did that across different architectures, x86 and Arm, which was really coming to the data center in the 2014/15 timeframe. That business was acquired by Equinix, one of the world's largest data center operators. And at that point we had a different viewpoint on how we could have impact at scale, with the sustainability groups within Equinix, one of the largest green power purchasers in the world, and start thinking more fundamentally about how we use hardware within data centers, and how data centers could be more accessible to software users, which, as we'll unpack in this conversation, are a pretty disparate group who don't often get to communicate in good ways. So, I've had the pleasure of being at operating companies, and I now invest, primarily in businesses around the use of data centers and technology, as well as circular models to improve the efficiency and sustainability of products.

Chris Adams: Cool. Thank you Zachary. And, My, can I give you the floor as well to introduce yourself from what looks like your spaceship in California?

My Truong: Thanks, Chris. Yes, it's a pleasure being here as well. So, My Truong; I'm the CTO at ZutaCore, a small two-phase liquid cooling organization, very focused on bringing sustainable liquid cooling to the marketplace. I was very fortunate to cross over with Zach at Packet and Equinix, and I've since taken my journey in a slightly different direction, into liquid cooling. Super excited to join here. Unfortunately, I'm not a musician by classical training; I'm a double E, an electrical engineer, by training. And I'm joining here from California, on the west coast, in the Bay Area.

Chris Adams: Cool. Thank you for that, My. Alright then. So, my name is Chris. If you're new to this podcast, I work in the Green Web Foundation, which is a small Dutch nonprofit focused on an entirely fossil free internet by 2030. And I'm also the co-chair of the policy working group within the Green Software Foundation.

Everything that we talk about, we'll do our best to share links to in the show notes. And if there's any particular thing you heard us talking about that you're really interested in that isn't in the show notes, please do get in touch with us, because we want to help you in your quest to learn more about green software, and now green hardware.

Alright then, it looks like you folks are sitting comfortably. Shall we start?

Zachary Smith: Let's do it.

Chris Adams: All right then. Cool. Okay, to start things off, Zachary, I'll put this one to you first. Can you just give our listeners an overview of what a hardware standards working group actually does, and why having standards for things like data centers actually helps?

I mean, you can assume that our listeners might know that there are web standards that make websites more accessible and easier to run on different devices, so there's a sustainability angle there, but a lot of our listeners might not know that much about data centers and might not know where standards would be helpful.

So maybe you can start with maybe a concrete case of where this is actually useful in helping make any kind of change to the sustainability properties of maybe a data center or a facility.

Zachary Smith: Yeah, that's great. Well, let me give my viewpoint on hardware standards and why they're so critical. We're really fortunate, actually, to enjoy a significant amount of standardization in consumer products. There are working groups, things like the USB alliance, that have really provided, just in recent times, standardization, whether that's through market forces or regulation, around something like USB-C, right, which allowed manufacturers and accessories and cables and consumers to not have extra cables, or throw away good devices, because they didn't have the right cable to match the port.

Right? And so beyond this interoperability aspect, making these products work better across an intricate supply chain and ecosystem, they also provide real sustainability benefits in terms of reuse. In data centers, the amazing thing, once we unpack some of the complexities related to the supply chain, is that these are incredibly complex buildings full of very highly engineered systems that are changing at a relatively rapid pace. But the real issue, from my standpoint, is that we've successfully made data centers into cloud computing over the past 20 or 25 years, where most people who use and consume data centers never actually see them or touch them. And so it's out of sight, out of mind in terms of the impacts of the latest and greatest hardware or refresh. What happens to a two-year-old Nvidia server when it goes to die? Does anybody really know? You kind of know in your home with your consumer electronics, where you have this real waste problem, so then you have to deal with it.

You know not to put lithium-ion batteries in the trash, so you find the place to put them. But when it's the internet and it's so far away, it's a little bit hazy for most people to understand the impact of the hardware and related technology, as well as what happens to it. And so that's, I'm gonna say, one of the challenges

in the broader sustainability space for data centers and cloud computing. One of the opportunities, maybe different from consumer, is that we know almost exactly where most of this physical infrastructure shows up. Data centers don't move around, usually. And so they're usually pretty big, they're usually attached to long-term physical plants, and there's not millions of them. There's thousands of them, but not millions. And so that represents a really interesting opportunity for implementing really interesting, seemingly complex, models. For example, upgrade cycles or parts replacement or upcycling of hardware. Those things are actually almost more doable logistically in data centers than they are in the broader consumer world, because of where they end up. The challenge is that we have this really disparate group of manufacturers that frankly don't always have aligned incentives for making things work together. Some of them actually define their value by "did I put my logo on the left or did I put my cable on the right?" You have a business model, which would be the infamous Intel tick-tock model, which is now maybe Nvidia's. My, what's Nvidia's version of this?

I don't know. But these 18-month refresh cycles are really put out as a pace of innovation, which is in many ways quite good, but in another way it requires this giant purchasing cycle to happen. People build highly engineered products around one particular set of technology and then expect the world to upgrade everything around it, when in data centers and the related physical plant, maybe 90 or 95% of this infrastructure can be very consistent: things like sheet metal and power supplies and cables. So I think that's where we started focusing a couple of years ago: how could we create a standard that would allow different parts of the value chain, throughout data center hardware, data centers, and related, to benefit from industry-wide interoperability? And that came down to really fundamental things that take years to go through the supply chain: power systems, now what My is working on with related cooling systems, as well as operating models for that hardware in terms of upgrades, life cycling, and recycling. I'm not sure if that helps, but this is why it's such a hard problem, and also so important to make a reality.

Chris Adams: So if I'm understanding, one of the advantages of having standards here is that you get to decide where you compete and where you cooperate, with the idea being that, okay, we all have a shared goal of reducing the embodied carbon in some of the materials we might use, but people might have their own specialized chips.

And by providing some agreed standards for how they work with each other, you're able to use, say, different kinds of cooling or different kinds of chips. Okay, I think I know more or less where you're going with that then.

Zachary Smith: I mean, I would give a couple of very practical examples. Can we make computers where you can pop out the motherboard and have an upgraded CPU, but still use the rest of it: the power supplies, et cetera? Is that a possibility? Only with standardization could that work. Some sort of open standard. And standards are a little bit different in hardware.

I'm sure My can give you some color, having recently built the Open19 V2 standard. It's different than software, right? Software is relatively, I'm gonna say, quick to create, quick to change, and has different licensing models. Hardware specifications are their own beast and come with some unique challenges.

Chris Adams: Cool. Thank you for that, Zach. My, I'm gonna bring the next question to you, because we did speak a little bit about Open19, and that was a big thing with the SSIA. So as I understand it, the Open19 spec, which we referenced, was one of the big things that the SSIA was a kind of steward of. And as I understand it, there's already an existing, different standard that defines the dimensions of, say, a 19-inch rack in a data center.

So, racks need to be the same size and everything like that. But that has nothing to say about the power that goes in, or how you cool it, or things like that. I assume this is what some of the Open19 spec was concerning itself with. Maybe you could talk a little bit about why you even needed that, whether that's what it really looks into, and why it's more relevant now, say, halfway through the 2020s, for example.

My Truong: Yeah, so Open19, the spec itself, originated from a group of folks, starting with the LinkedIn organization at the time. Yuval Bachar put it together along with a few others.

As that organization grew, it was inherited by the SSIA, which became a Linux Foundation project. What we did when we became a Linux Foundation project was rev the spec. The original spec was built around 12-volt power. It had a power envelope that was maybe a little bit lower than what we knew we needed to get to in the industry. And so what we did when we revised the spec was to bring both 48-volt power and a much higher TDP to it, and we brought some consistency to the design itself.

So, as you were saying earlier, EIA/TIA has a 19-inch spec that defines a rail-to-rail dimension, but no additional dimensions beyond that. And so what we did was build a full, I'm gonna air-quote, "mechanical API," for the software folks. How do we consistently deliver something? You can create variation inside of that API, but the API itself is very consistent on how you mechanically bring hardware into a location, how you power it up, how you cool it. It allows for variations of cooling, but has a consistent API for bringing cooling into that IT asset. What it doesn't do is really dive into the rest of the physical infrastructure delivery. And that was very important in building a hardware spec: that we didn't go over and above what we needed to consistently deliver hardware into a location. When you do that, you allow for a tremendous amount of freedom in how you bring the rest of the infrastructure to the IT asset.

So, in the same way, when you build a software spec, you don't really concern yourself with what language is behind it, or how the rest of that infrastructure works: whether there's a communication bus, or it's semi API-driven with a callback mechanism. You don't really try to think too heavily about that.

You build the API and you expect the API to behave correctly. And so what that gave us the freedom to do, when we started bringing in 48-volt power, was to start thinking about the rest of the infrastructure a little bit differently, because you bring consistent sets of APIs to cooling and to power. And when we started thinking about it, we saw a trend line: we knew that we needed to think about 400-volt power. We saw the EV industry coming; there was a trend line towards 400-volt power delivery. What we did inside of that hardware spec was leave some optionality to change the way that we would do things, right?

So we gave some optional parameters to the infrastructure teams to change up what they needed to do, so that they could deliver that hardware, that infrastructure, a little bit more carefully or correctly for their needs. So we didn't over-specify in particular areas. I'll give you a counterexample: in other specifications out there, you'll see a very consistent busbar in the back of the infrastructure that delivers power. It's great when you're at a

Chris Adams: So if I can just stop for you for a second there, My. The busbar, that's the thing you plug a power thing instead of a socket. Is that what you're referring to there?

My Truong: Oh, good question, Chris. In some of the hyperscale rack-at-a-time designs, you'll see two copper bars sitting in the middle of the rack, in the back, delivering power. And that looks great for an at-scale design pattern, but it may not fit the needs of smaller or more nuanced design patterns that are out there. Does that make sense?

Chris Adams: Yeah. Yeah. So instead of having a typical, kinda like three-way kind of kettle style plug, the servers just connect directly to this bar to provide the power. That's that's what one of those bars is. Yeah. Gotcha.

My Truong: Yep. And so we went a slightly different way on that: we had a dedicated power connection per device that went into the Open19 spec. And the spec is, I think, still up on the ssia.org website, so anybody can go take a look at it and see the mechanical spec there.

It's a little bit different.
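
For the software folks in the audience, My's "mechanical API" analogy maps quite naturally onto code. The sketch below is purely illustrative (the real Open19 V2 contract is mechanical, electrical, and thermal, not a programming interface, and none of these type names come from the spec):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class PowerFeed:
    voltage_dc: float      # e.g. 48 V in Open19 V2, versus 12 V in V1
    max_watts: float       # the power envelope the slot guarantees

@dataclass
class CoolingFeed:
    medium: str            # "air", "single-phase liquid", "two-phase liquid", ...
    max_heat_watts: float  # how much heat the slot can carry away

class RackSlot(Protocol):
    """The contract: any conforming server fits any conforming slot.
    What sits behind it (busbar or per-device cable, chiller or
    heat-reuse loop) is the implementer's business, as with any API."""
    def power(self) -> PowerFeed: ...
    def cooling(self) -> CoolingFeed: ...
```

The spec pins down the contract (dimensions, power, cooling) while leaving everything behind the interface free to vary, which is exactly the freedom My describes for the rest of the infrastructure.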

Chris Adams: Okay. All right. So basically, previously there was just a spec that said "computers need to be this shape if they're gonna be server computers in a rack." And then Open19 was a little bit more about saying, "okay, if you're gonna run all these at scale, then you should probably have some standards about how power goes in and how power comes out."

Because, if nothing else, that allows them to be somewhat more efficient, and there are various considerations like that you can take into account. And you spoke about shifting from 48 volts to something like 400 volts, and the efficiency gained, which we probably don't need to go into in too much detail, because it allows you to move more power without so much being wasted, for example.

These are some of the things the standards are looking into. And in the last 10 years we've seen a massive shift, from data center racks which use quite a lot of power to ones which use significantly more. Maybe 10 years ago a cloud rack would be between five and 15 kilowatts of power.

That's, like, tens of homes. And now we're looking at racks which might be, say, half a megawatt or a megawatt of power, which is maybe hundreds if not thousands of homes' worth of power. And therefore you need refreshed and updated standards. That's where the V2 thing is moving towards.

Right.

My Truong: Okay.

Chris Adams: Okay, cool. So yeah.

Zachary Smith: Just to add: the hard thing about hardware standards is that the manufacturing supply chain moves slowly, unless you are an end-to-end verticalizer, like some of the hyperscale customers who can really verticalize. They build the data center, they build the hardware, lots of the same thing. They can force that. But the broader industry has to rely upon a supply chain: maybe OEMs, or third-party data center operators, because they don't build their own data centers, they use somebody else's. And so what we accomplished with V2 was to allow for this kind of innovation within the envelope. One of our guiding principles was: how could we provide the minimal amount of standardization that would allow more adoption to occur, while still gaining the benefits?

Chris Adams: Ah.

Zachary Smith: And so it's a really difficult friction point, because your natural reaction is to solve the problem; let's solve the problem as best we can. But that injects so much opinion that it's very hard to get adopted throughout the broader industry. Even things like cooling: single phase or two phase, full immersion or not, this kind of liquid or that, different types of pressure, whatever. There are all kinds of implications, whether those are technology choices or regulatory situations across different environments. So I think that's the challenge that we've had with hardware standards: how to make them meaningful while still allowing them to evolve for specific use cases.

Chris Adams: Alright. Okay. So, I think I'm understanding a bit now, and I'll try and put it in the context of some of the other podcast episodes we've done. We've had people come on this podcast from, say, Google or Microsoft, and they talk about all these cool things in their entirely vertically designed data centers, where they own the entire supply chain and do all these cool things with the grid, right? But a lot of the time those might be custom designs, in the case of Google, that no one gets to see. Or in some cases, say Meta, it may be Open Compute, which is a different size from most people's data centers, so you can't just drop that stuff in. There are a few of them around, but 19 inches is still the default standard in lots of places. And if I understand it, one of the goals of Open19 is essentially to bring everyone else along, those who've already standardized on this kind of size, so they can start doing some of the cool grid-aware, carbon-aware stuff that you see people talking about, which you probably don't have much access to if you're not already Meta or Google, with literally R&D budgets in the hundreds of millions.

Zachary Smith: Yeah, maybe add some zeros there.

Yeah, I think absolutely, right, which is democratizing access to some of this innovation, while still relying upon, and helping within, the broader supply chain. For example, if EVs are moving to 400 volts, we can slipstream and bring that capability to data center hardware supply chains.

'Cause the people making power supplies or components or cabling are moving in those directions, right? But then it's also just allowing for the innovation. I think we've firmly seen this in software, and I think this is a great part of the Linux Foundation, which is,

no one company owns the, you know, monopoly on innovation. And what we really wanted to see was not "can we make a better piece of hardware," but can we provide some more foundational capabilities, so that hundreds of startups or different types of organizations, which might have different ideas or different needs or different goals, could innovate around the sustainability aspect of data center hardware. And I think what we're focused on now within GSF is really taking that to a more foundational level. There's just a huge opportunity right now, with everything happening in the data center construction industry, to find an even more interesting place where we can take some of those learnings from hardware specifications and apply them to an even broader impact base.

Chris Adams: Ah, okay. Alright. I'll come back to some of this, because I know there's a project called Project Mycelium that Asim Hussain, the Executive Director of the Green Software Foundation, is continually talking about. But if I understand it, this gives you maybe more freedom: instead of having tiny fans which scream at thousands and thousands of RPM, there are other ways that you could cool down chips, for example. And this is one thing that I know the hardware standards working group is looking at: finding ways to keep the servers cool. Like, as I understand it,

using liquid can be quite a bit more efficient than having tiny fans spinning at massive RPM to cool things down. But also, I guess there's a whole discussion about different ways of cooling things, which might reduce the local power draw and local water usage in a data center, for example.

And maybe this is one thing we could talk a little bit more about, 'cause we've had people talk about, say, liquid cooling and things like that before, as some alternative ways to more sustainably cool down data centers, in terms of how much power they need, but also what their local footprint could actually be.

But we've never had people who actually have that much deep expertise in this. So maybe I could put the question to one of you. Let's say you're gonna switch to liquid cooling, for example, instead of using itty-bitty fans, or even slightly bigger fans running a little bit slower. How does that actually improve things? Maybe I could put this to you, My, 'cause I think this is one thing that you've spent quite a lot of time looking into. Where are the benefits? How do the benefits materialize if you switch from, say, air to a liquid cooling approach like this?

My Truong: Yeah, so on the liquid cooling front, there's a number of pieces here. The fans that you were describing earlier, they're moving air, which is effectively a fluid when you're using it in a cooling mode. At 25,000 RPM, you're trying to move air across the surface, and it doesn't have a great amount of,

Zachary Smith: Heat transfer capability.

My Truong: removal and rejection. Yeah, heat transfer capabilities, right. So in this world, where we're not moving heat with air, we're moving it with some sort of liquid: either a single-phase liquid, like water, or a two-phase liquid, taking advantage of two-phase heat transfer properties.

There are significant gains, and those gains really start magnifying in this AI space that we're in today. And I think this is where Project Mycelium started to come to fruition: to really think about that infrastructure end to end. When you're looking at some of these AI workloads, especially AI training workloads, their ability to move hundreds of megawatts of power simultaneously and instantaneously becomes a tricky cooling challenge and infrastructure challenge. And so really what we wanted to think through is: how do we allow software to signal all the way through into hardware, and get hardware to help deal with this problem in a way that makes sense?

So I'll give you a concrete example. 

If you're in the single-phase space and you're at a 100-megawatt or 200-megawatt data center site, which is what xAI built out in Memphis, Tennessee, when you're swinging that workload, you are swinging it from zero to a hundred percent and back to zero quite quickly. On a timescale of around 40 milliseconds or so, you can move a workload from zero to 200 megawatts and back down to zero. When you're connected to a grid,

Chris Adams: Right.

My Truong: that's called a grid distorting power event, right?

You can swing an entire grid by 200 megawatts, which is probably something like a quarter of the LA area; that's the ability to distort a grid pretty quickly. When you're on an isolated grid like ERCOT, this becomes a very tricky proposition for the grid to manage correctly. On the flip side, once you took the power, you created about 200 megawatts of heat as well. And when you start doing that, you have to really think about what you're doing with your cooling infrastructure. If you're a pump-based system, like single phase, that means you're probably having to spool up and spool down your pump system quite rapidly to respond to that swing in power demand. But how do you know? How do you prep the system? How do you tell that this is going to happen? And this is where we really need to start thinking about these software-hardware interfaces. Wouldn't it be great if your software workload could start signaling to your hardware infrastructure: "Hey, I'm about to start up this workload, and I'm about to swing it quite quickly"? You would really want to signal to your infrastructure and say, "yes, I'm about to do this to you," and maybe you even want to signal to your grid, "I'm about to do this for you," as well. Then you can start thinking about other options for managing your power systems correctly, maybe using a battery system to shave off that peak and manage it appropriately. So once we have this ability to signal from software to hardware to infrastructure, and build that communication path, it becomes an interesting thought exercise, and we realize that this is just a software problem.

We have been in this hardware-software space before; we've seen this before. And is it worth synchronizing this data? Is it worth signaling this correctly through the infrastructure? This is the big question that we have with Project Mycelium. It would be amazing for us to be able to do this.

Chris Adams: Ah, I see.

My Truong: The secondary effect of this is to really think through: now, if you're in Dublin, where you have offshore power, and you now have one-hour-resolution data coming through about the amount of green power that's about to arrive, it would be amazing for you to signal up and down your infrastructure to say, you should really spool up your workload and maybe run it at 150% for a while, right?

This would be a great time to really take green power off the grid and drive your workload on green power for this duration. And then, as that power spools off, you can roll that power need off for a time window. So being able to think about these things that we can create at the software-hardware interface is really where I think we have this opportunity to make game-changing, and really economy-changing, outcomes.
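
To make that concrete, here's a minimal sketch, in Python, of the kind of software-to-infrastructure signaling My is describing. Everything here is hypothetical: none of these names come from Project Mycelium, and a real system would use a proper message bus rather than in-process callbacks.

```python
# Hypothetical workload-to-facility signaling, as a thought experiment.

class InfrastructureBus:
    """A channel the cooling plant, battery system, and grid interface
    could all subscribe to. Purely illustrative."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def announce(self, event):
        for handler in self.subscribers:
            handler(event)

def cooling_plant(event):
    # Pre-spool pumps before the thermal load actually arrives.
    if event["type"] == "ramp_up":
        print(f"cooling: pre-spooling pumps for {event['megawatts']} MW of heat")

def battery_system(event):
    # Shave the fast swing so the grid sees a smooth draw.
    if event["type"] == "ramp_up":
        print(f"battery: absorbing a {event['ramp_ms']} ms, {event['megawatts']} MW swing")

bus = InfrastructureBus()
bus.subscribe(cooling_plant)
bus.subscribe(battery_system)

# A training job warns the facility before it swings the load.
bus.announce({"type": "ramp_up", "megawatts": 200, "ramp_ms": 40})
# ... run the training step, then ...
bus.announce({"type": "ramp_down", "megawatts": 200})
```

The Dublin example is the same pattern in reverse: a green-power forecast becomes an event the workload scheduler subscribes to, spooling work up when renewable supply peaks and back down as it tails off.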

Chris Adams: Okay. 

Zachary Smith: I have a viewpoint on that too, Chris.

Chris Adams: Yeah, please do.

Zachary Smith: My TLDR summary is like, infrastructure has gotten much more complicated and the interplay between workload and that physical infrastructure is no longer, "set it in there and just forget it and the fans will blow and the servers will work and nobody will notice the difference in the IT room."

These are incredibly complex workloads. A significant amount of our world is interacting with this type of infrastructure through software services. It's just got more complicated, right? And what we haven't really done is provide more efficient and advanced ways to collaborate between that infrastructure and the workload. It's still working under the old paradigm that, with data centers, you put everything in there and the computers just run. And that's just not the case anymore. I think that's what My was illustrating so nicely: the workload is so different and so dynamic and so complex that we need to step up with some ways for the infrastructure and that software workload to communicate.

Chris Adams: Ah, I see. Okay. So I'll try and translate some of that for our listeners. You said something about a 200-megawatt power swing; that's not that far away from half a million people appearing on the grid, then disappearing from the grid, every 40 milliseconds.

And obviously that's gonna piss off the people who have to operate the grid. But that by itself is one thing, and it's also a change from what we had before, because typically cloud data centers were known for being good customers, with really flat, predictable power draw.

And now rather than having like a flat kind of line, you have something more like a kind of seesaw, a saw tooth, like up, down, up, down, up, down, up, down. And like if you just pass that straight through to the grid, that's a really good way to just like totally mess with the grid and do all kinds of damage to the rest of the grid.

But what it sounds like you're saying is actually, if you have some degree of control within the data center, you might say, "well, all this crazy spikiness, rather than pulling it from the grid, can I pull it from batteries, for example?" And then I might expose that familiar flat pattern to the rest of the grid.

And that might be a way to make you more popular with grid operators, but it might also be a way to actually make the system more efficient. So that's one helpful thing there. But you also said that there's a chance to dynamically scale up when there are loads and loads of green energy, so you end up becoming a bit more of a better neighbor on the grid, essentially.

And that can have implications because, like you said before, there's complexity at the power level, and it allows the data centers, rather than making that worse, to actually address some of those things. So it's complementary to the grid. Is that what you're saying?

My Truong: Yeah, I think you got it, Chris.

Exactly. So that's on the power side. I think we have this other opportunity now: as we're starting to introduce liquid cooling to the space, we're effectively and efficiently removing heat from the silicon. Especially in Europe,

this is becoming a very front-and-center conversation for data centers operating in Europe: this energy doesn't need to go to waste and be evacuated into the atmosphere. We have this tremendous opportunity to bring that heat into local municipal heat loops

and really think about that in a much more cohesive way. And so this is, again, where, as Zach was saying, we really need to think about this a bit more comprehensively, and to some degree rethink our architectures with these types of workloads coming through. So: bringing standards around the hardware-software interface, and then, as we think through the rest of the ecosystem, bringing consistency to this interface so that we can communicate "workload is going up, workload is going down, the city needs X gigawatts into a municipal heat loop," to help the entire ecosystem out a little bit better. In the winter, Berlin or Frankfurt would probably be excited to have gigawatts of heat in a heat loop to drive a carbon-free heating footprint in the area. But on the flip side, if you build a site that expects that in the winter, then in the summer, when you're not able to take that heat off, how do we think about more innovative ways of driving cooling? How do we use that heat in a more effective way to drive cooling infrastructure?

Chris Adams: Okay, so, I'm glad you mentioned that example, 'cause I live in Germany, and our biggest driver of fossil fuel use is heating things up when it gets cold. So if there's a way to actually use heat which doesn't involve burning more fossil fuels, I'm all for that. One question I might ask is: what are the coolants that people use for this kind of stuff? Because when we move away from air, you're not typically just using water in all of these cases; there may be different kinds of chemicals or different kinds of coolants in use, right?

I mean, maybe you could talk a little bit about that, because there have been different generations of coolants, and in Europe, I know there's a whole ongoing discussion about saying, "okay, if we're gonna have liquid cooling, can we at least make sure that the coolants we're using don't end up being massively emitting in their own right," because one of the big drivers of emissions is end-of-life refrigerants and things like that. Maybe you could talk a little bit about what your options are if you're gonna do liquid cooling, and what's on the table right now?

To actually do something which is more efficient, but is also a bit more non-toxic and safe, if you're gonna have this inside a given space.

My Truong: Yeah. So in liquid cooling, there are a number of fluids that we can use. The most well understood of the fluids, used on both the facility and the technical loop side, is standard de-ionized water; just water across the cold plate. There are variations out there that use a propylene glycol mix to manage microbial growth. The organization that I'm part of uses a two-phase approach, where we take a two-phase fluid and take advantage of phase change to remove heat from the silicon. And in this space we have a lot of conversations around fluids, fluid safety, and how we think about end-of-life usage of that fluid. Once you're removing heat with that fluid and putting it into a network, most of these heat networks are water-based, using some sort of water with microbial treatment, going through treatment regimes to manage water quality through the system.

So this is a very conventional approach. Overall, there are goods and bads to every system. Water is very good at removing heat from systems, but as you start getting towards megawatt scale, the size of the plumbing you require to remove that heat and move that fluid through becomes a real technical challenge.

And also at megawatts. Yeah.

Zachary Smith: If I'm not mistaken, there are also challenges, if you're not doing a two-phase approach, with actually removing heat at a hot enough temperature that you can use it for something else, right?

My Truong: Correct. Correct, Zach. There are a number of very technical angles to this. Going down that path, Zach: in single phase, we have to move fluid across that surface at a good enough clip to make sure we're removing heat and keeping that silicon from overheating. The downside of this is that, as silicon requires colder and colder temperatures to keep operating well, the opportunity to drive that heat source up high enough to be usable in a municipal heat loop becomes lower and lower. So let's say, for example, your best-in-class silicon today is asking for what's known as a 65-degree TJ. That's a number that we see on the silicon side. So you're basically saying, "I need my silicon to be at 65 degrees Celsius or lower to operate properly." The flip side of that is you're going to ask your infrastructure to deliver water at between 12 and 17 degrees Celsius to make sure that cooling is supplied. And then, if you allow for, let's say, a 20-degree Celsius rise, your exit temperature on that water is only going to be 20 degrees higher than the 17-degree inlet, so that water temperature is so low

And that's not a very nice shower, basically. Yeah.

You're in a lukewarm shower at best.

So then we have to spend a tremendous amount of energy to bring that heat quality up so that we can use it in a heat network. With two-phase approaches, what we're taking advantage of is the physics of two-phase heat transfer, where, during phase change, you have exactly one temperature at which that fluid will phase change.

To a gas. Yeah. Yeah.

To a gas. Exactly.

Yeah.

And so the easiest example, and we'll use water here, though this is not typically what's used in two-phase technologies, is that water at atmospheric pressure will always phase change at about a hundred degrees Celsius. It's not 101, it's not 99; it's always a hundred degrees Celsius at atmospheric pressure. So your silicon underneath that will always be at around a hundred degrees Celsius, or maybe a little bit higher, depending on what your heat transfer characteristics look like. And this is the physics that we take advantage of. When you're doing that, the vapor side becomes a very valuable energy source, and you can actually do some very creative things with it in two phase.

So, every technology is a double-edged sword, and we're taking advantage of the physics of heat transfer to effectively and efficiently remove heat in two-phase solutions.
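
Some rough numbers make the physics My describes concrete. This sketch uses water's textbook properties for both modes purely for illustration; as My notes, real two-phase coolants are different fluids with different properties, so treat every constant here as an assumption:

```python
# Back-of-the-envelope: removing 1 kW of chip heat, single-phase vs two-phase.

heat_watts = 1_000.0

# Single phase (sensible heat): Q = m_dot * c_p * dT
c_p_water = 4186.0             # J/(kg*K), specific heat of liquid water
supply_c, rise_c = 17.0, 20.0  # the 17 C inlet and 20 C rise from the discussion
single_phase_kg_s = heat_watts / (c_p_water * rise_c)
outlet_c = supply_c + rise_c   # -> 37 C: the "lukewarm shower"

# Two phase (latent heat): Q = m_dot * h_fg
h_fg_water = 2.26e6            # J/kg, latent heat of vaporization at 100 C, 1 atm
two_phase_kg_s = heat_watts / h_fg_water

print(f"single-phase: {single_phase_kg_s * 1000:.1f} g/s of flow, outlet at {outlet_c:.0f} C")
print(f"two-phase:    {two_phase_kg_s * 1000:.2f} g/s of flow, rejected at ~100 C")
print(f"roughly {single_phase_kg_s / two_phase_kg_s:.0f}x less fluid moved")
```

With these assumed numbers, the two-phase loop moves roughly 27 times less fluid and rejects its heat at a fixed, much higher temperature, which is exactly the heat-quality problem that the single-phase 37 °C outlet runs into when you want to reuse the heat.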

Chris Adams: Ah, so I have one question about how that changes what a data center feels like to be inside, because I've been inside data centers, and they are not quiet places to be. I couldn't believe just how uncomfortably loud they are. And if you're moving away from fans, does that change how they sound, for example?

Because if, even if you're outside some buildings, people talk about some of the noise pollution aspects. Does a move to something like this mean that it changes some of it at all?

Zachary Smith: Oh yeah.

My Truong: Inside of the white space? Absolutely. One of the things that we fear the most inside of a data center is dead silence.

You might actually be able to end up in a data center where there's dead silence, soon.

And that being a good thing. Yeah.

With no fans. Yeah. We'd love to remove the parasitic draw of power from fans moving air across data centers, just to allow that power to go back into the workload itself.

Chris Adams: So for context, if you haven't been in a data center: it was around, I think it felt like, 80 to 90 decibels to me, plus it could have been more, actually. If you have a meter on a wearable or on a phone, as soon as it's above 90 decibels, that's louder than lots of nightclubs, basically. So this is one thing that I felt, and it sounds like this can introduce some changes there as well, rather than just the energy and water usage we've been talking about. Right?

Zachary Smith: Yeah, most data center technicians wear ear protectors all the time, can't talk on the phone, have to scream at each other, because it's so loud. Certainly there are some really nice quality-of-life improvements that can happen when you're not blowing that much air around and spinning up multiple-thousand-RPM fans.

My Truong: 25,000 to 30,000 RPM fans will require double hearing protection to even be able to function in the space.

Yeah, that's the thing.

A lot of energy there.

Chris Adams: Oh, okay. Cool. So these are some of the shifts this makes possible. You might have data centers where you're able to be more active in terms of actually working with the grid, because for all the things we might do as software engineers, there's actually a standard which makes sure that the things we see Google doing, or Meta talking about in academic papers, could be more accessible to more people.

That's one of the things that having standards, like Open19, might do, because there are just so many more people using 19-inch racks and things like that. That seems to be one thing. So maybe I could actually ask you folks: this is one thing that you've been working on, and My, you're obviously running an organization, ZutaCore, here, and Zach, it sounds like you're working on a number of these projects.

Are there any particular open source projects or papers, with some of the more wacky ideas or more interesting projects, that you would point people to? Because when I talk about data centers and things like this, there's a paper called the Ecovisor paper, which is all about virtualizing power, so that you could have power from batteries going to certain workloads and power from the grid going to other workloads.

And we've always thought about it as going one way, but it sounds like, with things like Project Mycelium, you can have things going the other way. So for people who are really into this stuff, are there any good repos you would point people to? Or is there a particular paper that you found exciting that you would direct people to, for those who are still with us and keeping up with the, honestly, quite technical discussion we've had here?

Zachary Smith: Well, not to toot My's horn, but reading the Open19 V2 specification, I think, is worthwhile. Some of the challenges we dealt with at a server and rack level are indicative of where the market is and where it's going. There's also great stuff within the OCP Advanced Cooling working group. And I've found it very interesting, especially, to see some of what's coming from hyperscale, where they are able to move faster through a verticalized integration approach. And then I've just been really interested in following the power systems side, and related work from the EV industry. That's an exciting area where we can start to see data centers not as buildings for IT, but as energy components.

So when you're looking at EV or grid-scale renewable management, I think there are some really interesting tie-ins that our industry, frankly, is not very good at yet.

Ah.

Most people who are working in data centers are not actually power experts from a generation or storage perspective.

And so there are some real educational opportunities there. One resource I've found, My, I don't know if they have it, is the 7x24 Exchange conference group, the critical infrastructure conference, which covers everything from water systems and power systems to data centers; it has been a really great learning place for me.

But I'm not sure if they have a publication that is useful. We have some work to do in moving our industry into transparent Git repos.

My Truong: Chris, my favorite is actually the OpenBMC codebase. It provides a tremendous gateway into what used to be a very closed ecosystem. Being able to look through a code repo of a Redfish API, and being able to rev that spec in a way that's useful and implementable in an ecosystem, has been my favorite place outside of hardware specifications.

Chris Adams: Ah, okay. So I might try and translate that. The BMC thing, this is basically the bit of computing which essentially tells software what's going on inside a server, how much power it's using and stuff like that. Is that what you're referring to? And is OpenBMC something that used to be proprietary where there's now a more open standard, so that there's visibility that wasn't there before?

Is that what it is? 

My Truong: Right, that's exactly right. In years past, you had a closed ecosystem on the service controller, or BMC, the baseboard management controller module inside a server, and being able to look into that code base was always very difficult at best and traumatic at worst. But having OpenBMC reference code out there,

being able to look and see an implementation, and port that code base into running systems, has been very useful, I think, for the ecosystem, to get more transparency, as Zach was saying, into API-driven interfaces.

Oh.

What I'm seeing is the prevalence of that code base now showing up in a number of different places, and the patterns being carried, as Zach was saying, into power systems. We're seeing this become more and more prevalent in power shelves and power control,

places where we used to not have access, or where we used to use programmable logic controllers to drive things. They're now becoming much more software-ecosystem driven, and opening up a lot more possibilities for us.
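
For anyone further up the stack who wants to see what that access looks like, here's a small sketch of reading power draw from a BMC over Redfish, the DMTF standard API that OpenBMC implements. The host, credentials, and exact resource layout vary by machine (and newer schemas move power into a PowerSubsystem resource), so treat the paths below as typical rather than guaranteed:

```python
import requests

BMC = "https://bmc.example.internal"   # hypothetical BMC address
AUTH = ("admin", "password")           # placeholder credentials

# Walk the standard Redfish tree: service root -> chassis collection.
chassis = requests.get(f"{BMC}/redfish/v1/Chassis", auth=AUTH, verify=False).json()

for member in chassis.get("Members", []):
    # The classic Power resource exposes consumed watts per power control domain.
    url = f"{BMC}{member['@odata.id']}/Power"
    power = requests.get(url, auth=AUTH, verify=False).json()
    for control in power.get("PowerControl", []):
        print(member["@odata.id"], control.get("PowerConsumedWatts"), "W")
```

This is exactly the kind of number that used to be locked inside proprietary controllers, and it's what carbon- and grid-aware software further up the stack needs if it's going to act on real power data.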

Chris Adams: Okay. I'm now understanding the whole idea behind Mycelium: like roots reaching further down into the actual hardware to do things that couldn't be done before. Okay, this now makes a lot more sense. Yeah.

Peel it back. One more layer.

Okay. Stacks within Stacks. Brilliant. Okay. This makes sense. Okay folks, well, thank you very much for actually sharing that and diving into those other projects.

If we can, we'll add some links to some of those things, 'cause I think OpenBMC is one thing that is actually in production in a few places. I know that Oxide Computer use some of this, but there are other providers who also have that as part of their stack now.

Right.

My Truong: We also put it into production when we were part of the Packet/Equinix team. So we have a little bit of experience running this code base with real production workloads.

Chris Adams: Oh wow. I might ask you some questions outside this podcast, 'cause one thing we always struggle with is finding who's actually exposing any of these numbers to people further up the stack, because it's a real challenge. Alright, okay, we're coming up to time, so I just want to leave one question with you folks, if I may.

If people have found this interesting and they want to like, follow what's going on with Zach Smith and My Truong, where do they look? Where do they go? Like, can you just give us some pointers about where we should be following and what we should be linking to in the show notes? 'Cause I think there's quite a lot of stuff we've covered here and I think there's space for a lot more learning actually.

Zachary Smith: Well, I can't say I'm using X or the like on a constant basis, but I'm on LinkedIn @zsmith; connect with me there, follow, I post occasionally on the working groups and other things I'm part of. And I'd encourage folks, if they're interested: we're very early in this hardware working group within the GSF.

There's so much opportunity. We need more help, we need more ideas, we need more places to try things. So if you're interested, I'd suggest joining or coming to some of our working group sessions. It's very early and we're open to all kinds of ideas. As long as you're willing to, to copy a core value from Equinix, speak up and then step up, we'd love the help. There's a lot to do.

Chris Adams: Brilliant, Zach. And My, over to you.

My Truong: LinkedIn as well. I'd love to see people here as part of our working groups, and to see what we can move forward here in the industry.

Chris Adams: Brilliant. Okay. Well, gentlemen, thank you so much for taking me through this tour all the way down the stack into the depths that we as software developers don't really have that much visibility into. And I hope you have a lovely morning slash day slash afternoon depending on where you are in the world.

Alright, cheers fellas.

Thanks Chris.

Thanks so much. 

 Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

5 months ago
50 minutes 34 seconds

Environment Variables
Cloud Infrastructure, Efficiency and Sustainability
Host Anne Currie is joined by the esteemed Charles Humble, a figure in the world of sustainable technology. Charles Humble is a writer, podcaster, and former CTO with a decade's experience helping technologists build better systems, both technically and ethically. Together, they discuss how developers and companies can make smarter, greener choices in the cloud, as well as the trade-offs that should be considered. They discuss the road that led to the present state of generative AI, the effect it has had on the planet, as well as their hopes for a more sustainable future.

Learn more about our people:
  • Anne Currie: LinkedIn | Website
  • Charles Humble: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

News:
  • The Developer's Guide to Cloud Infrastructure, Efficiency and Sustainability | Charles Humble [01:13] 
  • Charles Humble on O'Reilly [01:50] 
  • Building Green Software [Book] [02:09]
  • Twofish Music [48:03]

Resources:
  • User Interface Design For Programmers – Joel Spolsky [12:03] 
  • Environment Variables Episode 100: TWiGS: Sustainable AI Progress w/ Holly Cummins [18:12] 
  • Green Software Maturity Matrix [19:09] 
  • Writing Greener Software Even When You Are Stuck On-Prem • Charles Humble • GOTO 2024 [23:42]
  • Electricity Maps [23:57]
  • Cloud Carbon Footprint [36:52] 
  • Software Carbon Intensity (SCI) Specification | GSF [37:06]
  • ML.energy [38:31]
  • Perseus (SOSP '24) - Zeus Project | Jae-Won Chung [41:26] 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Charles Humble: In general, if you are working with vendors, whether they're AI vendors or whatever, it is entirely reasonable to go and say, "well, I want to know what your carbon story looks like." And if they won't tell you, go somewhere else. 

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Anne Currie: Hello and welcome to Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. Today I'm your guest host Anne Currie, and we'll be zooming in on an increasingly important topic, cloud infrastructure, efficiency and sustainability.

Using the cloud well is about making some really clever, really difficult choices upfront. And those choices have an enormous impact on our carbon footprint, but we often just don't make them.

So Charles Humble is a writer, podcaster, and former CTO who has spent the past decade helping technologists build better systems, both technically and ethically. He's the author of The Developer's Guide to Cloud Infrastructure, Efficiency and Sustainability, a book that breaks down how cloud choices intersect with environmental impact and performance.

So before we go on, Charles, please introduce yourself.

Charles Humble: Thank you. Yes, so as you said, I'm Charles Humble. I work mainly as a consultant, and also as an author and a technologist. My own business is a company called Conissaunce, which I run. And I'm very excited to be here. I speak a lot at conferences, most recently mainly about sustainability. I've written a bunch of stuff with O'Reilly, including a series of shortcut articles called Professional Skills for Software Engineers, and, as you mentioned, most recently this ebook, which I think is why you've invited me on.

Anne Currie: It is indeed. Yes. So, to introduce myself, my name is Anne Currie. I've been in the tech industry for a pretty long time, pretty much the same as Charles, about 30 years. And I am one of the authors of O'Reilly's new book, Building Green Software, which is entirely and completely aimed at the folks who will be listening to this podcast today.

So if you haven't read it, or listened to it, because it is available in an audio version as well, then please do; you'd enjoy it. So, let's get on with the questions that we want to ask today.

And we'll link to it in the show notes below. In fact, everything we'll be talking about today will be linked to in the show notes below. But let's start with one of the key insights from your book, which is that choices matter. Things like VM choices matter, but they're often overlooked when it comes to planning your cloud infrastructure.

What did you learn about that? What do you feel about that, Charles? 

Charles Humble: It's such an interesting place to start. So I think, when I was thinking about this book and how I was putting it together, my starting point was that I wanted a really easy on-ramp for people. And that came from, you know, speaking a lot at conferences, and through some of the consulting work I've done, and having people come up to me and say, "well, I kind of want to do the right thing, but I'm not very clear what the right thing is."

And I think one of the things that's happened is we've been very good at talking about some of the carbon-aware computing stuff, you know, demand shifting and shaping and those sorts of things. But that's quite an ambitious place to start, and oftentimes there are so many easier wins, I think. And I kind of feel like I want to get us talking a little bit more about some of the easy stuff, 'cause it's stuff that we can just do. The other thing is, you know, human beings, we make assumptions, and we learn things, and then we don't go back and reexamine those things later on. So I've occasionally thought to myself, I ought to write a book called something like Things That Were True But Aren't Anymore, or something like that,

because we all have these things. Like, my mental model of how a CPU works, until probably about two years ago, was basically a Pentium II. And CPUs haven't looked like a Pentium II for a very long time, and I have a feeling I'm not the only one. So, you were specifically asking about CPUs and VM choices, and I think a lot of the time those of us, certainly those of us of a certain age, but I don't think it's just us, came through this era where Windows and Intel were totally dominant. And so we naturally default to, well, "Intel will be fine,"

because it was right for a long time.

Anne Currie: Yeah.

Charles Humble: Intel was the right choice.

Anne Currie: Who could ever have imagined that Intel would lose the data center?

Charles Humble: Absolutely, it is extraordinary. I mean, obviously they lost mobile, mainly to ARM, and that was very much a power efficiency thing. Fair enough. But yes, the idea that they might be losing the data center, or might have lost the data center, is extraordinary. But you know, the reality is, first of all, if you are thinking about running your workloads: AMD processors are more or less cross-compatible with Intel ones. It's not totally true, but it kind of is. They have an x86-compatible instruction set, so for the most part, your workloads that will run on Intel will run on AMD.

But not only will they run on AMD, they will probably run on AMD better.

Again, for the most part. There are places where Intel probably has an edge, I would think; if you're doing a lot of floating-point maths, then maybe they still have an edge, I'm not a hundred percent sure. But as a rule of thumb, AMD is going to be, you know, faster and cheaper. And the reason for that has a great deal to do with core density. AMD has more cores per chip than Intel does, and what that means is you end up with more processing per server, which means you need fewer servers to run the same workload. I ran some tests for the ebook, and here's how that came out:

I had a 2,000-VM workload, and we needed 11 AMD-powered servers, running the AMD EPYC chips, versus 17 Intel-powered servers to do the same job. Right? So that's roughly 35% fewer servers. It's not, by the way, 35% less power use; it's actually about 29%, something like that, less power use, 'cause the chips are quite power hungry. But still, that's a big saving, right? And it's also, by the way, a cost saving. The other part of this is, you know, it's probably about 13% cheaper to be running your workload on AMD than Intel. Now, obviously, your mileage may vary and you need to verify everything I'm saying.

Don't just assume, "well, Charles Humble said it's true, so it must be." 

That would be a foolish thing to do. But as a rule of thumb, the chances are that in most cases you're better off. And I'll wager that a lot of the time, when you are setting up your VMs, your cloud provider probably defaults to Intel and you probably just think, "well, that'll be fine."

Right?
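
As a sanity check on the arithmetic, a few lines reproduce the shape of Charles's result. Only the 2,000-VM workload, the 11-versus-17 server outcome, and the rough percentages come from his tests; the per-server densities and wattages below are invented purely so the numbers line up:

```python
import math

vms = 2_000

# Hypothetical VM densities, chosen only to reproduce the server counts above.
amd_vms_per_server = 182    # higher core density per socket
intel_vms_per_server = 118

amd_servers = math.ceil(vms / amd_vms_per_server)      # -> 11
intel_servers = math.ceil(vms / intel_vms_per_server)  # -> 17

print(f"servers needed: AMD {amd_servers}, Intel {intel_servers}")
print(f"fewer servers: {1 - amd_servers / intel_servers:.0%}")   # ~35%

# Assumed per-server draw: the AMD boxes pull more power each,
# so the fleet-level saving is smaller than the server-count saving.
amd_watts, intel_watts = 850.0, 780.0
power_saving = 1 - (amd_servers * amd_watts) / (intel_servers * intel_watts)
print(f"power saving: {power_saving:.0%}")                       # ~29%
```

The point is less the exact numbers than the shape of the calculation: server count falls with core density, while the power saving is diluted by the hungrier chips.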

So it's kind of a case of trying to flip that script. Maybe you default to AMD; maybe you evaluate whether ARM processors will work. We are seeing another surge of ARM in data centers, though, as I said, that comes with some trade-offs. In mobile, the trade-offs with ARM versus anything else are pretty straightforward; in data centers it's a little bit more nuanced. But basically it's that. And I think it's this thing, as I say, of these assumptions that we've just built up over time, that we're not very good at going back and reexamining.

I'm sure you have experiences of this, and I know I have,

where, you know, I've been brought into a company that's having performance problems. There's one I remember vividly from decades ago: an internet banking app. It was a new internet bank written in Visual Basic, weird choice, but anyway, go with me here. And it was all MQ Series, IBM MQ Series, under the hood, right? So basically you've got messages written in XML being passed around between little programs. It looks a bit like microservices, but 20 years ago, before we had the term, roughly. And when you read a message off an MQ queue, you read it off essentially one byte at a time.

And what they were doing in a loop in Visual Basic was basically saying string equals string plus next byte. Does that make sense? So, string equals string plus new string, that kind of idea. Now, under the covers, that's doing a deep string copy every single time. But they had no idea, 'cause they were Visual Basic programmers and didn't know what a deep string copy even was.
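The same anti-pattern, sketched in Python rather than Visual Basic: building a string one piece at a time copies everything accumulated so far on each pass, so total work grows quadratically. (CPython can sometimes optimize this in place, but you shouldn't rely on it.)

def read_message_slow(chunks):
    message = ""
    for chunk in chunks:
        message = message + chunk   # copies the whole string each time: O(n^2) overall
    return message

def read_message_fast(chunks):
    # The idiomatic fix: accumulate pieces and join once, which is O(n).
    return "".join(chunks)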

Fair enough. And then they were going, "why is our audit process grinding to a halt?"

And the reason is, well, hidden away under the covers, 'cause all you see is, like, an API. But what I'm getting at is we get very abstracted away from what the hardware is doing

because most of the time that's fine, right?

That's what we want, except that our abstractions leak in weird ways. And so sometimes you kind of need to be able to draw on "this is what's actually happening" to understand. So as I say, in the case of,

in the case of CPUs, if you haven't been paying attention to CPUs for a while, you probably think Intel still has the edge, but right now, sorry, Intel, they don't.

Hope that changes. Competition is always good. But you know, it's just a great example of, you probably don't even think about it. You probably haven't thought about it for years. I know, honestly I hadn't.

Anne Currie: Yeah.

Charles Humble: But then you start running these numbers and go, "gosh, that's, you know, like a 30% power saving."

That's, at any sort of scale, that's quite a big deal. And so a lot of the things that I was trying to do in the book was really that. It was just saying, well, what are some of the things that we can do that are easy things,

that make a massive difference?

Anne Currie: It's interesting. What you're saying there reminds me a little bit of somebody who was a big name in tech back in our early days, you'll remember him very well: Joel Spolsky. He used to write a thing, you know, "what would Joel do?" He did a lot of work on usability, studying usability.

And he'd say, well, you're not looking to change the world and rewrite all these systems. You are often just looking for the truffles, the small changes that will have an outsized effect. And what you're saying is that, for example, moving from Intel to AMD is a small truffle that will have an outsized effect, if you do it at the right time.

Whereas it's not so much like that, as you say, with going to an ARM chip, you know, the Graviton servers that are being pushed very heavily by AWS at the moment. Big improvement in energy use and reductions in cost, but that is not an instant flick-of-a-switch move where you just go over. There are, you know, services that are no longer available; you're gonna have to retest and recompile and do all the things. So it's not such an obvious truffle. But you are saying that the Intel-to-AMD switch might be a really easy win for you.

Charles Humble: Yeah, absolutely. Absolutely. It's funny you mentioned Joel Spolsky there, 'cause I read his User Interface Design for Programmers, I think the book is called, about 30 years ago, probably. And everything I know about user interfaces, I swear, still comes from that book.

It was such a brilliant book. It's also hysterically funny; very wittily written, with some wonderful examples of, you know, just terrible bits of user interface. Like the Windows 95 Start button, which is in the bottom left-hand corner, except that if you drag to the bottom left-hand corner of the screen, which is one of the easiest places on a screen to hit, you miss the Start button, because aesthetically it looked wrong without a border around it.

But then no one thought, well, maybe we should just make it work if you miss but you're right there in the corner. You know, it's full of examples like that. It's very funny. And yeah, absolutely, this business of, as I say, we have as an industry been very profligate, right?

We've been quite casual about our energy use and our

hardware use. So there's another example, which is to do with infrastructure and right sizing.

Again, this is just one of those things, it's such an easy, quick win for people

and it's another thing that connects to this business of our old assumptions. So when I started in the industry, and probably when you started in the industry and we ran everything in our own data centers, procurement was very slow, right?

If I needed a new server, I probably had to fill in a form and 10 people had to sign it, and then it would go off to procurement and it would sit doing, heaven knows what for a couple of months, and then eventually someone might get around to buying a server and then they'd install the software on it and then it would get racked.

And you know, like six months of my life could have gone by, right.

And so what that meant was, if I was putting a new app in, at some point someone would come along to you and go, "we're putting this new app in. How many servers do you need?" And what you'd do is run a bunch of load tests on, I dunno, LoadRunner or something like that.

You'd work out what the maximum possible concurrent, oh, sorry, concurrent was a poor choice of word there, simultaneous number of users on your system would be, rather. You'd simulate that load, and that would tell you how many boxes you needed. So suppose that said four servers: you'd go to procurement and you'd go, "eight, please."

Anne Currie: Indeed. 

Charles Humble: Right. And no one would ever say "why do you need eight?" Because that's just what we do. And what's weird is we still do it, right, even though elastic compute on the cloud means surely we don't need to. We kind of have this mindset of, "well, I'll just add a bit more, just to be on the safe side, 'cause I'm not too confident about my numbers."

Anne Currie: There is a logic to it, if it's easy, because the thing that you fear is that you'll under-provision and it'll fall over. So there's a big risk to that. Over-provisioning, yes, costs you more, but it's hard. It's really hard to get the provisioning perfect.

So we over-provision, and then you always intend to come back later and right-size. And of course you never do, because you never get a chance to come back and do things later.

Charles Humble: Something I say a lot to the companies that I consult to is "well just run an audit."

Anne Currie: Yes, indeed. Yeah.

Charles Humble: Have a three-month process, or a, you know, three-month or six-month mission, where we're gonna do a right-sizing exercise. We're gonna look for zombie machines. So those are machines that were, you know, once doing something useful but are doing nothing useful anymore. And also look for machines that are just sitting idle, and get rid of them. You actually have an amazing story in your O'Reilly book, the Building Green Software book, from Martin Lippert. He was tools and sustainability lead for VMware, Broadcom, part of the old Spring team.

He talks about how in 2019, I think it was, VMware consolidated a data center in Singapore. They were moving the data center, and basically they found that something like 66% of all the host machines were zombies. 66%.

Yeah. And that's not untypical.

Anne Currie: No, it's not.

Charles Humble: I've gone and done audits. 50% plus is quite normal.

 So I have this like thing that I quite often say to people, I reckon you can halve your carbon emissions

in your IT practice just by running an audit and getting rid of things you don't need. 

And it may even be more than that. 
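A minimal sketch of what such an audit can look like on AWS, using boto3; the 2% threshold and 14-day window are arbitrary choices for illustration, and a real audit would also check network and disk activity before declaring a machine a zombie:

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start, EndTime=end,
            Period=86400, Statistics=["Average"],   # one datapoint per day
        )
        points = stats["Datapoints"]
        avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg < 2.0:
            print(f"Possible zombie: {instance_id} (avg CPU {avg:.1f}% over 14 days)")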

Anne Currie: Yeah, indeed. As VMware discovered. And people do it at the time when they move data centers. I often think this is probably a major reason why, when people go, "oh, you know, I repatriated, I moved away from the cloud, back in-house, and I saved a whole load of money."

Yeah, you would've saved that money doing that kind of exercise in the cloud as well. Probably more, because the cloud is both amazing, it has amazing potential for efficiency, because it has great services that are written to be very efficient, and you wouldn't be able to write them that efficiently yourselves.

So there's amazing potential. Spot instances, burstable instance types, serverless, you know, there's loads of services that can really help you be efficient. But it's so easy to over-provision that inevitably everybody over-provisions massively. And especially if you lift and shift into the cloud, you massively over-provision.

Charles Humble: There's a related thing there as well, because it's so easy to spin something up and then you just forget about it. Even on my own, you know, personal projects, I've suddenly got a bill from Google or something, and I've been like, "oh, hello, what's that then?"

And, you know, it's something that I spun up three months ago for an article I was writing or something and I'd just totally forgotten about. And it's been sitting there running ever since. And you can imagine how much worse that is in an enterprise; this is just me on my own doing it.

And it's that kind of thing, I think. So thinking about things like autoscaling, you know, scaling up and remembering to scale back down again. People often scale up and don't scale down again. There's some of the Holly Cummins stuff around Lightswitch Ops, this idea that, you know, basically you want to be able to spin your systems back up again really easily.

That sort of stuff. Again, this is all stuff that's quite easy to do, relatively speaking.

Anne Currie: Relatively. So much easier than rewriting your systems in Rust or C, I can assure you of that.

Charles Humble: Well, a hundred percent, right? And, again, you know, I've made this joke a few times on stage, and it's absolutely true. Because we're programmers, we automatically think, "oh, I'll go and look at a benchmark that tells me what the most efficient language is," and it will be C or C++ or something.

And then, "we will rewrite everything in C or C++ or Rust." Well, that would be insane, and your company would go bust, and nobody is gonna sponsor you to do that, for very good reason. And

what you want to be doing is you want to be saying, "well, you know, what are the pragmatic things we can do that will make a huge difference?"

And a lot of those things are, you know, rightsizing. It's a really good example.

Anne Currie: Yeah, I mean, clearly this is something that you and I have discussed many times, and it was one of the reasons why, at the end of Building Green Software, we devised the Green Software Maturity Matrix that we donated to the Green Software Foundation,

Charles Humble: Yes. 

Anne Currie: because what we found over and over again, when we talked at conferences and went out and spoke to people, is that they had a tendency to leap right to the end, to rewriting things. You know, they say, "well, we couldn't rewrite everything in C or Rust or we'd go out of business, so we won't do anything at all." And they step over all the most important things, they step over all the truffles: switching your CPU choice, switching your VM choice, doing right-sizing audits, doing a basic audit of your systems and turning off stuff, doing a security audit. Because a lot of these zombie systems actually should be turned off in a security audit: if they're there and they're running, and they're not being patched, and nobody owns them anymore, and nobody knows what they're doing anymore, they will get hacked.

They are the ways into your system. So sometimes the way to pitch this is as a security audit.

Charles Humble: Absolutely. Yes, and I do use the Maturity Matrix quite a lot in this ebook. Actually, it's one of the things that I reference all the way through it, for exactly this reason: because, as I said, I think we tend to go to the end a lot. And actually a lot of the stuff is so much earlier on than that.

And I think it's a really important thing to realize that there's a huge amount you can do. And actually, as well, it's gonna save you an awful lot of money. And given the kind of very uncertain business environment that we're in, where people are very worried about investing at the moment for all sorts of quite sensible reasons, this is one of those moments where, actually, if you're thinking "I want to get my business, or the IT within my company, onto a more sustainable footing," this is absolutely the right time to be having those conversations with your CFO, with your execs. Because this is the time when businesses need to be thinking, "well, how do I cut cost?" And there's a huge amount of waste. I guarantee you, if you've not looked at this, there will be a huge amount of waste in your IT that you can just get rid of

and be a bit of a hero and, you know, do good by the planet at the same time.

It's like, what's not to like?

Anne Currie: Yeah, because, I mean, different companies, different enterprises, different entities have different roles in the energy transition. For most enterprises, your role is to adopt modern DevOps practices, really. But you don't have to start there. You can start with, as you say, a manual audit.

Sometimes I've heard it called a thriftathon, where you just go through and you go, "do you know that machine? Turn it off." You can use the scream test method: you don't think anyone's using it, turn it off, and find out if anybody was using it. And then you can use that to step yourself up to the next level.

You and I both know Holly Cummins, who was a guest a couple of episodes back on this podcast. And she introduced the idea of Lightswitch Ops, which is the first kind of automation. If you haven't done any automation up till now and you want to learn how, a really good first bit of automation is the ability to turn machines off automatically, maybe for a period overnight. And you try that out on machines like your test suites, to get yourself to the simplest form of automation. It can also, if you're on the right models and you're in the cloud, potentially, or you have the right

infrastructure, save you money. It might not always save you money, because you have to have made the right infrastructure choices. It might just be that the machine stays on and doesn't really do anything; you've just turned off your application. But you really want to be turning things off to save power.

You know, and it's a really good way of getting you into the DevOps mindset, which is where everybody needs to be with so many payoffs.
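A first step towards Lightswitch Ops might look like the sketch below, again assuming AWS and boto3; the "lightswitch" tag is an invented convention for this example, and the two functions would be wired to a morning and an evening schedule:

import boto3

ec2 = boto3.client("ec2")

def tagged_instances(state):
    # Find instances opted in to overnight shutdown via a tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:lightswitch", "Values": ["overnight"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    )["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def lights_off():   # run from an evening cron job or scheduler
    ids = tagged_instances("running")
    if ids:
        ec2.stop_instances(InstanceIds=ids)

def lights_on():    # run from a morning cron job or scheduler
    ids = tagged_instances("stopped")
    if ids:
        ec2.start_instances(InstanceIds=ids)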

Charles Humble: Yes.

Anne Currie: But yes, we'll go back to the questions. So, one of your talks is about writing greener software even when you are stuck on-prem, and you talk about the fact that not everybody has the option to move into the cloud.

So what, then? What do you do if you can't move into the cloud?

Charles Humble: Yeah, it is such an interesting question, that. So obviously there are things you can't do, or can't do very easily, and one of the most obvious of those is that you can't, on the whole, choose green locations if you're running stuff in your own data centers. So again, going back to these easy wins, an easy win is to use something like Electricity Maps, which is a tool that basically tells you what the energy mix is in a given region.

And then you say, "I shall run my workloads there, 'cause that looks good." There's a little bit more to it than that. You kind of want a location that not only has the greenest energy mix at the moment, but also has, like, credible plans for that to keep improving.
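For the curious, comparing regions with the Electricity Maps API can be as simple as the sketch below; the endpoint shape, field name, and zone codes are taken from their public docs at the time of writing, so verify them (and bring your own token) before relying on this:

import requests

API = "https://api.electricitymaps.com/v3/carbon-intensity/latest"
TOKEN = "your-api-token-here"  # placeholder; get one from Electricity Maps

def carbon_intensity(zone):
    r = requests.get(API, params={"zone": zone}, headers={"auth-token": TOKEN})
    r.raise_for_status()
    return r.json()["carbonIntensity"]  # gCO2eq/kWh

for zone in ("SE", "FR", "DE", "PL"):  # example zones: Sweden, France, Germany, Poland
    print(zone, carbon_intensity(zone), "gCO2eq/kWh")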

Anne Currie: Yeah.

Charles Humble: Obviously that's really hard to do with your own data centers.

Anne Currie: Yeah.

Charles Humble: As a rule of thumb, you probably don't want to be building new data centers if you can help it, because pouring concrete is not great; there are a lot of costs associated. That said, you do have some advantages in your own data centers, 'cause you have some things that you can control that people on cloud can't. I would say, being honest about it, if you can move things to public cloud, that's probably going to be better. But if you can't, there are still things you can do. One of those things is you have control over the lifetime of your hardware. This gets a little bit complex, but it's basically down to this: hardware has an embodied carbon cost.

That's the cost it takes to construct it, transport it, and dispose of it at the end of its useful lifetime. A device also has the cost it takes to charge it. Now, for your laptops, your mobile phones, your end-user devices, the embodied carbon absolutely dwarfs the carbon cost of charging them over their lifetime.

Anne Currie: Yeah.

Charles Humble: What we talk about with end-user devices is basically: extend the life. Say, you know, 10 years or something like that; keep it. We want to make fewer of them, is really the point. Servers and TPUs and GPUs and those sorts of things, it's a bit more complicated. The reason it's a bit more complicated is because we are getting an awful lot better at making more efficient servers, for all sorts of reasons. So what that means is the trade-offs with each new generation are more complicated. As an example, a lot of your energy use in your data center is actually gonna be cooling. So a CPU or a TPU that's running less hot requires less cooling. That's a big win. These sorts of things are sufficiently important that actually, until gen AI came along, so really three or four years ago, though we were adding massive amounts of compute, the emissions from our data centers were pretty flat. I mean, they were climbing, but not much. So the point here with your own data centers is you have control over that lifetime. So what you can do, assuming you can get the embodied carbon costs from your suppliers, is do the calculations and think about, "well, how long do I keep this piece of hardware going before I turn it over?" Now, I don't want to give you a heuristic on that because it's kind of dangerous, but it's probably not 10 years, right?
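A toy version of that calculation, with every figure invented for illustration; real numbers have to come from your suppliers and your own grid:

embodied_kg = 1500        # kgCO2e to make, ship, and dispose of one new server
old_kwh_per_year = 3500   # annual energy draw of the server it would replace
efficiency_gain = 0.30    # assume the new generation does the same work for 30% less energy
grid_g_per_kwh = 300      # carbon intensity of the electricity feeding the rack

yearly_saving_kg = old_kwh_per_year * efficiency_gain * grid_g_per_kwh / 1000
payback_years = embodied_kg / yearly_saving_kg
print(f"Carbon payback on replacing the box: {payback_years:.1f} years")  # ~4.8 with these numbers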

It's probably five years-ish, maybe, something like that. But run the maths; it's absolutely something you can do. You can also take advantage of things like power saving modes, which your servers will have but which you probably don't turn on, because we used to worry about that kind of thing.

'Cause we have this like, again, one of our old assumptions. We used to imagine that if you power a server down, it might not come back quite the same. Actually that's kind of still true, but, you know, it's fixable, right? So enable power saving across your entire fleet, that will make a huge difference, particularly if you've over provisioned, like we were saying earlier, right? 50% of your servers are idle. Well, they can be asleep all the time, and that helps. It's not the same as turning 'em off, but helpful. You can also look at voltage ranges. So your hardware will have a supported voltage range, and you've probably never thought about it, and I'll admit I hadn't until quite recently.

But actually, again, if you're running at scale, if you set the lowest voltage that your servers will support, at a big scale that will make a considerable difference. And then again, some of the other things we talked about: your CPU choice, again, will make a difference. So think about, you know, "do I need to be buying Intel servers all the time, or could I be buying AMD ones or ARM ones?"

And also look at your cooling. But that's a whole other complicated topic, for all sorts of reasons. Well, in brief: some of the most energy-efficient methods of cooling have their own set of problems, which make the trade-offs really hard. So, like, water-based cooling tends to be very efficient,

tends not to be great for local water tables.

Anne Currie: Yeah.

Charles Humble: It's complicated. But, yeah, as I say, there are a lot of things that are definitely harder. And if you're running in, like, a hybrid environment, chances are, if you have a choice of going public cloud or your own data center, public cloud is probably better. It's absolutely in Google's and AWS's and Microsoft's interests to run their data centers as efficiently as possible, 'cause that's where their cloud profit margin is, right?

Anne Currie: Absolutely.

Charles Humble: The less it's costing them to run the data centers, while you are still paying the same amount, the more money they make.

Anne Currie: Well, I always laugh when I see the numbers on Graviton. So when AWS attempts to persuade you, quite correctly, to move your applications from Intel chips onto ARM chips, they say, "oh, this will save 40% on your hosting bill and 60% on your carbon emissions."

And you think, that suggests to me you've just pocketed quite a nice upgrade in your profitability. And I have no problem with that whatsoever; as things get better, I have no problem with making profits out of it. So I'm gonna pick you up on something. I think everything you've said there is very true,

and I'm gonna take a slightly different take on it, which is: remember that what Charles is saying there is quite detailed stuff. Not everybody here will be a hardware person, but you will have specialists within your organization who can make all these hardware judgements.

The interesting thing is that they can. And it is always the case that, if you have specialists in your organization, the best way to do better is to persuade them that they want to do better. So, if you can persuade your specialists to actually take an interest in this, and to find ways of improving the efficiency of your systems and cutting the carbon emissions, they will do better at it than you will.

Charles Humble: 100%.

Anne Currie: The best thing you can do is persuade them to focus their giant specialist brains on the subject, because the likelihood is that the real issue is they probably aren't thinking about it. It's not top of their mind; they may even think they're not allowed to start thinking about it.

If, at a high level, you can actually get your specialists to turn their attention to these efficiency issues, these carbon reduction issues, that's so much more effective than you going and reading up on it yourself. Get them involved. Go out and talk to people. Use your powers of persuasion. Because what lots of people listening should take away from what Charles just said is that there is a lot of stuff that can be done by your specialist teams that they might not be thinking about doing, or might feel they don't have the time or focus to do. You can potentially help them by focusing them, or giving them some budget or some time to work on it.

Charles Humble: Definitely. Absolutely. Yeah. No, I'm a big believer in specialization in our industry, and I think, actually, this idea that we should all know everything is not helpful. Absolutely, if you've got hardware people, go and tell the hardware people. And it's a thing of incentivizing.

It's like, you know, "we can save money by doing some of these things, or we can reduce our carbon by doing some of these things, and those are good things to do." Yeah, a hundred percent agree with all of that. No disagreements at all.

Anne Currie: Yeah, no, it's interesting, isn't it, that most of human progress has come from the realization that specialists kick the butt of generalists. And I'm a generalist, so, you know, I wish it wasn't true. My job is to kind of encourage specialists to be specialists. And, you know, this is not new news.

It's the theme of Adam Smith's The Wealth of Nations, which he wrote in the 1770s, about why the industrial revolution was happening. It wasn't to do with any kind of technology or anything else. It was the discovery that specialists kick the butt of generalists.

Charles Humble: Hundred percent, yes.

Anne Currie: But now we're gonna get to the final tricky question that we have for you, Charles, one you've been thinking about. Your work often emphasizes the importance of transparency, knowing the carbon footprint of what we build. What tools and practices do you recommend for people to do that?

Charles Humble: Oh, that is a hard question. Yes. Frustratingly hard, actually. So the first thing is, we often end up using proxies,

and the reason we end up using proxies is 'cause measurement is genuinely quite difficult. So cost is quite a good proxy. In Bill Gates' book, I'm blanking on the name of the book, oh, How to Avoid a Climate Disaster,

Anne Currie: Oh yeah. Which is excellent. And again, everybody listening to this should be reading it. Yeah.

Charles Humble: Absolutely. So he, in that book, he does a bunch of calculations, which he calls green premiums and they're

basically the cost of going green.

Now, he doesn't do one for our industry, but I would wager, because we are, as I said, profligate, and I haven't worked this out, I will admit it, but I would think our green premium is probably a negative number.

So, that's to say,

going green is probably cheaper for us. Right.

Anne Currie: I agree.

Charles Humble: So cost is a very good proxy. It is an imperfect proxy. One of the reasons it's an imperfect proxy is because, for example, if you're running a green energy mix, that's not going to be reflected in your electricity bill at the moment. That may change, but at the

moment it doesn't happen.

Right. So it is imperfect, but

Anne Currie: Well, it doesn't happen in some places, and in other places it does. So if you are on-prem and you're in a country with dynamic pricing, like Spain, or zonal pricing, like the UK is talking about having in future, though that's still very up in the air, then it does. But if you're in the cloud, even in those areas, it doesn't at the moment.

Charles Humble: Absolutely. But nevertheless, 'cause as I was saying, you know, like probably half of your servers are doing nothing useful. So cost is a pretty good starting point. Another thing is CPU utilization. So there's something we haven't really talked about, which is this idea, Google calls it energy proportionality,

Anne Currie: Yeah.

Charles Humble: the observation that when you turn a machine on, you turn a server on, it has a static power draw, and that static power draw is quite a lot. How much depends on how efficient the server is, but it might be 50% or something like that. So when it's sitting idle, it's actually drawing a lot of power. The upshot of this is you'd usually have like an optimum envelope for a given server, and that might be somewhere between 50 and about 80%.

It may be a bit lower than that, depending on how good the chips are. Above about 80% you tend to get contention and those sorts of things going on, which is not great. But around and about that operating window. So keeping your CPU utilization high, but not maxed out, is another good one.
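A simple linear model makes the point: because of the static floor, the power cost per unit of work falls steeply as utilization rises. The wattages below are illustrative only:

P_IDLE, P_MAX = 250.0, 500.0  # watts drawn at 0% and 100% utilization

def power(u):                 # u is utilization in [0, 1]
    return P_IDLE + (P_MAX - P_IDLE) * u

for u in (0.1, 0.3, 0.5, 0.8):
    print(f"{u:.0%} busy: {power(u):.0f} W total, {power(u) / u:.0f} W per unit of work")
# 10% busy costs ~2750 W per unit of work; 80% busy costs ~563 W, nearly 5x better.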

Hardware utilization is another good one. Beyond that, so all of the cloud providers have tools of varying usefulness. Google's carbon footprint tool is probably best in class, at least in my experience. I think they take this stuff very seriously and they've done a lot of very good work.

Microsoft's Azure tools are also pretty good. AWS's ones, well, they have just released an update literally as we're recording this, and I haven't had a chance to go and look at what's in the updated version. I'm going to say I think AWS is still a long way behind their competitors in terms of reporting.

Anne Currie: Yeah.

Charles Humble: With a slight proviso that I haven't looked at what's in the new tool properly. But again, there are things there that you can use. There's a tool called Cloud Carbon Footprint, which is an open source thing by ThoughtWorks, and that's quite good. It will work across different cloud providers, so that's kind of nice. You could probably adapt it for your own data centers, I would imagine. Of course, the GSF has a formula for calculating carbon intensity as well. That's more of a product carbon footprint or lifecycle assessment type approach; it's not really suitable for corporate-level accounting or reporting or that sort of thing, but it's quite a good tool as well. And there are a variety of other things you can use. But as I say, if we're talking about the very beginnings, you probably start with the proxies. If you've got a choice of cloud provider, think about the cloud provider that gives you the tooling you need.

And, you know, that might mean, again, going back to our assumptions: time was, you would choose AWS. Maybe you shouldn't be choosing AWS now, or at least maybe you should be asking whether AWS is the right choice.

At least until they, you know, sort of put their house in order a bit more. These are questions that we can reasonably ask. And in general, if you are working with vendors, whether they're AI vendors or whatever, it is entirely reasonable to go and say, "well, I want to know what your carbon story looks like." And if they won't tell you, go somewhere else. In the case of AI, none of the AI companies will tell you. They absolutely won't. And so my advice, if you're looking at running generative AI: everything we just said applies to AI like it applies to everything else, and there are a bunch of very specific AI-related techniques, distillation, quantization, pruning, those sorts of things, fine. But really my advice is, well, use an open source model, and look at something like the ml.energy leaderboard, which will give you an idea of what the carbon cost looks like. And don't use AI from a company that won't tell you, would be my advice. You know, and maybe we can embarrass some of these companies into doing the right things. You never know.

Anne Currie: It would be nice, wouldn't it? It's interesting. So in April, Eric Schmidt got up in front of the US government, in one of their committees, and said, well, you know, at the current rates, AI is going to take up 99% of the grid electricity in the US.

And you think, "it's interesting, isn't it," because that's not a law of nature. There are plenty of countries that are looking at more efficient AI. China are certainly looking at more efficient AI; they want to compete, they wanna be able to run AI. Because, in the end, the business that's going to collapse if AI requires 99% of the US grid is AI. If something cannot go on, it will stop.

Charles Humble: It's a desperate source of frustration for me because it is completely unnecessary.

Anne Currie: Well, it's, you just have to be a bit efficient.

Charles Humble: Just in brief, 'cause again, this is like a whole separate podcast probably,

but just in brief, there are a bunch of things that you can do

Anne Currie: Absolutely.

Charles Humble: that make a huge difference, both when you are collecting your data, when you are training your models, and when you're running them in production afterwards. I have just done a piece of work for The New Stack on federated learning, and in the process of doing that, I talked to somebody called Professor Nick Lane, who is at Cambridge University. He talked about how one of the solutions to the data center cooling problem, which we touched on earlier, is basically what you do with the waste heat. And there are lots of companies in Europe that are looking at using it for things like heating homes or heating municipal swimming pools, that sort of thing, right? You can't do that with an Amazon or a Google or a Microsoft facility, because you have to construct the data center close to where the waste heat is gonna be used.

But there are lots of these small data centers, particularly in Europe. There are companies like T Loop that are doing a lot of this work. And he made the point that with federated learning, you can actually combine these smaller facilities together and then, you know, be training potentially very large models on much, much smaller data centers, which I thought was fascinating. There's a researcher called Jae-Won Chung, Chung is his surname. He's done some extraordinary work looking at what happens when we split stuff across GPUs,

that has to be synchronized, right? So we divide the workload up because it's too big to fit in a GPU and we split it across a bunch of different GPUs and we run all of those GPUs at full tilt, but we don't have to. Because we can't divide the workloads up evenly.

So you have some workloads that are tiny, but that GPU is still running at full power. And what he worked out was, well, if we slow those GPUs down, the job will still end at the same point, but it'll use a lot less energy. So he's built something called Perseus. In his tests with things like BLOOM and GPT-3, it's about 30% less energy use, just from using that,

for exactly the same throughput. So there's no throughput loss, there's no hardware modification. The end results are exactly the same, and you just save 30% of your energy bill, which is a big deal.
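A toy model of the idea, not the actual Perseus algorithm: if time scales as 1/f and power very roughly as f cubed, then energy per shard scales as f squared, so GPUs off the critical path can slow down essentially for free. All numbers below are invented:

shard_work = [1.0, 0.5, 0.25, 0.8]  # relative work per GPU; the first is the straggler
critical = max(shard_work)

baseline = sum(w * 1.0 ** 2 for w in shard_work)            # every GPU at full clock
# Slow each GPU just enough to finish when the straggler does: f = w / critical.
balanced = sum(w * (w / critical) ** 2 for w in shard_work)

print(f"Same finish time, ~{1 - balanced / baseline:.0%} less energy in this toy model")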

Then you go, as I say, things like distillation and quantizing and pruning and shrinking your model size, all of that stuff.

So it frustrates me, because it's so unnecessary. I think we need a carbon tax, and I think the carbon tax needs to be prohibitive. And, you know, bluntly, I think companies like OpenAI should be pushed out of business if they don't get their house in order in time. I was thrilled by

Hannah Ritchie's book, Not the End of the World, which is possibly my favorite book on climate. And again, it's a book that everyone should read; if you haven't read it, go and read it. She has a wonderful quote in there where she says, "I've talked to lots of economists, and all of the economists I've spoken to agree that we need some sort of carbon tax."

And then she goes on to say, "it's maybe the only thing that economists agree on," which I thought was a fine and excellent line.

Anne Currie: It is really interesting, 'cause we disagree slightly: you're not a huge AI fan; I'm a massive AI fan. I want AI, and I also want a livable climate, and they are not mutually exclusive. They can be done. I mean, you don't love AI as much as I love AI, but we are both in agreement that it is not physically impossible to have AI and effective control of climate change. Because, as you were saying about federated learning, and optimizing your GPUs towards the bottleneck tasks, and things like that, workloads that are time-insensitive, that can be shifted in time, maybe delayed, maybe separated and then globbed together again,

they're very good workloads to run on renewable power, which is variably available. So in fact, AI is potentially incredibly alignable with the energy transition. The fact that we don't always do it is a travesty, and it's bad for AI as well as being bad for the planet.

Charles Humble: I want to push back slightly on you saying I'm not a fan of AI. I have quite strong concerns, specifically about generative AI, that are ethical and moral as well as environmental.

Anne Currie: Which I can see.

Charles Humble: And in essence it comes down to the fact that you are taking a bunch of other people's work, and you are building a machine that plagiarizes that work, and you are not compensating those people for it. And also, basically, you have to do tuning of the model, so reinforcement learning with human feedback, and the way that that's done is pretty horrifying when you dig into it. It usually involves, you know, people in places like Kenya being paid $3 an hour to look at the worst content on the internet, day after day.

I mean, one can imagine what that does to you. So I have quite specific reservations with generative AI the way that we are doing it. As it goes, I think there are ways that we could build generative AI that I wouldn't have these ethical problems with, that we're not doing. More generally, I think generative AI is interesting. I don't know that it's useful, but I do think it's interesting. And more broadly, I'm not against AI at all. I'm like, you know,

I've done work with a company that, for example, is using AI to increase the window in which you can treat stroke patients, by, like, hours.

And it's amazing. Amazing work. So they're basically doing image processing to identify different types of stroke, and for some stroke patients the window is much wider. We think of it as being 4.5 hours, but it's much bigger. Stuff like that. And, as you say, grid balancing is gonna get more complicated with renewables, and AI probably has a role to play there.

And I'm not anti. I just think that there are things that we are doing as an industry which are reckless and ill-judged. And, you know, in my tiny little way, I want to be beating the drum, though I'm aware that it's like blowing a kazoo in a thunderstorm: it's quite amusing, but it doesn't actually do much for anybody. As an industry, I think we need to get better, right? And part of the reason I think we need to get better is because the work that we do has a huge impact on the whole planet now, and on society and all sorts of things. And we are still acting like we're a little cottage industry and what we do is inconsequential, but it's not true.

So my reservation with gen AI is that I think it's being done in a desperately irresponsible way, but that doesn't mean it has to be. It just means that's what we're doing. And hey, I might be wrong. You know, I'm not an ethicist; I just have reservations. Also, I am a writer. And a musician, right?

So, you know, like I do have skin in the game. I kind of want generative AI not to work. 'Cause otherwise I don't really have a living anymore, which is a bit of a worry. So, you know, I'm not a neutral observer on this at all, but I just think the way we're doing this is morally, ethically dubious, as well as being very bad for the climate. And I don't think it has to be any of those things.

Anne Currie: Yeah, so it's interesting, we have a slightly different view, 'cause I'm also a writer and a painter, but I've always been so rubbish at making money out of writing and painting that I don't really have anything to say on that. But that is my own fault, a little bit.

Charles Humble: The last question, I'm looking at your script now, sorry, 'cause it's a shared Google doc. And your last question is about the band I write music with in my free time, called Twofish. And the question is: if you could score the soundtrack for a more sustainable future, what would it sound like?

Anne Currie: I forgot about the question. Yeah.

Charles Humble: It's interesting you got that in. So we did the opposite thing, actually. There's a piece on the last Twofish album called Floe. Everything is written by the two of us, but I started that one, and when I started it, what I was trying to do was describe what climate breakdown might sound like in music.

That was kind of my starting point. I'm not sure anyone hearing it would get that, but what I did was go and record a bunch of field recordings, you know, California wildfires and that sort of thing, tune them all to A flat minor, as you do, and then write this very dark, scary piece

that gets a bit drum-and-bassy as it goes on. It's very black and industrial and dark and quite grim, and I rather like it. So I think we'd just have to go the opposite way, right? We'd have to go to the other end of this.

Anne Currie: So Twofish, what's the name of your last album? In fact, which album would you recommend? 

Charles Humble: It's called At Least a Hundred Fingers; that's the last album. And, yeah, Twofish is the band, T-W-O, as in the encryption algorithm, fellow nerds. So, with this one, the sustainable future one: I think of some of my favorite classical composers, late 19th, early 20th century.

People like that were very inspired by the natural world, and they tended also to draw a lot on the folk tunes of the countries where they worked. So I think, melodically, my starting point might be to go to a folk tune, and then use very traditional instruments. So have, maybe, a string section, you know, violins, violas, cello, to try and get some of that lift and air into it. And then have the more electronic stuff that I typically do be very intricate, interconnected, supporting lines. So you have something melodic that is folk, quite traditional instruments, and then this kind of sense of interconnectedness and mechanisms working, something like that. I might have a go at that, actually. Perhaps there'll be a third Twofish album that has that on it. You never know. If you want to look my stuff up, my company website is www.conissaunce.com. I'm Charles Humble on LinkedIn. I'm also

Anne Currie: There will be, we'll have links below in the show notes.

Charles Humble: So yeah, you can find me on all of those. And you can find the music there as well.

Anne Currie: Excellent. And I really recommend the albums. I like them a lot. They're great. 

Charles Humble: Thank you.

Anne Currie: So thank you very much, and thank you to all the listeners today. As a reminder again, all the links that we've talked about today, we have slightly overrun, will be in the show notes below. So, until the next time, thank you very much for listening, and happy building green software.

Charles Humble: Thank you very much indeed for having me. It's been a pleasure. Thanks for listening and goodbye.

Anne Currie: Goodbye. 

Chris Skipper: Hey everyone, thanks for listening. As a special treat, we're going to play you out with the piece that Charles was talking about, Floe by Twofish. If you want to listen to more podcasts by the Green Software Foundation, head to podcast.greensoftware.foundation to listen to more.

Bye for now. 




Hosted on Acast. See acast.com/privacy for more information.

Show more...
6 months ago
55 minutes 13 seconds

Environment Variables
Backstage: Green AI Committee

In this special backstage episode of Environment Variables, producer Chris Skipper spotlights the Green AI Committee, an initiative of the Green Software Foundation launched in 2024. Guests Thomas Lewis and Sanjay Podder share the committee’s mission to reduce AI's environmental impact through strategic focus on measurement, policy influence, and lifecycle optimization. The episode explores the committee’s approach to defining and implementing “green AI,” its contributions to public policy and ISO standards, and collaborative efforts to build tools, best practices, and educational resources that promote sustainable AI development.

Learn more about our people:
  • Chris Skipper: LinkedIn | Website
  • Thomas Lewis: LinkedIn | Website
  • Sanjay Podder: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Green AI Committee [00:00]
  • Green AI Committee Manifesto [03:43]
  • SCI for AI Workshop [05:28]
  • Software Carbon Intensity (SCI) Specification [05:34] 
  • Green Software for Practitioners (LFC131) - Linux Foundation [13:54]

Events:
  • Carbon-Aware IT: The New Standard for Sustainable Tech Infrastructure (May 5 at 6:00 pm CEST · Virtual) [15:53]
  • Inside CO2.js - Measuring the Emissions of The Web (May 6 at 6:30 pm CEST · Hybrid · Karlsruhe, BW) [16:11]
  • Monitoring for Software Environmental Sustainability (May 6 at 6:30 pm CEST · Virtual) [16:45]
  • Green IO New York (May 14 - 15 · New York) [17:02]


If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Chris Skipper: Welcome to Environment Variables, where we bring you the latest news from the world of sustainable software development. I'm the producer of this podcast, Chris Skipper, and today we are thrilled to bring you another episode of Backstage, where we dive into the stories, challenges, and triumphs of the people shaping the future of green software.

In this episode, we're turning the spotlight on the Green AI Committee, a pivotal initiative approved by the Green Software Foundation in March 2024. With the rapid rise of AI, this committee has been at the forefront of shaping how companies innovate sustainably while reducing AI's environmental impact. From driving policies and standards to fostering collaborations and crafting new tools, the Green AI Committee is charting a path toward a more sustainable AI future. Joining us today are Thomas Lewis, the founder of the committee, along with co-chair Sanjay Podder.

Together, they'll share insights on the committee's goals, their strategies for tackling AI's carbon footprint, and the critical role this initiative plays in ensuring AI development supports global net zero ambitions. And as always, everything we discuss today will be linked in the show notes below. So without further ado, let's dive into our conversation about the Green AI Committee.

First, I'll let Thomas Lewis introduce himself.

Thomas Lewis: Hi, I'm Thomas Lewis. I'm a green software developer advocate at Microsoft, and excited to be here. I also work in artificial intelligence, spatial computing, and I've recently been involved in becoming a book nerd again.

Chris Skipper: My first question to Thomas was, what inspired the creation of the Green AI Committee, and how does it aim to shape the GSF's approach to ensuring AI innovation aligns with sustainability goals?

Thomas Lewis: Yeah, so we noticed that we were getting a lot of inquiries. We were getting them from legislators and a lot of technologists. Everybody from, you know, people working at your, you know, typical enterprise to folks who were doing research at universities and learning institutions.

And they were reaching out to try to get a better understanding of how the green software principles that we talk about and those practices applied to this growing impact of AI. It was not unusual to see on social media a lot of interest in this kind of intersection of green software or sustainability with artificial intelligence.

And, you know, this kind of shaped the GSF's approach because in a way we take a slow, methodical approach to thinking about the challenges of green AI and we tend to bring in a lot of experts who have thought about this space from quite a few different viewpoints. And we don't just look at it in a binary way of good or bad.

And I think a lot of times, especially online, it can be like, well, you know, AI is, you know, burning the planet down. And you know, and that the resources needed to run these AIs are significant, which is not untrue. And that's the thing I appreciate with the GSF is that you know, we look at those elephants in the room.

But with acknowledging those challenges, we also look at AI to help support sustainability efforts by, again, looking at it from those different vectors and then thinking of a viewpoint and also backing it up with the appropriate tools, technologies, and education that may be needed.

Chris Skipper: The committee's manifesto emphasizes focusing on reducing the environmental impact of AI. Could you elaborate on why this focus was chosen rather than areas like AI for sustainability or responsible AI?

Thomas Lewis: That's a good question. We tend to look at things from a variety of vectors and don't necessarily limit ourselves if we think it is important to dig into these other areas. But one of the things I do like, about the GSF is that typically when we start a committee or start a project, we always start with a workshop.

And what we do is we ask for a lot of experts to come to the, you know, virtual table, so to speak, and walk actually through it. So, everyone gets a voice and gets to put out an opinion and to brainstorm and think about these things. And these workshops are over multiple days. And so, typically the first day is kind of like just getting everything on the board.

And then the, you know, second time that we get together is really about how to kind of say, "okay, how do we prioritize these? What do we think are the most important? What should we start on first? And then what are the things that, you know, we put on the backlog?" 

And then the third, you know, one is typically where we're really getting sort of precise about "here's where our focus is going to be." 

So the conversation is always very broad in the beginning, right? Because you have all of these people coming to the table to say what's important. But as we kind of go through that, so, after a lot of that discussion, we decide on a prioritized focus. But of course we'll come back to others as we iterate because there are gonna be opportunities where, hey, maybe it is more important that we focus on a certain thing.

So, like, for example for the GSF, it is about building out the SCI for AI. So, if you're familiar with our Software Carbon Intensity spec, which is now a standard, that is one of the projects that came out of that workshop and that thinking. Because, you know, the first thing you have to do if you wanna make a change in what you do is measure it, right?

You have to measure what your carbon intensity is, whether it's AI or gaming or blockchain or what have you. And so I think having this process of doing these workshops is really what gets us to our priority. So I don't think there's always a kind of crisp answer to why we did this or didn't do that, or why we prioritized it a certain way.

It's really that kind of collective coming together, which I think is what really makes the foundation very powerful because everyone has a voice in it.
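For reference, the Software Carbon Intensity specification Thomas mentions defines the metric as SCI = ((E × I) + M) per R, where E is energy consumed, I is the grid carbon intensity, M is embodied emissions allocated to the workload, and R is the functional unit. A worked example, with invented numbers:

def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    # SCI = ((E * I) + M) per R, in gCO2e per functional unit
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# e.g. an inference service over one hour, reported per 1,000 requests:
per_request = sci(energy_kwh=2.0, intensity_g_per_kwh=400, embodied_g=150,
                  functional_units=50_000)
print(f"{per_request * 1000:.0f} gCO2e per 1,000 requests")  # 19 with these numbers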

Chris Skipper: The committee recently responded to a bill drafted by US Senators to investigate AI's environmental impact. How do you see the role of the Green AI Committee in shaping public policy and regulations?

Thomas Lewis: I've always seen the Green AI Committee's role in this as a trusted advisor, backed up with technical credibility and intellectual honesty. Our intent is not to rubber stamp legislation or just be another endorsement on a bill, but to review bills and papers that come to us with experts in this field and to call out things that we think are important to sustainability or also question things. What I really have appreciated is what comes to us is there has never been an intention for us just to say, "this is good" and give the check mark. But it really is, has been like, "hey, we want your feedback. We wanna understand how we can make these things better for our constituents."

And the other thing is that the committee also works very closely with our own policy group within the GSF because many of the members, including myself, don't work with legislators and politicians normally. And so there's a vernacular to the things that they talk about and how they approach things.

And so our policy group is also very helpful in this. So, you know, our committees aren't based on, "hey, everything related to AI will come through this committee." We have a lot of different groups, and those groups may be like the policy group, it may be the open source projects that are within the GSF and some of our education opportunities that are there.

But yeah, I would say from my perspective the role is mostly as a trusted advisor. And I think that if that is how people reflected the relationship regarding policy and advocacy, I would think that we are doing a good thing.

Chris Skipper: From the initial stages of founding the Green AI Committee to where it stands now, what have been the most valuable lessons learned that could guide other organizations aiming to promote sustainability in AI?

Thomas Lewis: I would say, first take a thoughtful approach in how you wanna approach things. Not only is green software a significant amount of tech, people and communities, but AI builds on top of that and has its own things, and the innovation is happening way faster than most people can keep up.

And so you've gotta take the time to figure out what you wanna focus on first. You can't say you're just gonna try to cover every angle and every thing. Second, I would say take a less dogmatic approach to your efforts. It's easy to say "things should be this way," right? Or, "hey, we're gonna do something 100%, or it's considered a failure."

This space is rapidly changing. This environment especially. So what you have to do is kind of take the time to get a wide variety of insights and motivations, and then methodically figure out what a hopefully optimal approach is going to look like. And then the third which, you know, may not be just related to, you know, green software and AI, but surround yourself with people who are smarter and more knowledgeable than yourself.

One of the things that I absolutely love being on this committee is there are just super smart people that I get to work with, like the people that are on this podcast. And I learned so much because we all have different contexts, we have different viewpoints and we have various experiences, right?

So we've got you know, folks who are in big companies and people who are in small companies and people who are just starting their sustainability journey. There's people who have been doing this for a long time. We have students, we have researchers. There's all kinds of people. So the more that you can kind of understand where a lot of people are coming from,

and again, what their context is, you're gonna find that you're gonna really be able to do a whole lot more than you have been able to before. And you may get ideas from places that you think you didn't before. And again, this isn't just with the Green AI Committee, I think this is in life, you know, and again, if you surround yourself with people who are smarter and more knowledgeable than yourself I always think that you're going to be in a better place and you'll end up being a better person for it.

Chris Skipper: Thanks to Thomas for sharing those insights with us. Next up we have Sanjay Podder. Sanjay is not only co-chair of the Green AI Committee, but also host of our other podcast here at the Green Software Foundation, CXO Bytes. My first question to Sanjay was how does the Green AI Committee contribute to reducing AI's carbon footprint?

And can you share specific strategies or tools the committee is exploring to achieve these goals?

Sanjay Podder: The Green AI Committee brings together experts from across the industry to shape what it truly means to build AI sustainably. Our goal is to not only define green AI, but to make it practical and actionable for developers, data scientists, and technology leaders alike. We started by creating a simple developer-friendly definition of green AI.

One that anyone in the ecosystem can understand and apply. But we did not stop there. We have taken a lifecycle approach breaking down the environmental impact of AI at every stage from data processing and model training to deployment and inference. This helps pinpoint where emissions are highest and where optimization efforts can have the biggest impact.

We are also actively working on strategies and tools to support these goals. By embedding best practices across the AI lifecycle, we are driving a shift towards AI systems that are not just powerful, but also responsible and sustainable.

Chris Skipper: The manifesto highlights the importance of partnerships with nonprofits, governments, and regulators.

Could you share some examples of how collaborations have advanced the Green AI committee's mission?

Sanjay Podder: The committee understands that tackling AI's environmental impact demands broad collaboration with various stakeholders to create comprehensive standards. These standards will focus on transparency, software and hardware efficiency, and environmental accountability. Engaging a wide range of AI and ICT organizations will help build consensus and ensure that sustainability is a core design principle from the start.

Chris Skipper: The committee is tasked with supporting projects like the development of an ISO standard for measuring AI's environmental impact. What milestones have been achieved in this area so far, and what are the next steps?

Sanjay Podder: Despite rapid advancement in AI, practitioners and users currently lack clear guidance and knowledge on how to measure, reduce, and report AI impacts. This absence limits public awareness and hinders efforts to address AI's environmental footprint, making it more challenging to develop AI sustainably.

To address these challenges, the committee is actively pursuing initiatives to provide practitioners and users with the necessary knowledge and tools to minimize AI's environmental footprint. The goal is to increase awareness of green AI principles and promote sustainable AI development practices. One example is a Green AI Practitioners course to increase awareness of green AI and understanding of the implications of AI development on the environment.

It'll explain the fundamental principles of green AI development and solutions, and provide practical, actionable recommendations for practitioners, including guidelines for measurement. Another is Software Carbon Intensity for AI, to address the challenges of measuring AI carbon emissions across the AI lifecycle, support more informed decision making, and promote accountability in AI development.

Chris Skipper: And finally, what are some of the long-term goals for the Green AI Committee, and how do you see these objectives evolving with advancements in AI technology? 

Sanjay Podder: Our goals are evolving to reduce the ecological footprint of AI systems. Green AI isn't just a standalone solution; it's a core component of a broader sustainability ecosystem. As we advance in this mission, we urge more organizations to join the conversation and help build a more sustainable future for AI. Developing and regularly updating standardized methodologies to measure AI's environmental impact will be essential for driving sustainable and scalable AI development.

Chris Skipper: Thanks to Sanjay for those insights. Next up, we have some events coming up in the next few weeks that we'd like to announce. First up, a virtual event from our friends at Electricity Maps, Carbon-aware IT: The new standard for sustainable tech infrastructure, on May the fifth at 6:00 PM CEST.

Explore how organizations optimize IT infrastructure to meet their net zero goals. Then for those of you in Germany, there is a hybrid event in Karlsruhe run by Green Software Development Karlsruhe, called Inside CO2.js - Measuring the Emissions of the Web, happening on May the sixth at 6:30 PM CEST.

This is also a hybrid event, so there will be an online element. Learn how to make emissions estimates and use CO2.js, a JavaScript library from regular Environment Variables host Chris Adams and the Green Web Foundation. Then we have another event that is purely virtual, happening on May the 6th at 6:30 PM CEST, called Monitoring for Software Environmental Sustainability.

Learn how to incorporate software sustainability metrics into your monitoring system. And finally, in New York, the Green IO and Apidays conference, Green IO New York, happening from May the 14th until May the 15th. Get the latest insights from thought leaders in tech sustainability and actionable hands-on feedback from practitioners scaling green IT. So we've reached the end of this special backstage episode on the Green AI Committee project at the GSF. Thanks to both Thomas and Sanjay for their contributions. I hope you enjoyed the podcast. To listen to more podcasts about green software, please visit podcast.greensoftware.foundation, and we'll see you on the next episode.

Bye for now. 

Hosted on Acast. See acast.com/privacy for more information.

Show more...
6 months ago
18 minutes 1 second

Environment Variables
The Economics of AI
Chris Adams sits down in-person with Max Schulze, founder of the Sustainable Digital Infrastructure Alliance (SDIA), to explore the economics of AI, digital infrastructure, and green software. They unpack the EU's Energy Efficiency Directive and its implications for data centers, the importance of measuring and reporting digital resource use, and why current conversations around AI and cloud infrastructure often miss the mark without reliable data. Max also introduces the concept of "digital resources" as a clearer way to understand and allocate environmental impact in cloud computing. The conversation highlights the need for public, transparent reporting to drive better policy and purchasing decisions in digital sustainability. 

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Max Schulze: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Energy Efficiency Directive [02:02]
  • German Datacenter Association [13:47] 
  • Real Time Cloud | Green Software Foundation [22:10]
  • Sustainable Digital Infrastructure Alliance [33:04]
  • Shaping a Responsible Digital Future | Leitmotiv [33:12]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Max Schulze: The measurement piece is key. Having transparency and understanding always helps. What gets measured gets fixed. It's very simple, but the step that comes after that, I think we're currently jumping the gun on that because we haven't measured a lot of stuff. 

Chris Adams: Hello and welcome to Environment Variables, brought to you by the Green Software Foundation.

In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development.

I'm your host, Chris Adams. We're doing something a bit different today. Because a friend and frequent guest of the pod, Max Schulze, is actually turning up in Berlin in person, where I'm recording today. So I figured it'd be nice to catch up with Max, see what he's up to, and yeah, just catch up really.

So Max, we've been on this podcast a few times together, but not everyone has listened to every single word we've ever shared. So maybe if I give you some space to introduce yourself, I'll do it myself and then we'll move from there. Okay. Sounds good. All right then Max, so what brings you here? Can you introduce yourself today? Yeah.

Max Schulze: Yeah. I think the first question, why am I in Berlin? I think there's a lot going on in Europe in terms of policies around tech. In the EU, there's the Cloud and AI Development Act. There's a lot of questions now about datacenters, and I think you and I can both be very grateful for the invention of AI, because everything we ever talked about, now everybody's talking about it 10x, which is quite nice.

Like everybody's thinking about it now. Yep. My general introduction, my name is Max. For everybody who doesn't know me, I'm the founder of the SDIA, the Sustainable Digital Infrastructure Alliance. And in the past we've done a lot of research on software, on datacenters, on energy use, on efficiency, on philosophical questions around sustainability.

I think the outcome that we generated that was probably the most well known is the Energy Efficiency Directive, which is forcing datacenters in Europe to be more transparent now. Unfortunately, the data will not be public, which is a loss. But at least a lot of digital infrastructure now needs to, yeah, be more transparent on their resource use. And the other thing that I think we got quite well known for is our explanation model: the way we think about the connection between infrastructure, digital resources, which is a term that we came up with, and how that all interrelates to software. Because there's this conception that we are building datacenters for the sake of datacenters.

But we are, of course, building them in response to software and software needs resources. And these resources need to be made somewhere. 

Chris Adams: Ah, I see. 

Max Schulze: And that's, I think what we were well known for. 

Chris Adams: Okay. Those two things I might jump into a little bit later on in a bit more detail.

So, if you're new to this podcast, my name is Chris Adams. I am the policy chair in the Green Software Foundation's Policy Working Group, and I'm also the director of technology and policy in the confusingly, but similarly named Green Web Foundation. Alright. Max, you spoke about two things that, if I can, I'd like to go dive into in a little bit more detail.

So, first of all, you spoke about this law called the Energy Efficiency Directive, which, as I understand it, essentially is intended to compel every datacenter above a certain size to start recording information, and in many ways it's like sustainability-adjacent information with the idea being that it should be published eventually.

Could we just talk a little bit about that first and maybe some of your role there, and then we'll talk a little bit about the digital resource thing that you mentioned. 

Max Schulze: Yeah. I think on the Energy Efficiency Directive, even one step up: Europe has this ambition to conserve resources at every point in time. Now critical raw materials are also in there. On energy efficiency, normally this law actually sets thresholds. Like, it is supposed to say, "a building shall not consume more power than X."

And with datacenters, what they realized is, actually, we can't set those thresholds, because we don't reliably know how many resources you have consumed.

So we can't say "this should be the limit." Therefore, the first step was to say, well, first of all, everybody needs to report into a register. And what's interesting about that is it's not just the number that everybody in datacenter land likes to talk about, which is PUE, power usage effectiveness, so how much overhead do I generate with cooling and other things on top of the IT; for the first time it also has water in there.

It has IT utilization ranges in there. It even has, which I think is very funny, the amount of traffic that goes in and out of a datacenter, which is a bit like, I don't know what we're trying to measure with this, but you know, sometimes you gotta leave the funny things in there to humor everybody. And it goes really far in terms of metrics, really trying to see what resources go into a datacenter, how efficiently they're being used, and to a certain degree also what comes out of it. Maybe traffic. Yeah.
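For readers who think in code, the reporting duty Max describes might look something like the record below. This is a hypothetical sketch only; the field names and figures are invented, not the directive's official reporting schema.

```typescript
// Hypothetical sketch of an Energy Efficiency Directive report record.
// Field names and figures are illustrative, not the official schema.
interface DatacenterEnergyReport {
  facilityId: string;
  year: number;
  totalFacilityEnergyKWh: number;  // everything the building consumed
  itEquipmentEnergyKWh: number;    // energy that reached the IT gear
  waterUseCubicMetres: number;     // cooling water consumed
  itUtilisationPercent: number;    // average utilisation of IT capacity
  networkTrafficTerabytes: number; // the traffic metric Max jokes about
}

// PUE (power usage effectiveness) falls out of two of these fields:
// total facility energy divided by IT energy.
const pue = (r: DatacenterEnergyReport): number =>
  r.totalFacilityEnergyKWh / r.itEquipmentEnergyKWh;

const report: DatacenterEnergyReport = {
  facilityId: "DE-EXAMPLE-01", // invented facility
  year: 2024,
  totalFacilityEnergyKWh: 5_500_000,
  itEquipmentEnergyKWh: 5_000_000,
  waterUseCubicMetres: 12_000,
  itUtilisationPercent: 55,
  networkTrafficTerabytes: 40_000,
};

console.log(pue(report)); // 1.1
```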

Chris Adams: Ah, I see. Okay. Alright, so it's basically, essentially trying to bring the datacenter industry in line with some other sectors where they already have this notion of, okay, we know they should be this efficient; we've had a lack of information in the datacenter industry, which made it difficult to do that.

Now I'm speaking to you in Berlin, and I don't normally sound like I'm in Berlin, but I am in Berlin, and you definitely sound like you are from Germany, even though you're not necessarily living in Germany. 

Max Schulze: I'm German. 

Chris Adams: Oh yeah. Maybe it might be worth just briefly touching on how this law kind of manifests in various countries, because I know that like this might be a bit inside baseball, but I've learned from you that Germany was one of the countries that was really pushing quite hard for this energy efficiency law in the first place, and they were one of the first countries who actually kinda write into their own national law.

Maybe we could touch a little bit on that before we start talking about world of digital resources and things like that. 

Max Schulze: Yeah, I think even funnier, and this is how you always know in Europe that a certain country's really interested in something: they actually implemented it before the directive was even finalized.

So for everybody who doesn't know European policies, so the EU makes directives and then every country actually has to, it's called transpose it, into national law. So just because the EU, it's a very confusing thing, makes something, doesn't mean it's law. It just means that the countries should now implement it, but they don't have to and they can still change it.

So what Germany, for example, did: in the directive it's not mandatory to have heat recovery, so reusing the waste heat that comes out of the datacenter. The EU also did not set thresholds for it. But of course Germany was like, "no, we have to be harsher than this." So they actually said, for datacenters above a certain size, they need to be powered by renewable energy and you need to have heat recovery;

it's mandatory above a certain size. And of course the industry is not pleased. So I think we will see a revision of this, but it was a very ambitious, very strong, "let's manage how they build these things."

Chris Adams: I see. Okay. There is a, I think, is there a German phrase? Trust is nice, control is better.

Yes. Well, something like that. Yeah. Yeah. Okay. All right. So if I'm just gonna put my programmer hat on: when I think of a directive, it's a little bit like maybe an abstract class, right? Yes. And then if I'm Germany, I'm making it concrete, I've implemented that class in my German law, basically.

Yes. 

Max Schulze: Interfaces and implementations. Okay. 
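To stretch the programming analogy the two are riffing on, one might sketch it like this. A playful illustration only, with invented thresholds, not a statement of what either the directive or German law actually requires.

```typescript
// Playful sketch of the abstract-class analogy for EU directives.
// The EU defines the contract; each member state supplies an
// implementation and may go further than the directive requires.
// All thresholds below are invented for illustration.
abstract class EnergyEfficiencyDirectiveLaw {
  // Every member state must decide which facilities report.
  abstract requiresReporting(datacenterSizeMW: number): boolean;

  // The directive leaves heat recovery optional at the EU level.
  requiresHeatRecovery(_datacenterSizeMW: number): boolean {
    return false;
  }
}

class GermanTransposition extends EnergyEfficiencyDirectiveLaw {
  requiresReporting(datacenterSizeMW: number): boolean {
    return datacenterSizeMW >= 0.3; // invented threshold
  }

  // Germany went further: mandatory heat recovery above a certain size.
  override requiresHeatRecovery(datacenterSizeMW: number): boolean {
    return datacenterSizeMW >= 1; // invented threshold
  }
}

const germany = new GermanTransposition();
console.log(germany.requiresHeatRecovery(5)); // true
```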

Chris Adams: Alright. You've explained it into nerd for me. That makes a bit more sense. Thank you for that. Alright, so that's the ED. You essentially were there to, to use another German phrase, watch the sausage get made. Yeah. So you've seen how that's turned out, and now we have a law in Germany where essentially you've got datacenters regulated in a meaningful way for the first time, for example. Yeah. And we're dealing with all the fallout from that. And we also spoke a little bit about this idea of digital resources. This is one other thing that you've spent quite a lot of intellectual effort and time on, helping people develop some of this language, and we've used it ourselves in some of our own reports when we talk to policy makers or people who don't build datacenters themselves. 'Cause a lot of the time people don't necessarily know how a datacenter relates to software, and how that relates to maybe them using a smartphone. Maybe you could talk a little bit about what a digital resource is in this context and why it's even useful to have this language.

Max Schulze: Yeah, and let me try to also connect it to the conversation about the ED. I think when, as a developer, you hear transparency and okay, they have to report data, what you're thinking is, "oh, they're gonna have an API where I can pull this information, also, let's say, from the inside of the datacenter." Now in Germany, and this is also funny for everybody listening, one way to fulfill that, because the law was not specific: datacenters are now hanging a piece of paper, I'm not kidding, on their fence with this information, right? So this is them reporting this. And of course, I'm also a software engineer, so we as technical people, what we need is the datacenter to have an API that basically assigns the environmental impact of the entire datacenter to something.

And that something has always bothered me that we say, oh, it's the server. Or it's the, I don't know, the rack or the cluster, but ultimately, what does software consume? Software consumes basically three things. We call it compute, network, and storage, but in more philosophical terms, it's the ability to store, process and transfer data.

And that is the resource that software consumes. Software does not consume a datacenter or a server; it consumes these three things. And a server makes those things: it turns energy and a lot of raw materials into digital resources. Then the datacenter in turn provides the shell in which the server can perform that function. Right? The factory building is the datacenter. The machine that makes the t-shirts is the server. And the t-shirt is what people wear. Right?

Chris Adams: Ah, I see. Okay. So that actually helps when I think about, say, cloud computing. Like when I'm purchasing cloud computing, right, I'm paying for compute. I'm not really that bothered about whether it's an Intel server or something like that.

And to a degree, a lot of that is abstracted away from me anyway, and there are good sides and downsides to that. But essentially that seems to be that idea of cloud compute, and there being, maybe for want of a better term, primitives you build services with. That's essentially some of the language you've been repurposing for people who aren't cloud engineers to understand how modern software gets built these days.

Right. 

Max Schulze: And I think that's also the real innovation of cloud, right? You gotta give them credit for that. They disaggregated these things. So when AWS was first launched, it was S3 for storage, EC2 for compute, and VPC for networks, right? So they basically said, whatever you need, we will give it to you at scale, in infinite pools of however much you need and want, and you pay for it only by the hour.

Which before, you had to rent a server, and the server always came with everything. It came with network, it came with storage, and you had to build the disaggregation yourself. But as a developer, fundamentally, sometimes you just want compute. Now we have LLMs. I definitely just want compute. Then you realize, oh, I also need a lot of storage to train an LLM.

Then you want some more storage. And then you're like, okay, well I need a massive network inside that, and you can buy each of these pieces by themselves because of cloud. That is really what it is about. 

Chris Adams: Oh, I see. Okay. And this is why it can be a bit difficult when you're trying to work out the environmental footprint of something, because if we are trying to measure, say, a server, but the resources are actually cloud, and there are all these different ways you can provide that cloud,

then obviously it's gonna be complicated when you try to measure this stuff. 

Max Schulze: Yeah. Think about a gigabyte of storage on S3. There may be hundreds of servers behind it providing redundancy, providing the control layer, doing monitoring, right? Like in a way that gigabyte of storage is not like a disc inside a server somewhere.

It is a system that enables that gigabyte. And thinking on that, trying to say the gigabyte needs to come from somewhere is the much more interesting conversation than going from the server up. Ah. It's misleading otherwise.

Chris Adams: Alright. Okay. So I'm gonna try and use an analogy from, say, the energy sector, just to kinda help me understand this, because I think there are quite a few key ideas inside this. So in the same way that I'm buying maybe units of electricity, like kilowatt hours, I'm not really buying an entire power station, or even a small generator, when I'm paying for something. There are all these different ways it can be provided, but really what I care about is the resources. And this is the kind of key thing that you've been explaining to policy makers, or people who are trying to understand how they should be thinking about datacenters and what they're good for and what they're bound for, right? Yes. Okay. Alright, cool. So you are in Berlin and it's surprisingly sunny today, which is really nice. We've made it through the kind of depressing German winter, and we've actually crossed paths quite a few times in the last few weeks, because you've been bouncing between where you live in Haarlem, Netherlands, and Brussels and Berlin quite a lot.

And I like trains and I imagine you like trains, but that's not the only reason you are zipping around here. Are there any projects related to digital sustainability that you could talk about that have been taking up your time, like that you're allowed to talk about these days?

Max Schulze: Yeah, there's a lot. There's too many things actually, which is a bit overwhelming. We are doing a lot of work still on software, also related to AI, and I don't think it's so interesting to go into that; I think everybody from this podcast knows that there's an environmental impact, and we now have a lot of tools to measure it. So my work is really focused on how do I get policy makers to act. And one project that just recently came out, and now that the elections are over in Germany we can also talk about it: we basically wrote a 200 page monster, call it the German Datacenter, not a Strategy yet, it's an Assessment, and there's a lot of, like, how much power are they gonna use?

That's not from us. But what we were able to do for the first time is really explain the layers. There's a lot of misconception that says building a datacenter creates jobs. But I think everybody in software knows better, and I think actually all of you should be more offended when datacenters claim that they are creating jobs, because it is always the software that runs there that is actually creating the benefit, right?

A datacenter building is just an empty building, and what we've been able to explain is to really say, okay, I build a datacenter, then there is somebody bringing servers, running IT infrastructure, maybe a hoster. That hoster in turn provides services to, let's say an agency. That agency creates a website. And that's a really complex system of actors that each add value,

and what we've shown is that a datacenter, per megawatt, depending on who's building it, can be three to six jobs. And a megawatt is already a very large datacenter; that can be 10,000 servers. If you compare that to the people on top, like if you go to that agency, that can go up to 300 to 600 jobs per megawatt.

And the value creation is really in the software and not anywhere else. And we believe that the German government and all sorts of regions, and this applies to any region around the world, should really think like, "okay, I will build this datacenter, but how do I create that ecosystem around it?" You know, Amsterdam is always a good example.

You have Adyen, you have booking.com, you have really big tech companies, and you're like, "I'm sure they're using a Dutch datacenter." Of course not. They're running on AWS in Ireland. So you don't get the ecosystem benefit. But your policy makers think you do; you don't connect the dots, so to say.

Chris Adams: Ah, okay.

So if I understand this, so essentially the federal German government, third largest economy, I think it's third or fourth largest economy in the world. Yes. They need to figure out what to do with the fact there's lots and lots of demand for digital infrastructure. They're not quite sure what to do with it, and they also know they have like binding climate goals. So they're trying to work out how to square their circle. And there is also, I mean, most countries right now do wanna have some notion of like being able to kind of economically grow. So they're trying to understand, okay, what role do these play? And a lot of the time there has been a bit of a misunderstanding between what the datacenter provides and where the jobs actually come from.

And so you've essentially done for the first time some of this real, actually quite rigorous and open research into, "okay, how do jobs and how is economic opportunity created when you do this? And what happens if you have the datacenter in one place, but the job where the agencies or the startups in another place?" 

For example, because there seems to be this idea that if you just have a datacenter, you automatically get all the startups and all the jobs and everything in the same place.

And that sounds like that might not always be the case without deliberate decisions, right? 

Max Schulze: Yes. Without like really like designing it that way. And it becomes even more obvious when you look at Hyperscale and cloud providers, where you see these massive companies with massive profits and let's say they go to a region, they come to Berlin,

and they tell Berlin, you know, Amazon in Spain actually also sent a really big press release, like, "we're gonna add 3% to your GDP. We're going to create millions of jobs."

And of course every software engineer knows that just building a datacenter for a cloud provider does not do that.

And what they're also trying to distract from, which we've shown in the report by going through their financial records, is that they pay property tax, so they pay local tax, which in Germany is very low. But they of course don't pay any corporate income tax in these regions. So the region thinks, "oh, I'm gonna get 10% of the revenue that a company like Microsoft makes."

That's not true. And in return, the company asks for energy infrastructure, which is a socialized cost, meaning taxpayers pay for this. They ask for land, which is not always available, or scarce. And then they don't really give much back. And that's really, I'm not saying we shouldn't build datacenters, you know, but you have to be really mindful that you need the job creation.

The tax creation is something that comes from above this, like on top of a datacenter stack. Yeah. And you need to be deliberate in bringing that all together, like everything else is just an illusion in that sense. 

Chris Adams: Oh, I see. Okay. So this helps me understand why you place so much emphasis on helping people understand this whole stack of resources being created and where some of the value might actually be.

'Cause it's a little bit like, let's imagine you're looking at, say, generating power, and you're opening a power station. Creating a power station by itself isn't necessarily the thing that generates the wealth; it's maybe people being able to use it in some of the higher services further up the stack, as it were.

Correct. And that's the kind of framing that you're helping people understand, so they can have a more sophisticated way of thinking about the role that datacenters play when they advance their economies, for example.

Max Schulze: I love that you're using the energy analogy because everybody will hear that, or who's hearing this on the podcast will probably be like, "oh yeah, that's obvious, right?"

But for digital, to a lot of people, it's not so obvious. They think that the power station is the thing, but actually it's the chemical industry next to it; that's where the value is created.

Chris Adams: I see. Okay. Alright. That's actually quite helpful. So one of the pieces of work you did was actually providing new ways to think about how digital infrastructure ends up being useful, like for a country, for example. But one thing that I think you spoke about in some of this report was the role that software can actually play in blunting some of the expected growth in demand for electricity and things like that.

And obviously that's gonna have climate implications, for example. Can we talk a little bit about how designing software in a more thoughtful way can blunt some of this expected growth, so we can actually hit some of the goals that we had? 'Cause this is something that I know you spend a fair amount of time thinking about and writing about as well.

Max Schulze: Yeah, I think it's really difficult. The measurement piece is key, but having transparency and understanding always helps. What gets measured gets fixed. It's very simple. But the step that comes after that, I think we're currently jumping the gun on that because we haven't measured a lot of stuff. We don't have a public database of, say, this SAP system, this Zoom call is using this much.

We have very little data to work with, and we're immediately jumping to solutions, like, oh, but if we shift the workloads. But if we're, for example, workload shifting on cloud, unless the server is turned off, the impact is zero. Okay, zero is extreme, but it's very limited, because the cloud provider then has an incentive to fill it with some other workload.

You know, we've talked about this before. If everybody sells oil stocks because they're protesting against oil companies, it just means somebody else is gonna buy the oil stock. You know? And it ultimately brings the spot prices down. But that's a different conversation. So I think, let's not jump to that.

Let's first get measurement really right. And then it raises for me the question: what's the incentive for big software vendors, or companies using software, to actually measure and then also publish the results? Because, let's be honest, without public data we can't do scientific research, and even communities like the Green Software Foundation will have a hard time, you know, making good reports or good analysis if we don't have publicly available data on certain software applications.

Chris Adams: I see. Okay. This does actually ring some bells, 'cause I remember when I was involved in some of the early things related to working out, say, software carbon intensity scores. We found that it's actually very difficult to just get the energy numbers from a lot of services, simply because, a lot of the time,

if you're a company you might not want to share this, 'cause you might consider it commercially sensitive information. There's a whole separate project called the Real Time Cloud project within the Green Software Foundation, and there's been some progress putting out, say, region by region figures for the carbon intensity of different places you might run cloud in, for example, and this is actually a step forward. But at best we're finding that we can get maybe the figures for the carbon intensity of the energy that's there, but we don't actually have access to how much power is being used by a particular instance, for example. We're still struggling with this stuff, and this is one thing we keep bumping up against. So I can see where you're coming from there. Alright, so this is one thing that you've been spending a bit of time thinking through: where do we go from here then?

Max Schulze: Yeah, I think first we need to give ourselves a clap on the back, because if you look at the number of tools that can now do measurement, commercial tools, open source tools, I think it's amazing, right? It's all there: dashboards, Prometheus things, report interfaces, you know, it's all there. Now, the next step, and I think that's, as software people, we like to skip that step because we think, well, everybody's now gonna do it.

Well, it's not the reality. Now it's about incentives. And I think, for example, one organization we work with is called Seafit, and it's a conglomerate of government purchasers, IT purchasers, who say, "okay, we want to purchase sustainable software." And to me it's very difficult to say, and I think you have the same experience, here are the 400 things you should put in your contracts to make the software more sustainable.

Instead, what we recommend is to simply say, well, please send me an annual report of all the environmental impacts created from my usage of your software, and, a very important phrase we always put in at the end, please also publish it. And I think, right now, that's what we need to focus on. We need to focus on creating that incentive for somebody who's buying even, like, Google Workspace, or Notion, to really say, "Hey, by the way, before I buy this, I want to see the report," right?

I want to see the report for my workspace. And even for all the people listening to this, any service you use, like any API you use commercially, just send them an email and say, "Hey, I'm buying your product. I'm paying 50 euros a month, or 500 or 5,000 euros a month. Can I please get that report? Would you mind?"

Yeah. And that creates a whole chain reaction of everybody in the company thinking, "oh my God, all our customers are asking for this." Yeah, we need this. One of our largest accounts wants this figured out. And then they go to the Green Software Foundation or go to all the open source tools.

They learn about it, they implement a measurement. Then they realize, "oh, our cloud providers are not giving us data." So then they're sending a letter to all the cloud providers saying like, "guys, can you please provide us those numbers?" 

Chris Adams: Yeah. Yes. 

Max Schulze: And this is the chain reaction that requires all of us to focus and act now to trigger.

Chris Adams: Okay. So when I first met you, you were looking at, say, how do you quantify this and how do you build some of these measurement tools? And I know there was a German project called, was it SoftAware, which was, you know, the German take on this, that does try to figure these things out and come up with some meaningful numbers. And now the thing it looks like you're spending some time thinking about is, okay, how do you get organizations with enough clout to essentially write in the level of disclosure that's needed for us to actually know if we're making progress or not?

Right? Yeah. 

Max Schulze: Correct. A little side anecdote on SoftAware: the report is also a 200 page piece. It's been finished for a year and it's not published yet, because it's still in review, so it's a bit of a pain. But fundamentally, what we concluded is, and there are other people who have already, while we were writing it, built better tools than we have.

And again, research-wise, this topic is, I don't wanna say solved, but all the knowledge is out there and it's totally possible. And that's also what we basically say in the report: if you can attach an environmental product declaration to the digital resource, if I can attach one to the gigabyte of S3 storage, whether it's highly redundant or less redundant,

so how many physical resources went into it, how much energy went into it, how much water, then any developer building a software application can basically do that calculation themselves. If I use 400 gigabytes of storage, it's just 400 times whatever I've got the environmental product declaration for. And that information is still not there.

But it's not missing because we can't measure it. It's missing because people don't want to, like you said, they don't want to have that in public.
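A minimal sketch of the calculation Max is describing, assuming per-unit environmental product declarations existed. Every figure below is invented, since, as he says, the real values are exactly the data that isn't public yet.

```typescript
// Minimal sketch of Max's "400 times the per-unit declaration" idea.
// All numbers are invented for illustration only.
interface EnvironmentalDeclaration {
  energyKWhPerUnitYear: number;   // energy per unit per year
  waterLitresPerUnitYear: number; // water per unit per year
  co2eKgPerUnitYear: number;      // emissions per unit per year
}

// Hypothetical declaration for one gigabyte of redundant object storage.
const storagePerGB: EnvironmentalDeclaration = {
  energyKWhPerUnitYear: 1.2,   // invented
  waterLitresPerUnitYear: 0.4, // invented
  co2eKgPerUnitYear: 0.5,      // invented
};

// With a per-unit declaration, the developer's job is multiplication.
function footprint(units: number, epd: EnvironmentalDeclaration) {
  return {
    energyKWh: units * epd.energyKWhPerUnitYear,
    waterLitres: units * epd.waterLitresPerUnitYear,
    co2eKg: units * epd.co2eKgPerUnitYear,
  };
}

// 400 gigabytes of storage becomes a simple calculation:
console.log(footprint(400, storagePerGB));
// => { energyKWh: 480, waterLitres: 160, co2eKg: 200 }
```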

Chris Adams: Okay. So that's quite an interesting insight you've shared there. 'Cause when we first started looking at, I don't know, building digital services, there was a whole thing about saying, well, if my webpage is twice the size, it must have twice the carbon footprint. And there's been a whole debate saying, well, actually no, we shouldn't think about it that way, it doesn't scale that way. And it sounds like you're suggesting yes, you can go down that route where you directly measure every single thing, but if you want to zoom out to actually achieve some systemic level of change, the thing you might actually need is a kind of lower-level, per-primitive allocation of environmental footprint. And just say, well, if the thing I'm purchasing and building with is, say, gigabytes of storage, maybe I should think of each gigabyte of storage as having this much impact, so therefore I should just reduce that number, rather than worrying too much that if I halve the numbers it won't be precisely a halving in emissions, because you're looking at a kind of wider systemic level.

Max Schulze: First of all, I never talk about emissions, because that's already like a proxy. Again, I think if you take the example of the browser, what you just said, there it becomes very obvious. What you really want is HP, Apple, Dell, any laptop they sell, they say, you know, there's 32 gigs of memory: per gigabyte of memory, this is the environmental impact; per CPU cycle, this is the environmental impact. How easy would it be then to say, well, this browser is using 30% CPU, half of the memory, and then again, assigning it to each tab. It becomes literally just a division and forwarding game, mathematically. But the scarcity, that the vendors don't ultimately release it on that level, makes it incredibly painful for anyone to

Chris Adams: kinda reverse engineer and work backwards.

Max Schulze: Exactly. You get it for the server, for the whole thing. Yeah. But that server also, which configuration was it? How much memory did it have? And this subdivision, that needs to happen.

But again, that's a feature that I think we need to see in the measurement game. But I would say, again, slap on the back for all of us and everybody listening, the measurement is good enough. For AI we really see it; I think for the first time it is at a scale where everybody's like, it doesn't really matter if we get it 40 or 60% right, it's pretty bad. Yeah. Right. And instead of now saying, oh, let's immediately move to optimizing the models, let's first create an incentive to get all the model makers, and then especially those service providers and the APIs, to just give everybody these reports so that we have facts.

That's really important to make policy, but also then to have an incentive to get better. 
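The "division and forwarding game" Max describes could look something like the sketch below: apportion a device's declared impact to each process, or browser tab, by its share of each resource. Everything here is an assumption; vendors don't publish declarations at this granularity today.

```typescript
// Sketch of the "division and forwarding game": allocate a laptop's
// declared hourly impact to each process by its resource share.
// Assumes per-component declarations exist, which today they don't.
interface DeviceImpactPerHour {
  cpuCo2eGrams: number;    // impact attributed to the CPU at full load
  memoryCo2eGrams: number; // impact attributed to all installed memory
}

interface ProcessUsage {
  name: string;
  cpuShare: number;    // 0..1 fraction of CPU time used
  memoryShare: number; // 0..1 fraction of installed memory used
}

function allocate(device: DeviceImpactPerHour, procs: ProcessUsage[]) {
  // Each process gets its proportional slice of each component's impact.
  return procs.map(p => ({
    name: p.name,
    co2eGrams:
      p.cpuShare * device.cpuCo2eGrams +
      p.memoryShare * device.memoryCo2eGrams,
  }));
}

// Max's browser example: 30% CPU, half the memory. Numbers invented.
console.log(allocate(
  { cpuCo2eGrams: 10, memoryCo2eGrams: 4 },
  [{ name: "browser", cpuShare: 0.3, memoryShare: 0.5 }],
));
// => [ { name: "browser", co2eGrams: 5 } ]
```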

Chris Adams: Okay. So look, have a data informed discussion essentially. Alright, so you need data for a data informed discussion basically. 

Max Schulze: Yes. 

Chris Adams: Alright. 

Max Schulze: To add to that, because you like analogies and I like analogies: it's about a market that is liquid with information. What I mean by that: if I want to buy a stock of a company, I download their 400 page financial report and it gives me a lot of information about how well that company's doing. Now for software, what is the liquidity of information in the market? For environmental impact, it's zero. The only liquidity we have is features. There are so many videos for every product on how many features it has and how to use them. Even the financial records of most software companies you can't actually get, 'cause they're private. So we have a real scarcity of information, and therefore competition in software is all about features.

Not about environmental impact. And I'm trying to create information liquidity in the market so that you and I and anybody buying software can make better choices. 

Chris Adams: Ah, okay. And this helps me understand why, I guess, you pointed to that French open source example of something equivalent to word processing. It's this French equivalent to Google Docs. Yeah. Which is literally called Docs. Yeah. And their entire thing is that it looks very similar to the kind of tool you might use for note taking and everything like that. But because it's built on an entirely open stack, it is possible to see what's happening inside it and understand, okay, this is how the impacts scale based on my usage here, for example.

Max Schulze: But now one of our friends, Anna from Green Coding, would say, yeah, you can just run it through my tool and then you see it. But it's still just research information. We need liquidity on the information of, okay, the Ministry of Foreign Affairs in France is using Docs; it has 4,000 documents and 3,000 active daily users. Now that's where I want the environmental impact data, right? I don't want a lab report. I don't wanna scale it in the lab. I want the real usage data.

Chris Adams: Okay. So that feels like the next direction we might be moving in: almost looking at some of these things and sacrificing some of the precision for maybe higher-frequency information about things in production, essentially. So you can start getting a better idea about, okay, when this is in production, or deployed for an entire department, for example, how will the changes I make scale across it, rather than just making an assumption based on a single system that might not be quite as accurate as what I'm seeing in the real world?

Max Schulze: And you and I have two different bets on this that go in different directions. Your bet was very much on sustainability reporting requirements, both CSRD and even financial disclosures. And my bet is, if purchasers ask for it, then it will become public. And those are complementary, but they're bets on the same exact thing: information liquidity on environmental impact information.

Chris Adams: Okay. All right. Well, Max, this has been quite fun actually. I've gotta ask just before we wrap up now: for people who are curious and have found some of the stuff you're talking about interesting, where should people be looking if they'd like to learn more?

Like is there a website you'd point people to or should they just look up Max Schulze on LinkedIn, for example? 

Max Schulze: That's always a good idea. If you want angry white men raging about stuff, that's LinkedIn, so you can follow me there. The SDIA is now focused on really helping regional governments develop digital ecosystems.

So if you're interested in that, go there. If you're interested more in the macro policy work, especially around software, we have launched a new brand that's our think tank now, which is called Leitmotiv. And I'm sure we're gonna include the link somewhere in the notes. Of course. Yeah. Yeah. Very nice.

And yeah, I urge you to check that out. We are completely independently funded now. No companies behind us. So a lot of what you read is like the brutal truth and not some kind of washed lobbying positions. So maybe you enjoy reading it. 

Chris Adams: Okay then. All right, so we've got Leitmotiv, we've got the SDIA, and then just Max Schulze on LinkedIn. These are the three places to be looking for this sort of thing. Yeah. Alright, Max, it's lovely chatting to you in person, and I hope you have a lovely weekend and enjoy some of this sunshine now that we've made it through the Berlin winter. Thanks, Max. Thanks Chris.

Hey everyone. Thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts.

And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser.

Thanks again and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

Show more...
6 months ago
34 minutes 35 seconds

Environment Variables
OCP, Wooden Datacentres and Cleaning up Datacentre Diesel
Host Chris Adams is joined by special guest Karl Rabe, founder of WoodenDataCenter and co-lead of the Open Compute Project’s Data Center Facilities group, to discuss sustainable data center design and operation. They explore how colocating data centers with renewable energy sources like wind farms can reduce carbon emissions, and how using novel materials like cross-laminated timber can significantly cut the embodied carbon of data center infrastructure. Karl discusses replacing traditional diesel backup generators with cleaner alternatives like HVO, as well as designing modular, open-source hardware for increased sustainability and transparency. The conversation also covers the growing need for energy-integrated, community-friendly data centers to support the evolving demands of AI and the energy transition in a sustainable fashion.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • Karl Rabe: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Windcloud [02:31]
  • Open Compute Project [03:36]
  • Software Carbon Intensity (SCI) Specification [35:47] 
  • Sustainability » Open Compute Project [38:48]
  • Swiss Data Center Association [39:07]
  • Solar Microgrids for Data Centers [47:24]
  • How to green the world's deserts and reverse climate change | Allan Savory [53:39]
  • Wooden DataCenter - YouTube [55:33] 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Karl Rabe: That's a perfect analogy, having like a good neighbor approach saying, "look, we are here now, we look ugly, we always box, you know, but we help, you know, powering your homes, we reduce the cost of the energy transition, and we also heat your homes." 

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. How do you green the bits of a computing system that you can't normally control with software? We've discussed before that one option might be to shift where you run computing jobs from one part of the world to another part of the world where the energy is greener.

And we've spoken about how this is essentially a way to run the same code, doing the same thing, but with a lower carbon footprint. But even if you have two data centers with the same efficiency on the same grid, one can still be greener than the other, simply because of the energy that went into making the data center in the first place and the materials used. So does this make a meaningful difference, and can it be made to? I didn't know.

So I asked Karl Rabe, the founder of Wooden Data Center and Windcloud, and now increasingly involved in the Open Compute Project, to come on and help me navigate these questions, as he is the first person who turned me onto the idea that there are all these options available to green the shell, the stuff around the servers, which also has an impact on the software we run.
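For the software-minded, the job-shifting idea above often boils down to something as simple as picking the region whose grid is currently cleanest. A toy sketch follows; the region names and intensity figures are invented, and a real system would fetch live figures from a data provider rather than hard-coding them.

```typescript
// Toy sketch of carbon-aware placement: run the same job in whichever
// region currently has the cleanest grid. Region names and intensity
// figures are invented; a real system would fetch live data instead.
interface Region {
  name: string;
  gridIntensity: number; // grams CO2e per kWh right now
}

function greenestRegion(regions: Region[]): Region {
  // Pick the region with the lowest current grid carbon intensity.
  return regions.reduce((best, r) =>
    r.gridIntensity < best.gridIntensity ? r : best
  );
}

const choice = greenestRegion([
  { name: "region-north", gridIntensity: 30 },    // invented figure
  { name: "region-central", gridIntensity: 350 }, // invented figure
]);
console.log(`schedule the batch job in ${choice.name}`);
```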

Karl, thank you so much for joining me. Can I just give you the floor to introduce yourself before we start?

Karl Rabe: Thanks, Chris. This is an absolute honor, and I'll have to admit, you know, you're a big part of my carbon aware journey, so I'm very glad that we finally get to speak. I'm Karl, based out of North Germany. I always say I had one proper job; I'm a technical engineer by training, and then I fell into the data center business, which we can touch on a little later. That was Windcloud, which remains a data center thought from the energy perspective, a very important idea in 2025. But we pivoted about four years ago to Wooden Data Center, which we can probably touch upon a little later too, in also realizing there is this supply chain component to the data center.

And there are also tools to take action against that. And I'm learning and supporting and contributing, you know, as a co-lead in the data center facilities group of the OCP, where we work directly with the biggest organizations in order to shape and define the latest trends in the data center,

and especially navigating the AI buildout in somewhat of a, yeah, sustainable way.

Chris Adams: Okay, cool. And when you say OCP, you're referring to the Open Compute Project, the kind of project with Microsoft, Meta, various other companies, designing essentially open source server designs, right?

Karl Rabe: Correct. That was initially started by then Facebook, now Meta, in order, yeah, to cut out waste in server design. It has meanwhile grown into cooling environments, data center design, chiplet design. It's a whole range of initiatives.

Very interesting to look into. And, happy to talk about some of those projects. Yeah.

Chris Adams: All right, thanks Karl. So if you are new to this podcast, my name is Chris Adams. I am the director of technology and policy at the Green Web Foundation, a small Dutch non-profit focused on a fossil free internet by 2030. And I also work with the Green Software Foundation, the larger industry body, in their policy working group.

And we are gonna talk about various projects, and we'll add links to everything we can think of to the show notes as we discuss. So if there's any particular thing that catches your eye, like the OCP or Wooden Data Centers, if you follow the link to this podcast's website, you'll see all the links there.

Alright then Karl, are you sitting comfortably?

Karl Rabe: I am sitting very well. Yeah.

Chris Adams: Good stuff. Alright, then I guess we can start. So maybe I should ask you, where are you calling me from today, actually?

Karl Rabe: I'm calling you today from the west coast, the North Sea shore, in northern Germany. We are not in a typical data center region for Germany, per se; that would be Frankfurt, you know, 'cause of the big internet hub there. But we are actually located right within a wind farm,

you know, in my home, which initially was, you know, my home growing up, and turned into my home office and eventually into what is somewhat considered the international headquarters of Wooden Data Center. Yeah, and we're very close to the North Sea and we have a lot of renewable power around.

Chris Adams: Oh, I see. Okay. So you're in the north of Germany, near Denmark, and like Denmark, which has loads of wind, you've got a similar thing there. Okay. So

Karl Rabe: Yeah, absolutely. Yeah.

Chris Adams: Oh, I see. I get you. So, ah, alright. For people who are not familiar with the geography of Europe, or Northern Europe in particular, the northern part of Germany has loads of wind turbines and loads of wind energy, but lots of the power gets used in other parts of the country.

So, Karl is in the windiest part of Germany, basically.

Karl Rabe: That's correct, yeah. We basically have offshore conditions onshore. And it's a community owned wind farm, which is also a special setup that makes it very easy to get, you know, the people's acceptance. We have about a megawatt per inhabitant of this small community.

And so this is becoming, you know, the biggest, yeah, economic factor of the small community.

Chris Adams: Wow. A megawatt per, okay, so just for context, for people who are not familiar with megawatts and kilowatts, the typical house might use maybe about half a kilowatt of constant draw on average over the year. So that's a lot of power per person for that place. You're in a place of power abundance, compared to the scenario where people are wondering where the power is gonna come from. Wow, I did not know that.

Karl Rabe: No, that is, yeah, a bit of the background, so to speak. We are now trying to go from 300 megawatts to 400 megawatts. Germany's pushing for more renewable energy, and we still have some spots that we can, under new regulations now, build out.

And the goal, or the big dream, of the organization, the company running this wind farm for us, is to produce a billion kilowatt hours per year. We're now slightly below that, so we need to add probably another 25 percent more production. And, so to speak, you are absolutely right, we are in an energy abundance, and that was one of the prerequisites for Windcloud. 'Cause you know, the easiest innovation is one and one is two. We had energy, and I was aware that we also had fiber infrastructure in the north to run those said wind parks.

So we said, why don't we bring a load to those? That was the initial start of Windcloud.

Chris Adams: Okay, so maybe we should talk a little bit about that. I hadn't realized the connection between the geography and the fact that you're literally in the middle of a wind farm, which is why this came together. Okay. So, as I understand it, and now it makes sense why you are so involved in Windcloud.

So for context, my understanding of Windcloud is that it's essentially a company where, rather than connecting data centers via big power lines to somewhere else, where the actual generation is miles away from the data centers, the idea instead was to actually put the data centers literally inside the towers of the wind turbines themselves.

So you don't need to have any cables and, well you've obviously got green energy because it's right there, you're literally using the wind turbine. So, apart from this sounding kind of cool, can you tell me like why you do this from a sustainability perspective in the first place?

Karl Rabe: Yeah, so the way we discovered that, and this is probably the biggest reference that I can give on the software developer front: I came out of a study programme in the UK. We had a really nice cohort;

we were constantly bouncing ideas off of each other. I wanted to actually build small aircraft, because we have a wind farm and we have wealth with that; we actually have people building small planes in our location. They told me I needed about 5 million euros to do it, which I didn't have.

So I started pivoting to a software idea. And in figuring out where to host that software, I just quickly discovered, you know, the amount of energy going into data centers, the amount of, you know, associated issues. And back then, 2015, 16, we were literally just discovering the energy aspect of it. We didn't discuss, you know, water and land use and all of that.

We really focused on the energy, and then we said, "look, well wait a second. You know, we have all this excess of energy. We literally cannot deliver it all at this point, so we have a very high share of shutting down our wind turbines when there's just too much energy to move around. Why not bring the data center as a flexible load close to the production, and enable, you know, sustainable compute,

to then send packets rather than energy, which is way easier, you know, over the global fiber grids." And that's how I got started and fell into the data industry. Big benefit and big learnings came from the fact that I knew nothing about data centers. And as an engineer, a lot of things were not adding up.

We looked at the servers back then, and even then it said, okay, this is good, you know, to run from 15 to 32 degrees. I said, "32 degrees? Why? What is data center cooling, and why is there data center cooling? We don't have 32 degrees in the north." Though most likely now we probably will, within eight years.

Karl Rabe: But the important thing was really challenging this, and we started with very little money and we couldn't afford like the proper fancy stuff that all of this data center make, you know, like a chillers, you know, spending electric energy to cool something which really does not need cooling, in my opinion, up to now.

That was the start of this, you know, and so this is, the company of Windcloud is still ongoing. We had what we were, what we had as a huge problem. And I was always, my gut feeling for this was always we need to find a way to be able to compete with the Nordics.

So we have renewable energy, but we need to have it cost effective. And that was something we tried two or two and a half times, I would say, always trying to have a legal way to access the energy in a proper setting. It was always extremely difficult and extremely frustrating, also because the German energy system is very complicated.

It is, you know, geared or developed from a centralized view, and it benefits large scale industry and large scale energy companies. To put it in other terms, you're probably familiar with the Asterix comics: we're that far off in the north of Germany, so people were probably a bit suspicious of what we're doing there. Now we start producing energy, and now we also want to use the energy, and that was not adding up for them.

It's very hard, close to impossible, to access your own produced energy at scale, you know, even though it exists in abundance. That was something we always faced, which led to other innovations. So we built the first data center, or one of the few data centers, in Germany to reuse the heat, putting it into an algae farm.

And we were trying to create really efficient PUEs already back then, you know, whereas the industry standard is still quite high. For ours, I claim I never had enough money to build a data center with a PUE above 1.2, or even 1.1. The first servers were cooled with, you know, a temperature regulated fan that we built with the same guy who built, you know, a pig stable for my father.
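For anyone outside the data center world: PUE, power usage effectiveness, is total facility energy divided by IT equipment energy, so values closer to 1.0 mean less overhead spent on cooling and power distribution. A minimal sketch of the arithmetic, with purely illustrative numbers rather than Windcloud's actual figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

# A chiller-heavy facility: 500 kWh of cooling and distribution overhead
# on top of 1,000 kWh of IT load.
print(pue(1_500, 1_000))  # 1.5

# A free-air-cooled facility like the one Karl describes: 100 kWh overhead.
print(pue(1_100, 1_000))  # 1.1
```

So a PUE of 1.2 or 1.1 means only 20% or 10% overhead, which is why not being able to afford chillers turned out to be a feature.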

That was, you know... we nearly didn't call it Windcloud. We nearly called it Swines and Servers,

Chris Adams: Okay. Pigcloud.

Yeah.

Karl Rabe: Yeah, Pigcloud. But it could have been, you know, misleading. So the good thing coming out of those struggles in getting started is that we were forced to uncover a lot of the cooling chain and the energy distribution chain, which were not, you know, really adding up for us.

And that is, you know, still one of the biggest supports for us in building efficient data centers and creating, you know, sustainable solutions.

Chris Adams: Okay. Cool. Alright then. I didn't know anything about the Schweins und Servers aspect at all, actually. I'm not sure what the German for servers would actually be in this context. Was it literally gonna be Schweins und Servers, or?

Karl Rabe: Yes. Some. So, yeah, something like that.

Chris Adams: Okay. Wow. That's, I was not expecting that.

I think Windcloud sounds a bit better, to be honest.

Karl Rabe: Yeah, thanks. No, the name is great. I'm very simple like that, you know: we had Windcloud, so we take wind, we make cloud. Now we are Wooden Data Center; we build data centers out of wood. But to be fully honest, right now, so to speak,

we're called Wooden Data Center, but what we do is try to decarbonize the data center. So wood is obviously a massive component of that, but we do see really good effort in the supply chains. Happy to go into that a little later, but there are some examples from fluids, and we just found, you know, bio-based polycarbonate for hot and cold aisle containment.

So the number of components throughout the data center that have a bio-based, ergo low carbon, alternative is ever increasing.

Chris Adams: Can I come back to that a little bit later? 'Cause I just wanna touch on the wind thing first.

Karl Rabe: Yeah. no.

Chris Adams: So, the wind thing. So basically with Windcloud, the big idea was putting data centers in the actual wind turbines themselves. That gives you access to green energy straight away, because you're literally using power that otherwise couldn't be transmitted, because the pipes weren't big enough, essentially, in some cases.

And I guess the plus point to that is that if you are using a building that's already there, you don't have to build a whole new building to put the data centers inside. So there's presumably some kind of embodied energy advantage, because there's a load of energy that goes into making concrete and stuff, which you don't have to spend because you are already using an existing building, right?

Karl Rabe: Yeah. So to clarify on that, it is good that you touch on it, because this is literally a bit of a crossover: the company you're referring to is Wind Cause, which is a good friend of ours, and they are using the turbine towers.

Chris Adams: Ah, 

Karl Rabe: They can do so because they use a little bit different type of turbine.

And they're also based in the south of Germany. We had the same idea, because it's also very difficult to build next to a wind farm. The big difference is that the towers used at Wind Cause are concrete, and they have quite a lot of space; they're about 27 meters wide. Because of the conditions we have up here, onshore but close to offshore conditions, we have steel towers, which are shorter and hence don't have these big diameters.

You know, we build tall. And so we always had the challenge of still needing a data center building. And that's where our learnings and inspirations for Wooden Data Center came from. But we still tried to reuse existing infrastructure. So at one point within the Windcloud journey,

I was the co-owner of a former military bunker area, and we wanted to place data centers within those long concrete tubes, in order to, yeah, have a security aspect and not need, you know, a lot of additional housing or even bunkering. And obviously the Bundeswehr had already spent a lot of concrete and reinforced steel on those facilities.

Chris Adams: I see. So you're reusing some of the existing infrastructure. Rather than building totally new things, you're essentially reusing stuff that's already had a bunch of energy and emissions spent to create it in the first place. I see. Okay. All right. So,

Karl Rabe: And back then, you know, because it's such a short time ago, I really need to emphasize that we only had a hunch and a feeling: oh yeah, this sort of has CO2 associated with it, and probably also the building of a data center.

It was so hard to quantify, and carbon accounting is still somewhat of, not wizardry, but it's really hard to pull the right numbers. You know, only two years ago at the OCP Summit, in a Google presentation, the range that they mentioned for steel and for concrete carbon was, you know, 7 to 11 for both equally. So the range of total uncertainty, I feel, is quite high. And this is one of the biggest and best funded organizations in the world, and we're still not able to get it more concrete. That's something where we really need to work with the industry and supply chains in order to even be able to specify the problem.

Chris Adams: So, can I unpack that for a second before we talk a little bit more about this? You're basically saying that even the largest companies in the world don't necessarily have good enough data to know the carbon intensity of the concrete they've used in one data center compared to another;

it can vary quite a lot. Is that what you're saying there?

Karl Rabe: So this was basically specifying the global numbers for steel and concrete. I do believe that we now have relatively good visibility for our own builds and projects, and also for what we do moving forward. But really trying to grasp the global problem of it, that still had this high uncertainty two years ago, you know, 'cause we were working with numbers

that were maybe five years old by then, and we don't know the complete build-out of every city, every building globally. There's just a lot of guesswork in that, globally. And I do believe that, alongside what we do at Wooden Data Center, the amount of innovation being put into concrete itself has the potential to drastically reduce that for buildings.

It's definitely still a huge problem for the data quality and, yeah, the emissions guesswork that's in there. A lot of those things are based on scenarios, and those are getting ever more real. But the best example for Wooden Data Center is a comparison of a steel and concrete building to a CLT one,

Chris Adams: Yeah. 

Karl Rabe: and it is assuming that after 20 years, it's only living for 20 years, which, you know, can easily be 200 years, but that afterwards it is being reduced into, you know, building chairs or tools or toys. But if you take then the CLT and burn it, then obviously you have a zero sum gain. Every, all the carbon that was stored.

It's Cross-Laminated Timber, you know.

Chris Adams: Yeah. So it's essentially a special kind of machined timber that provides some of the strength properties of, say, steel, but is made from wood, basically, right?

Karl Rabe: Correct. So we need to stress the importance that this is actually a material innovation. It's a relatively young material, based on, I think, a PhD thesis from Austria. We've only had CLT, or cross-laminated timber, for about 25 years.

Chris Adams: Oh, I see.

Karl Rabe: Or maybe now 26, years.

You're probably familiar with, or have seen, those huge wooden beams in, you know, storage buildings.

Chris Adams: Yeah. 

Karl Rabe: Those are called GLT, like glue laminated timbers. And the difference is those boards are basically glued in one direction and they're really good for those beams or for posts.

But to have ceilings, walls, and roofs, those massive panels, you now have the material of cross-laminated timber.

Chris Adams: Oh, okay. In both directions, right? Yeah.

Karl Rabe: Correct. And those now enable full massive-wood build-outs. And the biggest challenge is that if we say wood, the association, which we'll probably touch on now or later, is fire.

Chris Adams: Yeah. 

Karl Rabe: But in reality, we don't have those massive panels in nature, and they don't, you know, just flame up. They're fully tested and certified to glim down, which is, you know, they turn black and then, at a thousand degrees, they slowly, you know,

shrink.

Chris Adams: Like smolder, right? Yeah.

Karl Rabe: Yeah. And the way we design data centers basically factors in this component, and we are able to create really fire-secure data centers built out of these new wood materials, basically.

Chris Adams: Okay. All right. So a lot of us typically think of data centers as things made with steel, concrete and plastic all over the place. And essentially you can introduce wood into this space, and it's not gonna burn down, because you have this material, which is treated in such a way that it is actually very fire resistant.

And that means you could probably replace... I mean, maybe you could talk a little bit about which bits you can replace. Would you replace, like, a rack, or a wall, or a roof? Maybe we can talk about that, to make it a bit easier to picture what this stuff looks like.

Karl Rabe: No, absolutely. I'm usually very liberal in sending out samples to my clients, you know, but I don't have one here in my hand. That is a very good question. If we were talking over a slide deck or something, I'd try to show, in terms of scope 1, 2, and 3, what we can do and what we have now. The biggest component is obviously the housing: what is your building, or your room, of a data center? And when you are touching on existing buildings, CLT is also ideal for building rooms within existing large storage or logistics buildings to put data centers in.

We can build that up, or create rooms, very quickly in those. And the other huge advantage of CLT is that we get those panels pre-manufactured, and they just fit,

Chris Adams: Oh, like stick them together like Lego, rather than have to pour concrete?

Karl Rabe: Yeah, a little bit. You need, like, a little bit of a leveling foundation.

If you have an existing floor, that is; some data centers, you know, prefer to have a new floor in a greenfield build. But with those panels we can create the IT room relatively quickly, and the build-out averages up to 40% quicker than a traditional steel sandwich and concrete, you know, data center.

So it is enormously easy to work with: very precise to pre-design and pre-manufacture, and then very easy to handle. If there's a problem on site, you know, you just crank out the chainsaw and adapt and adjust.

Chris Adams: Okay. Just to carve it down a bit.

Karl Rabe: Yeah, so to speak. But once you have those panels assembled and secured, there's a lot of mass and a lot of volume to them, which creates very good fire-protective,

physical resistance and availability properties. And that is now really being seen as one of the core benefits, you know: the speed with which we can build this out.

Chris Adams: Oh, ok.

Karl Rabe: We have introduced wooden racks, and we also see more and more attention for those.

Chris Adams: Wait, sorry. You said wooden rack, as in the big steel rack that holds the servers themselves? That could be made of wood as well now, so you'd have a wooden rack holding a bunch of servers, right?

Karl Rabe: Correct. So we built this also. One of our clients has sent us a server casing and asked us to also think about doing the casing, but we're probably not a hundred percent there yet. In order to do that, we do have an idea in the spirit of OCP, which is, you know,

reduce and cut out stuff. One vision of that would be just a wooden board where you have dedicated spaces: you slide in your mainboard, connect power, connect liquid cooling, have fans on the back, and then cycle through only the boards. You remove, you know, not even the fancy parts, just the base frames of a server.

But right now, it's a combination, for the 19 inch standard and also the OCP standard: we reduce up to 98% of the steel in those constructions, and then only the functional parts you stick the servers into are made from steel railings, with wooden frames around them.

And we do that for the OCP format, which is very popular. We get a lot of the special requests because we are the only ones producing a small version of the rack. The OCP format has a lot of advantages, but the base rack format is two meters thirty high, which is a really hyperscale, you know, mass density approach,

which doesn't even fit through the doors of most data centers I know; they still have relatively standard two meter high doors, able to fit in, like, a 42U rack. But you need very special facilities, because those racks also come pre-integrated, and then you roll them into place.

So you need a facility that has high doors and ramps with small inclines, or no ramps at all, in order to be able to place a fully integrated rack. We started building OCP racks because, back at the time, only hyperscalers were really getting those, and we wanted to do more of this open compute format and were able to offer that.

And the version three rack, you know, was a good candidate to convert to a wooden based structure.

Chris Adams: All right, so we'll come to that a little bit later, because I actually came across some of your work designing some of these on YouTube, so people can see what all this stuff looks like. But if I just come back to the original question: it sounds like you can replace quite a lot in a data center.

So you can replace the shell of the building, like literally green the shell by replacing the concrete; creating concrete and cement is one of the largest sources of emissions globally. So you can move from a source of emissions to, is it a sink?

'Cause CO2 gets sucked out of the sky to be turned into trees. So you've gone from something which is a source to a sink, and you can replace not just the walls, the outer building, but also quite a lot of the actual structure itself. Just not the servers yet.

So maybe I could ask you then: if I'm switching from regular concrete and regular steel, do you folks have any idea what the change might actually be in quantitative terms? If I had an entirely concrete, entirely steel data center and then replaced all of that with, say, wooden alternatives, is it like a 5% reduction? What kind of changes are we looking at for the embodied figures, for example?

Karl Rabe: So the conservative industry figures are somewhere between a minimum of 20%, from only changing the production, up to 40%. And the good thing we also have to mention is that we are an industry now: Microsoft announced those reductions, and I know the other hyperscalers are looking at that.

In Germany alone, two other companies have started getting into this kind of construction. That's why it's really important for us to be on the decarbonization path.

Chris Adams: Ah, I see.

Karl Rabe: So we do come with our own data center concepts and philosophies, which I can talk about a little later.

But coming back to the point: it is still very hard to quantify. The really positive thing about the carbon accounting and calculations, as I mentioned, is that as a data center we now have this negative component. Which I have to laugh about, 'cause an engineer immediately said, can we then just use more wood?

You know, can we make the walls thicker? Obviously yes, you could do that, but there's a cost to it, and it also, you know, betrays the idea. But the really exciting thing is that I now go from show to show, and I was in London two weeks ago,

and just on the flight somebody showed me a picture of an air handling unit inside a wooden enclosure. I spent an hour chasing through the London show, 'cause I assumed it was there, but it was at a different show. That is the kind of thing that we can really think about: enclosures.

So, for the OCP rack, or for this AI build-out, we have also created a rear door, which is, so to speak, a wooden rear door. The fans are traditional, and the heat exchanger obviously needs to be traditional, but it is an aluminum micro-channel heat exchanger, derived from other industries, which, you know, helps with mass production, reducing cost, reducing emissions.

And that is the other thing that is happening in the industry:

we're trying to find not data-center-specific solutions, but rather mass-produced industry solutions, and adapt them to the data center, in turn reducing cost and time.

Chris Adams: Alright. So in the same way that cross-laminated timber and the use of wood has been in use well beyond the data center industry; like, people make, what are they called? Are they called plyscrapers? You know, skyscrapers with wood. Plyscrapers.

So the idea is that, okay, things which are being made in volume can be made more efficiently, and this is one way that you are adapting them to a new domain.

And it may be that if people are getting much, much better at making, say, very efficient heat pumps, 'cause they can cool things down as well as heat them up, that might be another thing you're looking at, saying, "well, actually, that might be able to be used in this context as well." Okay, alright. And if I go back to the original thing about saying, okay, we're looking at possible savings of maybe 20% up to possibly 40%, like, that's the kind of

Karl Rabe: yeah. That's the range that we have, you know, I think, so the problem is do we, if, did Microsoft evaluate with IT or without IT? So for the facility, I think we can potentially come to net zero approach, which we, you know, but by first principle, I think we can at least achieve realistic reductions to let's say 80, 70-80, 85% with those tools that are set, you know, basically the easy steel replacements, the, like, the rack, the enclosures, the housing, fluids is something we have. There's a very interesting, you know, no-brainer replacement for fossil diesel on backend generators.

It's a liquid called HVO, hydrotreated vegetable oil.

Chris Adams: Yeah, let's come to that in a second actually, 'cause I did wanna ask a little bit about the things you can do for the fuel here. So basically, there are some savings available there, and this should be something that shows up in some kind of numerical description.

So if you had maybe two data centers, and one was using wood in strategic places, then the embodied carbon should be lower. So, I mean, is there a label to look for, or a standard I can look for? Because in the Green Software Foundation we have this idea called Software Carbon Intensity, which includes the carbon intensity of the energy you use and stuff like that.

But it also looks at the building itself. So theoretically, if you had a wooden data center and a bog-standard concrete data center, and you ran your code in the greener data center, you would probably have a better score, if you had access to that data. Do you know, do any places share this data, or have a label for anything like this?
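For listeners who want to see how embodied building carbon feeds into that score: the SCI specification defines the score as ((E × I) + M) / R, where E is energy used, I is grid carbon intensity, M is amortized embodied emissions, and R is the functional unit. A minimal sketch of Chris's hypothetical, with every number invented purely for illustration:

```python
def sci(energy_kwh: float, grid_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """Software Carbon Intensity: ((E * I) + M) per functional unit R."""
    return ((energy_kwh * grid_g_per_kwh) + embodied_g) / functional_units

requests = 1_000_000  # R: one million API requests, a made-up workload

# Same workload and same grid either way; only the amortized share of the
# building's embodied emissions (M) differs between the two facilities.
concrete_dc = sci(500, 350, 40_000, requests)
wooden_dc = sci(500, 350, 24_000, requests)  # assumes ~40% lower embodied share

print(f"concrete: {concrete_dc:.3f} gCO2e/request")  # 0.215
print(f"wooden:   {wooden_dc:.3f} gCO2e/request")    # 0.199
```

The point is structural rather than numerical: a lower M improves the score even when the code and the electricity are identical.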

Karl Rabe: They definitely share the data. For example, we definitely also need to mention EcoDataCenter in Sweden, which we, you know, basically reached out to, and our whole world was shook. It's like, oh, we come at this from the energy perspective, but they had the idea to build it, you know, sustainably. They built it sustainably.

So we needed to change, you know; that was a huge eye-opener. And they are also among the first, I'm not sure if they used the LCA method, but they were quantifying the embodied carbon and certifying it to you annually as a client, which I think is the way to go.

And we need to figure out how to standardize that. I assume there's potentially a standard that we can use. I know that other data center providers are building sustainably and putting this effort forward. But we don't have a unified label yet, I'm afraid.

Chris Adams: Okay. Well this

Karl Rabe: I know that some also find it a challenge; like, there's a data center climate neutral pact, and some of them specifically exclude scope 3, which, you know, I know where they're coming from.

Also, in Germany, the Germans, you know, they're all about energy efficiency. They love to talk about just the energy and the scope 2, basically. But then, you know,

Chris Adams: Most of the

Karl Rabe: they're missing out this dimension, you know. Missing out this dimension is like being faithful to your girlfriend or wife, you know, three days out of the week.

You know? It is, it's not...

Chris Adams: You are not showing the full picture, right?

Karl Rabe: Yeah, you're not doing it at all, basically, right. You'd probably just need to Google it, and there are, you know, building labels that could be used in construction, quantifications, I'm sure, but there's not yet a data center specific label.

There is good work also in OCP on metrics and key performance indicators; they're all looking at that, and I think they're trying to build towards something like real, true net zero.

Chris Adams: Oh yeah. Okay.

Karl Rabe: But... 

Chris Adams: So there are some initiatives going on to make this something that you could plausibly see, but it's quite early at the moment. So, like, we spoke before about how I can run my computing jobs in one data center or I can choose to run them somewhere else.

These numbers don't show up just yet, but there is work going on. Actually, I've just realized there is an embodied carbon working group inside the OCP who have been looking at some of this stuff. So maybe we'll share a link to that, because that's actually one of the logical places you'd look for it.

Okay,

Karl Rabe: And they do really good work; there are a lot of good initiatives happening there. There's also one from the Swiss Data Center Association; they have a label that is looking at some of this, and they also want to include scope 3.

So this is coming up, but it's not as easy as, you know, having an API pushing it to the software developer and saying, look, we have this offset because this was constructed, you know, with concrete or steel, and this is, you know,

Chris Adams: Okay. So we're not there yet, but that's the direction we might be heading towards. Okay, alright, we'll add some links to that. Now I'd like to pick up the other thing you mentioned, about HVO and stuff like that. Because you spoke before about, you know, Windcloud, or wind node, and data centers running on,

or, like, relying on wind. Now, we know it's a really common refrain that the wind doesn't blow all the time, and it's news to some people that it is not always sunny, for example. So there'll be times when you need to get the power from somewhere, you know, in the form of backup power. And loads of data centers, you said before, rely on fossil diesel generators, right?

And that's bad from a climate point of view, and it's also quite expensive, but it's really bad from an air quality point of view as well, because you can see elevated cases of asthma and all kinds of respiratory problems around data centers and things like that.

But you mentioned there are options to reduce the impact, or more responsible choices there. Maybe we could talk a little bit about what's available to me if I wanted to reduce that part, for example.

Karl Rabe: No, happy to go into that. That is something that we are thinking about quite heavily this year, and we've already presented on it on two occasions. So, the easy way to reduce your carbon on the scope 1 part of a data center, which is basically, you know, just the direct burning of fossil resources, and that is mostly the testing of your backup generators:

the easy option for that is this second generation diesel, HVO 100. The key feature of this fuel, which is about 15% more expensive, is that it doesn't age. Fossil diesel, and especially, you know, first generation biodiesel, and fossil diesel with a bio component mixed in, which in Europe you always have to a certain degree, ages through bacteria, biologically.

So it's degrading, which is, you know, really bad, because this diesel is sitting there in a tank; you run it half an hour every two weeks, and you maybe change the fuel filter once or twice a year. But if you really have an issue, all of a sudden you use this diesel

for four hours, and then your filter clogs, and you still have a problem, right?

Chris Adams: So your backup isn't a very good backup.

So the backup needs to be a good backup. Yeah.

Yeah. 

Karl Rabe: Yeah, so your backup can't run.

Chris Adams: you had one job, right? Yeah.

Karl Rabe: Yeah. And so, how it's mitigated is people try to use 'pure' diesel or, you know, heating oil, which is not so prone to it but still ages. Or they recycle it, you know, really pumping out the fuel and pumping fresh fuel in again every three years, or they continuously filter it.

All of this adds either energy or cost. And so there's this new form of biodiesel, which is, you know, your old frying fat, cracked with hydrogen. It looks very clear, and it's so chemically treated that it doesn't really age. People don't really know yet how long it keeps.

It's certified for 10 years, potentially it stays good longer, and it also burns cleaner. So

Chris Adams: Ah, so it isn't going to produce bad air and stuff as well then?

Karl Rabe: Yeah. So for the majority of your enterprise IT, your standard data center that's around you, you know, cutting out the whole AI discussion, that's probably the easiest way to do something about it.

This is a drop-in replacement: you just, you know, empty your tank and put it in, or you burn through your old fuel and put the new one in. That is something that easily increases the availability of your facility, and you can change over with that.

Chris Adams: Can I just try to summarize that? Because I don't work with data centers on a daily basis. So there's basically fossil diesel, the kind of stuff that, you know, you might associate with dieselgate and all kinds of air quality issues. And then there's the other option, which is maybe a little bit more expensive, you said around 15%:

there's something called HVO, which is essentially biodiesel that's been treated in a particular way to get rid of lots of the gunk, so it burns more cleanly and works better as a reliable form of backup. So the backup is actually a decent backup, rather than a thing which might not be a very good backup.

Oh okay. So that's like one of the things, and that's like the direction that we might be moving towards and that's kind of what we would like to see more of for the case where you need to rely on some kind of liquid fuel power. Right.

Karl Rabe: Yeah.

Chris Adams: Okay.

Karl Rabe: That is, I think, for most people just a very easy, low-hanging fruit to swap in. Most engines are certified for it; nowadays, all engines run on it. It has the same criteria and properties as traditional diesel; the only thing that's different is it's 4% lighter, you know?

Chris Adams: Oh, I see.

Karl Rabe: So that's the only real on the spec sheet 

Chris Adams: Oh, okay. Alright. So that's one of the options: you can replace fossil diesel with essentially non-fossil, cleaner, less toxic diesel. So that's one thing you might have for your backup. Now, I've heard various people talking about, say, hydrogen, for example.

Now, hydrogen can come from fossil sources. Most of the time, actually, most hydrogen does come from basically cracking natural gas, or methane gas, but it can come from green sources. And that's another option that you might have to generate power locally.

Chris Adams: Is that something that people tend to use?

Karl Rabe: So I think the best reference for hydrogen is that it's like the champagne of our energy transition. You know, we need to put in a lot of energy to produce it, it's not easy to store, and we need a lot of facilities to actually create green hydrogen.

Karl Rabe: The majority of hydrogen is not green hydrogen; it's gray or blue, which is basically

Chris Adams: like a carbon capture hydrogen, which is still a bit questionable. Yeah.

Karl Rabe: all based on fossil cracking, you know. So potentially, you know, you have the same problem.

Everything that we do for our clients is under this extremely short timeframe.

You know, we have to solve everything within five years' time, not even five years, right? And so that's also something that always, you know, sparks a good discussion when we talk about SMRs. You know, there's the big push for nuclear over in the US, and also in Europe we have voices for that.

And the short answer is, there are three reasons I don't believe in it. They're not quick, you know, and they're not cheap. Just a year ago, two very hopeful potential SMR projects were canceled in the US, and half a year later it was suddenly the big thing,

the big solution. Like, what changed, you know? And then the third point, and this is the very German perspective: all the fears, or the challenges, around the fuel, like getting it mostly, 70%, from Russia, and then the waste; you know, dumping it somewhere is still not solved.

And so, this is not a 2030 technology basically. 

That's my point. As for what we can do, and I'm happy to link it, there's a good article from some of the hyperscalers looking into solar, combined with batteries, combined with gas-based backup. The gas-based part has the flexibility that it can start fossil, move to bio, and potentially also run on hydrogen. So, in terms of the speed with which we are now deploying hundreds of megawatts: you know, every data center for AI is now 100, 200, 300 megawatts.

You know, scales that we did not see before. We're discussing, you know, one to five gigawatts for the large players, and every other data center is all of a sudden a hundred megawatts, which used to be a mega facility just two years back.

So,

that build-out can't really be achieved with grids or interconnects; those are too slow. It can basically only be done with microgrids.

 

Chris Adams: I see. Okay.

Karl Rabe: Microgrids, you know, that are battery-backed and gas-backed. And the big advantage of this is, if we think about the data center,

traditionally a data center is a data fortress, right?

You don't get in, data doesn't get out. It is, you know, like a bank in terms of the security measures. And all of the infrastructure was handled that way. But imagine the UPS and the genset not sitting solely at the data center, but technically belonging to the utility and being able to provide flexible power. Because we have this, as mentioned, underlying fluctuating build-out of renewable energy, and we need, you know, reliable switch-on power, which data centers all have. So if we can put those together, there's a little bit of working together: finding the right location where it would make most sense, and then allowing for SLAs with clients to bidirectionally use batteries, gas turbines,

Chris Adams: Oh, I see.

Karl Rabe: engine power.

This would, you know, help us to transition, especially as we go to renewable shares of 60% and above; at the latest from 80%, we need those backup technologies. And that is coming back to the question of hydrogen: hydrogen is a technology that is so expensive that it would need to run all the time, basically.

With renewable energy, we have long periods of abundance of energy, and we only need short times of flexible energy generation, for which gas and batteries are virtually ideal. And so we promote this idea of an energy-integrated data center, which has the electrical part supporting the grid, and is also, you know, taking advantage of heat reuse, especially for liquid-cooled facilities, in order to give heat out.

And the benefit of that is not only economical; we also see more and more 'not in my backyard' discussions. If a data center is energy-integrated, it's not a question, you know, it's a must-have, and there's a reason why it needs to be there: in order to be able to stabilize your town grid or your local area.

And so that's what we are trying to promote. We've had a lot of good feedback, and hopefully we'll have the first data center realized with a medium voltage UPS this year, which is a first step in moving the availability components of a data center, the batteries and the gensets, up to a higher voltage level. A lot of the cost in a data center comes from the low voltage distribution.

The power that you put in the batteries is also first transformed down, and then it's moved, you know, through the data center until it sits in the battery, and then it needs to go out again. And all of those are rectification steps. And all of this makes, yeah, all of

Chris Adams: You lose power every single time you switch between them? Oh, okay. So it sounds like there's a shift from data center as a fortress, where, you know, you could do that before, to something where you have to be a bit more symbiotic with your local environment. Because for a start, if you don't, you won't be allowed to build it;

but also, it's gonna change the economics in your favor if you are prepared to play nicely and integrate, to essentially be a good neighbor. That seems to be what you're suggesting.
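To make the rectification point concrete: each conversion step between the grid, the battery, and the rack loses a little power, and the losses compound multiplicatively. A back-of-the-envelope sketch, with efficiency figures invented purely for illustration (real values depend on the equipment):

```python
from math import prod

# Hypothetical chain: rectifier -> battery charge -> battery discharge -> inverter
step_efficiencies = [0.97, 0.96, 0.97, 0.98]

end_to_end = prod(step_efficiencies)
print(f"end-to-end efficiency: {end_to_end:.1%}")        # ~88.5%
print(f"lost across conversions: {1 - end_to_end:.1%}")  # ~11.5%
```

Fewer conversion steps, as with the medium voltage UPS Karl describes, means fewer factors below 1.0 in that product.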

Karl Rabe: That's a perfect analogy, having a good neighbor approach. Saying, "look, we are here now, we look ugly, we're always a box, you know, but we help, you know, power your homes, we reduce the cost of the energy transition, and we also heat your homes."

You know, and that is then a relatively easy sell.

Chris Adams: Okay. So that points to quite a different strategy being required for people whose job it is to get data centers built. They need to figure out how to honestly relate to communities and ask, well, in which ways are we supposed to be useful? Rather than the approach you sometimes see, where people basically say, "well, we're not even gonna tell you who the company is, but we're gonna use all your power and use all your water."

That approach's days are probably numbered, and it's not a very good strategy to use. It does make more sense to have a much more neighborly approach, and these are maybe new skills that need to be developed inside the industry then.

Karl Rabe: Absolutely correct. You need an open collaboration approach to that, and we're trying to be a bit of an example there. And you had a good point in there, which we usually don't have a lot of time to expand on,

but I think a podcast is a good format for that. You asked, like, where do you get the ideas, or what's the guiding star for that? And so,

I was fortunate to be an exchange worker, you know, on a farm in Canada, and they introduced me to the idea of holistic management, which is basically a decision making framework based on decisions being financially viable, socially viable, and ecologically viable.

And so those three bases are necessary in order to create sustainable decisions or holistic decisions. Those need to be short and long term viable.

And that has been, you know, my guiding star as an entrepreneur, really being able to cut out those things. You know, there are a lot of startups, especially in Germany; we had those Berlin startups who all came from business school, and all of their ideas worked on an Excel sheet,

always cutting out the social perspective, you know? And that's basically the opposite of what we are trying to do. This framework was founded by a farmer who first applied it to grass management and cattle farming, technically. And it is wildly interesting what he's able to do: he's basically stopping desertification and reversing its effects in subtropical, semi-arid areas.

Chris Adams: Yeah. So we'll definitely put that in the notes.

Karl Rabe: It's a TED Talk from Allan Savory, who I think is still alive; he must be 90 now. And it's fascinating. But that was a guiding star. And in order to promote our ideas, you know, a lot of our designs we put on YouTube, but we also put the files up.

The racks, you know, you can download the CAD files. And we believe in them being created with open source tools: especially in engineering, we only recently really got powerful open source tools for CAD, for single line diagrams. So we can give out the source files with that.

And that is how we believe open collaboration and openness helps: to build, you know, the trust,

Chris Adams: Ah,

Karl Rabe: to build with speed, and to really work together, you know? And that's what we see mirrored in the Open Compute Foundation. For the challenges that we face as humanity,

you know, I believe that only this open approach, and especially an open source, open hardware, open data framework, can help us.

Chris Adams: All right. Okay, so we're coming up to time, and you did allude to it a few times; I just wanted to provide a bit of space to let you talk a little bit about that before we finish up. You spoke a few times about the fact that a bunch of your designs for the racks and things are online and available. Did you say that they're on YouTube, so people can see videos of this, or can they download something into Blender to mess around with themselves? Maybe you could just expand on that a little bit, because I haven't come across that before and

Karl Rabe: Okay, sure, sure. So yeah, when we started, you know, we designed everything and we put it up. We still, shamefully, need to do the push to GitLab and GitHub. Right now, we put those models on a construction platform called GrabCAD.

Chris Adams: Mm-hmm. 

Karl Rabe: And, you know, it's not only our own conviction to open source this and to build trust; it's also our biggest, easiest marketing tool. You know, create a model, publish it, put up a video. We are a bit behind; we have a lot of new and great ideas and things to share.

But that's how we approach it: we come up with an idea, put it out there, and also, you know, make ourselves criticizable. We are the only ones comfortably saying, look, we have the best data centers in the world, 'cause you can go and download and fact-check our ideas, and if you have something against them, you know, just give us feedback.

And we are open to changing things. This way forward, you know, also helps us to approach the biggest companies in the world. They really like this open approach, and they're happy to take the files and the models and to work on them.

Chris Adams: So you basically have like models of like, this is a model of

Karl Rabe: Our rack, you know; this is our modular data center; these are the ideas behind it. And so that's how we are moving this forward. People can approach this, they can download it, they can see if it fits, they can make suggestions.

Chris Adams: And see if it's tall enough for the door, and all of the basic, practical things.

Karl Rabe: Yeah, all those things, you know. And they see, okay, we have a smaller data center, oh, the base design doesn't fit in this setup, or we need to change where we place, you know, the dry coolers or something like that.

And so that is really, you know, really good feedback and sparks discussions.

Chris Adams: Yeah, I haven't heard of that before. All right. Well, Karl, thank you so much; this has been a lot of fun. We've come up to time, and I really enjoyed this tour through all the stuff that happens below the software stack for engineers like us. If someone does wanna look at this, or learn about this, or maybe check out any of the models themselves, if they wanted to build any of this stuff, where should they look?

Like, where do people find you online, or any other projects that you're working on?

Karl Rabe: So the best thing, technically, is LinkedIn. That is, you know, our strong platform, to be honest; we are very active there and publish most things there. The webpage is still under construction, but, you know, people already understand what we do from going to it.

LinkedIn is great. Going and seeing what we do at the Open Compute Foundation is also often very rewarding. But yeah, via Google it is very easy to find us on LinkedIn and to reach us.

Chris Adams: So Karl Rabe on LinkedIn, Wooden Data Center, there aren't that many other companies who are called Wooden Data Center. And then for any of the Open Compute Project stuff, that's the other place to look at where you're working. 'Cause you're doing the open compute modular data center stuff.

Those are the ones, yeah?

Karl Rabe: Yeah. Correct.

Chris Adams: Brilliant. Karl, thank you so much for this. This has been loads of fun and I hope that we've had listeners follow us along as well to see all the options and things available to them. Alright, 

Karl Rabe: It was a pleasure. Thanks so much. And, 

Chris Adams: Likewise, Karl. And, hope the wind turbines treat you well

where you're staying. All right, take care mate.

Karl Rabe: Yeah. Thank you. Bye bye. Cheers.  

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.

Show more...
6 months ago
1 hour 1 minute 26 seconds

Environment Variables
GreenOps with Greenpixie
Host Chris Adams sits down with James Hall, Head of GreenOps at Greenpixie, to explore the evolving discipline of GreenOps—applying operational practices to reduce the environmental impact of cloud computing. They discuss how Greenpixie helps organizations make informed sustainability decisions using certified carbon data, the challenges of scaling cloud carbon measurement, and why transparency and relevance are just as crucial as accuracy. They also discuss using financial cost as a proxy for carbon, the need for standardization through initiatives like FOCUS, and growing interest in water usage metrics.

Learn more about our people:
  • Chris Adams: LinkedIn | GitHub | Website
  • James Hall: LinkedIn 
  • Greenpixie: Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

News:
  • The intersection of FinOps and cloud sustainability [16:01]
  • What is FOCUS? Understand the FinOps Open Cost and Usage Specification [22:15]
  • April 2024 Summit: Google Cloud Next Recap, Multi-cloud Billing with FOCUS, FinOps X Updates [31:31]


Resources:
  • Cloud Carbon Footprint [00:46]
  • Greenops - Wikipedia [02:18]
  • Software Carbon Intensity (SCI) Specification [05:12]
  • GHG Protocol [05:20]
  • Energy Scores for AI Models | Hugging Face [44:30]
  • What is GreenOps - Newsletter | Greenpixie [44:42]
  • Making Cloud Sustainability Actionable with FinOps 
  • Fueling Sustainability Goals at Mastercard in Every Stage of FinOps 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


James Hall: We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Hello and welcome to Environment Variables where we explore the developing world of sustainable software development. We kicked off this podcast more than two years ago with a discussion about cloud carbon calculators and the open source tool, Cloud Carbon Footprint, and Amazon's cloud carbon calculator.

And since then, the term GreenOps has become a term of art in cloud computing circles when we talk about reducing the environmental impact of cloud computing. But what is GreenOps in the first place? With me today is James Hall, the head of GreenOps at Greenpixie, the cloud computing startup, cloud carbon computing startup,

to help me shed some light on what this term actually means and what it's like to use GreenOps in the trenches. James, we have spoken about this episode as a bit of a intro and I'm wondering if I can ask you a little bit about where this term came from in the first place and how you ended up as the def facto head of GreenOps in your current gig.

Because I've never spoken to a head of GreenOps before, so yeah, maybe I should ask you that.

James Hall: Yeah, well, I've been with Greenpixie right from the start, and we weren't really using the term GreenOps when we originally started. It was cloud sustainability. It was about, you know, changing regions to optimize cloud and right sizing. We didn't know about the FinOps industry either. When we first started, we just knew there was a cloud waste problem and we wanted to do something about it.

You know, luckily, when it comes to cloud, there is a big overlap between what saves costs and what saves carbon. But I think the term GreenOps existed before we started in the industry. Actually, originally, if you go to Wikipedia, Greenops is to do with arthropods and trilobites from a few hundred million years ago, funnily enough. I'm not sure when it started becoming, you know, green operations.

But yeah, it originally had a connotation of data centers and IT and devices, and I think cloud GreenOps, where Greenpixie specializes, is a more recent thing. Because, you know, it is about how you get the right data in front of the right people, so they can start making better decisions, ultimately.

And that's kind of what GreenOps means to me. So Greenpixie are a GreenOps data company. We're not here to make decisions for you. We are not a consultancy. 

We want to get the carbon data in front of the right people so they can put climate impact as part of the decision making process. Because ultimately, data in and of itself is a catalyst for change.

You know, whether you use this data to reduce carbon, or you choose to ignore it, that's up to the organization. But it's all about being more informed, whether you're ignoring the carbon data or changing your strategy around it.

Chris Adams: Cool. Thank you for that, James. You mentioning Wikipedia and Greenops being all about trilobites and arthropods makes me realize we should definitely add that to the show notes. And that's a thing I'll quickly just do, because I forgot to do the usual intro, folks. Yeah, my name's Chris Adams.

I am the technology and policy director at the Green Web Foundation, and I'm also the chair of the policy working group inside the Green Software Foundation. For all the things that James and I will be talking about, we'll do our best to judiciously add show notes, so you too can look up the etymology of GreenOps and find out all about arthropods and trilobites.

And probably a lot more about cloud computing as well, actually. Okay, thank you for that, James. So, you did a really nice job of introducing what Greenpixie does, 'cause that was something I should have asked you earlier as well. I have some experience using tools like Cloud Carbon Footprint and so on to estimate the environmental impact of digital services, and a lot of the time these things use billing data. So there are tools out there that already do this stuff. But one thing that I saw sets Greenpixie apart from some other tools was the certification process: the fact that you folks have, I think, an ISO 14064 certification.

Now, not all of us read ISO standards for fun, so can you maybe explain why that matters, what it actually changes, or even what that certification means? 'Cause it sounds kind of impressive and exciting, but I'm not quite sure, and I know there are other standards floating around, like the Software Carbon Intensity standard, for example.

Like, yeah, maybe you could just provide an intro, and then we can see how it might be different, for example.

James Hall: Yeah, so ISO 14064 is a set of standards and instructions on how to calculate a carbon number, essentially based on the Greenhouse Gas Protocol. The process of getting that verification is, you know, you have official auditors who are certified to give out these certifications, and ultimately they go through all your processes, all your sources, all the inputs of your data, and verify that the inputs and the outputs

make sense. You know, do they align with what the Greenhouse Gas Protocol tells you to do? And, you know, it's quite a long process, a year long, as they get to know absolutely everything about your business and processes; you really gotta show them under the hood. But from a customer perspective, it proves that

the methodology you're using is very rigorous, and it gives them confidence that they can use yours. I think if a company that produces carbon data has an ISO badge, then you can probably be sure that when you put this data in your ESG reports, or use it to make decisions, the auditors will also agree with it.

'Cause the auditors on the other side, you know, your assurers from EY and PwC, they'll be using the same set of guidance, basically. So it's kind of like getting ahead of the auditing process, in the same way a security ISO helps the chief security officer who needs to, you know, check a new vendor they're about to procure from.

If you've got the ISO already, they know you meet the standards for security, and it saves them a job having to go and look through every single data processing agreement that you have.

Chris Adams: Gotcha. Okay. So there are a few different ways that you can establish trust. One of the options is to have everything entirely open, like, say, Cloud Carbon Footprint or OpenCost, which have a bunch of stuff in the open. There are also various other approaches; we maintain a library called CO2.js, where we try to share our methodologies. And then one of the other options is certification; that's another source of trust. I've gotta ask, is this common? Are there other tools that have this? When I think about some of the big cloud calculators, do you know if they have this? Let's say I'm using one of the big three cloud providers.

Do you know if they actually have the same certification today? Or is that a thing I should be looking for, or asking about, if I'm relying on the numbers that I'm seeing from providers like this?

James Hall: Yeah, they actually don't. Well, technically, Azure's tool did get one in 2020, but you need to get them renewed and re-audited as part of the process, so that one's kind of becoming invalid. And I'm not sure AWS or Google Cloud have actually tried, to be honest. But it's quite a funny thought that, arguably, because of this ISO, the data we give you on GCP and AWS is more accurate, or at least more reliable, than the data that comes directly out of the cloud providers.

Chris Adams: Okay. Alright. Let's make sure we don't get sued, so I'm just gonna stop there before we go any further. But that's one of the things it provides: essentially, an external auditor has looked through this stuff. So rather than being entirely open, that's one of the other mechanisms you have.

Okay, cool. So maybe we can talk a little bit more about open source. 'Cause I actually first found out about Greenpixie a few years ago when the Green Software Foundation sent me to Egypt, for COP 27 to try and talk to people about green software. And I won't lie, I mostly got blank looks from most people.

You know, people tend to talk about sustainability of tech or sustainability via tech, and most of the time I see people conflating the two rather than realizing, no, we're talking about the sustainability of the technology itself, not just how tech can be good for other things. And I think it was one of your colleagues, Rory, yeah.

He was telling me a bit about how, when you first started out, Greenpixie looked at tools like Cloud Carbon Footprint as a starting point, but you ended up having to make various changes to overcome technical challenges when you scaled up to larger clients and things like that. Could you maybe talk a little bit about some of the challenges you end up facing when you're trying to implement GreenOps like this? Because it's not something I have direct experience of myself, and I think a lot of people do reach for some open source tools and they're not quite sure why you might use one over the other, or what kind of problems they have to deal with when they start processing those levels of billing and usage data and stuff like that.

James Hall: I think with the, with cloud sustainability methodologies, the two main issues are things like performance and the data volume, and then also the maintenance of it. 'Cause just the very nature of cloud is you know, huge data sets that change rapidly. You know, they get updated on the hour and then you've also got the cloud providers always releasing new services, new instance types, things like that.

So, I mean, your average enterprise with, like, a hundred million spend or something? Those line items of usage data, if you go down to the hour, will be billions of rows and terabytes of data. And that is not trivial to process. You know, a lot of the tooling at the moment, including Cloud Carbon Footprint, will try to use a bunch of SQL queries to truncate it, you know, roll it up to monthly.

So you take out rows by a factor of 24 times 30, which is about 720. And they'll remove things like, you know, certain fields in the usage data that are so unique that when you start removing those and truncating it, you're really reducing the size of the files, but you're also losing a lot of that granularity.
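
To make the scale concrete, here's a minimal sketch of that kind of truncation, written against a hypothetical billing export; the file and column names are invented, not any provider's real schema. Rolling hourly rows up to monthly ones shrinks the data by roughly 24 x 30 = 720, but only by discarding the fields that made each row unique.

import pandas as pd

# Hypothetical billing export: one row per resource per hour.
usage = pd.read_csv("hourly_usage.csv", parse_dates=["usage_hour"])

# Roll hourly rows up to monthly totals. Grouping only by the columns we
# keep means every other field (resource id, tags, instance type, ...)
# is silently discarded; that is exactly the granularity loss.
monthly = (
    usage
    .assign(month=usage["usage_hour"].dt.to_period("M"))
    .groupby(["account_id", "service", "month"], as_index=False)["usage_amount"]
    .sum()
)

# Roughly 24 hours x 30 days = 720 hourly rows collapse into one monthly row.
print(len(usage) / max(len(monthly), 1))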

'Cause ultimately this billing data is to be used by engineers and FinOps people. They use all these fields. So when you start removing fields because you can't handle the data, you're losing a lot of the familiarity of the data and a lot of the usability for the people who need to use it to make decisions.

So one of the big challenges is how you make a processor that can easily handle billions of line items without, you know, falling over. And with CCF, one of the issues was really the performance when you start trying to apply it to big data sets. And then on the other side is the maintenance.

You know, arguably it's probably not that difficult to make a methodology for a point in time, but over the six months it takes you to create it, it's way out of date. You know, they've released a hundred new instance types across the three providers, there's a new type of storage, there are brand new services, there are new AI models out there.

And so now, Greenpixie's main job is making sure we have more coverage of all the SKUs that come out, that we can deliver the data faster, and that customers have more choices of how to ingest it. So if you give customers enough choice, and you give it to them quick enough, and it covers all of their services... the lack of those three things is really what's stopping people from doing GreenOps, I think.

Chris Adams: Ah, okay, so one of the things you mentioned was just the volume: the fact that you've got, you know, hourly data multiplied across thousands of computers. That's a lot of data. And then there's the metrics issue: if you wanna provide a simple metric, then you end up losing a lot of data.

So that's one of the things you spoke about. And the other one was the models themselves: there's a natural cost associated with having to maintain these models. And as far as I'm aware... I mean, are there any open sources of models, so that you can say, well, this is what the figures probably would be for an Amazon EC2, you know, 6XL instance, for example?

That's the stuff you're talking about when you say the models are hard to keep up to date, and you have to do that internally inside the organization. Is that it?

James Hall: Yes, we've got a team dedicated to doing that. But ultimately, like there will always be assumptions in there. 'Cause some of these chip sets you actually can't even get your hands on. So, you know, if Amazon release a new instance type that uses an Intel Xeon 7850C, that is not commercially available.

So you get your hands on an Intel Xeon 7850B that is commercially available, and you're like, okay, these things are similar in terms of performance and hardware, so we're using this as the proxy for the m5.large or whatever it is. And then once you've got the power consumption of those instance types,

then you can start saying, okay, this is how we're mapping instances to real-life hardware. And that's when you've gotta start being really transparent about the assumptions, because ultimately there's no right answer. All you can do is tell people: this is how we do it. Do you like it?
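
As a rough sketch of the proxy approach James describes here: the instance names, proxy chips, and wattages below are all invented for illustration, not Greenpixie's real coefficients. The shape of it is a lookup from instance type to a piece of hardware you could actually benchmark, plus an average power draw, which you then multiply by hours of usage.

# Hypothetical mapping: instance type -> (benchmarkable hardware proxy, average watts).
# A real methodology documents every one of these assumptions.
INSTANCE_POWER = {
    "m5.large":   ("Xeon Platinum proxy chip", 18.0),
    "m5.2xlarge": ("Xeon Platinum proxy chip", 72.0),
}

def energy_kwh(instance_type: str, hours: float) -> float:
    """Estimate energy use from the proxy wattage for the instance type."""
    _, watts = INSTANCE_POWER[instance_type]
    return watts * hours / 1000.0  # watt-hours to kilowatt-hours

# For example, a month (720 hours) of one m5.2xlarge:
print(energy_kwh("m5.2xlarge", hours=720))  # about 51.8 kWh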

And you know, over the four years we've been doing this, you know, there's been a lot of trial and error. Actually, right at the start, one of the questions was, what are my credentials? How did I end up as head of GreenOps? I wouldn't have said four years ago I have any credentials to be, you know, a head of GreenOps.

For a while, I was the only head of GreenOps in the world, according to Sales Navigator. Why me? But I think it's like, you know, they say if you do 10,000 hours of anything, you become good at it. And I wouldn't say I'm a master by any means, but I've made more mistakes and probably tried more things than anybody else over the four years.

So, you know, just from the war stories, I've seen what works, I've seen what doesn't work. And I think that's the kind of experience people wanna trust, and why Greenpixie made me the head of GreenOps.

Chris Adams: Okay. Alright. Thanks for that, James. So maybe this is actually a nice segue to talk about a common starting point that lots of people do actually have. Over the last few years, we've seen people move on from just talking about DevOps to talking about things like FinOps:

this idea that you might apply some financial thinking to how you purchase and consume, say, cloud services. And as far as I understand, this tends to nudge people towards things like serverless, or certain ways of buying, in a way that's very much influenced by, I guess, the financial sector.

And you said before that there's some overlap, but it's not total; you can't just take a bunch of FinOps practices and think they're gonna help here. Can we explore that a bit and maybe talk a little bit about what folks get wrong when they try to map this straight across as if it's the same thing?

Please.

James Hall: Yeah, so one of the big issues is cost proxies, actually. A lot of FinOps is about how you optimize what already exists from a cost perspective. You know, you've already emitted it; how do you now make it cheaper? The first low-hanging fruit for a finance guy trying to reduce cloud spend is things like, you know, buying the instances up front.

So you've paid for the full year and now you've been given a million hours of compute.

That might cut your bill in half, but if anything, it would drive your usage up. You know, you've got a million hours; you are gonna use them.

Chris Adams: Commit to, so you have to commit to then spending a billion. You're like, "oh, great. I have the cost, but now I definitely need to use these." Right?

James Hall: Yeah, exactly. And like, yeah, you say commitments. Like I promise AWS I'm gonna spend $2 million, so I'm gonna do whatever it takes to spend that $2 million. If I don't spend $2 million, I'll actually have to pay the difference. So if I only do a million in compute, I'm gonna have to pay a million and get nothing for it.

So I'm gonna do as much compute as humanly possible to get the most bang for my buck. And I think that's where a lot of the issues are with using cost. Like, if you tell someone something's cheap, they're not gonna use less; they're gonna be like, "this looks like a great deal." I'm guilty of it myself. I'll buy clothes I don't need 'cause they're on a clearance sale.

You know? And that's kind of how cloud operates. But when you get a good methodology that really looks at the usage and the nuances between chip sets and storage tiers, there is a big overlap: you know, rightsizing from a 2xlarge to a large may halve your bill, and it will halve your carbon. And that's the kind of thing you need to be looking out for. You need a really nuanced methodology that looks at the usage, more than just trying to use cost.

Chris Adams: Okay, so that's one place where it's not so helpful. And you said there are some places where it does help: literally just halving the size of the machine is one of the things you might actually do. Now I've gotta ask, you spoke before about region shifting, something you mentioned earlier.

Is there any incentive to do anything like that when you're buying stuff in this way? Or, what's the word I'm after, does FinOps or GreenOps have an opinion around things like that? Because as far as I can tell, there is very rarely a financial incentive to do anything like that.

If anything, it usually costs more to run something in, say, Switzerland, compared to running in US East, for example. I mean, is that something you've seen any signs of, where people nudge others towards the greener choice rather than just showing a green logo on a dashboard, for example?

James Hall: Well, I mean, this is where GreenOps comes into its own really, because I could tell everyone to move to France or Switzerland, but when you come to each individual cloud environment, they will have policies and approved regions and data sovereignty things, and this is why all you can do is give them the data and then let the enterprise make the decision. But ultimately, like we are working with a retailer who had a failover for storage and compute, but they had it all failing over to one of the really dirty regions, like I think they were based in the UK and they failed over to Germany, but they did have Sweden as one of the options for failover, and they just weren't using it.

There's no particular reason they weren't using it; they had just chosen Germany at one point. So why not make that failover option Sweden, if it's within the limits of your policies and what you're allowed to do? But the region switching is not completely trivial, unfortunately, in the cloud.

So you know, you wouldn't lift and shift your entire environment to another place because there are performance, there are cost implications, but again, it's like how do you add sustainability impact to the trade-off decision? You know, if increasing your cost 10% is worth a 90% carbon reduction for you, great.

Please do it, if you know the hours of work are worth it for you. But if cost is the priority, where is the middle ground where you can be like, okay, these two regions are the same, they have the same latency, but this one's 20% less carbon? That is the reason I'm gonna move over there. You can do the cost-benefit analysis quite easily, and many people do.

But how do you enable them to do a carbon-benefit analysis as well? Once they've got all the data in front of them, they can start making more informed decisions. And that's why I think the data is more important than necessarily telling them what the processes are, giving them the "here's the Ultimate Guide to GreenOps." You know, data's just a catalyst for decisions, and you just need to give them trustworthy data. And then, how many use cases does trustworthy data have? How long is a piece of string? I've seen many, but every time there's a new customer, there are new use cases.
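
As a sketch of the trade-off James is describing, with invented region names and figures: once every candidate region carries both a cost and a carbon number, the "same latency, but less carbon" comparison becomes a short filter-and-sort under whatever cost tolerance you've agreed.

# Hypothetical candidate regions that all satisfy latency and policy constraints.
# cost is dollars/month for the workload; carbon is kgCO2e/month.
regions = [
    {"name": "region-a", "cost": 1000, "carbon": 220},
    {"name": "region-b", "cost": 1000, "carbon": 310},
    {"name": "region-c", "cost": 1080, "carbon": 40},
]

MAX_COST_INCREASE = 0.10  # willing to pay up to 10% more for lower carbon
cheapest = min(r["cost"] for r in regions)

# Keep regions within the cost tolerance, then pick the lowest-carbon one.
eligible = [r for r in regions if r["cost"] <= cheapest * (1 + MAX_COST_INCREASE)]
best = min(eligible, key=lambda r: r["carbon"])
print(best["name"])  # region-c: 8% more cost, roughly 80% less carbon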

Chris Adams: Okay, cool. Thank you for that. So, one thing that we spoke about in the pre-call was the fact that sustainability is becoming somewhat more mainstream, and now people within the FinOps Foundation, the people who are doing FinOps, are starting to wake up to this and trying to figure out how to incorporate some of it into the way they might operate a team or a cloud or anything like that.

And you, I believe, told me about a thing called FOCUS, which is something like a standardization project across all the FinOps tooling, and now there's a sustainability working group inside this FOCUS group. For people who are not familiar with this, could you tell me what FOCUS is and what this sustainability working group is working on?

You know, 'cause working groups are supposed to work on stuff, right?

James Hall: Yeah, so exactly as you said, FOCUS is a standardization of billing data. You know, when you get your AWS bill and your Azure bill, they have similar data in them, but with completely different column names, completely different granularities, different column sizes. So if you're trying to make a master report where you can look at all of your cloud and all of your SaaS bills, you need to do all sorts of data transformations to try and make the columns look the same.

You know, maybe AWS has a column that goes one step more granular than Azure, or you're trying to do a bill on all your compute, but Azure calls it Virtual Machines and AWS calls it EC2. So you either need to go and categorize them all yourself, to make a master category that lets you group by all these different things, or, thankfully, FOCUS have gone and done that themselves. It started off as, like, a Python script you could run on your own data set to do the transformation for you, but slowly more cloud providers are adopting the FOCUS framework, which means, when you're exporting your billing data, you can ask AWS: give me the original, or give me a FOCUS one. So they start giving you the data in a way where you can easily combine all your data sets. And the reason this is super interesting for carbon is because carbon is a currency in many ways, in the fact that,

Chris Adams: there's price on it in Europe. There's a price on it in the UK. Yeah.

James Hall: There's a price on it, but also, the way Azure present their carbon data could be, you know, the equivalent of yen, and AWS could be the equivalent of dollars.

They're all saying CO2e, so you might think they're equivalent, but actually they're almost completely different currencies. So this effort of standardization is about how we bring it back. Maybe don't give us the CO2e, but go a few steps before that point: how do we start getting similar numbers?

So when we wanna make a master report for all the cloud providers, it's apples to apples, not apples to oranges. You know, how do we standardize the data sets to make the cross-cloud reporting more meaningful for FinOps people?
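
To illustrate the kind of transformation being described, here's a minimal sketch of FOCUS-style normalization. The column and service-name mappings below are simplified stand-ins rather than the actual FOCUS specification; the idea is just that each provider's billing export gets renamed into one shared vocabulary before the bills are combined.

import pandas as pd

# Simplified, illustrative mappings; see the real FOCUS spec for actual columns.
COLUMN_MAP = {
    "aws":   {"lineItem/UsageAmount": "ConsumedQuantity",
              "product/region": "RegionId",
              "product/ProductName": "ServiceName"},
    "azure": {"quantity": "ConsumedQuantity",
              "resourceLocation": "RegionId",
              "meterCategory": "ServiceName"},
}
SERVICE_CATEGORY = {"Amazon Elastic Compute Cloud": "Compute",
                    "Virtual Machines": "Compute",
                    "Amazon Simple Storage Service": "Storage",
                    "Storage": "Storage"}

def to_focus_like(bill: pd.DataFrame, provider: str) -> pd.DataFrame:
    """Rename provider-specific columns and service names into one shared schema."""
    out = bill.rename(columns=COLUMN_MAP[provider])
    out["ServiceCategory"] = out["ServiceName"].map(SERVICE_CATEGORY)
    return out

# master = pd.concat([to_focus_like(aws_bill, "aws"),
#                     to_focus_like(azure_bill, "azure")])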

Chris Adams: Ah, I see. Okay. So I didn't realize that FOCUS is actually listing, I guess, let's call them primitives, like, you know, compute and storage. The providers all have different names for that stuff, but FOCUS has a kind of shared idea of what the concept of cloud compute, a virtual machine, might be, and likewise for storage.

So that's the thing you're trying to attach a carbon value to in these cases, so you can make some meaningful judgment, or so you can present that information to people.

James Hall: Yeah, it's about making the reports the same, but also about how you make the source of the numbers more similar. 'Cause currently, Azure may say a hundred tons in their dashboard, and AWS may say one ton in theirs. You know, the spend and the real carbon could be identical, but the formula behind it is so vastly different that you're coming out with two different numbers.

Chris Adams: I see. I think I know what you're referring to at this point. In some places they might share a number which is what we refer to as a location-based figure. So that's what was actually experienced on the ground, based on the power intensity of the grid in a particular part of the world.

And then a market-based figure might be quite a bit lower, 'cause you've said, well, we've purchased all this green energy, so therefore we're gonna deduct that from what the figure would otherwise be. And that's how you'd end up with a figure of, like, one versus 100. But if you compare those two against each other, they're gonna look totally different.

And like you said, it's not apples with apples; it's apples with something totally different. Okay. That is helpful.

James Hall: It gets a lot more confusing than that 'cause it's not just market and location based. Like you could have two location based numbers, but Azure are using the grid carbon intensity annual average from 2020 because that's what they've got approved. AWS may be using, you know, Our World in Data 2023 number, you know, and those are just two different sources for grid intensity.

And then, what categories are they including? Are they including Scope 3 categories, and how many of those? So when you've got, like, a hundred different inputs that go into a CO2 number, unless all 100 are the same, you do not have a meaningful comparison between the two.

Even location- versus market-based is just one aspect of what goes into the CO2 number. And then, where do they get the kilowatt-hour numbers from? Is it a literal telemetry device, or are they using a spend-based proxy on their side? Because it's not completely alien for cloud providers to ultimately rely on spend at the end of the day.

So does Azure use spend or does AWS use spend? What type of spend are they using? And that's where you need the transparency as well, because if you don't understand where the numbers come from, it could be the most accurate number in the world, but if they don't tell you everything that went into it, how are you meant to know?
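
One way to picture that "hundred different inputs" problem is as metadata that has to travel with every CO2e figure. Here's a minimal sketch with invented fields (a real methodology has far more inputs than this): two numbers only compare apples to apples when every field matches.

from dataclasses import dataclass

@dataclass(frozen=True)
class CarbonMethodology:
    # Illustrative fields only; a real methodology has many more inputs.
    accounting: str               # "location-based" or "market-based"
    grid_intensity_source: str    # e.g. "grid annual average 2020"
    energy_source: str            # "metered kWh" vs "spend-based estimate"
    scope3_categories: frozenset  # which upstream categories are counted

def comparable(a: CarbonMethodology, b: CarbonMethodology) -> bool:
    """Two CO2e figures only compare meaningfully if every input matches."""
    return a == b

one = CarbonMethodology("location-based", "grid annual average 2020",
                        "metered kWh", frozenset({"3.1", "3.11"}))
other = CarbonMethodology("market-based", "Our World in Data 2023",
                          "spend-based estimate", frozenset())
print(comparable(one, other))  # False: different currencies, in effect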

Chris Adams: I see. Okay. That's really interesting. 'Cause at the Green Web Foundation, the organization I'm part of, we've been following a UK government group called the Government Digital Sustainability Alliance. They've been doing these really fascinating lunch and learns, and

one thing that showed up was the UK government basically saying, look, this is the carbon footprint on a per-department level: this is what the Ministry of Justice is, or this is what, say, the Ministry of Defence might be, for example. And that helps explain why you had figures where a bunch of people were saying the carbon footprint of all these data centers is really high.

And then there were people saying, well, compared to this, cloud looks great, 'cause the figures for cloud are way lower. But the thing people had to caveat that with, they basically said: well, we know this makes cloud look way more efficient and much lower carbon, but because we've only got this final market-based figure, we know it's not a like-for-like comparison; until we have that information, though, this is the best we actually have. And this is an organization which has legally binding targets; they have to reduce emissions by a certain figure, by a certain date. So I can see why you would need this transparency, because it seems very difficult to see how you could meaningfully track your progress towards a target if you don't have access to that.

Right?

James Hall: Yeah. Well, I always like to use the currency conversion analogy. If you had a dashboard where AWS is all in dollars and Azure, or your on-premise, is in yen... there are 149 yen in 1 dollar. So if you didn't know this one's yen and this one's dollars, you'd be like, "this one's 149 times cheaper. Why aren't we going all in on this one?"

But actually, it's just different currencies, and they are the same at the end of the day. Under the hood, they're the same. But, you know, the way they've turned it into an accounting exercise has kind of muddied the water, which is why I love electricity metrics more. You know, they're almost like the non-fungible token of, you know, data centers and cloud,

'cause you can use electricity to calculate location-based, you can use it to calculate market-based, and you can use it to calculate water and cooling metrics and things like that. So if you can get the electricity, then you're well on your way to meaningful comparisons.
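
That point about electricity being the base metric can be sketched as one small function; all the coefficients below are illustrative placeholders, not real factors. From a single metered kWh figure you can derive a location-based number, a simplified market-based number, and a water figure, so every downstream metric stays anchored to the same physical quantity.

def from_kwh(kwh: float, grid_kg_per_kwh: float,
             renewable_coverage: float, wue_l_per_kwh: float) -> dict:
    """Derive headline metrics from one metered electricity figure.

    Every coefficient here is an illustrative placeholder.
    """
    location_kg = kwh * grid_kg_per_kwh
    market_kg = location_kg * (1 - renewable_coverage)  # very simplified market-based
    water_l = kwh * wue_l_per_kwh
    return {"location_kgCO2e": location_kg,
            "market_kgCO2e": market_kg,
            "water_litres": water_l}

print(from_kwh(10_000, grid_kg_per_kwh=0.4,
               renewable_coverage=0.95, wue_l_per_kwh=1.8))
# location ~4,000 kg, market ~200 kg, water 18,000 litres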

Chris Adams: And that's the one that everyone guards very jealously a lot of the time, right?

James Hall: Exactly. Yeah. Well that's directly related to your cost of running business and that is the proprietary information.

Chris Adams: I see. Okay. Alright, so we've done a bit of a deep dive into the GHG Protocol, Scope 3, supply chain emissions and things like that. If I may: you referenced this idea of war stories before, and it's surprisingly hard to find people with real-world stories about making meaningful changes to cloud emissions.

Do you have any stories from the last four years that you think are particularly worth sharing, or that might catch people's attention, for example? There's gotta be something you've found that you're allowed to talk about, right?

James Hall: Yeah, I mean, Mastercard, one of our lighthouse customers, they've spoken a lot about the work we're doing with them at various FinOps conferences and things like that. They're very advanced in their GreenOps goals; they have quite ambitious net zero goals, and they take their IT sustainability very seriously.

When we first spoke to them, ultimately the name of the game was to get the cloud measurement up to the level of their on-premise, 'cause their on-premise was very advanced: daily electricity metrics with pre-approved carbon coefficients that you multiply the electricity by.

But they were having no luck with cloud, essentially. You know, they spend a lot in the cloud. And honestly, rather than going for just the double wins, which is kind of what most people wanna do, where it's like, "I'm gonna use this as a mechanism to save more money,"

they honestly wanted to do no more harm and actually start making decisions purely for the sustainability benefits. And we went in there with the FinOps team, worked on their FinOps reporting, and combined it with their FinOps recommendations and the accountability structures in their tool of choice.

But then they started having more use cases around it: how do they use our carbon data, because we have a big list of hourly carbon coefficients, beyond the cloud? They wanted to use that data to start choosing where they put their on-premise data centers as well, really making the sustainability impact a huge factor in where they place their regions, which I think is a very interesting one. 'Cause we had only really focused on how we help people in their public cloud, but they wanted to align their on-premise reporting with their cloud reporting, and ultimately start making decisions like: okay, I know I need to put a data center in this country.

Do I go AWS, Azure, or on-prem for this one? What is the sustainability impact of all three, and how do I weigh that against the cost as well? It's kind of the gold standard of making sustainability a big part of the trade-off decision, 'cause they would not go somewhere, even if it saved them 50% of their cost, if it doubled their carbon. They're way beyond that point. So they're a super interesting one. And even in the public sector, the departments we're working with are relatively new to FinOps, and they didn't really have a proper accountability structure for their cloud bill. But when you start adding carbon data to it, you get a lot more eyes onto your bills and your usage.

And ultimately we helped them create more of a FinOps function just with the carbon data, 'cause people typically find carbon data more interesting than spend data. But if you put them on the same dashboard, now it's all about how you market efficient usage. And I think that's one of the main use cases of GreenOps: to get more eyes on the data.

'Cause the more ideas you've got piling in, the more use cases you find.

Chris Adams: Okay. Alright. So you spoke about carbon as one of the main things that people care about, right? And we're starting to develop more of an awareness that some data centers might themselves be exposed to climate risks, because they were built on a floodplain, for example.

And you don't want a data center on a floodplain in the middle of a flood, for example. But there's also the flip side: that's too much water, and there are cases where people worry about not enough water. I mean, is that something you've seen people talk about more?

Because there does seem to be a growing awareness about the water footprint of digital infrastructure as well now. Is that something you're seeing people track or even try to like manage right now?

James Hall: Well, 

we find that water metrics are very popular in the US more so than the CO2 metrics, and I think it's because the people there feel the pain of lack of water. You know, you've got the Flint water crisis. In the UK, we've got an energy crisis stopping people from building homes. So what you really wanna do is enable the person who's trying to use this data to drive efficiency, to tell as many different stories as

is possible. You know, the more metrics and the more choice they have of what to present to the engineers and what to present to leadership, the better outcomes they're gonna get. Water is a key one, because data centers and electricity production use tons of water. And the last thing you wanna do is go to a water-scarce area and put a load of servers in there that are gonna guzzle up loads of water. One, because if that water runs out, your whole data center's gonna collapse, so you're exposing yourself to ESG risk. And also, you know, it doesn't seem like the right thing to do. There are people trying to live there who need to use that water to live.

But you know, you've got data centers sucking that water out. So can't you use this data to, again, drive different decisions? It could invoke an emotional response that helps people make different decisions or build more efficiently. And if you're saving cost at the end of that as well, then everyone's happy.

Chris Adams: So maybe this is one thing we can drill into before we move on to the next question and wrap up. People have had incentives to track cost and cash for obvious reasons. With carbon, as more and more laws have opinions about carbon footprints and being able to report them, people are getting a bit more aware of it.

Like we've spoken about things like location based figures and market based figures. And we have previous episodes where we've explored and actually kind of helped people define those terms. But I feel comfortable using relatively technical terminology now because I think there is a growing sophistication, at least in certain pockets, for example.

Water still seems to be a really new one, and it seems very difficult to find access to meaningful numbers. Even the idea of what counts as water use in the first place: when you hear figures about water being used, that might not be the same as water being consumed.

It might not be going away so it can't be used; it might be returned in a way that's more difficult to use, or sometimes it's cleaner, sometimes it's dirtier, for example. It seems to be poorly understood, despite being quite an emotional topic. So, yeah, what's your experience been like when people try to engage with this, or when you try to find some of the numbers to present to people in dashboards and things?

James Hall: Yeah. So yeah, surprisingly, all the cloud providers are able to produce factors. I think it's actually a requirement that when you have a data center, you know what the power usage effectiveness is, so what the overhead electricity is, and you know what the water usage effectiveness is. So you know, what is your cooling system, how much water does it use, how much does it withdraw?

Then how much does it actually consume? The difference between withdrawal and consumption is: withdrawal means you take clean water out and you're able to put clean water back relatively quickly; consumption means you've either polluted the water, you know, diluted it with some kind of coolant that's not fit for human consumption, or you've now evaporated it.

And there is some confusion sometimes around "it's evaporated, but it'll rain. It'll rain back down." But, you know, a lake's evaporation and redeposition process is a delicate balance. Say it evaporates 10,000 liters a day and rains 10,000 liters a day back, after, like, a week of it going into the clouds and coming back down the mountain nearby.

If you then have a data center next to it that accelerates the evaporation by 30,000 liters a day, you really upset the delicate balance that's in there. And, you know, you talk about whether these things are sustainable. Financial sustainability is: do you have enough money and income to last a long time, or will your burn rate run you out next month?

And it's the same with, you know, sustainability. I think fresh water is a limiting resource in the same way a company's bank balance is their limiting resource. There's a limited amount of electricity, there's a limited amount of water out there. 
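
Using the illustrative lake figures from a moment ago, plus an invented lake volume, the balance James describes is simple arithmetic: a steady state of 10,000 litres a day evaporating and 10,000 litres a day returning turns into a running deficit once a data center adds 30,000 litres a day on top.

# Figures from the discussion above; the lake volume is made up.
natural_evaporation = 10_000     # litres/day, balanced by the rain below
rain_return = 10_000             # litres/day coming back down nearby
datacenter_evaporation = 30_000  # extra litres/day the rain never repays

net_loss_per_day = (natural_evaporation + datacenter_evaporation) - rain_return
lake_volume = 50_000_000  # litres, hypothetical

print(net_loss_per_day)                 # 30,000 litres/day deficit
print(lake_volume // net_loss_per_day)  # about 1,666 days before it's drained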

I think it was the CEO of Nvidia; I saw a video of him on LinkedIn where he said, right now the limit to your cloud environment is how much money you can spend on it.

But soon it will be how much electricity there is. You know, you could spend a trillion dollars, but if there's no more room for electricity, no more electricity to be produced, then you can't build any more data centers or solar farms. And then water's the other side of that.

I think water's even worse, because we need water to even live. And, you know, what happens when there's no more water because the data centers have it? I think it invokes a much more emotional response. When you have good data that's backed by good sources, you can tell an excellent story of why you need to start reducing.

Chris Adams: Okay, well, hopefully we can see more of those numbers, because it seems like something that's quite difficult to get access to at the moment, water in particular. Alright, so we're coming to time now, and one thing we spoke about in the prep call was the GHG Protocol.

We did a bit of nerding into this, and you spoke a little bit about how, yes, accuracy is good, but you can't just focus on accuracy if you want someone to actually use any of the tools, or if you want people to adopt stuff. And you said that in the GHG Protocol, which is like the gold standard for people working out, you know, the carbon footprint of things,

there are these different pillars that matter, and if you just look at accuracy, that's not gonna be enough. So can you maybe expand on that for people who maybe aren't as familiar with the GHG Protocol as you? Because I think there's something there that's worth exploring.

James Hall: Yeah. So, just as a reminder for those out there, the pillars are accuracy, yes, but also completeness, consistency, transparency, and relevance. A lot of people worry a lot about the accuracy, but, you know, just to give an example: if you had the most amazingly accurate number for your entire cloud environment, you know, 1,352 tons and 0.16 grams, but you are one engineer on one application, running a few resources, the total carbon number is completely

useless to you, to be honest. Like, how do you use that number to make a decision about your tiny slice, you know, maybe five tons? So really you've got to balance all of these things. The transparency is important because you need to build trust in the data; people need to understand where it comes from.

The relevance is, you know, again: are you filtering on just the resources that are important to me? And the consistency touches on the AWS-says-one-ton-versus-Azure-says-100-tons problem. You can't decide which cloud provider to go with based on these numbers, because they're marking their own homework; they've got a hundred different ways to calculate these things. And then the completeness is: if you're only doing compute, but 90% is storage, you're missing out on loads of information. You know, you could have super accurate compute numbers for Azure, but if you've got completely different numbers for AWS and you dunno where they come from, you've not got a good GreenOps data set to drive decisions or use as a catalyst.

So you really need to prioritize all five of these pillars in an equal measure and treat them all as a priority rather than just go for full accuracy.

Chris Adams: Brilliant. We'll make sure to share a link to that in the show notes for anyone who wants to dive into the world of pillars of sustainability reporting, I suppose. Alright. Okay. Well, James, I think that takes us to time. So just before we wrap up, there are the usual things, like where people can find you, but are there any particular projects catching your eye right now that you're excited about, or that you'd like to direct people's attention to? We'll share a link to the company you work for, obviously, and possibly yourself on LinkedIn or whatever it is. But is there anything else you've seen in the last couple of weeks that you find particularly exciting in the world of GreenOps, or the wider sustainable software field?

James Hall: Yeah, I mean, a lot of work being done around AI sustainability is particularly interesting. I recommend people go and look at some of the Hugging Face information around which models are more electrically efficient. And from a Greenpixie side, we've got a newsletter now for people wanting to learn more about GreenOps and in fact, we're building out a GreenOps training and certification that I'd be very interested to get a lot of people's feedback on.

Chris Adams: Cool. Alright, well thank you one more time. If people wanna find you on LinkedIn, they would just look up James Hall Greenpixie, presumably right? Or something like that.

James Hall: Yeah, and go to our website as well.

Chris Adams: Well, James, thank you so much for taking me along on this deep dive into the world of GreenOps, cloud carbon reporting, and all the rest. Hope you have a lovely day, and yeah, take care of yourself, mate. Cheers.

James Hall: Thanks so much, Chris.  

Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode. 




Hosted on Acast. See acast.com/privacy for more information.
