CXO Bytes
The Green Software Foundation
14 episodes
3 weeks ago
Tech leaders, your balancing act between innovation and sustainability just got a guide with the Green Software Foundation's latest podcast series, CXO Bytes, hosted by Sanjay Podder, Chairperson of the Green Software Foundation. In each episode, we are joined by industry leaders to explore strategies for greening software and effectively reducing software's environmental impacts while fulfilling a drive for innovation and enterprise growth.

Hosted on Acast. See acast.com/privacy for more information.

Technology, Business, Science
Episodes (14/14)
CXO Bytes
How Much Energy Does Google’s AI Use? with Cooper Elsworth

Host Sanjay Podder speaks with Cooper Elsworth, Google's lead for AI and cloud emissions insights, about the real energy, carbon, and water footprint of AI systems. They discuss Google's groundbreaking research on measuring AI's environmental impact using empirical data rather than estimates, revealing a comprehensive methodology. Cooper explains how Google's full-stack approach, spanning hardware, software, data centers, and clean energy procurement, has cut Gemini's carbon footprint by 44x in a year. The conversation also explores the balance between energy efficiency and water usage, the role of transparent metrics in driving climate action, and how AI can be scaled sustainably without undermining net-zero goals.


Learn more about our people:

  • Sanjay Podder: LinkedIn
  • Cooper Elsworth: LinkedIn | Website


Find out more about the GSF:

  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter


Resources:

  • Measuring the environmental impact of delivering AI at Google Scale [00:53] 
  • TPUs improved carbon-efficiency of AI workloads by 3x | Google Cloud Blog [01:45] 
  • Measuring the environmental impact of AI inference | Google Cloud Blog [01:52] 
  • Green AI Position Paper | GSF [24:48] 
  • Software Carbon Intensity (SCI) Specification | GSF [28:13]


If you enjoyed this episode then please either:

  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, GitHub and LinkedIn!


3 weeks ago
33 minutes 6 seconds

CXO Bytes
Sustainability at Scale in the GenAI Era with Dr. Katia Chaban
Host Sanjay Podder speaks with Dr. Katia Chaban about uniting IT and sustainability through circular economy practices. Dr. Chaban shares her journey into sustainable IT, the importance of addressing e-waste and embodied carbon, and the growing challenges posed by AI. She highlights how circular thinking, training, and cross-industry collaboration can help CXOs and technology leaders embed sustainability into IT strategies while reducing costs and environmental impact.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Dr. Katia Chaban: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Software Carbon Intensity specification [11:02]
  • The Carbon Literacy Project [20:16]
  • Environment Variables - Real Time Cloud with Adrian Cockcroft [30:34]
  • Updating the Materiality of Sustainability Management | NTT DATA Group [34:25]
  • Global Data Centers 2025 Sustainability Report | NTT DATA [40:26]
  • Green Software Practitioner | GSF [45:56]
  • Awesome Green Software | GSF [43:54]
  • The Green Software Foundation Expands Efforts to Incentivize Decarbonization in the Software Industry 
  • NTT DATA Announces First Sustainability Report for its Global Data Centers Division 
  • NTT DATA named a Leader by Everest Group in Sustainable IT Services PEAK Matrix® Assessment 2025 report 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, GitHub and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder:
Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that have helped drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable IT from the view of the C-Suite. I am your host, Sanjay Podder. Today's guest, Dr. Katia Chaban, brings over 30 years of global IT leadership, including with NTT Data, EDS, and many others, to her current mission: harnessing technology, business, and people so that enterprises, ecosystems, and the planet can thrive.

Katia, welcome. Excited to dive in.

Dr. Katia Chaban: Thank you. I am just as excited. Appreciate the opportunity.

Sanjay Podder: Katia, can you introduce yourself? What have you been doing, and what got you interested in the field of, you know, uniting sustainability and technology? That will be very interesting to know.

Dr. Katia Chaban: It's actually a funny story. Back then I had been with NTT Data, and in 2020 I started my doctorate. During the period of COVID I decided that I needed to do something other than be upset about what was happening in the world. So I started my education, my doctorate in business. And then in 2021, I decided to actually leave the corporate world, leave the IT industry, and go and focus on school.

And I did that. And as I was trying to research my dissertation, like, what is my topic going to be? I mean, it's a big part of the education through your doctorate. I had two choices. I came up with this really interesting concept around acceleration of digital transformation, right? How do we apply that emergency change behavior from COVID to digital transformations?

But then I also had this idea where I kept seeing this sustainability topic, this ESG topic in all of our strategy courses and all my readings and discussions, and then I came across this thing called the Circular Economy, and I'm like, "what is this thing?"

And so I started doing research about it. And so I then decided that I wasn't gonna do something that I had already been doing for a very long time.

I wanted to restart my brain and focus on something new. And so I did: I focused on circular economy in the consumer goods industry, so not even in IT. That dissertation looked at its business impacts, and it turned out there were planet impacts too, with the amount of waste that comes from the returns process.

So I graduated, I accelerated that schooling, and I thought, "yay, I am out of IT! I'm gonna go and work in sustainability and save the world. This is awesome." And then I started talking to people and really building that network from around the world, and got to speak at a conference in Thailand.

But then I started talking to all my IT friends, and some folks were like, "Hey, you know, there might be an opportunity for you in sustainability in IT."

I'm like, "what are you talking about?" So as an academic now, I started to do the research and I went, oh my goodness: the water, the waste, the emissions, the carbon, the rare earth minerals. What is happening? To me, this invisible impact of IT is a massive sustainability issue, and with the advent of AI it is going to be something that we have to control.

And I said, well, I guess I can't get out of IT. I'm gonna go back to IT, but I'm going back with a new lens, a new purpose, and a new focus. And that's really to become that advocate, right, that evangelist, and to really drive the practices that we need to have in order not to be a negative impact on the planet, right?

We really have to maximize all the positive things that IT can do and at the same time minimizing our harm. So I thought I was outta IT after all those years, but now I'm back, but really energized on the topic itself because it really will have a massive impact.

Sanjay Podder: No, absolutely. I agree with you on that. In fact, some studies show that by 2040, 14% of global greenhouse gas emissions will come from IT. And this data came from research which predates the rise of generative AI, so we can just imagine how things can be. Before we dive in further, I just wanted to let the audience know that everything we discuss today will be linked in the show notes below this episode.

So, Katia, you are now focused on sustainability and technology, and your background research is in the area of circularity, right? That is very interesting. How do you see all of this coming together? What are the big opportunities you see when it comes to, say, circularity in the IT context?

Are people missing it? Because most of the time we talk about, for example, carbon emissions. We talk about usage, operational emissions, right? But when we talk about circularity, I would be very interested, given you have been an expert in this topic, in how you see those opportunities in the IT context.

Dr. Katia Chaban: Yeah, so there are massive opportunities. One, circularity is a way of thinking, right? It's almost a philosophy, and it can be applied to everything. So even when we're talking about software development and the SCI, and how we're calculating emissions and what those environmental impacts are, what we're also trying to teach people is that whole lifecycle development: keeping things in use for longer, and developing things that are more modular and can be replaced and repaired, versus having to create something whole and then get rid of it, right? So that's a philosophy, but where it really comes into play is hardware.

So we look at corporations with massive amounts of end user assets. We look at IT assets: your servers, your network devices. The larger the organization, the more of these assets are out there. And so from a circularity perspective, for me, the big focus for those assets is something called e-waste diversion, right? How do we buy these things? How do we treat these things? How do we, at end of life, dispose of these things in a way that we are not contributing to an already massive e-waste problem? A problem that's gonna continue to get worse with AI, right? Because as we're building more data centers, we need more equipment, we need more things to run our AI on, and those things are gonna reach end of life very quickly just because of the usage on them and the energy they draw. And where are we gonna put it?

What's gonna happen with those things? So that's where circularity is massive. And by the way, e-waste causes emissions, so it all comes around to emissions and it all comes around to climate change. But we have to look at, you know, very different things.

The other thing in circularity that people don't consider is how you make these things. And so it's rare earth minerals. We take things from the ground, right? And we use those things to make chips and LEDs and all of the things within our electronic devices. And the one thing I tell people, which is interesting to see their reaction, is when you take that stuff out of the earth, it doesn't grow back.

And people are like, "what do you mean it doesn't grow back?"

You know, there's just this preconceived notion that whatever we take, it's like a weed: you pull it out and it comes back. And it doesn't. And so the more and more that we are mining, the more and more that we're taking those virgin minerals out, the less and less there is.

And so as we see demand increase for IT assets, because we have this increase in AI infrastructure and our cloud and all this kind of stuff, there's a decrease in availability. So there are cost issues there as well, but there are also just availability issues, which should then force the conversation about how we recycle and reuse these components.

And again, going back to that modular design: create our assets in a way that we can easily go in, take out the things that we can reuse and remanufacture, and avoid massive carbon emissions, because we're not completely building from new anymore, right? 80% of the emissions in a laptop's lifetime come from when you actually make it.

And so avoiding that is avoiding the emissions conversation as well. So circularity is a mindset. Circularity is something we can actually apply to our IT assets. And we have a massive program that we've just launched right now to meet our circularity metrics around e-waste avoidance and responsible management of those assets.

And by the way, anybody that's in IT knows how hard hardware asset management is, and I've never met an organization that has perfect asset management. And so this is also a benefit, because we have to be able to manage our assets in order to meet circularity. So there are benefits also for the IT organization that says, we're gonna get a better handle on our assets.

We're gonna have a better handle on those costs associated with those assets. We have to manage the life cycles and we have to change the way we buy.

Right, and so don't just buy new, but start to look at those programs being introduced by OEMs that are actually selling repurposed and remanufactured items.

The interesting part about that is people's mindset, and I saw this in my research on circularity too: people don't wanna buy something that's 'used.' Everybody wants something new, right? New cars, new clothes, new this, new that. And so there's a behavioral change and a mindset change that comes along with this, that says a laptop that's been remanufactured by an OEM and comes with a guarantee is just as good as the new one. But it's even better, because we just avoided all those emissions and we just helped achieve our circularity targets by buying in this manner. So it's massive, right? It's a massive impact.

Sanjay Podder: And, you know, that is probably also one of the reasons why, when we defined the Software Carbon Intensity standard, we factored in embodied carbon. Most of the time people forget embodied carbon, you know, the carbon emitted during the manufacturing of the servers themselves.

And many times people don't factor that in when they procure hardware, for example. Or that just extending the life of the assets by a year more can help you lower your carbon emissions. So I think these are some great points.

The other aspect would be end user devices as well, right? There's such a proliferation of end user devices. Normally we only think about the servers in the data centers, but there are so many end user devices, and again, there's a lot of embodied carbon. You mentioned laptops: 80% of their lifetime emissions come from the manufacturing process.
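As a rough illustration of the point above, here is a minimal sketch of the SCI formula from the GSF specification, SCI = ((E × I) + M) / R, showing why extending hardware life lowers the score. All numbers (energy, grid intensity, embodied carbon, lifespans, request volume) are made-up illustrative values, not real measurements:

```python
# Sketch of the Software Carbon Intensity (SCI) formula: SCI = ((E * I) + M) / R
# E = energy consumed (kWh), I = grid carbon intensity (gCO2e/kWh),
# M = embodied emissions allocated to the workload (gCO2e, time-share of the
#     hardware's total embodied carbon), R = functional unit (here: one request).

def sci_per_request(energy_kwh, grid_gco2_per_kwh, embodied_gco2,
                    lifespan_years, reserved_years, requests):
    """Return gCO2e per request for one year of operation."""
    operational = energy_kwh * grid_gco2_per_kwh                      # E * I
    embodied_share = embodied_gco2 * reserved_years / lifespan_years  # M
    return (operational + embodied_share) / requests

# Illustrative device: 250 kgCO2e embodied carbon, 100 kWh/year of energy
# at 400 gCO2e/kWh, serving 1,000,000 requests per year.
four_year = sci_per_request(100, 400, 250_000, 4, 1, 1_000_000)
five_year = sci_per_request(100, 400, 250_000, 5, 1, 1_000_000)
print(f"4-year lifespan: {four_year} gCO2e/request")
print(f"5-year lifespan: {five_year} gCO2e/request")  # lower: embodied carbon spread over more years
```

With these toy numbers, one extra year of life drops the embodied share from 62.5 kg to 50 kg per year, so the per-request intensity falls even though the operational energy is unchanged.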

So, you know, how do you manage that? I think this is definitely a big opportunity for the industry. Now, when we think about some of the recent innovations happening in this space, you briefly touched upon AI and generative AI. Something that strikes me is: how does this whole challenge amplify in the age of generative AI?

Because, if I think aloud, we are already looking at new data centers coming up every week, right? More and more models are getting created, and the emissions from inferencing are quickly surpassing the emissions from training those models.

So, wearing your hat as an expert in the field of circularity, has this bothered you? How does the GenAI world make this problem even more difficult to manage?

Dr. Katia Chaban: So it's funny. I've done a few talks and panel discussions, and I always start out by saying, I hate AI. And I hate AI because, yes, there are gonna be great things, but it's the center of attention for everything, right? And so it is so critically important to get our arms around AI, and I look at it through a couple of different lenses.

There's responsible AI, which is the governance of how we implement and how we manage it. So back to your comments around models: how are we training our models? What models are we using? And there's a difference between AI that's hosted in the cloud versus on-prem, right, and between building your own models versus using pre-built ones. And then when you're in the cloud, it's the token usage, input tokens and output tokens. How do you put thresholds on those things? How do you force certain use cases to use certain models? 'Cause you don't always need the big GPT-5 model; you can use a mini model. So there's that responsible AI layer.

Then there's the sustainable AI part of it, right? That says you have to put in metrics to be able to measure the emissions associated with it. So if you're in a cloud environment, Azure or Google, and you're building your AI platforms, measure that. What does that mean?

And you have to measure the models. You have to measure the throughputs, the outputs, all of those different types of things. And then you have to wrap in tech innovations that help to mitigate the harm associated with it. So yesterday we had a fascinating conversation on quantization, right? I won't even pretend to get into the details, but it's about how you minimize the impacts from those models that you're training.

And then, you know, we're looking at things like innovation that helps with improved prompting, right? Because that's the big issue: people put in their prompts, people say please and thank you to AI, right? But people aren't good at specificity; they're never gonna get their answer on the first try. And so what you're trying to do is build prompting where you're caching things and you're able to provide better responses, but you're also doing better prompting.
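The quantization idea mentioned above can be sketched in a few lines: storing model weights as 8-bit integers instead of 32-bit floats cuts memory roughly fourfold, and with it much of the energy moved per inference. This is a minimal illustration of symmetric int8 quantization on a toy list of weights, not any particular vendor's implementation:

```python
# Minimal symmetric int8 quantization sketch: map float weights into [-127, 127].
# Quantized models need ~4x less memory than float32, reducing energy per inference.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0           # one scale factor per tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.30, 0.07, 2.54, -0.88]                 # toy "weights"
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

print("int8 values:", q)
print(f"max round-trip error: {max_err:.4f}")              # small relative to the weight range
```

Real deployments quantize per-channel, calibrate activations, and sometimes retrain, but the core trade of a little precision for large memory and energy savings is the same.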

So there's all those things. And then there's the ethical AI component, and then there's inclusive AI, right? To make sure that everybody gets access to it. Because if we continue to keep building AI and only we get access to it, we continue to build that digital divide, which is not good.

And so there's all those aspects. So when I say I hate AI, it's because it's the fastest moving tech that I've seen in my career. Probably that you've seen, right? Just the adoption of it. It's almost like we're running in second place all the time, trying to get our arms around what's happening. And never mind the security challenges.

I was at a security conference last week and they were talking about some of the security issues related to AI, which was frightening on its own. They talked about, you know, how it's learning on its own and it's gonna make its own decisions and all those types of things. So we talked about, what is it, the 2001: A Space Odyssey movie, where they can't get control of HAL.

And I bring the perspective of those old movies and say, well, it's Mad Max, right? Now we're running around on a planet that doesn't have any water, doesn't have any wildlife, doesn't have any of that. And that could possibly happen too, unless we can catch up, take a breath, and make sure that we wrap that AI, or really put a lot of people on it, to get that governance in place.

So, it's massive. So when we talk about circularity: how do we manage the infrastructure that's being built to support AI? How do we ensure that it's more modular? All of those things, right? And then, how do we use software, and AI, to manage how we're doing all those types of things?

So not only do you have an issue with climate and whatnot using AI, but we also wanna use AI to manage our AI. And so how do you do that in a way that you're not just continually harming things, but actually, you know, mitigating the bad by doing the good? We have to do that through that circular thinking, especially on the replenishment of those infrastructures. And then again, the coding, right? Optimization. How do we optimize every prompt, every query, and all the models in such a fashion that we don't have to use as much energy, which we know causes an emissions issue?

Sanjay Podder: Well, great points. And that actually takes me to the next question, because traditionally, software engineering did not involve all these disciplines, right? Sustainability was by far not an NFR, a non-functional requirement. Security was, but sustainability never was. And now we are seeing that sustainability is an NFR, and an important one,

with a direct bearing on climate impact and energy use. And another important thing: the cost of IT. Because, as you talked about with circularity, you can really reduce the cost of procuring devices by delaying replacement and using the devices you currently have for longer. So what we are really looking at is rewiring our talent, our software engineering talent, our IT workforce, to think differently, and enabling them with skills

so that they can build systems that are not only functional but also sustainable at the same time. In your opinion, from the vantage point you've sat in for the last few decades, how easy is that? Because most of the time we think it's about, okay, let me give some training to some software engineers.

It's not as simple as that. So, for large enterprises like the ones you are leading, how do you see this transformation or change happen? Any insights from your own experience?

Dr. Katia Chaban: So you have to hit it from multiple sides, right? Training is absolutely essential, but it has to be meaningful training. I'm happy that in our organization we've just launched what we call carbon literacy training. We've partnered with an organization, the Carbon Literacy Project, and we've created training that's focused on the digital and tech sector, and it includes our NTT strategies and goals.

And it's intensive. It's e-learning training and it's a workshop, so it's eight hours of somebody's already-busy life, to talk specifically about climate change, emissions, what GHGs and methane are, all the science around these things, and then how it impacts your daily life individually, the choices that you make individually,

but then also, from a digital and tech sector view, what those impacts are. What are the impacts of AI? What are the impacts of circularity? The workshop is then intended to have deeper conversations, so that when people finish their learning they walk away having taken at least one or two things associated not only with their individual carbon footprint and how they can potentially reduce it, but with what they can do in their role, whether they're a software developer, a service desk agent, a field engineer supporting all those end user devices, or an architect: take away those couple of things that say, this is what I can do in my job to be able to make a difference. So that training has to be impactful.

The other thing that has to happen is you have to include it in your governance. And from an IT perspective, that starts right from ideation: I have a great idea, I have an innovation, or our business partner or a client has some innovative ideas.

And so right at that point, you need to start asking those sustainability guardrail questions. Does this entail the use of hardware? If it does, what would you do with that hardware? Is this software development? Okay, if it's software development, we start to throw in things like how you would measure SCI, but all through the lifecycle.

So you go through ideation, and then you go through checkpoints where projects are approved. You go through your ARB types of processes, right, with your architects, who also have to be trained and aware, and who say, alright, we're gonna design these solutions, but how do we do it so that it's low code, minimal infrastructure, more modular, all these types of things, right?

If it does have assets, what is the end of life plan? Are we gonna purchase them in a way where we don't have to do the disposal, where we're not responsible for that, right? And so we have to govern that, and there have to be metrics associated with how that's governed. So it's really about including the criteria and the mechanisms to ensure that technology innovation throughout the lifecycle is environmentally, socially, and economically sustainable,

for the client and for the corporation. So you have the training level, and then you've got that lifecycle level, and then you've gotta have the "I'm monitoring and measuring you" level, right? So then you have to have the dashboards and the visibility into all of these things.

What are my circularity practices? What are the emissions practices in our strategic application portfolio? What are the emissions associated with our cloud footprint, with our on-prem footprint? All those types of things. And then they all link back. And by the way, don't have an organization where somebody is monitoring those dashboards and telling people,

like, you know, "here, you have to do something." My approach is that those dashboards are being created by the people that have to manage those things every day, so they start to learn about what these things are. So I'll give you an example. Yesterday was a really exciting day for me, because I've got a cloud team that I'm working with and they're, you know, always told, right? FinOps: go save money, optimize, right-size, do all the right things.

And we've been working with a partner to take a look at what our emissions data is within the Azure environment. And we have now built that dashboard. We thought this was super important, but it's the team that's doing this.

It's not Katia and some folks over here. It's the team that's building a dashboard that can align costs with carbon down at the resource level. And they said to me yesterday, what we've learned is that it's not a one-to-one ratio between cost and carbon. And I'm like, "yes, it's not."

You know, it isn't. And there's water associated with it too, so we're looking at water.

So now, not only are they looking at, "hey, there's a right-sizing opportunity where we can save money and carbon and water," but they're also saying, "hey, there are opportunities where we can save carbon and water. It might only save $12.72, but I think this is the right decision to reduce our carbon footprint."
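Why cost and carbon aren't a one-to-one ratio: a resource's cost is set by pricing, while its carbon depends on its energy draw and the grid intensity of the region it runs in. A toy sketch of the idea behind such a dashboard, where all prices, wattages, and grid intensities are made-up illustrative values, not real cloud data:

```python
# Toy cost-vs-carbon comparison for two identically priced cloud VMs in different
# regions. All prices, wattages, and grid intensities are illustrative, not real data.

HOURS_PER_MONTH = 730

def monthly_carbon_kg(avg_watts, grid_gco2_per_kwh):
    """Monthly operational carbon (kgCO2e) from average power draw and grid intensity."""
    kwh = avg_watts / 1000 * HOURS_PER_MONTH
    return kwh * grid_gco2_per_kwh / 1000

vms = [
    # (name,                     $/hr, avg watts, grid gCO2e/kWh)
    ("vm-a (low-carbon grid)",   0.20, 150,  50),
    ("vm-b (coal-heavy grid)",   0.20, 150, 700),
]

for name, price, watts, intensity in vms:
    cost = price * HOURS_PER_MONTH
    carbon = monthly_carbon_kg(watts, intensity)
    print(f"{name}: ${cost:.2f}/month, {carbon:.1f} kgCO2e/month")

# Same monthly cost, very different carbon: a dashboard needs both dimensions.
```

Identical bills, an order of magnitude apart in emissions, which is why right-sizing for cost alone misses carbon (and water) opportunities.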

But it's them, it's that operational group, that you're instilling that thinking into, that ownership and that accountability, to do that. So you've gotta train them, you've gotta put it throughout the lifecycle, but you have to integrate it into how people are doing their work.

And you have to do that by engaging them, talking to them, and educating them.

Sanjay Podder: Absolutely. And what makes it very interesting, Katia, in my experience, is that this has been an emerging area, so to a great extent most practitioners were not aware; as you rightly mentioned, you need to train them. And there have been no standards that you can use to measure, because you can only start reducing once you measure and know where you are, your baselining,

right? And that's where I've found that all these cross-industry collaborations are helping. What has been your own experience with cross-industry collaboration and the emergence of new standards, like the SCI we mentioned, and their role in accelerating this journey? What have you observed?

Dr. Katia Chaban: Do you know what's super exciting about sustainability in IT? It's the amount of collaboration, and the number of people that are embracing the topic but are also willing to share. And I say that because there has been a long time where we didn't wanna give away our secrets, right? There's been a long time where you minimized the amount of discussion and collaboration,

'cause you didn't wanna give away the secret sauce to your competitors, or, with your suppliers, you were trying to hide some cards so you could negotiate better. With sustainability, we are all driving towards the same goal. And so the openness in discussion and collaboration with others in my industry on how they're doing things is amazing.

And in fact, as a member of the SustainableIT.org organization, I got involved with them very early on this journey, and there's a taxonomy that was created based on Nicholas's, you know, book and kind of that momentum. That taxonomy is a nice framework to start with, right? It's got the energy types of metrics, the emissions metrics, social metrics, governance metrics, and of course a lot of focus on your sustainable sourcing, right?

How you're buying things and how you're monitoring that. And so for me, having that framework and being able to just have those discussions with people across the industry, across the world, about the use of that framework and what they're doing, it just helps me, right? And I would do the same.

Anybody that wants to call and say, I've got a problem. How do you integrate this? Or how do you do this? Very open. I'm not afraid of giving away any secret sauce because we are all on a giant mission with the same end goal, right? And that is to reduce the impact of climate change. 

And so for me that's super cool.

And then internally, what I'm finding is even though everybody is already busy, we have incredible leaders that support this, but people are embracing it almost at a pace where I can't keep up with providing them the information and the things that they need to know because it feels so, it makes their job much more purposeful now, the same as the way I feel about the role.

So the cross collaboration is great. Now, where it's not great, is in the supply chain. And as you know, even when you're trying to calculate the SCI metrics, right, we're trying to look at emissions metrics, there's a vast amount of data that we need to get to that specificity and that validity of those numbers.

And we can't get that. And so for me, that collaboration and that transparency is the one thing, you know, you see in discussions and forums: the Microsofts of the world, the OpenAIs of the world, they all have to be much more transparent with what they're using and where they're using it, to help us really understand what the impacts are.

That's a tough nut to crack, and I'm not sure if that's something that you've come across and maybe have any guidance on, but while we as leaders are on that goal and we can share and collaborate, we don't have that same level of transparency within our supply chain.

Sanjay Podder: You're absolutely right. That's a challenge everybody's grappling with. In fact, we did a podcast with Adrian Cockcroft on real-time carbon calculation, or carbon accounting, because every hyperscaler tends to give you data of different granularity and different frequency, so it gets very difficult; you're comparing apples and oranges. And also, as you rightly said, with some of the closed LLM models, you don't really know much. You know, I was very happy to see Mistral coming up with some of the data recently. But all that does impede our ability to understand where we stand today,

in terms of emissions. We have to then go around trying to use other ways of guesstimating, through proxies, what the emissions could be. And hopefully all these things will resolve as, you know, sustainability becomes a key consideration in the way we use AI.

People would love to use AI which is more environmentally friendly. Earlier in the discussion you did mention fit for purpose, you know, maybe a smaller model. You don't need to use the largest of the models for every purpose. So there will be a need for the model providers to be more transparent around the emission numbers on training and inferencing. I did read Sam Altman's blog on emissions, where, without giving much context, he revealed a few numbers, which I hope is the tip of the iceberg, and we get to see a lot more of what's happening, you know, in the closed LLM space.

Katia, you kind of pointed to a very important fact, and it is the journey of adoption of sustainable IT, right? You know, we have these wonderful ecosystems, you know, SustainableIT.org, Green Software Foundation, everybody coming up with the collective wisdom of the members and making it available for everybody to use and accelerate the journey.

Also the emergence of new standards. The taxonomy you mentioned. GSF has been coming up with a lot of standards; like last year we released the ISO standard for Software Carbon Intensity. We are right now working on the SCI for AI. Hopefully that becomes another useful standard. There are a lot of open source tools that one can use as well.
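For readers new to the SCI metric mentioned here, the standard defines it as energy times grid carbon intensity, plus embodied emissions, all divided by a functional unit of your choosing (per API call, per user, etc.). A minimal sketch of that arithmetic — the input numbers below are invented purely for illustration:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """Software Carbon Intensity: ((E * I) + M) per R, in gCO2e per functional unit.

    E = operational energy consumed (kWh)
    I = grid carbon intensity (gCO2e/kWh)
    M = amortized embodied hardware emissions (gCO2e)
    R = functional unit count (e.g. API calls served)
    """
    return ((energy_kwh * intensity_g_per_kwh) + embodied_g) / functional_units

# Hypothetical example: a service that used 12 kWh on a 400 gCO2e/kWh grid,
# with 3,000 g of amortized embodied emissions, serving 100,000 API calls.
per_call = sci(12.0, 400.0, 3000.0, 100_000)
print(f"{per_call:.3f} gCO2e per call")  # 0.078
```

Because the functional unit is in the denominator, serving more work from the same hardware and energy directly lowers the score, which is what makes SCI an efficiency metric rather than a total-emissions number.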

Now, looking at the whole community of CTOs and CIOs, large organizations, medium organizations, you know, if they have to start on this journey, what would be your advice to them? Where do they start? What is the smartest way to make progress? Any thoughts there?

Dr. Katia Chaban: Yeah, I have a lot of thoughts on that. So it's funny, I had a conversation with a CIO organization that was starting on the journey, and they're like, just give us a couple of easy things to start with. And I said, well, okay, but if it was really easy, then, you know, everybody would be doing it.

But let's think about this more impactfully. So I'll share the journey, or my thinking on this, and this is what I would tell everybody, right? It's like any strategy that you're gonna implement: you have to understand what you want to accomplish. What are your goals? What are the visions?

What are the missions? What are the things that you actually wanna do? And then you've gotta figure out where you are, right? So how far away are you from that vision — that's your gap analysis. So, when I started in this role, you know, I knew what my mission was, and you said it in the introduction, right?

It's harnessing that power of people and technology so that we're doing good things for our business, the planet and the people. But I didn't know where we were. And we're talking about a global organization across 50 different countries, 152,000 employees worldwide. And what do we do? And so we did

an assessment like anybody does, right, from an IT perspective to take a look at multiple capabilities from that sustainable IT lens within the organization. And this was something that we did through, our SustainableIT.org front. So we were looking at what is the awareness and commitment of the organization?

What is the strategy from an IT perspective? What are the roles, what are the resources that you have in the organization? What is the level of skills and training related to sustainability? What kind of change management do you have? What kind of procurement practices do you have? And so on and so forth.

Right? So there's a number of capabilities, and I talked to the CIOs across all the regions and their EAs, and we got ourselves a score, very similar to an SDLC maturity score, one to five. We rated ourselves where we thought we were, and then we also said where we wanted to be, and that was a great starting point.

Then I surveyed my organization, so all the employees, about their awareness, their understanding. Do they know what sustainability is? Do they know what sustainable IT is? Right? And would they be interested in training? So now I've got it from my leaders: where they think we are from a capabilities perspective and where they wanna go.

And now another data point from the organization itself, across different countries and across different roles, about their awareness, and even their awareness of our actual sustainability strategy, which we have. And so that also gave me: all right, here's where we are, and here's where we need to go.

And that was all the input that I needed from a strategy perspective. Then you gotta get visibility, right? And so, what is our footprint with some things? What are we doing with circularity? What are these things that are important, right? So now I've said I want to have us be able to measure carbon emissions.

We've gotta improve all the capabilities to get there, but what visibility do we have? And for me, at that time, I wasn't looking for perfection in the numbers, like I wanna get to with an SCI type of number on everything, where I can use that in an auditable financial or sustainability report, right?

For me, it was just: get me some numbers. Now, it wasn't the dashboards that were provided by, you know, hyperscalers and whatnot; it was a little bit more scientific than that. And luckily we have Gadhu as part of our organization, who also is from GSF, so we got good guidance on how to calculate a variety of things. And so, understanding what that looked like as we got smarter — I could have spun my wheels on perfection and how we're measuring things, or

I could have just started to get visibility and get people engaged in understanding what these numbers meant. So we're going through a massive transformation, let's say in Office 365, and we're reducing a number of the tenants that we have. Well, that should be a massive data improvement, and that should be a massive emissions improvement.

We wanna monitor that. So I didn't wanna wait a year to find the best tool and implement it and all that; we created something together very quickly so that team could then set a target for themselves, for our emissions for this year, and move forward, right?

And so my advice would be,

know what your vision is. Know where you are in your capabilities and the maturity of your organization. Know your people and what they think too, right? Because your perspective as a CIO or a CSO may be a little different than somebody's on the ground. So get that perspective. And then figure out what you wanna do, and don't wait for perfection.

'Cause that can be the evil, right? So start to take a look at where your big environments are. For us, it's cloud, and a lot of our apps are in cloud. It's end user devices. It's, you know, these areas where you want to go and get that visibility, and perfect that visibility as you go. But without visibility, you can't start making any impacts.

And so do that. But, back to that training and awareness, that one pillar that I talked about with our carbon literacy training for the organization — that was based on the feedback that I got from the employees. I can put visibility in place, we can create dashboards, we can put reports together all we want, but if the people that are doing that work every day don't understand why we're doing it,

and how they're involved in doing that, it's not gonna work. And so your people have to understand. And so it's that employee engagement and awareness and training, in whatever form it takes. So it sounds like a lot, but when you start to put your strategy together, right, that's an important pillar.

But don't wait for perfection. 

Sanjay Podder: Cannot agree more. Yeah. No, those are great points. How do people keep track of all the good work you're doing in this space? Is there some website they can go to? What is the best resource?

Dr. Katia Chaban: You know, right now I think we just do celebrations on LinkedIn; when we have some good things, we'll celebrate that way. And hopefully what we'll start to see at the end of the year, because we'll have a full year, is we will provide that kind of impact report, specifically for the global IT organization within NTT.

And hopefully we'll see a lot of great impact, and we can share that again with my peers and colleagues across SustainableIT.org and those that are interested. And we'll be creating — for me, it's a playbook, right? I have to have a playbook that I can pass on to my successor and whatnot that says, here's how we did these things.

Here's how we're measuring these things. Here's why we did these things. So it's almost a to-do list that we'll also be able to share across the community.

Sanjay Podder: Wonderful. Do you have any questions for me?

Dr. Katia Chaban: I think, you know, back to what you said at the beginning, you and I talked about the podcast, and I've been finding, very similar to your experience, that you just talk to people and they don't understand the impact of hitting enter and what that does from an energy and emissions perspective.

And sometimes people just really get engaged. So I often think about, should I do a podcast? And would people really care about what that would be? And so I guess my question for you is: what's the best way to get to that larger audience? There are millions of people out there in the IT industry.

How do we get to them all so we can start making sure that they all understand what those impacts are?

Sanjay Podder: Right. I mean, I'm not able to tell you what is the best way, but I can share with you what we have tried to do to achieve that objective. The very first thing we did, Katia, when we started the foundation — the foundation itself was in that direction, right? Like we felt that, you know, no single organization can solve a problem of this scale.

We have to come together. So we needed to form a consortium of like-minded people, and the GSF was formed, and it has been doing very well ever since. So that was step number one. Step number two: you mentioned the need to make people aware of sustainability, because software engineers never knew what greenhouse gas emissions are and how they're linked to their work.

So the very first thing that the GSF did was release the Principles of Green Software training, which became a huge success. Many of the CIOs who have come to the podcast did confide to me, saying "Sanjay, when I personally took the training, I was so happy that I learned so many things in such a short period of time that I mandated everybody in my leadership team to go through it, because it was an eye-opener," right?

So, as I say, it comes from the top. So the whole focus has been: we have to empower the developers, the practitioners, but we also want to make the leadership aware so that the mandate comes from the top, because otherwise this is not going to be sustainable. There have been a lot of Green Software Foundation Summits we have done around the world.

And many of the people who come and participate are not even GSF members, but the whole idea is to come, collaborate, learn, and take it back. Right? So those are some of the things we have been creating — for example, the Awesome Tools list, which is a collection of some of the best open source tools that people can come and start using.

Similarly, standards, right? We believe that by creating this collective intelligence, we can influence collective action in this whole area. So that, I believe, is very effective so far. Is this the best way possible? I do not know. The podcast is also a part of that journey, where, you know, people will learn from experts like you, right?

You know, from many experts who are coming. You know, this is our anniversary year for CXO Bytes, and I was compiling what I learned from each of the leaders. It was mind-boggling. You know, I should write a book on it now, saying, you know, what are the insights? Otherwise you won't get that from any one source.

So I think, those are some of the ways we are trying to accelerate this journey. And the outcome has been very positive, very encouraging, and I hope we can collectively achieve newer heights. So, yeah. Katia, it was so wonderful having you, and thanks for sharing all your insights from circularity to how you're driving it in your organization.

You know, like all good things have to come to an end, we are now coming to the end of our podcast episode. And I want to say thank you for coming over and sharing your deep insights on CXO Bytes, from the entire Green Software Foundation. Thank you so much, Katia.

Dr. Katia Chaban: Thank you for having me. I really appreciate it.

Sanjay Podder: Thank you so much. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit Podcast.GreenSoftware.Foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now.

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.

Hosted on Acast. See acast.com/privacy for more information.

2 months ago
46 minutes 32 seconds

CXO Bytes
The Green IT Value Case with Marc Zegveld

In this episode of CXO Bytes, host Sanjay Podder speaks with Marc Zegveld, Managing Director of ICT at TNO, about the competitive value of green IT. Drawing on the recent Green IT Value Case, real-world case studies, and research, they explore how sustainability initiatives can enhance business performance—from cost savings and supply chain clarity to talent attraction and regulatory preparedness. Marc emphasizes that green IT is not just a climate imperative but a strategic differentiator, requiring top-down leadership, grassroots innovation, and effective change management. Together, they discuss how businesses can embed sustainability across operations to thrive in a tech-driven, low-carbon future.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Marc Zegveld: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • The Green IT Value Case | TNO [05:57]
  • Awesome Green Software | GSF [23:26]
  • Software Carbon Intensity (SCI) Specification | GSF [23:39]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Marc Zegveld: Hi, Sanjay. Thanks for inviting me on this podcast. I'm really happy to join. I think it's a very important and relevant topic we're discussing. My background: I've now been Managing Director of the unit Strategy, Policy, ICT at TNO for two years. We are an independent research organization based out of the Netherlands.

But we work internationally. We do that both business to business as well as business to government, in industry as well as for defense. Before that, I worked 15 years at IBM, mainly as a European services leader for the industrial sector.

And before that, I taught innovation and high tech at TU Delft, had my own consulting firm, and I was a columnist for the leading financial newspaper in the Netherlands. Now, I'm an engineer by background, but I also got my PhD in business strategy economics.

I'm intrigued by competitiveness and what triggers competitiveness. That's one. And just to elaborate on that a bit before we go to the second: competitiveness is not something which comes easy. You need to stand out, you need to invest, you need to build.

It's based on, 

most of the time, hard work, technology, but also reputation. There are a lot of elements which you need to bring to the table to gain and sustain competitiveness. And I'm pretty convinced that the green IT movement is competitive at heart, and creates competitiveness.

As you're able to combine a reduction of cost, if you organize it well, it can bring you more clarity in your complex supply chain.

It gives you better insight for decision making from an investments perspective, but also from a marketing and reputational side it can enhance your position. But only if you're able to combine all these different threads into one specific aspect of competitiveness. And I think that's where I'm intrigued, and that's why we did this study together with Accenture, picked out some relevant cases and drew some conclusions.

But that's one. The other thing I'm intrigued by is the following: we read and hear a lot about, let's say, the doomsday clock or whatever — that we have a small earth, that we have a lot of carbon emissions, et cetera. And whether you're an optimist or a pessimist, I don't care.

But there's something we can just do better, improve compared to what we do now, without losing quality of life or quality of what we do for the planet as a whole. Now, combining that with competitiveness — I think that's the strength which should unite companies, which should unite all other organizations around the globe, to see what we can do together.

Sanjay Podder: Wonderful. And, you know, one thing that really strikes me here is you started with competitiveness, right? Because this is a space where people primarily drive the conversation with concerns around climate change, the greenhouse gas emissions. What you did talk about in the second part,

you know, the planetary boundaries. We are a small planet. But typically, in my own experience, I have seen that, given the challenging business environment that we are seeing today, leading this conversation with greenhouse gas emissions is not as appealing to business as the competitiveness that you mentioned.

Right. And cost efficiencies, operational efficiencies and competitiveness. Many businesses do not see that part today. They can still relate to cost efficiencies, which are equally attractive, improving their bottom line. But it is very rare for people to think of green software, green IT, green cloud, green AI as a competitive differentiator. Right. And I'm glad you started the conversation with competitiveness. It'll be great if you can throw a bit more spotlight on it, because that to me is the most critical point in this whole conversation: for businesses to realize that this is not some altruistic thing they're doing for the planet, but this is for them to survive, to thrive and be ahead of the rest of the competition.

So, Marc, I would love to hear a little bit more, and I'm sure you put a lot of it in your Green IT Value Case. Right. So, you know, how have you articulated that?

Marc Zegveld: No, it's a fair point, Sanjay. And, absolutely, competitive first.

And I'm not sure how you see it, but for me, it's all about change. And change starts with action. So, internally within companies, you need to fire it up; you need to ensure that there is indeed action, and then competitiveness, or the underlying parameters in boosting competitiveness, is key to start that change.

And once more companies, more organizations understand and work that way, I'm convinced, without being altruistic, that indeed a greener IT, a greener situation, a healthier planet can be started. But if we start that discussion from a planet perspective, we can agree or disagree, but we have more of a debate than action.

And I'm more an action-oriented person. And I think that's what companies need — and that's why I like this conversation as well, Sanjay, with you, with your team. You're more action-oriented. And I think that's where the trigger is, and that's where it really starts.

So competitiveness, for my end, is a multifaceted aspect. 

It's about not only attracting capital — and of course we have more and more sustainable capital providers — it's also about attracting talent, the new kids from school, I would say. Attracting them, Gen Z and others, and keeping them, is more and more difficult.

And when you tell your story about green IT, about the relevance of green IT — that it's indeed not only strong for the environment, but definitely strong for boosting the company, strong for their career — it's a relevant aspect of being competitive as well. It is competitiveness for the full supply chain, backward looking but also forward looking. Most supply chains are very complex. If indeed we're able to detangle them and create some more clarity, also from a sustainability perspective, from a green perspective —

in most of the cases we've seen, we were able to reduce cost. We were able to optimize. So there are several aspects, and I think instead of going the route of only cost cutting, here competitiveness is a multifaceted aspect, and especially if we're able to create that interplay between these different facets, then we really can build stronger, more competitive organizations, more competitive companies.

Sanjay Podder: Wonderful. And, you know, I think you touched upon various aspects, like the talent getting attracted to companies which embrace sustainability, right? And you spoke about the supply chain. I think another area where, though it might not be an imperative today, it might turn out to be important for business, is the regulations coming up in this space. Today, the regulations are fairly voluntary. Even the EU AI Act, when it comes to, you know, environmental impact — it's much stronger on the social aspects of responsible AI and stuff like that. I think that would be another area for business: to be ready for the future, when regulations are much more stringent around these areas, at least in certain geographies, right? So that would be important. So, Marc, one of the things that I wanted to discuss more is examples of businesses that are turning this into a competitive differentiator. I remember in some of my early conversations in the field of sustainable AI, we would talk about techniques like quantization, pruning of models, creating smaller models fit for purpose. All that seemed great from a theoretical standpoint, right? But the moment DeepSeek did all of this and suddenly came out with, you know, large language models that were much more cost effective —

you know, they trained the model at a fraction of the cost compared to other large language models — we could see that they used the green principles and converted them into a competitive differentiator, creating something very unique. You know, suddenly people started thinking, "do we really need so much compute?" Right. And the outcome was good enough. Now, have you come across such examples in your own study, in the light of the Green IT Value Case as well, where you felt that certain organizations or certain industries have changed the outcome, or rather, brought these green practices in to further enhance outcomes? Are there any examples that come to the top of your mind?
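For listeners unfamiliar with the quantization technique mentioned above, here is a minimal sketch of the general idea — mapping float weights to 8-bit integers with a single scale factor, roughly quartering memory versus 32-bit floats at a small precision cost. This illustrates the textbook technique only, not DeepSeek's or any vendor's actual method, and the weight values are made up:

```python
from array import array

def quantize(weights, num_bits=8):
    """Symmetric post-training quantization: float -> signed int8 + one scale."""
    qmax = 2 ** (num_bits - 1) - 1                    # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax       # maps the largest weight to +/-127
    q = array('b', (round(w / scale) for w in weights))
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [x * scale for x in q]

weights = [0.12, -0.48, 0.31, 0.02, -0.27]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each quantized weight is 1 byte instead of 4 (float32),
# and the round-trip error is bounded by half the scale step.
print(q.itemsize, "byte per weight")
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

The energy and cost argument follows directly: smaller weights mean less memory traffic and more of the model fits in fast caches, so each inference does less work.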

Marc Zegveld: Yeah, there are several examples, Sanjay. Probably more examples than we have time for in this specific podcast.

Sorry about that one. But there are definitely a few. But if you're okay — and please step in if you want to deviate from it — I think the regulation part, which you alluded to, is highly relevant. And sometimes the sense is that regulation comes top down, from governments or European bodies, whoever. And then it's killing innovation, or it is pressing energy, power in that sense, and competitiveness downwards.

And I think there are lots of examples indeed where that does happen. So the trick is to create regulation where companies can indeed thrive, take action and build on their competitiveness in a direction that's green, or sustainable, or responsible, or whatever it is.

And I think that's a very thin line, and it's always difficult, specifically today with all these trading blocs, to create a global view. But regulation also exists within companies — that's where I'd like to make the bridge — and it's highly relevant. 'Cause in the cases, and we have some cases I want to elaborate on or discuss with you,

it's also where the top would identify what's the KPI, what is the target we're in for. And maybe we don't see it as regulation in that sense, but internally in an organization it's a KPI. And we all know that a KPI is driving behavior — that's how we run our organizations, which is fine — so identifying what the KPI is, is highly relevant. So just to give two examples here: we have a hospital in the Netherlands that wanted to reduce its carbon emissions. And normally, organizations like hospitals and others will look internally, at having less waste or other kinds of initiatives. And they were thinking the other way around.

It was like: if we have fewer patient visits to the hospital, we have fewer people driving their cars. If, through automated kinds of dialogue and assessments and monitoring of patients, we have better insight, then from a full value chain and value system perspective, we have a much better inside view and a bigger reduction in carbon than otherwise.

And so the KPI here was relevant, but only relevant if they would not look at it from their own organization, but from the full value system. And so that's a highly relevant aspect. It's the same with KLM, Air France, the airline. They do these green labs, and not only internally — they do it with their partners.

They do it with their clients. Not only looking internally in their own organization, but at the full value system, at what they can gain and what they can do. And here you see a very interesting effect: because they take the initiative, it strengthens their relations with their partners.

They have a much deeper insight into the full value system, into their value chain, and as a result, their competitiveness, however difficult to measure, strengthens. And here you see: finding the KPI, identifying what to do and how to drive it. Different organizations will play it differently.

Philips is doing something different. ABN AMRO, which we, together with Accenture, analyzed, is doing something different as well. And I think that's where, indeed, there are so many ways to pick it up and drive it, that in the end it's more interesting to look at what these 15 or 20 companies and organizations did and just distill what is possible, what are the triggers, and what can you initiate yourself, for your own organization, to start going.

Sanjay Podder: Good.

No, absolutely. I think you have addressed both how organizations are using it as a competitive advantage, as well as the regulations. And I'm with you on the point that you don't have to wait for regulations to come; businesses who are the leaders in their sector set up those standards, their own business standards, and then they try to make sure that their business processes operate within those guardrails, right? Now, one of the things that I often observe is that when we talk about green ICT or green software — there are many terminologies — people typically think it is only about writing green code, code that takes less energy or, in the process, emits less carbon. But as even your value case puts a spotlight on, there are a lot more things for an organization to become green, right? You mentioned the supply chain itself, procurement practices, for example. So, how do you see organizations grappling with embedding these green practices end to end?

Because this is not something confined to one department or a couple of people, right? If you want this value, this competitive differentiation, it is a change management process, a complex change management process. In your research, did you find organizations having developed a good way to do this, or are there challenges?

What's your observation about this change management process? 

Marc Zegveld: So let me, if you're okay, elaborate a bit on my experience and our studies, but also, Sanjay, I would really like to hear from your experience with the Foundation, and see where it matches or where it differs.

Sanjay Podder: Yep. 

Marc Zegveld: I think there are three key aspects. The first one, 

it starts from the top.

Senior management, the board, needs to lead by example. Maybe not in an extremely detailed way, with what needs to be done and the KPIs, et cetera, but it's definitely a relevant aspect. And you can tie that to a message on competitiveness, a message on why we do business and how we should do business.

That's, I think, where it starts. Then we have — I won't say the bottom part, but the operational part. In your organization too, Sanjay, and in a lot of, let's say, global or larger organizations, we have lots of people who are extremely smart, extremely hardworking and driven with a certain purpose.

So let them speak. Let them try. Let us give them some time to bring up ideas and suggestions. I think that's motivating people, and if we do that via small labs or various initiatives, it's all possible.

And then we have what I would call the people in the middle, who in many cases, let's say also in management reports, are defined as the people who always block.

That's not my experience, in a sense. I think these are the people who need to organize the various initiatives and the various ideas, really make that work, and ensure that the people on the operational level are able to deliver and are able to facilitate it. And I know this is all, let's say, highbrow managerial talk, but then we can go through the various initiatives, and whether it's Philips or ABN AMRO, or KLM or AWS, I think that's where it happens.

And then once we see some pearls of indeed the real action and progress, then the top can step in and say, "Hey, this is a great example. I want to have more of these examples. Hey, this is what we did in this business unit. Let's see what we can grasp in other business units as well." And then we have a kind of wheel rolling, and get started.

And I think we have quite some examples of how that works. Does it mean that if we don't organize it exactly this way every time, it won't work?

No. But on the majority of the cases we've seen, that's how we combine the motivation, the energy, the ideas, the suggestions, the smartness of people with the sense of direction, the board would identify this is highly relevant to go. How would you see that, Sanjay, from your experience?

Sanjay Podder: Yeah, no, I think I agree with everything you mentioned. You know, I can relate both from my role in the Green Software Foundation as well as in Accenture as we embraced green software practices. Right? So starting with our foundation: when we started the foundation, we realized that we wanted, you know, people to do green software without even having clearly defined: what is green software? What are the tools? How do you measure? You know, you can only reduce emissions once you start measuring. Right? And I think one of the foundational things we did upfront was getting the collective

intelligence of our members, right? So, who has the best? Like, Microsoft had a great Principles of Green Software training, which we made available to all members, and that helped us baseline so that people know the key terminologies: what is carbon aware, what is carbon efficient, how do they relate to greenhouse gas emissions, for example? And that training has been appreciated not only by the developers; many of the CIOs I talk to say, "We love that training. The first thing I did after taking the training, I told each of my direct reports, you need to take that training."

Right? So that was, you know, making sure that we are all talking the same language. Right. And then there has been a plethora of tools. Some of them are open source tools, some of the best guidelines; we created the Awesome Tools list for people to go through and see which of these tools are relevant for them.

And then the third thing was about measurement. How do you measure emissions? How do you measure and express them? And I think one of the big achievements in my tenure during the chairmanship was the ISO standard that the Green Software Foundation helped create, called the Software Carbon Intensity, which covered not only operational emissions but also embodied emissions, and how you express it in a common way. You know.
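The SCI standard Sanjay describes boils down to a simple rate: operational emissions (energy times grid carbon intensity) plus embodied emissions, divided by a functional unit of your choosing. A minimal sketch of that calculation; the function name and all the figures are illustrative, not taken from the specification's reference material:

```python
def sci(energy_kwh: float, grid_gco2_per_kwh: float,
        embodied_gco2: float, functional_units: float) -> float:
    """Software Carbon Intensity: ((E * I) + M) per R, in gCO2e per unit."""
    operational = energy_kwh * grid_gco2_per_kwh  # E * I
    return (operational + embodied_gco2) / functional_units

# Hypothetical workload: 12 kWh of energy on a 400 gCO2e/kWh grid,
# 800 gCO2e of amortized embodied hardware emissions, 10,000 API calls served.
print(sci(12.0, 400.0, 800.0, 10_000))  # 0.56 gCO2e per API call
```

The point of the common expression is comparability: choosing the functional unit R (per API call, per user, per device) is what lets two teams express their scores the same way.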

So that has been an excellent outcome from the collective effort of our foundation members. And we are now trying to extend that to the AI space with, you know, SCI for AI as an example. So the way we have tried to bring this culture change, or this change management, is we have, you know, created all these working committees where we have all member representations, and then these groups work together to come up with the standards, tools, and so on and so forth. And then our members, you know, Accenture is one of the members, as an example, can take some of those best practices and bring them into our own organization, and sometimes it's the reverse: some of our best practices go into the Green Software Foundation, for example. Now

reflecting on my own experience within Accenture, I think this change management process is not straightforward, especially because we are a huge organization. So all the elements you said, you know, it has to come from the top. Absolutely. The leadership mandate has to be there. And then we have to make sure we are embedding it into all our processes, methods, and tools, so that, you know, by default, if you are generating code from a code generator, the code you are getting is green code.

You don't get the code and then have the human in the loop trying to make it green; the code generator gives you green code and you can always enhance it, right? So how do you integrate it in your methods, in your tools, in your training, for example? And many organizations are doing gamification.

They want to recognize the best teams who have done something great and make them a role model for others. Right? And I've heard this from MasterCard and many other organizations who have been on this podcast, you know. So these are some of the ways one can do it. Also, we were very happy to have the Singapore government, IMDA; they are very active in this field.

Setting up standards for Singapore, and, in fact, they have played a big role in the SCI for AI conversation, you know, that we are doing in the foundation. And what the Singapore government has been doing is the Green Software Trials program, where they're taking all the best practices, the things that work, and making them available to the ecosystem in Singapore.

Marc Zegveld: Nice. Yeah. 

Sanjay Podder: So small and medium industries can now use these practices. So, you know, these are great examples, very inspiring examples, all led by a few fantastic leaders. Right? You know, so, I think the change management is complex. That has been my first observation. But, you know, there are a lot of interesting things we can do around it to make sure that people absorb this.

Right. And this gets translated into a competitive differentiator, as you rightly pointed out. Now, you know, in a slightly adjacent area, Marc, we are living in a very interesting time: a time where there is clearly a sustainability challenge, and a time where technology is getting more powerful than ever before, right? With artificial intelligence, generative AI in particular, large language models, business is transforming. What do you feel will be the future of leadership in these changing times, where, you know, leaders will have to grapple with technology, the sustainability challenges, and the various other challenges that we are all seeing today? Right. Do you have any perspective on what kind of leadership we need in this new green world, if I may put it that way?

Marc Zegveld: Oh, that's a very difficult but highly intriguing question, Sanjay. And we're, I personally as well, trying to understand and work on that. And that's specifically, let's say, about what kind of dialogue you need to have within the board, indeed combining that competitiveness with IT

and, in fairness, more and more AI, both on opportunities, threats, and risks. What kind of leadership, but also what kind of dialogue do you need to have, and who should be in the boardroom? But if we distill what kind of leadership there is, or there should be, specifically with IT and now AI, we have something pretty unique and, in my words, invasive coming.

It is that, although we all talk about it, the number of people who truly understand what's going on, who understand not directly the technology, but its ramifications and implications for reputation, for a supply system, for relationships within the organization, is pretty limited. So we need to have,

and I think that's what we're gonna build up in the upcoming 5 to 10 years, a leadership who really leads and sets a pace, sets a direction, but combines that with having big ears and really being willing to learn and to listen, within the organization and around the organization, to what the new technology, what AI, what IT is doing, can do, what it brings to people, what the risks are, and what we can do about it. And I think, based on the last 50 years, that's not a standard archetype of leadership. 'Cause we are well acquainted with the people who know and set the direction and set the pace and set everything in motion.

And then everybody will start spinning and doing. Then we have the people, whether for a limited time, who said, "Okay, we have all these business units and you just take the initiative," from a capital allocation perspective, because we are convinced that the sum of the parts will be bigger than the whole. And now we need to have a leadership who sets the pace, sets the direction; but how we do it and how all these interrelationships work, specifically from an AI perspective, nobody will know beforehand. So we need to be very adaptive, and we need to be eager to learn, eager to listen, and to play with the information that we have.

And that's a, I would say, unique typology of leadership.

And definitely there are examples where we see it happen. But that would be my two cents now, based on where we are, the study, and our assessments.

So from your experience, Sanjay, what do you see from that leadership perspective? What will change?

Sanjay Podder: I think, given the dynamic times we are looking into, the days of command and control structures are over, right? And leaders, in some sense, more than being the face of the organization, have to be the coach of the organization. You know, because every organization will need more leaders given the pace of change and the number of areas an organization will have to fight on, which means you need more leaders. And the senior leadership will have to be the mentors, the coaches, and I think, in some sense, sometimes we call it leading from the back. But in the future, the model of leadership will be: not only do you lead in your role, but you create more leaders in the process, right?

That is one of the ways you create a team of teams in the process, right? And how do you bring in outside-in innovations? So, for example,

the Green Software Foundation is a great example, right? You know, where people are collaborating, trying to solve a common challenge, a planetary challenge like climate change. And then we are bringing those learnings back into the organization to fine-tune our own processes, right? So leaders will have to have an open mind. They will have to be like a mentor, a coach. They'll have to generate more leaders internally; everyone becomes a leader in that sense. So it's going to be a very interesting model, in my mind, given the dynamic nature of organizations going forward, compared to the traditional command and control that we have been used to. But again, you know, a lot of intellectual power is going into this. I'm sure people are learning new things every day with AI and everything else that's coming around us. We are still grappling with what impact AI will have on our lives and our work.

So it'll be an interesting observation. How does leadership change as a result of it?

Marc Zegveld: Oh, definitely. Yeah. 

But I think where you started with the Green Software Foundation, and we, from a TNO perspective, did our study, which is limited compared to what you and your organization have been pulling off, it's a great example of leadership where, from individual companies' and organizations' perspectives, common ground is found to learn from one another and to see where you can accelerate. And that's based not only on data transparency; it comes from all different angles. And I think that's a great example, let's say, for the future ahead as well.

If we look at it from an AI perspective or other perspective, if we talk about regulation, et cetera. I think this is a way forward. 

Maybe it's not the fastest way, but at least it's a way where indeed we're all able to cooperate, to collaborate, and to learn from one another, because we all know that if you need to do it yourself, it's too costly, too painful, too risky.

Sanjay Podder: Absolutely. So, Marc, this has been such a great conversation. For anyone who wants to dig deeper into the Green IT Value Case or connect with your work, where should they go? 

Marc Zegveld: Go to the website of TNO, tno.nl, and there you will find more information about the Green IT Value Case and the work that we do, specifically from the unit ICT Strategy and Policy. Yeah. And if you can't find it, please send me a note, Marc Zegveld.

Sanjay Podder: Great. So, Marc, we're at the end of this podcast. And if there's any area you'd like to talk more about, do let me know and I can explore it further. But I think we are fine for the day otherwise.

Marc Zegveld: Great. Thanks. It was a joyful hour, Sanjay. Thanks.

Sanjay Podder: So, well we have come to the end of our podcast episode, all that's left for me is to say thank you so much, Marc. That was really great. Thanks for your contribution and we really appreciate you coming on to CXO Bytes. 

Marc Zegveld: Great, Sanjay. Thanks for inviting me. 

Sanjay Podder: Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit 

podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now. 

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.


CXO Bytes
Green AI Strategy with Adrian Cockcroft
In this episode of CXO Bytes, host Sanjay Podder speaks with Adrian Cockcroft, former VP at Amazon and a key figure in cloud computing and green software, about strategies for reducing the environmental impact of AI and cloud infrastructure. Adrian shares insights from his time at AWS, including how internal coordination and visibility helped drive sustainability initiatives. He also discusses the Real-Time Cloud Carbon Standard, the environmental impact of GPUs, the challenges of data transparency, and the promise of digital twins like meGPT in scaling sustainable tech practices.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Adrian Cockcroft: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Stern Review - Wikipedia [02:26] 
  • OS-Climate [05:55] 
  • Amazon Sustainability Data Initiative [06:31]
  • Real Time Energy and Carbon Standard for Cloud Providers | Notion [12:47]
  • Real Time Cloud | GitHub
  • Software Carbon Intensity (SCI) Specification [27:21] 
  • Kepler | CNCF [27:49]
  • Measuring Carbon is Not Enough | Adrian Cockcroft [37:15]
  • Virtual Adrian Revisited as meGPT [40:15]
  • Soopra.ai [43:44]
  • OrionX.net 
  • Will AWS Have Anything New To Say About Sustainability at re:Invent 2024? (Nope…) | by adrian cockcroft 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Hi. Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development from the view of the C-suite. I am your host, Sanjay Podder. Today we are thrilled to have with us Adrian Cockcroft, a pioneer in cloud computing and a passionate advocate for sustainability and sustainable tech practices.

Adrian has been at the forefront of transforming software practices, driving the adoption of greener and more efficient cloud solutions. As a prominent figure in the Green Software Foundation, his insights are invaluable for anyone looking to build scalable and eco-friendly tech infrastructures. Adrian, welcome to the show.

Kindly introduce yourself.

Adrian Cockcroft: Thank you very much. Thanks, Sanjay. I'm Adrian Cockcroft. I'm a consultant and analyst currently at OrionX.net. Happy to be here. I retired from corporate life at Amazon, where I was a VP, in 2022. And nowadays I'm an advisor to several companies, from fast-growing global public organizations like Nubank to emerging startups like NetAI.ai, and various other small things that I dabble in in the startup space.

Sanjay Podder: Wonderful, and we'd like to hear more about all this. Before we dive in here, a reminder that everything we talk about will be linked in the show notes below this episode. Adrian, you have had an illustrious career in cloud computing and sustainability. Could you start by sharing what inspired you to focus on green software and how your journey led you to your involvement with the Green Software Foundation?

Adrian Cockcroft: Yeah, we have to go quite a long way back. Somewhere in the early 2000s there was a report, I think it was called the Stern Report, on the economic impact of climate change. And around that time we saw a big rise in climate denial and a big attack, what I saw as an attack on science, good science. I have a physics degree, so, you know, going back a very long time, I see myself as a scientist, someone that's able to look at a bunch of science and decide, "this makes sense, this doesn't make sense." And what I saw was that the denialist arguments were incoherent.

They'd argue different points depending on who they were talking to. It just didn't add up. Whereas the scientific arguments were consistent. And worrying, right? We were on a path to a lot of problems, which are happening now; we haven't been addressing them fast enough. So that was the initial thing.

It was nothing to do with my work at the time. I was probably at eBay at that time; I joined Netflix soon afterwards. So I was working on some personalization things with Netflix, and then working on the architecture for migrating Netflix to AWS. And then I sort of dived in a little bit personally: I've had solar panels since 2009 and electric cars since 2011. So I tried to act as a bit of a "put your money where your mouth is" person, someone happy to be the early adopter and figure things out. And then after I joined Amazon in 2016, I could see Amazon actually had quite a lot going on that was related to climate and efficient use of energy and things like that. But they weren't really telling that story. And I tried to see if I could get involved in any way, and it took me a little while to do that. What I found was, you know, I was basically the vice president in charge of the open source program, and also out there acting as a sort of evangelist, explaining to people how you should move to cloud.

Doing lots of public speaking; I keynoted some of the AWS summits and things. But there was no messaging around sustainability in the standard PR-approved corporate messaging, which, you know, is what you have to follow. So the challenge was: how do we get PR to include the messaging in the standard,

you know, so that everybody knows what they can say and has the information to back it up? AWS has a very strict PR policy. It's very managed, and you have to get them on board and build all of the right content and get everyone lined up to do that. So that was the challenge.

And then I found that somebody was trying to join a standards group called OS-Climate, which is a Linux Foundation organization, as is the GSF. And I'd previously been involved in the work to get AWS to join the CNCF, the Cloud Native Computing Foundation. I was the initial board member for AWS,

representing them at CNCF and in the whole AWS and Kubernetes arena. So I understood how to join a standards body, basically. And so I helped Amazon join OS-Climate. And that got me much more involved in the sustainability organization, because it was open source climate information being shared, mostly for financial analysts to do risk analysis.

It's a very specific thing. So you can go look at OS-Climate.org if you want to see what they've been doing since. This would've been in about 2020, around that time. And that was related to something called the Amazon Sustainability Data Initiative, which is a whole lot of climate-related data that is shared for free on AWS, one of the programs that most people hadn't heard of.

And eventually I managed to make an internal move to the sustainability organization, because they realized they needed to gather together everything that was going on. What was happening was customers were asking salespeople, "What are we doing about sustainability?" And the salespeople were either making something up or calling random people in the sustainability organization.

And that organization's job was to do the carbon footprint for all of Amazon; it wasn't to talk to AWS customers. Right? And so the VP that runs that organization, Kara Hurst, basically created a position for me as another VP to move across to that group, to gather together everything that was going on across AWS and act as a sort of funnel for all of these incoming requests and information, and make sense of it.

So this is actually kind of an interesting problem, right? If you're running a corporation, you find that there's a groundswell of enthusiasm around climate, and it's driven by, you know, kids coming home from school and saying, "So what are you doing to make the world a place I can live in when I grow up?"

Right? Everything from that to board members, suppliers, legal mandates; there are many reasons why people have a need to either be greener or to manage and report their carbon information. And it was popping up in all different directions. So what we did, there were a couple of tricks.

One was that we started an internal email newsletter that went out every Monday. This is a very powerful trick. It's a pain in the neck to actually do every week. It's like, "Damnit, I gotta write this stupid thing and get it out." I did it for a while, and then, luckily, I managed to hire someone to do it for me.

But what it did was say, well, here's a group and what they're doing. And then at the end of the email was a list of every group I could find, roughly what they were doing, and who to call. Like a "who you gonna call" list, right? And that email got passed around, and people would say, "Well, I'm not on that list."

So it gradually accumulated this long list of all the stuff, and if you ever read this message and looked at the end: "Wow, there's a lot of stuff going on." So it makes work visible, which is one of those principles of management, right? Make work visible.

 It made the work that was going on visible so people could see just how much stuff was going on.

And each week we'd feature a different one of these groups, with a little bit more detail on what they were doing, or any actual announcements that were going on publicly, like, say, some more wind farms being announced, or, as we got to the re:Invent conference, some things we had going.

So that was one piece of gathering everything, getting everyone on track, and generating enough internal momentum. Amazon and AWS are a very distributed organization; there are lots of very independent teams, so the challenge is always doing one central thing. That's the hard thing to do at Amazon corporately.

So this was, you have to have a technique that gathers it together. So we got that together. We also had a principal marketing manager working for me, and she worked with the Amazon re:Invent team to create a track at re:Invent that is branded sustainability. So every year you can go look for sustainability-related talks, in all different areas, but there are like 20 or so talks every year.

So creating that track was also a big argument; eventually it was "yeah, okay, we'll do it," because there are only so many different tracks they will create. And then there was also this Well-Architected program for how to do software better on cloud: well-architected for, you know, cost optimization, efficiency, security, all these different things.

And we pitched that there should be a sustainability one, got somebody to write it, and got it through the system. I'd edited a little bit of it, but mostly I was sort of the corporate sponsor, getting it through the process of getting it released and keeping it all on track. So we got that released. We did a talk at re:Invent where we announced it.

And then finally there was the customer carbon footprint tool, which a team was already trying to produce. They knew they needed to produce it because various customers needed this data, and it was being released to them in, you know, dribs and drabs under NDA. "Oh, here's some numbers." Right? But there was no standard way of getting this data.

It was being done on a one-off basis, which doesn't scale. So there was at least an attempt to put together a basic tool with the information you need, basically accounting data. Like, if you're doing your annual carbon footprint, you need to know how many tons to buy offsets for, that kind of thing.

So that was really only for that purpose. But as somebody with a developer background, I always wanted something that was more real time. If I'm running a workload, "What's the carbon footprint of this workload?" was something I was very interested in. But the annual, you know, accounting information is much more something a CFO cares about.

But that data is not that useful for doing individual workloads. So that was roughly where we got to. I'd been at AWS for about six years and I was kind of done with it. And I felt, for various reasons, some personal and some not, that it was the time I wanted to retire.

So I retired from Amazon in 2022 and sort of left behind what we'd got done. And then I basically became an independent consultant and started talking about green software and things like that, and talking to the Green Software Foundation. And eventually we decided, you know, that it would make sense to make a proposal.

So I proposed the Real Time Cloud project, and I think it was 2023 when we started doing that. So that was a long answer, but hopefully that's useful.

Sanjay Podder: I think that's a fantastic answer. A lot of things to learn from there. And since your time outside of AWS, with the GSF and other groups, have you seen sustainability in tech become more of a first-class citizen, or is it still an afterthought? You gave an example of PR where, you know, and I often see this even today, many organizations forget to highlight the work they're doing on the sustainability side.

And in many cases they are doing it, but they miss it completely. And you gave a good example of the awareness you created just by collating all the good work together: suddenly you see, wow, you know, we are actually doing a lot of work, but nobody paid attention to it. So do you see this changing with time, as people become more aware of the sustainability dimension of tech and make it a first-class concern rather than an afterthought?

Adrian Cockcroft: I think we've seen some movement in that area. There are quite a few companies that have a public position, which is that "we are green," whatever. You know, Apple is a good one, for example. They have very clear public messaging, and they need to do all of the work to back that up. So I think that's what drives it, right? At a corporate level, you want to take a position on this company being green as a corporate thing, right? So you need to have the data to back that up, and then it sort of flows down; everyone cares about it. If it's not one of the priorities that the executives talk about, then, you know, you can do stuff around the edge, but you're basically being driven by regulatory, supplier, and employee enthusiasm, right? And so there's some level of that. And I think with Amazon there's the Climate Pledge program, which they've been pushing, which is basically, you know, we say we're going to be carbon neutral by 2040 rather than 2050.

So it was an acceleration of the Paris Agreement. And that was sort of the core public thing, going out to get people to sign up to it, and, you know, Amazon itself sort of working to that goal internally, which drives a lot of internal activity. But the frustrating thing was it was hard to get projects that were going on internally that you could talk about publicly. It was very difficult, and we sort of had a big battle to get out what we got out through the PR filter. And people like taking pot shots at big companies, so the PR organization is very gun-shy around this topic, because it keeps being a negative, right?

And unless you can come out with a story that's so strong that it becomes a positive, if you're in PR, you just avoid stories that are a negative. So, I mean, they're rationally doing the right thing for the company, but it's very frustrating, because there is actually quite a lot of good stuff. But it's hard to assemble it into a really big positive story when, at a corporate level, it's a thing that Amazon does, but it's not as central as it is to some other organizations.

And sort of the position AWS has is: we're gonna buy enough green energy, you know, building wind farms and whatever, that you don't need to worry about this. We'll just take care of all your carbon for you, right? Like, we'll take care of the data centers for you. It's a concern that we will deal with for you.

And that works for some people, but I don't think it's enough for what the general discussion is. And it also means that it's hard to get enough information to, say, optimize a workload. So I have a thing I need to run for my company; I could choose where to run it.

I can choose which cloud provider, which region, which country maybe, and if one of the objectives I have is to do that in a green way, then I need some information: which data center, which country, which provider? What's the difference going to be? And that's the kind of information we've been trying to gather, so that on a region-by-region basis

you can compare things like PUE, power usage effectiveness, and water usage, and the carbon offsets they have, you know, how much carbon-free energy is being generated in that region, things like that. So you could make that comparison, and that's really what we ended up producing in the GSF's real-time carbon work,

the Real Time Cloud group, right? So it took all of the data we could find from all the cloud providers, which is a huge mess (different models, different standards, different ways of looking at things), and tried to put it into a common format. And then the other thing is that data about regions is quite old.

It's a year or two old. So we've also come up with estimates for now: if you're trying to measure what a workload would be like today, we trend some of the metrics and estimate the others, to work out what today's data would look like.

Sanjay Podder: Two questions come to my mind. One is, in the time that we live in, AI is fast turning out to be one of the most important workloads in the data center, and some of the more popular models are closed source, which means that very little is known about how big the models are. If you have to green them, do you see the same challenge you were articulating earlier, with not as much information available? Is that problem getting more compounded, because there may be more things that we don't know, so that greening the whole thing becomes a challenge? What's your perspective on that?

Adrian Cockcroft: Yeah. I mean, AIs have been around for a while, but the recent emergence of LLM-based AI, the explosion of it, is causing multiple things to happen. One is a real shift toward GPU-centric computing, which is much more energy intensive.

And it turns out that although it's very difficult to get real-time energy for the CPUs, particularly in a cloud environment, the way people run GPUs, you go to the NVIDIA interface and it tells you how many watts it's using, in real time, once a second. So there is actually very good energy data available for GPU workloads. And the GPU is dominating. So if you've got that data, you could add a percentage for everything else, and it will give you a pretty good basis for the energy being used by a workload, measured in real time. So that's actually quite helpful. The reason that CPUs don't provide energy information is that they're usually virtualized, and there are effectively security issues around being able to measure the energy use of a system when you're actually just a VM running on it, right? They don't have the energy on a per-VM basis. But GPUs are normally used as entire GPUs, and you can find out the energy usage of them.
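The approach Adrian describes, sampling GPU power and adding a percentage for everything else, can be sketched as below. The `nvidia-smi` sampling command in the comment is real, but the one-second interval, the 10% uplift, and the 400 W figure are illustrative assumptions, not values from the conversation.

```python
# Sketch: estimate workload energy from periodic GPU power samples.
# On NVIDIA hardware, samples could be collected with, e.g.:
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits -l 1
# which prints one instantaneous power reading (watts) per second.

def energy_kwh(power_samples_w, interval_s=1.0, overhead_factor=1.10):
    """Integrate power samples (watts) taken every interval_s seconds.

    overhead_factor is an assumed uplift for CPU, memory, and cooling
    around the GPU-dominated workload (Adrian's "add a percentage").
    """
    joules = sum(power_samples_w) * interval_s      # W * s = J
    return joules * overhead_factor / 3_600_000     # 3.6 MJ per kWh

# One hour of a GPU pinned near 400 W:
samples = [400.0] * 3600
print(round(energy_kwh(samples), 3))  # 0.44 kWh including the uplift
```

The integration is deliberately crude (rectangle rule at a fixed interval); for real accounting you would time-stamp each sample and handle gaps.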

So that's one piece of this. Can you measure it? Okay, we can measure it actually really well. In fact, some people tune their AI workloads by power: they tune until it's running at maximum power, because that's how they know it's doing maximum flops, right? If it's running at low power consumption, it means it's not running efficiently, which is a bit perverse, but it makes some kind of sense that you want to use your hardware efficiently.

So that's one thing. But then what we've found is that there are now huge data centers being built, and this wasn't part of the plan a few years ago. If you're planning data centers and the energy infrastructure to support them, that planning is on something like a three-to-five-year horizon; maybe two years would be quick.

You know, three to five years is normal for planning out where you're going in terms of energy, putting up buildings, and doing very large-scale infrastructure. It takes time. And this just got turned on its head, and all of a sudden there was a shortage. So there's a shortage of buildings and power, and I think it'll come back into alignment.

Probably in maybe three to five years, we will be in a new steady state where we know where everything is, you know, we have enough energy to do what we need to do. But in the very short term, there was a sudden increase in the amount of energy needed for data centers, and this comes on top of a rapid increase in electrical energy for cars and space heating, right?

If you look around, those are the three big new drivers: we are switching from gasoline and methane, basically, to electricity. So we already knew we needed a lot more electricity, and that's been driving investment in energy sources. But the AI data center has caused a very rapid increase in a very short period of time.

So that's a problem. And what we've got, effectively, is that there's going to be less clean energy for a few years while we catch up. You can't just stand up wind farms in three months; it takes too long. So that's the second thing that's happened with AI. And then the third thing is,

can you use AI to help?

And I think the main problems we have are just lack of data and the fact that everything is very messy. AI might be able to help here and there. But I don't think that AI is really, I mean, there are people selling tools that will use AI to help you optimize your carbon footprint and things like that.

But I think the things you need to do are pretty obvious. AI is going to help at some point as an optimization, but it's not the main driver. The main driver is wanting to do it in the first place and being able to get measurements out of the system at all. Right? If you can do that, you're most of the way there; it's pretty obvious what you need to do.

You don't necessarily need AI to tell you to make all your computers run twice as busy so that you need half as many of them. All right, that's kind of the obvious thing to do: work on utilization. Most people have very underutilized systems. If you can increase utilization, you save money.

'Cause if you use half as many computers, you pay half as much and it's half the carbon footprint. And people just seem to accept wasting, you know, leaving CPUs and GPUs idle when they should be kept busy. Or, if you're on a cloud provider, you should be giving them back so somebody else can use them.

Sanjay Podder: Absolutely. Coming back to the Real Time Cloud carbon standard project that you have been driving. How real time is this real time? Because, as we know, the hyperscalers' reporting is not of the same granularity, both in terms of frequency and what they report.

So how are we ensuring that we bring some amount of uniformity when we talk about real time cloud carbon standard?

You know, there may be a lot of unknowns. I think you even spoke about some of these challenges in your earlier role. So how are you trying to address this gap?

Adrian Cockcroft: I'd say the word real time in there is aspirational. What we'd like is, in real time, meaning I am running a workload: I want to know what the energy use of that workload is, and what the carbon footprint of that workload is now. And if I'm trying to predict a workload, I need enough granularity to be able to do that.

And like I said, you can kind of do that with GPUs, because you can get real-time data out of them. But in general, we went to the cloud providers and said, "we'd like this data," and they said, well, it's too expensive to build. And it's not just expensive in cost; the carbon footprint of adding additional metrics and instrumentation is not zero. And then you have to look at the number of people that would use it and the amounts they would save; it's got to be used pretty universally. But we have cleaned up some of the data that they do have. So I think the main point of real time is to make it relevant to somebody trying to run a benchmark.

In particular, if you look at the SCI, the other GSF standard, I want to generate an SCI number for a workload. That means I need to know what the carbon is for that workload, right? In real time, so that I can have an SCI number. That's what we're trying to do: if you're running that workload in the cloud, then you need to gather enough data that you can at least estimate what you're going to get.
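For context, the SCI that Adrian mentions is defined in the GSF specification as carbon per functional unit: SCI = (E × I + M) per R, where E is operational energy (kWh), I is grid carbon intensity (gCO2e/kWh), M is amortized embodied emissions, and R is the functional unit (per request, per user, and so on). A minimal sketch, with purely illustrative numbers:

```python
def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    """Software Carbon Intensity: (E * I + M) per R, in gCO2e per unit."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Illustrative only: a workload using 2 kWh on a 400 gCO2e/kWh grid,
# with 100 g of amortized embodied carbon, serving 1,000 requests.
print(sci(2.0, 400.0, 100.0, 1000))  # 0.9 gCO2e per request
```

The hard part, as Adrian says next, is obtaining E with any real-time fidelity; I, M, and R are comparatively easy to source.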

The CNCF has a project called Kepler that works with Kubernetes. So if you have a Kubernetes namespace that defines a workload, you can find all the pods in that namespace, and it'll estimate the energy use of those pods as a subset of the energy use of the nodes they're running on, and give you the best guess of a real-time number for energy.

And then you can go look at the region that you're running in and say, okay, that region has, whatever, 80% carbon-free energy, meaning that that cloud supplier gets 80% of its energy from some mix of wind, solar, and battery, with the rest from the grid, right?

Or you can decide you just want to look at the grid. It's up to you whether you want to do market based or location based kind of numbers. Different people have different reasons for doing these things, but the data is all there to come up with an estimate for what is the carbon of a particular scenario that you're looking at.

So in that sense, that's why it needs to be real time, as opposed to the sort of annual accounting data, the non-real-time stuff, if you like.

Sanjay Podder: Makes sense. And do you see, in future, that you'd also like to extend it to other environmental resources, like, say, water? Where a lot of times people are concerned about the use of water for cooling, for power generation. So, are there any plans that, you know, we will have some kind of similar standard that does not talk just about carbon, but also about water?

Adrian Cockcroft: Yeah, we do have water in there. The data is published by some of the cloud providers, and there are two metrics for water. One is water usage efficiency (WUE), which is basically liters per kilowatt-hour, right? The other one is replenishment rate, which compares the water coming in versus the clean water going out, right?

Because you have wastewater coming out, and it's a little odd, because you can bring in dirty water, clean it up, and put out clean water, which means your replenishment rate is greater than one. Right? So what they care about is the amount of clean water that comes out versus the amount of water going in.

So it's a much more complicated metric. And I mean, all of these metrics, when you dig into them, get complicated, right? Carbon's the same. But water has these two characteristics: the water flowing through, and then how much it's related to the energy usage.

So there's an efficiency thing, which is really related to how efficient your cooling system is. And then the water treatment side is about where you're getting your water from, how much you're using, and where it's going. And there are definitely some plants out there, some of the AWS ones, that take in dirty water from industrial sources and put out water that's clean enough to be used directly for irrigation in farming. That's effectively defined as clean water, right? They take out all the pollutants as the water goes through the system. So that's a nice thing. Others, some of the older data centers, are incredibly inefficient.

They take in masses of water and just boil it off, and have terrible replenishment rates. If it wasn't something you were looking at, it will typically be bad. I'd say data centers built in the last five years are much better; it's the older ones that are worse. So it's kind of odd, because if you look at a data center, it might be very good on carbon but very bad on water.

Or vice versa. They're just not correlated, really. But the latest builds are good on both. Right?

If you're building a brand new data center today, it's likely to be very good on water usage and power usage efficiency, and low carbon. Particularly the big ones being built for running these big GPU environments. You shouldn't take average numbers from a few years ago and apply them to those data centers, because the cost of the water and the energy is very high, and they're optimized to use as little as possible. So we're seeing some very clean systems there. And then energy sources are another area that's quite interesting.

There's a whole lot of innovation right now in terms of alternatives to wind, solar, and gas, basically.

Sanjay Podder: Adrian, in your opinion, how far are we from being able to express this not just at a data center level, with water usage efficiency, but at a workload level? Something similar to the SCI, right? For example, if the workload is AI being used for fraud detection, and I say it's x liters of water per hundred fraud detections. Now we are talking at a different level of abstraction than WUE. How easy is it to do that?

Adrian Cockcroft: Yeah, you could, because, you know, say you run a number of these fraud checks and that uses a kilowatt-hour of energy, right? The kilowatt-hour is the energy. And then you can say that used, you know, half a liter of water, and its carbon footprint was, you know, 30 grams or something, right?

So those numbers are directly available once you know the energy of a workload. The trouble is figuring out the energy of the workload; that's part of the problem. And then the other question is where you get the numbers from to give you your water and energy, and how accurate you want them to be.

Because a lot of these numbers are annual averages. And if you want something that's much more specific, if you're trying to optimize hour by hour, an annual average won't show that. So if you're doing very tight optimization, you want to be using, say, hourly data, and that's where you start digging into much more complex environments.

But ultimately I think that, as we get better at doing this, we'll end up doing this sort of fine-grained, real-time optimization, minute by minute, hour by hour, rather than trying to do stuff at the monthly or annual level.
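The "liters per hundred fraud detections" idea discussed above follows directly once workload energy is known, exactly as Adrian describes. All inputs below are placeholder assumptions for the sketch: an assumed energy cost per batch of checks, a WUE in liters per kWh, and an annual-average grid intensity.

```python
def footprint(energy_kwh, wue_l_per_kwh, intensity_g_per_kwh):
    """Water (liters) and carbon (grams CO2e) for a given amount of energy."""
    return energy_kwh * wue_l_per_kwh, energy_kwh * intensity_g_per_kwh

# Placeholder figures: 100 fraud checks cost 1 kWh, the data center's
# WUE is 0.5 L/kWh, and grid intensity is 300 gCO2e/kWh (annual average).
water_l, carbon_g = footprint(1.0, 0.5, 300.0)
print(water_l, carbon_g)  # 0.5 liters and 300.0 g CO2e per 100 checks
```

As the conversation notes, using hourly rather than annual-average values for WUE and intensity is what turns this from reporting into something a developer can actually optimize against.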

Sanjay Podder: Right. Yeah. That would be nice, right? Because that makes it very actionable for the developer community to reduce emissions. If they know, I am emitting x tons of carbon dioxide, using y kilowatt-hours of energy, and using z liters of water per hundred fraud detections, can I lower it?

Or, you know, per customer supported. So it becomes very actionable for people to track. Hopefully we'll reach that state very soon.

Adrian Cockcroft: Yeah, somebody did an analysis of ChatGPT, because people have been very worried about it, and I forget the exact numbers. I think we should be able to find the story, but it was a totally trivial amount. Even if you do lots and lots of queries on ChatGPT in a day, it's still a very small amount, you know, a few grams of carbon and a few milliliters of water, and it's much less than, you know, going to the bathroom or drinking a cup of coffee, right? So you have to have a sense of proportion sometimes on these things. The training workloads are huge, but what we care about is how efficiently the inference workloads run,

and how often people are using them. So you have to be a little careful and not get carried away with the numbers, and look at how it relates to something else you're doing, right? If having a meeting to discuss saving carbon uses more carbon than the carbon you were going to save, then it doesn't make sense.

Right? Flying internationally is probably the biggest use of carbon that we have on a personal basis. It's something like a ton of carbon to fly from the US to Europe in an economy seat. It takes an awful lot of other things to add up to that much.

So one of the things that I think people tend to lose is their sense of proportion, because there are too many big numbers floating around. I did a blog post on consequential analysis as well. That came out of talking to Henry from WattTime, who really gave me a lot of feedback to help me understand this.

And as I was trying to understand this, I wrote it down. So I ended up with a post on trying to understand the consequences of what you're doing, which is part of understanding the bigger picture: not just how much this thing here consumes, but how much impact it has on everything around it.

And what boundary you draw. If you want to really say you're saving the world, then you have to think about the world as the boundary. Whereas most people are talking at a corporate level: "this is our corporate footprint." And just because you reduced your corporate footprint, you don't know whether the carbon you saved caused extra carbon to happen somewhere else. It's the kid's party balloon animal problem: you squeeze one leg and the other leg gets bigger, right? A lot of that happens, and a lot of double counting and missing things.

So it's just a big, messy area, and I think that's the hardest problem. I think we can directionally say that there are things we do that make it better. Whereas if you try to come up with very detailed measurements, you get down rat holes that become unproductive fairly quickly. So the biggest thing you can always do is just run more efficiently, use less, and that's always going to be better.

Sanjay Podder: And be carbon aware, right. In terms of...

Adrian Cockcroft: Yeah.

You know, think about where you're doing things. We still have the problem that most of the high-carbon regions are in Asia,

Sanjay Podder: Yeah. 

Adrian Cockcroft: depending on which cloud provider and where in Asia. But Europe and the US are pretty low carbon now, and Asia is going to take another 5 to 10 years to clean up.

So it is just kind of a phasing thing. For the next few years, if you can choose where to put a workload, say an archive backup, and you want to put an archive in another region, put it in Europe, because that's likely to be the lowest-carbon place to put your archives for backup purposes, right?

If you leave them in Singapore, you're going to find that's high carbon. 

Sanjay Podder: There's a very good article published very recently in MIT Sloan Management Review on the math of AI, with a lot of input from recent work done by Sasha from Hugging Face, as well as Boris from Salesforce. And it gives you different scenarios showing how the emissions and water can very quickly snowball into big numbers as we look at the growth in the sector.

Right? Yeah. I also saw that you recently wrote a blog post, Virtual Adrian Revisited as MeGPT. What was the thought behind this personal digital twin, and how do you see digital twin AI tools like MeGPT contributing to sustainable software development practices?

Adrian Cockcroft: Yeah, so I mean, I've been writing code for a very long time, and one of the ways of understanding a new thing when it comes along is just to try to use it, right? Just try to build something with it. So I wanted to get my hands a little dirty doing some work, and I've been coding in Python using what they call vibe coding now.

Basically telling the Cursor, Claude thing I have, please write me some code that processes a YouTube playlist into individual videos and stores them as .json, blah, blah, blah, right? And it goes and writes that code in about 60 seconds, and then you run it and it works, right?

You point it at a YouTube playlist and it prints out a bunch of resources that you can then share and ingest into an LLM, something like that. So that part of it was me playing around. And part of it is that I've got about 20 years' worth of content that I've produced. I'm sort of a developer advocate.

My actual title at Amazon was VP of Evangelism for one of my roles. And the job was to go out and tell stories. So I have massive amounts of video and podcasts and presentations and all these things, right? And it's there to try and influence, to spread ideas. I'm not trying to monetize it; some people say, I don't want AI using my content because I'm trying to monetize it.

That's one problem. I'm trying to spread these ideas, so the more they get spread, the better. The easier I make it for the LLMs to understand my content, the more influence I have in the world. So I'm looking at it from that point of view. And this is like a marketing point of view.

If you want to spread information about your product, you might want to build an expert for your product, really: all the documentation and examples and things. How do you teach the LLM to use that, so that the LLM knows how to use your product versus somebody else's product when somebody says, "Hey, I need to solve a problem"?

That's the area. And part of that corpus of data includes all the things I've written about carbon and optimization and performance tuning and all the other things I talk about, from everything from corporate innovation to DevOps to whatever, right?

 All the software, architecture, cloud, migrations, all those things.

So that information is all basically indexed by this MeGPT. If you go to GitHub, adrianco, which is my GitHub account, it's under MeGPT. And the idea there is that, as an author, I have sort of a virtual Adrian co-author containing all my information. You build it and you end up with an MCP server that you can attach to your LLM, and then it will know how to query the body of content I have. And then there's a company called Soopra. There are several companies, but the main one I've been working with is called Soopra, at soopra.io. And they have a persona-based system where you load your information into it, and they generate a persona that you can then query and have conversations with.

And it answers questions as that person. So it sort of follows your voice a little bit. In my case, what it does is pull out all the information from these blog posts and things I've written. So it's a way for me to scale. I mean, you work for a consulting organization; they always say consulting doesn't scale, right?

You have to hire more people. So in some sense, this is a way of making consulting scale. As we get better at capturing the knowledge of a consultant, somebody like me with a 40-year career, I can dump what I know into this system and then people can query it, and I no longer have to be there in person.

So in some sense I'm sort of sharing that information.

Sanjay Podder: Right. You have your digital persona. I was about to ask where people can learn more about you and your work. Looks like you have already defined a lot of digital personas to help people easily understand more.

Adrian Cockcroft: Yeah. I mean, you can find me on LinkedIn relatively easily, and if you go to orionx.net, we have a monthly podcast where we talk about what's going on in the industry, things like, you know, CoreWeave's stock price going up like crazy, or what's happening with green energy sources and Bitcoin.

It's not all my expertise, but one of the other people at OrionX is deeply into that area: quantum computing, all these things. So we have an interesting little group of analysts; there are four of us who chat about stuff once a month. So we have a podcast, and you can find that at

OrionX.net, along with links to myself. And if anyone wants to chat with me about, you know, tuning up their workloads, or help figuring out a better carbon strategy, I mean, I'm sort of semi-retired, so I'm not looking for work on a daily basis, but I'd be open to interesting opportunities to work with people.

Sanjay Podder: Wonderful. So I think, I guess we have come to the end of our podcast episode and all that's left for me is to say thank you so much, Adrian, and this was really great. Thanks for your contribution and we really appreciate you coming on to CXO Bytes.

Adrian Cockcroft: Thank you. That was fun. 

Sanjay Podder: Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now. 

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

4 months ago
47 minutes 25 seconds

CXO Bytes
Navigating AI Risks with Noah Broestl
Host Sanjay Podder brings Noah Broestl, Associate Director of Responsible AI at the Boston Consulting Group, to the stage to explore the rapidly evolving landscape of generative AI and its implications for business leaders. Together, they talk about the requirements of present and future AI governance frameworks, the road to sustainability in AI, and how the emerging risks of Generative AI are shaping the future of responsible technology. 

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Noah Broestl: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • GenAI Will Fail. Prepare for It. | BCG [10:12] 
  • Software Carbon Intensity (SCI) Specification | GSF [25:52] 
  • 2024 Green Software Foundation London Summit: | BCG [33:36] 
  • Responsible AI | Strategic RAI Implementation | BCG 
  • Scale GenAI Responsibly and Confidently with Human + Automated Testing and Evaluation | BCG
  • GitHub - BCG-X-Official/artkit: Automated prompt-based testing and evaluation of Gen AI applications | BCG 
  • Analyzing Cultural Representations of Emotions in LLMs through Mixed Emotion Survey | Shiran Dudy
  • OECD Artificial Intelligence Review of Germany | AI Accountability in Germany 

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and the big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development, from the view of the C-suite. I am your host, Sanjay Podder, and today we have an exciting discussion lined up on the challenges and opportunities of responsible AI. Joining us today is Noah Broestl, partner and associate director of Responsible AI at Boston Consulting Group. With a career spanning Google, the Oxford Artificial Intelligence Society, the US Air Force, and now Boston Consulting Group, Noah has been at the forefront of AI safety, responsible AI, and technology-driven sustainability solutions. At BCG, he helps global businesses develop robust AI frameworks that balance innovation with responsibility. He is also a steering committee member of the Green Software Foundation, working on initiatives to ensure AI and software development are aligned with sustainability goals. Today we'll explore how AI governance frameworks, sustainability in AI, and the emerging risks of generative AI are shaping the future of responsible technology. Noah, welcome to CXO Bytes. Before we get into details, can you please introduce yourself?

Noah Broestl: Yeah. Thank you so much, Sanjay. My name is Noah Broestl, partner and associate director of Responsible AI at Boston Consulting Group. Super excited to be here today. I think you've covered my background pretty well, Sanjay, and the things that I'm working on now. But I've been thinking about responsibility and technology, and working directly in that space, for well over a decade.

And so it's been a very exciting journey and as you're going through Oxford Artificial Intelligence Society, US Air Force, you know, a lot of really great memories working in all those spaces. But, you know, at BCG, really focused on helping people who want to be responsible in the deployment of technology

navigate what is effectively one of the most complex landscapes I've ever tried to operate in. We have technology laboratories that are producing, on a weekly basis, what are at least claimed to be breakthrough technologies. We have research laboratories, both academic and industrial, that are producing amazing frameworks, amazing tools for responsibility. We have government entities, all over the US and all over the world, that are producing guidance for how these technologies should be implemented. But bringing all of those things together is really challenging, and that's what motivated my move from Google to BCG: to be closer to helping people. Now that we're seeing these technologies really have impact in organizations and in commercial applications, helping organizations navigate any of that really does come down to, I think, two pillars: how do you govern this inside of organizations, and how do you build responsibility into the product development lifecycle? How do you enable your engineering teams, your product teams, your business teams, to really integrate responsibility as a component of the development lifecycle, rather than as some stage gate that happens at the end of development prior to launch?

So, very excited for our discussion today. I'm sure we're gonna go a lot of interesting places.

Sanjay Podder: I'm really looking forward to learning a lot of things from you, because I see that your interest in this topic goes back more than a decade, probably right from your education, through Google and, you know, BCG. Just a quick question: why? What got you interested in this?

Noah Broestl: Yeah, I mean, it's one of those questions. So first off, if you're ever at a cocktail party and someone works in responsible AI and you want to get them excited, ask this exact question: what was the moment in your career that was the turning point, where you went into responsible AI? Certainly all of the pathways that lead here, I think, are fairly winding. Like, you know, I started as a computer science major; I ended up getting degrees in sociology and history and law and data science and ethics, eventually. But I think I can point to one place in my career where I started asking questions about the intersection between technology and society.

And one of the most gratifying roles that I've ever had was working in abuse response. So working on a major product with over a billion users and thinking about how do we protect these users from vandalism and fraud, how do we make sure that this product is trustworthy and useful to the people that we're providing it to? And you do that for a certain amount of time and you start to ask, I think, questions about why we're doing the things that we're doing, right? So if we take, for example, like for protecting a product leading up to an election, there are a couple of strategies you can take and one of them is just turn off the tap, right?

Like, just stop any user content from being posted to the platform. Identify high risk places and say, alright, we're not going to accept any UGC on this. We're going to heavily curate these features in our data set, and we're going to allow them to just sit there and not have any of the other components that you may need for modern platforms that are going to increase freshness of data and make sure you have, you know, the most up-to-date information.

So that is one strategy. You can shut everything down and protect it. Another question that you could ask there is, don't we have an obligation as a transnational platform, as a global platform, to amplify the voices of people expressing dissent with elected officials at the time when they have the most agency?

I think that's another perspective you could take on this. Right. And if you shut off the tap, are you impacting the way that people are approaching their, yeah, I guess, you know, expressing dissent with elected officials, like, are those perspectives important to amplify or are they important to protect?

And that question for me was the hinge that my career turned on, is how do we answer that question? And to answer that question, I had to make this shift from looking at my career as progressing through infrastructure engineering and technical program management in infrastructure engineering,

and I made a shift to, first academically, I moved from a degree in data science that I was working on and went into ethics. I said, "I need to go understand how we make decisions about what we ought to do." And that degree in ethics, I did my masters at the University of Oxford at the Uehiro Center.

I was hoping that I would learn what I should do. I think I probably more accurately learned how to poke holes in what other people were doing. So I'm not sure it really gave me everything that I was looking for. No, but it was a fantastic way to understand: how do we approach decision making in these complex spaces?

And then secondly, the way that we're using artificial intelligence, and certainly this pivot in my career was almost a decade ago, when I really got deeply into this, the artificial intelligence technologies that we were employing were pretty crude versions of what we see now. And so I had a lot of questions about, you know, what is the direction that artificial intelligence is moving in and how should we be prepared for the next evolution of these technologies? And so I moved into research, into AI research, and tried to get as broad of a perspective as I could on how those technologies were evolving and where we could deploy them, deeply understanding how they would be able to integrate with sociotechnical systems in the future.

And I remember at the time I said to my manager, "Hey, I'm going to shift my career to think about ethics and artificial intelligence." And he said, "Oh, that's cute. You're never gonna make any money there, but if it makes you happy, go ahead and do that." And so it was definitely the point where my career shifted.

Definitely the point where I saw a problem, wanted to investigate that problem deeply, and that led me through all of the other things that, you know, the working in responsible AI research, the leading safety evaluation for generative products and eventually landing here at BCG, leading responsible AI.

Sanjay Podder: Fantastic. Very interesting and inspiring, I should say. I'm sure you are super excited with the growing adoption of generative AI, though the risk landscape has expanded. You know, you spoke about AI safety, you know, hallucination; it's no longer just about bias and explainability, and the whole risk landscape is now so broad. You know, that brings me to an interesting question. I know you recently wrote an article on the BCG website, "Generative AI Will Fail. Prepare for It." And in the article you highlighted the inherent unpredictability of generative AI systems and the need for continuous monitoring and escalation frameworks, given that AI failures can range from misinformation to serious regulatory violations. And as you just mentioned, how should organizations approach generative AI governance to balance innovation with risk mitigation in this age of generative AI? Right. You know, earlier, traditional AI also had some of these challenges, but now with generative AI, you know, how do you do the governance part?

Noah Broestl: Yeah. Generative AI is different. Like the risk landscape of generative AI is different and we should definitely kind of dive into exactly why that is. But the first thing I wanna say is, I think it's a fallacy that there is a trade-off between innovation and risk mitigation.

I think it's a fallacy that there's a trade-off between innovation and responsibility.

And there's a couple of analogies, one that we touch on. Tad Roselund, who used to be our chief risk officer at BCG, used to say this all the time, which is, you know, you ask F1 drivers, "What lets you go fast? What lets you go fast?" And their response is the brakes. "The brakes are the thing that let us go fast.

Being confident in our ability to brake." And I think that's, I love this analogy particularly because if you're an F1 fan and you watched the Shanghai F1 this weekend, Lando Norris's brakes failed near the end of that race. And as he was going around these laps, the braking distance was getting longer and longer, to the point that there was a catastrophic failure in the brakes.

And you just see George Russell catching up on him every single lap. And certainly if that race had been five laps longer, Lando Norris would have come in third or fourth or fifth or sixth. Like, if you lose those brakes, you don't have the ability to confidently move at the pace that the organization can move at.

And I think that's a good analogy for the way that we should look at responsible AI programs and AI governance. It should enable us to move quickly. And we actually see this bear out in the research as well. We did a study with MIT, where we showed that the organizations that were spending the most resources and had the most interest in responsible AI,

were the ones that were also scoring the highest on innovation measurements.

And so there's a little bit that you could say there around causality versus correlation, and, you know, innovative companies are also interested in responsible AI, but the fact remains that, once it's implemented appropriately, it enables organizations to move quickly.

I also think about the analogy between program management and responsible AI. If you've worked in a tech organization, you know that a lot of engineers look down on program management. They see it as something that slows down velocity.

And I think that's because there's been very naive applications of program management in the past. Vanilla agile. You just come in and you say, we're just going to deploy Agile in this space without a clear understanding of where the friction exists inside of the development process. And we need to do the same thing with responsible AI.

Like we need to know all of the items that are available in our toolkit, we need to understand when it's appropriate to go more deeply into principle definition. We need to know when it's appropriate to build risk taxonomies that align with the use cases that we'll see inside of the organization.

We need to know when we need to upskill people in particular areas. And we need to deploy those things in the appropriate order to resolve the friction that exists inside of the organization and to identify those really high risk use cases. At BCG, you know, we've implemented our AI governance in such a way that it all stems from definition of what are our no-fly zones, what are our high risk areas, and what are our medium and low risk areas. And what we do there is we triage all of these cases that are coming in to say, these are things we're just not going to participate in; they're just not aligned with our corporate ethos.

At the very beginning, at the ideation phase, we say "this is something that we're not comfortable pursuing." We then look at all of the rest of those cases and we try to target ourselves so that we're looking at about 10% of those cases that we're really giving high visibility into. We really want to be able to dig into what's either particularly high risk or particularly new for us, so that we can develop enablement materials with our responsible AI team being really hands-on with that work. And once you have that, then for the other 90% you provide enablement materials, you provide oversight, you make sure things are in your AI inventory, but you're enabling those teams to really explore and do interesting things and solve interesting problems with technologies we feel more comfortable with. And that is what really enables you to start to scale experimentation while still mitigating those risks. And so you can highlight that innovation. Now, we can shift a bit and talk about risk as well, and novel risks in generative AI.
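The triage flow described above (screen out "no-fly zones" at ideation, give hands-on review to roughly the riskiest 10% of what remains, and route the rest to standard enablement) can be sketched in a few lines. This is purely an illustrative sketch: the category names, risk scores, and threshold below are hypothetical, not BCG's actual process.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    category: str      # e.g. "marketing"; hypothetical labels
    risk_score: float  # 0.0 (low) to 1.0 (high), however the org scores it

# Assumed examples of categories an organization might refuse outright.
NO_FLY_CATEGORIES = {"surveillance", "deception"}

def triage(cases, review_fraction=0.10):
    """Split use cases into (rejected, high-visibility review, standard enablement)."""
    rejected = [c for c in cases if c.category in NO_FLY_CATEGORIES]
    allowed = [c for c in cases if c.category not in NO_FLY_CATEGORIES]
    # The highest-risk slice gets hands-on responsible AI review...
    ranked = sorted(allowed, key=lambda c: c.risk_score, reverse=True)
    n_review = max(1, round(len(ranked) * review_fraction)) if ranked else 0
    # ...while the rest get enablement materials, oversight, and inventory tracking.
    return rejected, ranked[:n_review], ranked[n_review:]
```

In practice the risk score would come from a structured assessment rather than a single number, but the shape of the pipeline, refuse first and then tier by risk, is the same.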

But I mean, does that jibe with what you see, Sanjay? Is that what the space looks like as you're spending time looking at responsible AI programs?

Sanjay Podder: No, absolutely yes. And in fact, a couple of things here, right? You know, I believe we have also written about the non deterministic nature of AI, which makes it very, you know, you can't really predict all the risks, right? 

And you can't really, therefore, like a traditional system, plan to do a complete check of all the risks.

In fact, one of my areas of interest has been metamorphic testing in the past, given how AI models behave, how they're vulnerable to, you know, attacks. So, you know, how do you look at an AI system in a way that you can still manage the risk without going through the traditional way of, you know, finding out all the risks and then trying to address them one by one?

Right? 

So that has been my work in metamorphic testing. But even if you look at regulations like the EU AI Act, even there, if you see, you know, the different types of models that an organization can have, you're classifying them upfront as high risk or medium risk or low risk, you know, so that you can focus more on AI models that are serving more high risk kinds of business use cases.

I think where I'd really like to go more, then, let me ask this question to you, Noah. You know, Noah, traditionally, what I have observed is when we talk about responsible AI, you know, one of the risks that did not receive the importance it deserves is the impact of AI on the environment. Whether it is the emissions, whether it is resources like water or energy use, and there can be many, right? We all know generative AI is making a lot of demand on the electricity grid. When you look at the governance, risk and compliance and monitoring today for responsible AI, how do you see sustainability getting integrated into the governance part? And I would therefore also like to understand what brought you closer to the Green Software Foundation, because that's our sweet spot, right?

The sustainability part.

Would love to hear your insight on this point.

Noah Broestl: Yeah. Yeah, I mean it's a really critical point in the trajectory of the technologies right now. You know, we see these shifts. Organizations are beginning to abandon their goals around net zero commitments, and that is a direct result of the trajectory of generative AI right now. And so I think that there's a lot of discussion that's happening in the space of, "oh, how bad really is this?"

and people say, "oh, it's, you know, training one of these models is like flying 15 private jets around the world 20 times." Or, and then someone else says, "oh, training one of these models is like leaving your light on for, you know, for too long when you average it out over the year." Like the data here is, it's really difficult to track down where

the actual environmental impact of artificial intelligence is. And I think that makes it difficult to navigate the space. When I think about responsibility, so maybe taking like a huge step back and saying, you know, what is responsible AI? I think we first have to start from this space of what is artificial intelligence? And the challenge that I see here is that when people approach this topic and they say the risks of artificial intelligence are X, Y, and Z,

we have this taxonomy problem where they could be talking about anything from a simple univariate, linear regression, all the way up to killer robots, right? Like there's just this huge space that we could be talking about when we say artificial intelligence. And so responsible artificial intelligence then becomes even harder to define.

But one of the things that I've found in the work that I've done is anchoring on this word sustainability. And I think sustainability, multidimensional sustainability is incredibly important to how we deploy responsible AI programs inside organizations. And that means that we need to be thinking about the social sustainability of the systems we're deploying.

We need to think about how those impact the groups of users, the social institutions, the historical social biases that come out of these systems. We need to be thinking about the cultural sustainability of systems. We need to think about, and there's a lot of work that's being done here when we deploy these artificial intelligence systems as thought partners into, you know, the university settings.

Is it really a thought partner to the student or is it impacting the perspectives that the students have? And, Shirin Duddy out of, I think she's at Boston University now, did some fantastic work around cross-cultural impacts in student populations of these systems. So we need to be thinking about the cultural sustainability, or we're going to end up living in a very boring world in a few years where everybody shares the same opinion on everything.

We need to think about the economic sustainability, of these systems. We need to make sure that this is not the equivalent of strip mining in the way that we approach the usage of these systems. We need to think about the regulatory sustainability. Very challenging to anticipate all of the regulations that are coming in this space.

But we also need to think about climate sustainability. And I think that this piece, as I said, with the progression of energy usage at companies who are leaning into these technologies, we really need to think deeply about. I also like to think about AI technologies as kind of six components. And I think a lot of people, when they talk about AI, they're really thinking about two things, which is data and compute.

And that used to be the very traditional way of approaching the limitations in artificial intelligence. You either need more data or you need more compute. Now, I think we can say that there's at least six areas that we need to be focusing on when we're thinking about limitations in artificial intelligence.

And certainly data is one of those, certainly methodology is one of those. Like, it's fairly clear that just building an LLM based on transformer architecture and implementing it as a simple chatbot is not going to get us to those super intelligent systems that people are talking about.

So we need to think about how methods are limiting us in these spaces. We need to think about workforce. We need to think about education and how people are able to interact with these systems. We need to think about application. That is to say, how it's integrated into our businesses and how we can really provide value, how it can increase efficiency, but also capture that efficiency to increase productivity. But the last two of those six are hardware and, for lack of a better term, I think I might need to split these out as I have these conversations in the future,

energy and water. So those resources that go into the data center. So we can think about hardware as, you know, not just the GPUs, but we can also think about that as the physical build out of the data center, right? The concrete that goes into building out these data centers, the production of rebar, those types of products that result in really high embodied emissions going into these systems, and then the energy and water that are needed to run these data centers in the long term. And particularly double clicking on that energy piece,

there's a conversation that happens in the artificial intelligence space and I spend a lot of time talking about energy and AI, and we can look at two sides of that, right?

Which is there's AI for energy, and there's energy for AI. And those are generally the two areas where people try to have these conversations. And I think that there is a tension right now between the people who are saying that AI will solve our energy problems, and the people who are saying that AI is causing more problems in the energy grids, right?

Like, and I think you could frame that as optimist and pessimist. I don't think that's the right way to frame it, but often that's seen as the way it's framed. Like, don't worry about it. The progress in AI will make the grids so efficient that we won't have these problems. And first off, I think there's reason to be highly skeptical of that.

But before we get to a decision on which one of those perspectives is right, we need to be able to measure. We need to be able to measure, and we need to have transparency into the reporting of the measurements of the emissions of these systems. And this is where I see the Green Software Foundation, and what gets me so excited about the work with the Green Software Foundation, is I'm a measurement guy, like I'm a test and evaluation person. I want data to understand what's happening in the world. I certainly have intuitions, and given how long it takes to get some of this data, we have to have those intuitions. So we have to act on those intuitions. We have to have a bias for action in this space, but we also need the data to be able to do this.

The work that the Green Software Foundation has done, particularly around the carbon intensity measurements for traditional tech systems, incredibly important that we have the tools to be able to bring that data, to be able to produce that data and bring that to members internally in an organization.

The same thing applies for the Software Carbon Intensity measurement for AI, which is something that the Green Software Foundation is working on right now. And I've had a large role in helping structure that and define that. There are questions that we need to be able to answer here about how we approach inference versus training versus research emissions that go into these systems. How do we account for those? How do we look at when you train an AI system, what portion of that embodied emissions goes into each inference? How do you have a function that gives some amortization or some exponential decay to, you know, encourage particular types of behaviors with these systems?
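As a rough sketch of the amortization question raised here, the following shows one possible way to attribute a slice of one-off training emissions to each inference, in the spirit of the Green Software Foundation's Software Carbon Intensity formula, SCI = (E × I) + M per functional unit R. This is an illustrative sketch only: the linear amortization over an assumed lifetime inference count is a simplification, and as the conversation notes, the SCI for AI specification is still working out how this should actually be done.

```python
def per_inference_carbon(
    inference_energy_kwh,       # E: energy used by one inference
    grid_intensity_g_per_kwh,   # I: carbon intensity of the electricity used
    training_emissions_g,       # one-off training emissions, treated like embodied (M)
    lifetime_inferences,        # assumed total inferences the model will ever serve
):
    """Grams of CO2e attributed to a single inference (illustrative only)."""
    operational = inference_energy_kwh * grid_intensity_g_per_kwh
    # Naive linear amortization; the discussion above mentions alternatives
    # such as exponential decay to encourage particular behaviors.
    amortized_training = training_emissions_g / lifetime_inferences
    return operational + amortized_training
```

For example, an inference drawing 0.001 kWh on a 400 gCO2e/kWh grid, from a model whose training emitted a billion grams spread over a billion expected inferences, would be attributed 0.4 g operational plus 1 g amortized training emissions.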

Really hard questions that I don't think we have the answers to yet. And certainly when we look at the ecosystem for artificial intelligence right now as, at a very simple bifurcation between closed source, you know, behind API models where we have a single locus of control for those systems versus open source, where we train a model and now we put it out there and people can download that and use these open weight models.

How do you account for embodied emissions in that space? Like these are the really hard questions that we need to be able to answer from the technical perspective of measurement. And even if we're able to answer all those questions, we have a completely separate problem, which is how do we give visibility and transparency into the way that these systems are using energy and producing carbon as a result of it, or other greenhouse gases, or, you know, whatever it is in the hardware and construction of data centers.

We need to be figuring out the approach here, and I think this is another place where partnership with the Green Software Foundation is incredibly important. We need to be able to articulate to business leaders the value in exposing this information to users. We could wait for regulatory pressure.

Like that is one option here, right? Like, let's just wait and hope that regulators force these organizations to explicitly outline the carbon that is generated as a result of using these technologies. And there's progress there. Like I definitely want to call out that we see a lot of progress, particularly in Germany around accounting for the emissions of data centers, bringing down the emissions of data centers.

And I think it's really important that we continue doing that. But we also need, from the business side, as I said, to convince business leaders that there is value in exposing this information. And I think this comes to something that I have been looking at for a while, which is the ceiling that we see in performance of a lot of these artificial intelligence systems.

We are moving rapidly towards a space that is commoditized from an accuracy standpoint, and we will still continue to see progress. I don't think it will be the exponential progress maybe that we saw right after the release of ChatGPT, but we will continue to see progress within these systems, but we're going to see a lot of providers in the space,

they're going to be commoditized from a sense of accuracy, and so now you have to think about market differentiation for yourself as a platform provider or as a user of these platform systems inside of your business applications. And if you can confidently say that, yes, you can go to company X, and you can get a hundred percent of the accuracy, or you can come to my company, company Y and you can get 80% of the value for 20% of the carbon generation.

That is a value proposition to users that allows them then to really make decisions about how they're moving in the marketplace. All of the data that we're seeing shows an increase in consumer interest in the environmental impact of their choices in the marketplace. And so I think it's incredibly important that, in partnership with the Green Software Foundation, we're working with organizations not only to provide tools for them to be able to measure and report on these things internally to make decisions for themselves, but also being able to expose this confidently in a way that is mutually beneficial and presents a virtuous cycle, where users are saying, yes, we believe that low carbon emission technologies are what we wanna spend our money on, and that encourages more companies to participate in the marketplace in that way. And so I'm excited about the work with the Green Software Foundation, as it helps us move both of those levers.

Sanjay Podder: I think that was a fabulous response, Noah. And I like the way you articulated different aspects of sustainability, right. Though in the Green Software Foundation, for obvious reasons, we are very much focused on the climate aspect of sustainability.

And I also believe that traditionally the other aspects, the social aspects, those have been dealt with more thoroughly than the environmental aspect.

So this is therefore much more interesting, to figure out, you know, what needs to be done when it comes to environmental impact. And that will be a big focus for the Green AI Committee of the Green Software Foundation as well, and experts like yourself who are all coming together to help us get these answers, you know, this would be the game changer for us. Right. So, Noah, I'm, you know, I'm sure the work you're doing in BCG as well as the work you are doing in GSF is going to be incredibly important for advancing this space. And before I wind up this podcast, you know, I would like to hear from you if there is one final piece of advice you would give to technology leaders looking to embed responsible AI and sustainability into their AI strategies?

Noah Broestl: I wanna make two points here if you'll allow me. So the first one is on that point you made about, you know, the Green Software Foundation is focused on the climate sustainability. You know, social sustainability has gotten some focus. Economic sustainability gets a lot of focus, probably.

But one of the things that I think is deeply challenging in this space for practitioners, is that we'll never reach bedrock here, right? Like there is always going to be work to be done. In my lifetime, we will not solve these problems, and that makes it very difficult to get out of bed and do this work every single day, right?

Like these are long-term problems that we need to be focused on, and we need people who are passionate about solving particular areas, and we need to ensure that we're providing the tools so that people focused on these problems can really, not to overuse the term sustainability, sustainably perform in these spaces.

Like we need to make sure that we're providing the resources and the tools. 

I think it's very important that we have community around individuals who are working in this space. So I think we need, broadly, to just connect more. Like we need to spend more time connecting. You know, go to the Green Software Foundation Summits, go to research conferences, connect, talk about these issues.

Very important that you find a community to help support you as you continue to approach these things. Now, when it comes to advice for organizations that are in this space, I still look at the maturity curve around the deployments of artificial intelligence technologies, and we're still very far on the left side of that, right?

There's still a lot of space to get to a place where we are deploying these technologies and really seeing value out of them. And so my advice to organizations is probably twofold. First off, identify a set of key stakeholders who are going to be thinking about responsibility inside of your organization.

This is not a technology problem. This is a business problem, and you need to ensure that you have responsible AI and AI sustainability and climate impact integrated as a risk stripe across your entire organization. You cannot just ask the technical components of the organization to tackle this. The second thing is, get started.

Like, get in there, get your hands dirty, and start piloting generative AI technologies. Start seeing where they will work inside of your organization. This does not mean go launch an external app tomorrow. It means find places in your organization where you know that the risk is low and the value is high to increasing efficiency inside of your organization.

Do that as a mechanism to build the muscle in your organization to manage generative AI risk. Do that in concert with the group of cross-functional stakeholders and start building your responsible AI program from there.

Sanjay Podder: Great. Well, we have come to the end of our podcast episode. All that's left for me is to say thank you so much, Noah. That was really great. Thanks for your contribution and we really appreciate you coming on to CXO Bytes.

Noah Broestl: Thank you so much, Sanjay. This was fantastic. Really enjoyed being here.

Sanjay Podder: Same here. Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now.

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

5 months ago
36 minutes 40 seconds

CXO Bytes
Carbon Accounting with Eric Gertsman
Host Sanjay Podder is joined by Eric Gertsman, Director of Tech Sustainability at Salesforce. They talk about shaping the future of green IT, with Eric sharing his journey from entrepreneur to sustainability leader, his work decarbonizing data centers, and the importance of accurate carbon accounting through Salesforce’s Net Zero Cloud. They explore the AI Energy Score, a new tool developed in collaboration with Hugging Face to benchmark AI model efficiency, and discuss managing water as a critical resource in sustainable operations. Together they highlight how aligning sustainability with core business objectives can drive both environmental impact and business success.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Eric Gertsman: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • The Business Guide to Carbon Accounting | Salesforce [05:21]
  • Salesforce Joins Technology and Academic Leaders to Unveil AI Energy Score Measuring AI Model Efficiency [10:12]
  • AI Energy Score | Hugging Face
  • Water | WRI [14:09]
  • Our founders created the 1% Pledge. | Salesforce [22:23]
  • Aligning sustainability with good business practices | Salesforce [23:57]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder:
Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development from the view of the C-Suite. I am your host, Sanjay Podder. Today's guest brings a rare blend of entrepreneurial spirit, tech savvy, and sustainability leadership. Eric Gertsman is the Director of Technology Sustainability at Salesforce, where he's helping shape the future of green IT, from data center planning and infrastructure strategy to carbon accountability. Before Salesforce, Eric co-founded a company that built solar powered artisan travel trailers. Talk about walking the talk. He's also been a passionate advocate for ethical capitalism and the role of business as a platform for change. Eric, welcome to the show. Kindly introduce yourself.

Eric Gertsman: Thanks, Sanjay. I appreciate you having me on. Excited to be here. Yes. I'm the Director of Tech Sustainability at Salesforce. As the title indicates, right, I focus on Salesforce's cloud infrastructure, which is a bedrock of the company and it's one of the first companies to embrace this infrastructure as a service model.

So it's a very interesting view that the company's had over the years on infrastructure. I focus on decarbonizing our co-location and hyperscale data centers in this wildly changing industry, right? Most recently with the rise of AI. But every decade presents wild new challenges.

I also pay attention to our water footprint, our waste footprint, in a number of other areas. So I'm excited to talk to you today.

Sanjay Podder: Wonderful. Some great topics we will get insights from you on. So looking forward to this conversation, Eric. So, Eric, let's start with your journey. You have worn many hats, marketing, consulting, startup founder, and now sustainability leader at one of the world's biggest tech companies. What inspired your transition into Green Tech and how did you, you know, select this role and your current work at Salesforce?

Eric Gertsman: Well, I think the one-word answer would be 'meaning.' I'm not the type of person that just checks in and checks out for a paycheck. I really need to focus on things that are positively contributing to humanity. So, way back, almost two decades ago, I thought long and hard about what my career was gonna look like and realized that I wanted to focus on changing the world from the inside out, incrementally moving organizations towards more sustainable, more efficient, more responsible activities. And obviously, over the last 20 years, while that may sound sexy, it happens in smaller, seemingly mundane chunks: one spreadsheet at a time, one case study, a meaningful meeting, a data source. You make incremental changes that can have pretty wide effects.

But yeah, I could certainly talk about how younger people can do it, but really it's just to keep pushing and keep learning. Whether you're in a company and you wanna do intrapreneurship, do more on the sustainability side and broaden your scope to have a bigger impact.

You know, whether you're in school and you wanna like look at the right classes and projects and case competitions or whatever, you know, keep pushing hard 'cause there's plenty out there to find. And all of that set me up for a path, over the last six years at Salesforce, to have really good impact.

And I will say my time here actually has not just been sustainability. In fact, a lot of it has been around data center planning, infrastructure strategy, FinOps. Which has given me a much richer picture of what the company is and what it really means to do sustainability work. So, I'd welcome other people to, you know, bring cross-functional capabilities to the field.

Sanjay Podder: No, I think you bring up a very good point on the cross-functional, you know, capabilities because to green the tech, you need to understand the underlying tech very well, right? Because in some sense, the greening of tech is building a tech which is much more efficient, bringing in some of the best practices.

So I think all the past experience you had really enables you to now deliver on the promise of green tech, but that's fantastic. I'm sure a lot of our young people are interested to do a career here, so these are great points.

So Eric, last year, I remember Salesforce released a Business Guide to Carbon Accounting, which breaks down how companies can measure and reduce their greenhouse gas emissions. Why is carbon accounting so central to corporate sustainability today? And how are you implementing these principles inside Salesforce's own infrastructure operations?

Eric Gertsman: Yep. Great question, and thanks for pointing out that resource. It's a good one; your listeners can go to our website and download it, and I think you'll be providing links to some of the stuff we're talking about here. Yeah, carbon accounting, the way I see it: we've all heard the old adage, you can't manage what you can't measure.

I think that's never truer than it is in sustainability. Proper tracking is the precursor to pretty much everything. All programs aimed at identifying opportunities, implementing action, and determining results require it, and it's vital for any sustainability organization.

It's often new territory, and there are new methods. It's a little bit of art, a little bit of science. At Salesforce we obviously confirm things with third-party verification, but sometimes we bring in our own sources and our own methodologies and make sure everything is strong and well devised.

I also wanna take an opportunity here, because for me, carbon accounting is not just the backward-looking, after-the-fact accounting. A lot of it is the forward-looking target setting, which I think is a big part of what this world of data, data tracking, and sustainability really is.

And so, at Salesforce, one of our biggest north stars is science-based targets. For those of you who aren't familiar with that term, they're long-term corporate goals that are aligned to keeping global warming to no more than 1.5 degrees centigrade. Which, of course, is an enormous goal and, quite frankly, maybe impossible today.

But leading companies truly do have to think about our future, and we need to be ambitious. We need to push for where we need to be. As you mentioned, our founder and CEO Marc Benioff has always said that business is a platform for change.

And it's been embedded in our culture in a variety of different ways. Because as the world shifts and as political winds shift around, it's corporate leadership, I think, that's gonna really pave the way. Setting these goals is one really important way for us to stay laser focused on what we need to do as a society from the business perspective.

And there are a lot of different ways to do this right, setting different scopes and using different metrics. And there are ways to engage: the Science Based Targets initiative is one organization that can help, but there are many pathways to doing this. We set an aggressive 2040 goal with interim 2030 goals, focused on total decarbonization for scopes 1 and 2, and intensity reduction for scope 3. It looks a little different for everyone, but I think that's super important. As the company has multiple priorities, we don't want sustainability to be a nice-to-have but a need-to-have, so that other priorities won't swamp it or dissolve it at the first chance just because it doesn't have as hard and fast targets and objectives as other groups.

Sanjay Podder: Yeah. And you know, when we started the Green Software Foundation, one of the missions was to see how we can reduce emissions from the IT sector in line with the Paris Agreement. And things like carbon accounting are so critical there because, as you rightly mentioned, you can only reduce what you can measure, right?

And bringing the emissions from IT central to the way organizations are trying to meet their net zero goals, I guess what you are really doing in Salesforce is a very good example of what successful organizations do. We have seen studies that point to the fact that successful organizations in their industry integrate sustainability into their tech strategy. And that also ends up with better shareholder value. So it's a win-win for all, and I think what you are doing are great examples for people to learn from. Eric, I think it was just a couple of months back, I saw the announcement of the AI Energy Score, and as you know, this year in the Green Software Foundation, one of our big focuses is sustainable AI, green AI: how do you reduce the environmental impact of AI? So I really liked what you announced, and I wanted to go a little deeper with you on the AI Energy Score. What was your thinking behind it, and how did you come together with Hugging Face and many such organizations to create something very tangible for people to rank the emissions from AI? What is your vision going forward? If you can just educate us a little bit on the AI Energy Score and point us to the right set of resources to learn more.

Eric Gertsman: Absolutely. Yeah. I think it's actually an interesting time for AI. Obviously, everyone points to the fact that we're gonna be seeing a lot of energy increases, and that very well may be the case. We're seeing a lot of demand. It remains to be seen whether we're gonna see a doubling, a tripling; who knows what increase we're gonna see in the industry.

But I'm thankful that many of us are paying attention much more today than we did 20, 30 years ago at the rise of the internet. People are laser focused on the fact that this will have an impact, and we have to get in front of it early, as opposed to later, when a lot of things are gonna be baked in and much more difficult.

So we did collaborate with Hugging Face, Cohere, and Carnegie Mellon University to develop the AI Energy Score. That's a standardized framework for measuring and comparing AI model energy efficiency, which scores a bunch of different inference tasks, right? Like text generation, image generation, summarization, and gives them a one-to-five-star rating.

With five being the most efficient. And we're focusing on the inference energy side here because it presents a very complex challenge with a wide array of variables: hardware configurations, model optimizations, deployment scenarios, usage patterns. So we're aiming to address these by establishing standardized benchmarks, which will ultimately, I think, help developers and all users identify and choose more sustainable models, and eventually create this sort of energy label paradigm for model scoring. Right now, I think we're evaluating 166 models, including a few of Salesforce's own models. But the idea is that with more transparency and more ranking, you can get folks to understand where they stack up and how they can refocus on sustainability, given the visible nature of it.

Salesforce's aim, just to spend one more second on it, is to create these smaller language models that are optimized for our use cases, right? Accurate, reliable tasks without using so much energy on these enormous models that just aren't relevant to our use case.

So I think we're gonna start to see that kind of trend more and more going forward.

Sanjay Podder: Yeah. And I think that's a great strategy, because the smaller the model, the less energy it uses, right? You don't necessarily need to go to the largest model for all your tasks. So that's a great strategy, and I think the AI Energy Scores will be a very good benchmark for further research on this topic, to see what the great strategies are and which are the good models to deploy in an organization.

So, you have definitely raised the bar here. That's great. Eric, you have been thinking beyond emissions. You have also been thinking about the impact on other resources in the environment, like water. Tell me more about what you are doing in that space. How do you think one can manage that impact? Your plan for Salesforce?

Eric Gertsman: Yeah, thanks. I think water is a big deal. And it's something that's become, I think, a major factor for us to start considering in the sustainability realm, globally, but certainly at Salesforce. I've got a lot of thoughts on it. I think as a species in general, we've just been treating it as an unending resource forever, and it's clearly not. Even in sustainability circles, we're not really paying as much attention to water and haven't, because carbon has been such a momentous issue with climate change. And sometimes there's a perceived or real trade-off between energy and carbon, right? Some cooling systems, like evaporative cooling, are actually more efficient from a carbon perspective even though they use more water.

And so sometimes there's tension there, and that's why I think it's not always been fully appreciated. I think the equation, especially the trade-off I was talking about, but also just the focus on water, really starts to change when we start talking about water-stressed regions in particular.

And you could look at the resources online. WRI; maybe put that in our resource link list here. Every geographic area in the world has a number, which is how much withdrawal it's getting relative to the full aquifer in that area. And if it's a high percentage year over year, even though there's some recharge, it still presents a tremendous amount of risk.

And so, these are areas where, if you have a data center resource there, you have to be very mindful. Why? Again, keeping it at the hundred-thousand-foot level: 2 to 3 billion people live in these water-constrained areas, right? And I think it's like 20 to 40-ish percent of the world's food that's cultivated in these areas.

So if you start depleting these aquifers and putting pressure on populations and agriculture, you're gonna disrupt humans in a very profound way that will cause serious environmental, social, economic, and political issues. And so the data industry has to be really mindful of how it thinks about that.

Coming down to more like 10,000 feet, it also has business risks that are pretty salient to companies, whether you're a co-location company, a hyperscale company, or a tenant in one of these types of data centers. Putting pressure on residents or agriculture is not a great way to build community relations. And that's a big topic over the last five years, where community bridge building has been one of the big efforts made by the data center industry to work in their local communities. So being aware, and being proactive, is really important there.

Operational disruption: when you have water-cooled systems and you can no longer access water, whether for physical or political reasons, you're gonna have serious trouble. You're not gonna be able to cool your data centers and they will literally melt down. And that may not happen today, right?

It may not happen tomorrow or next year. It may happen, you know, further in the future. It's also worth noting that the power sector operates with a lot of water. Most thermal power plants, coal, natural gas, nuclear use millions and millions of gallons a day. So again, if the water runs out, aquifers run out, you're not gonna have power to power your data center.

So it's an area that where we're not really paying attention to, but it's an area that we need to think about. So, I mean, how can we avoid this? Just to put a more maybe rosier picture on it a little bit. It is of course avoiding water-stressed regions, right? Ones that are deemed high or very high on the WRI scale that I mentioned earlier.

It may not always be possible. There are data residency drivers that a lot of customers expect, and there are current regions with a lot of embedded data center infrastructure. We realize it's not gonna happen overnight, but if you're a company like us thinking about going into a new continent or a new sector, thinking about where the water constraints are is really important. That's areas like the US Southwest, Western Europe, Singapore, India, the Middle East; pay attention. And if you do site there, make sure you have proper cooling systems in use, right?

Closed-loop systems wherever possible, air-side economization, liquid cooling for chips, even hybrid systems using recycled water, things like that. And also consider projects for aquifer recharge, wetland restoration, and watershed retention. Which, by the way, can also help with fire risk too, another big factor for our data center environments as climate change increases.

And then just disclose, right? Disclose your withdrawals, your consumption, your discharge. That's an important way to show the impact over time. 

Sanjay Podder: I am glad that you brought out the aspect of water dependency in power generation, because typical conversations are limited to cooling, right? But there is a lot of water required during power generation, which does pose an operational risk if you are not able to ensure that water supply. One question that comes to my mind: similar to the carbon accounting solution you have provided, I guess, through your Net Zero Cloud platform, is there anything that addresses the water problem in a much more comprehensive way?

Eric Gertsman: We do have the opportunity within our Net Zero Cloud platform to track water and to manage it in the same way as carbon. So it's absolutely within that paradigm. That's a great tool; I'm glad you mentioned it. The product actually rose out of our internal efforts to use Salesforce's own software to do carbon accounting for ourselves, which is really cool.

And we got so good at it that we said, "Hey, let's productize this and sell it to customers." 'Cause we think it'd be valuable to them. And today many, you know, big companies, you know, Accenture, CVS, Bank of America, NBC, like a lot of big names are using that tool to, you know, as a central resource for managing their carbon accounting.

And actually, the roadmap is gonna continue enhancing this product. As you know, we're Salesforce, so AI and agentic capabilities are part of our strong ethos today, and I'd say it's gonna get really fun when we start applying AI to the data we're getting within Net Zero Cloud so that we can do more in that area: opportunity identification and a variety of things that I think are gonna be really useful to sustainability.

Sanjay Podder: Wonderful. Eric, you talked about Salesforce's culture, how sustainability isn't just a department; it's moving into the company's DNA. You have science-based targets aligned to 1.5 degrees Celsius as your company's North Star. How do you keep the culture alive across global teams, and how do you balance compliance with the deeper business case for sustainability?

Eric Gertsman: Yeah, good question. Let me start by saying one of the things that Salesforce did 26 years ago at its founding was establish the 1-1-1 model. That's a model where employees collectively devote 1% of their time to volunteering, the company gives 1% of its revenues to philanthropy, and we donate 1% of our product to nonprofits and other institutions.

And clearly that all isn't focused necessarily on sustainability, but it does create this ethos of doing the right thing and helping our community and helping our world. So it sort of sets a foundation that feeds into sustainability and ESG efforts within the company. But that said, right, sustainability still isn't easy.

There are a lot of headwinds for all practitioners, I think, in this function. And on the tech side, where I focus on sustainability, executives have dozens of priorities, right? Profit margin targets, trust and reliability issues, AI integration, new product delivery. And our company is transitioning aggressively to public cloud from co-lo (co-location).

So there's just a ton of priorities. Those are the big broad categories. There's a lot of even, you know, smaller, more salient things that they have to think about. So it's hard to carve out headspace for sustainability, and you know, much less dedicate the necessary resources to it.

So it's important, obviously, for sustainability professionals to be clear and concise and creative. But also, to answer part of your question, it's to align sustainability with other business priorities.

My view is that corporate sustainability is not altruism.

Sustainability is good business. By its very definition, it looks at the long-term viability of a company, the world, and its community, which obviously is essential to the company's long-term success. And sometimes that involves short-term trade-offs.

Often not, right? I'm a big fan of aligning sustainability with benefits: carbon savings that are also energy cost reductions, optimizing our hardware or power and space infrastructure to improve capacity constraints, and voluntary compliance with regulations before they come to pass so you can avoid risks down the line. From a macro lens, you could point to sustainability positively supporting brand, sales and marketing efforts, employee recruitment and retention, and innovation, right? It's hard to talk about those every day in every sustainability initiative, but it's important that people remind themselves that sustainability does have business value. Many studies and reports claim that sustainability is good for business, whether you look at metrics like operational performance, profitability, return on equity, even just stock results. And obviously some of this may be correlation, right?

Because the best companies do the right things. But I think there's a lot of causation in the topics I talked about, right? You do good sustainability work and you get a lot of business benefits along the way. So I absolutely think it's a critical function in business.

Sanjay Podder: Absolutely. When done the right way in sustainable tech, there is a very deep correlation between GreenOps and FinOps.

Eric Gertsman: Correct. Yeah. Yep.

Sanjay Podder: Yeah. So, we have come to the end of our podcast episode, Eric, and it's been a pleasure having you on the show. Your work really captures what CXO Bytes is all about: moving from vision to implementation in a way that's practical, scalable, and deeply human. Thanks for your contribution, and we really appreciate you coming on to CXO Bytes. We'd also like to know how our audience can learn more about your work and the work that your organization is doing, so if you can also touch upon that.

Eric Gertsman: Sure. Well, first, thanks, Sanjay, for having me. It was a real pleasure to speak to you. Keep up your great podcast; it's a fantastic thing. Probably the best way people can get ahold of me is through LinkedIn; you can search for Eric Gertsman. I'd love to see and hear how you all are impacting the world and your perspectives on sustainability.

Regarding Salesforce, please go to salesforce.com/sustainability. You'll find a lot of resources there, a lot of case studies, a lot of articles, a lot of press releases, things like that you'll see about what we're doing in the industry, our stakeholder impact reports, things like that.

I think that'd be a great place to start for a lot of folks.

Sanjay Podder: Wonderful. So that's all for this episode of CXO Bytes. All the resources for this episode are in the show description below. You can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now. 

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

6 months ago
28 minutes 25 seconds

CXO Bytes
Responsible AI with Dr. Paul Dongha
CXO Bytes host Sanjay Podder is joined by Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest Group, to discuss the evolving landscape of responsible AI in financial services. With over 30 years of experience in AI and ethical governance, Paul shares insights on balancing AI innovation with integrity, mitigating risks like bias and explainability in banking, and addressing the growing environmental impact of AI. They explore the rise of generative AI, the sustainability challenges of AI energy consumption, and the role of organizations in ensuring AI ethics frameworks include environmental considerations. Tune in for a deep dive into the future of AI governance, sustainability, and responsible innovation in the financial sector.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Dr. Paul Dongha: LinkedIn | Website
 
Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • What the data centre and AI boom could mean for the energy sector – Analysis - IEA [17:37]
  • A pro-innovation approach to AI regulation - GOV.UK [25:28]
  • Harnessing the potential of AI in banking | NatWest Group [34:56]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


Sanjay Podder:
Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to CXO Bytes, the podcast where we explore the intersection of technology, sustainability, and AI with leaders who are shaping the future. I am your host Sanjay Podder, and in this episode, we dive deep into the world of responsible and sustainable AI in financial services. Today, I'm thrilled to be joined by Dr Paul Dongha, head of responsible AI and AI strategy at NatWest Group. With over 30 years of experience in AI, financial services, and ethical governance, Paul has played a pivotal role in ensuring AI is deployed responsibly, balancing innovation with integrity. He has spearheaded AI ethics frameworks at major banks, advised on government AI policy, and even taught Generative AI at Harvard Business School.

His expertise spans AI bias mitigation, responsible AI frameworks, regulatory alignment, and environmental impact of AI. In this episode, we will explore how AI is transforming banking, how ethics and risk management can drive innovation, and why sustainable AI is crucial to our digital future. Paul, welcome to CXO Bytes. Let's start by having you introduce yourself to our listeners.

Paul Dongha: Great. Hello Sanjay. Thank you for that delightful introduction. Really great to be here. So I'm Dr Paul Dongha. I'm Head of Responsible AI and AI Strategy at NatWest Group, based here in London. So what do I do as part of my role? It's really split into two parts. As Head of Responsible AI, I have a team of dedicated professionals who look at people, process, and technology.

They look across the bank and ensure that the right people are involved to manage the ethical risks of AI. That's the people part. The process part is ensuring that the processes and workflows we have, both within technical teams and risk management teams, are appropriate for managing ethical risks. And as part of the technology part, my team works with model development teams, machine learning engineers, and data scientists to ensure that we have the right tooling in place in our platforms to mitigate ethical risks.

And as part of the strategy, I have a team that lays out the bank's AI strategy for the next three to five years and ensures that, across the bank, all teams are working towards implementing the strategy.

Sanjay Podder: Great. So, Paul, you know, you have had an incredible career, I can see, from AI research and academia to leading responsible AI at one of the UK's largest banks. Can you share your journey and what inspired you to champion ethical AI in financial services? 

Paul Dongha: Well, Sanjay, as you say, I've had a long career, so it's a long story, but I'll try and be brief. I mean, look, I started programming in the 80s, right. A long, long time ago. And I was just, taken with programming. I love being technical. And I was lucky enough to study computer science at university.

And as part of my final year project, I just got into AI and I thought, "wow, this is really exciting." That led to me eventually doing a PhD in artificial intelligence in the early nineties. I used to teach natural language processing. I used to teach AI. And I found it super fascinating, but as an academic in the 90s, probably in the third AI winter, there were actually no jobs in AI.

Which is kind of really weird to say, but looking back, it literally didn't really exist as an industry. So I had a choice. I could stay as quite a poor academic in a field that looked like it was going nowhere, or I could leave and get a job in a commercial enterprise. So I chose the latter, right? In the late nineties, I came to the City of London and I spent 20 years working in various investment banks, always building systems: complex, bank-wide risk management, pricing, and derivative systems, and so on.

But I always had this hankering to go back to my passion, which was artificial intelligence, and I think it was around 2015 I started seeing AI popping up, you know, Netflix, Amazon, collaborative filtering, and I got to thinking, "hold on, this is AI. This is kind of the stuff that we used to talk about."

And over the next sort of two, three years, we saw more of it in mainstream news, right? Google were doing AI research, Amazon, Facebook, all the big tech companies. And I guess it was about 2018, '19, my kind of midlife crisis. I thought to myself, "well, look, do I want to go and do that work as a fairly old person, which is a passion, or do I just carry on working in banks, doing my thing?"

And I made the decision that, look, I'm going to, I'm going to go, I'm going to leave my career and go back to AI. So I spent, about a year, I sat at my desk, I did loads of research, kind of caught up on a lot of the AI work that happened for the last 15 years. And it was amazing. You know, we have Keras, we have TensorFlow, we have frameworks.

You know, you can build machine learning applications in days, rather than what it took when I was working on it. And really quickly, I happened upon ethics, and I thought to myself, what has ethics got to do with AI? And really quickly it became apparent we have a problem, right? The probabilistic approach to AI, the so-called transformer architecture that we have now, is only an approximation to what we really want to do.

So it was very quick. I just realized that there'll be no end of technical people building the most powerful AI, but how many people really understand the ethical risks from the ground up, from building them and being able to take a view as to how harmful they could be and what those risks were. So I decided this was it.

This was exactly what I want to carry on doing. So I worked for a tech company. I headed up AI ethical research for the European division for about a year and a half. And then I went to Lloyds Banking Group as their group head of data and AI ethics. And then NatWest Group, running strategy and AI ethics.

And Sanjay, it's amazing. I'm doing the work that I love in an area that I think is really urgent, that we have to pay attention to. So that is my journey over sort of 25, 30 years. 

Sanjay Podder: Wonderful. And, you know, with the advent of generative AI, this risk landscape just gets more complex, right? Or responsible AI, you mentioned trust, ethics, you know, safety of AI. You know, one thing that brought both of us together was a very different aspect, which is sustainability of AI. You know, when we first connected over the topic, it was about environmental impact of AI.

And what I have observed myself is that traditional, you know, responsible AI frameworks, they tend to ignore the environmental impact largely, but that is changing. What has been your observation? Do you think sustainability should be a first class citizen when we look at a responsible AI framework? 

Paul Dongha: Sanjay, absolutely. And I think what really triggered it was the launch of the transformer architecture, the famous 'Attention Is All You Need' paper. And when that was embodied within ChatGPT, we started looking at actually how much compute is involved in just satisfying a simple prompt.

And there are billions of what are called FLOPs, floating point operations. And when you really look at that architecture, you think, wow, that is really quite something. Compared to a pre-generative-AI Google search, when you compare the two, you realize this is a significant undertaking. And imagine scaling that up, not just for ChatGPT but for applications for different use cases, across industries, the consumer market, as well as the corporate market.
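As a rough, back-of-the-envelope illustration of the scale being described, not a figure from the episode: a common rule of thumb for decoder-style transformers is about two FLOPs per parameter per generated token. The model size and answer length below are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch, not an exact measurement: a common rule
# of thumb for transformer inference is ~2 FLOPs per parameter per
# generated token (a multiply-accumulate counted as two operations).
params = 175e9     # a GPT-3-scale model, chosen for illustration
tokens = 500       # a typical medium-length answer

flops_per_prompt = 2 * params * tokens
print(f"~{flops_per_prompt:.2e} FLOPs per prompt")
```

Even under these crude assumptions, a single prompt lands in the hundreds of teraFLOPs, which is the disparity with a classic keyword search that the conversation is pointing at.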

This technology is so diffuse and has permeated everyone's lives. It became really apparent that this wasn't going to be a technology that just large corporations can use. It's going to be a technology that's just used everywhere and used everywhere very quickly. And some people talk about exponential acceleration and so on.

And I came to realize, actually, we really need to pay attention to this, although it was treated as kind of a side issue within ethics. So if you look at the EU's high-level expert group, when they came up with their seven pillars of responsible AI, it was mentioned. But was there actually anything happening on that?

I don't think so. So now what I talk about is exactly as you say, Sanjay, sustainability needs to be a first class citizen in ethical risk management. We need to treat it with the same seriousness. We need to increase our level of awareness of it. And organizations need to pay attention to what they do

with AI. So yes. So my, I guess, new approach now is to talk about this and to work with organizations to ensure that steps are taken to reduce climate impact through carbon reductions, reduce water usage for data center cooling, and actually optimize the operations of generative AI.

Sanjay Podder: Absolutely. And, you know, there is so much more thinking to be done around, you know, how do we measure the environmental impact? What should be the thresholds? How do we comply with regulations or, you know, the business's own standards? You know, what are the things to monitor? What's the impact on water, for example, and other resources, not just, you know, energy? So this whole area is becoming so much deeper and needs almost the same amount of focus as we have traditionally given areas like, you know, explainability, bias, and so on and so forth, right. So that's really a very interesting area to further explore.

So Paul, you know, you're from financial services. And the financial services industry is one of the most AI-intensive sectors, from fraud detection to hyper-personalized banking, and one where regulatory compliance is critical. How do you ensure that AI models used for credit scoring, lending, or fraud detection are explainable and auditable?

And how does a company like the NatWest Group mitigate those risks while still fostering innovation? 

Paul Dongha: Yeah, Sanjay. I mean, it's a really good question. So, the technologies used for things like credit scoring, credit lending and so on predominantly fall into the traditional AI camp. So before generative AI, we had predictive AI. So that's where AI is, in effect, helping to make decisions like credit lending decisions.

Now, fortunately, that technology area is much less complex than generative AI and transformer architectures. And it's been around for quite a long time. So if we go back even sort of 5 to 10 years, that was the first wave of predictive AI. And as that came up and became widely used in banks, it allowed financial services institutions to build the frameworks to validate those models and to put those models in their enterprise risk management frameworks and so on and so forth.

So if we step back a little bit, risk management as a practice within banks has a long history, 20, 30 years. So we have things like model risk management, model validation, risk tolerance, risk appetite, model risk policies. All of these kind of artifacts and processes were established some time ago.

So when predictive AI came about 10 years ago, they were adapted to make sure that they could deal with things like credit lending and credit scoring and insurance underwriting decisions. So some of the things that the financial institutions, including NatWest Bank have, is they'll have a robust model validation process and team.

And what that is, is mathematically qualified professionals who can look at a model. When I say model, I mean exactly that, a predictive lending model. And they can reason about its behavior. They can look at the data that goes into it. They can look at the results. They test it.

They look at the limits of it. They look at how it behaves. And in partnership with the development team, they ensure that credit lending decisions that the model makes are understood, predictable, and to some extent can be explained as well as possible. And there's technology tooling you can use for that.

So within a risk tolerance and a risk appetite, a model will be created. It will be overseen by model validation. This will be weaved into the risk management that the bank already has in place. And those models will be tested rigorously with the folks that develop the models. So it's quite a well understood and very robust process.

And on top of that we have the audit function of a bank. Now the audit function of a bank looks across the bank at all sorts of different projects and processes in place and really ensures that it's robust and fit for purpose as well. Some organizations call that a third line of defense. So it's really an extra layer of checking and validation to ensure that everything that should have been done to mitigate risk has been done.

And typically the audit function will have a reporting line into the bank, into quite a senior level and can advise the board to ensure that things have been done properly. On top of that, of course, there's things like model monitoring. So when we put a credit lending system live and it's in operation, we don't just leave it and not look at it.

It's monitored very closely, both using technology and using our risk management teams, to ensure that we can step in should the model start behaving in a way that we don't want it to. Because models drift; AI models have this thing called either data drift or concept drift, whereby over time they'll behave in ways that slightly move away from their behavior when they were launched.

So we have model monitoring, and we have technology and tooling to allow us to identify early on if the model is starting to do that. And if it is, we'll intervene. The development teams will intervene. They'll retrain the model and relaunch it. And that has the same level of scrutiny as the original development did as well.
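The drift monitoring described here is often implemented with simple distribution-shift statistics on model scores. Below is a minimal sketch using the Population Stability Index (PSI); the function, thresholds, and synthetic score distributions are illustrative assumptions, not any bank's actual tooling.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live score distribution against the baseline it was
    validated on. A PSI above roughly 0.2 is a common rule-of-thumb
    trigger for investigation and possible retraining."""
    # Bin edges come from the baseline distribution's deciles
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the proportions to avoid log(0) in empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(600, 50, 10_000)  # scores at model launch
drifted_scores = rng.normal(630, 60, 10_000)   # scores later in production
psi = population_stability_index(baseline_scores, drifted_scores)
print(f"PSI: {psi:.3f}")
```

The point is the workflow, not the statistic: a monitoring job computes something like this on a schedule, and a breach routes the model back to the development and validation teams described above.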

So that's quite a rich and large practice in banks. And what banks have done more recently, and I've done it at two banks now, is have an ethics panel, or an ethics committee. And the role of that committee is, really early on, when we're deciding to build a model, before we build it, to look at the problem we're trying to solve.

We'll look at what AI solution we think we're gonna implement, and then we'll see if there are any unanticipated, bad consequences that could come from it. And there are techniques to try and unearth and filter these out. 'Cause no one builds AI with the intention of a bad consequence.

Bad consequences are unanticipated. So we have techniques, using people from diverse backgrounds, to look at a problem and say, "ah, actually, could something happen that is not really purposefully designed?" And using that committee or panel we'll surface that, and we'll get the development teams to think about it, and then we'll advise risk management that this could happen and that maybe they should look at risk mitigation techniques for any unanticipated consequences.

And there are other things that we do in the model development lifecycle, but I think an ethics committee or an ethics board is really important.

Sanjay Podder: Thanks for a very elaborate response, that really is very helpful. You know, at the same time, I wonder, you know, with generative AI in particular, there are a couple of new risks coming up, right? Hallucination. While drift is understood, hallucination is a new kind of risk. Or harmful content. Or the main topic that we'd love to discuss today is the energy use, because traditionally we have always been thinking about the training of AI models that needs a lot of energy, but now we are looking at inferencing, right?

You rightly pointed out that AI is now getting democratized, right? Everybody is using prompting, inferencing, and so many studies show that the energy use, and the emissions therefore, are happening more while you prompt rather than when you train, right? So a lot of new things are coming up with gen AI, and especially hallucination as well.

And while the traditional AI, there has been a lot of rigor around which you put the, you know, the safeguards, the guardrails and everything, you know, any thought on how that game changes when we talk about gen AI, right? We shift from traditional AI to Gen AI, you suddenly look at a larger, you know, landscape of risk.

And in your own personal experience, you know, how has that shaped up in financial services industry? How are people trying to manage that risk?

Paul Dongha: Yeah, again, really good question, Sanjay. So I think that, in my mind, there are two major things to look at, right? When it comes to the democratization of AI, let's put it this way, 

I think the use of copilots in organizations is gonna be big, right? There's Office 365 Copilot, Dynamics 365 Copilot, GitHub Copilot, GitLab's copilot. Copilots, I think, are gonna be everywhere, and most, if not all, knowledge workers will have at least a copilot of some sort.

There'll be copilots for teams. There'll be copilots for departments. There'll be, who knows, but the underlying message is that there's a huge, going to be a huge proliferation of copilots that, I think of them as knowledge assistants, knowledge worker assistants. So they'll be like part of the team.

Everyone will have one. Now, when we look at each of those, every time a prompt goes in there, it's a prompt to a large language model, right? Like a GPT model. And there you have it. So if you imagine the proliferation of these and the continual prompting of these, it's really quite mind-blowing how much it's going to be used.

It's not even that every knowledge worker has just one; we're looking at a proliferation of these. So I think once you get your head around the use of that, you think, "wow, that's huge." But I think the second thing I'd like to mention on top of copilots is actually when we look at agentic AI. So, I think that's going to be the word of 2025, it seems.

Agents.

In fact, my PhD was all about AI agents in the 90s. Believe it or not, we used to talk about AI agents back in the 90s. We couldn't build them. So I think we will have agents and multi-agent systems, where each agent could be a copilot. It will have the ability to solve problems in a very narrow domain, but it will also have the ability to talk to other copilots, i.e. other agents. And that inter-agent communication, maybe delegating a task or delegating a query, the ability for agents to cooperate and communicate, is going to open up a vast amount more computing happening through transformer models.

And one can only guess how vast that's going to get. So, you know, when you look at organizations like NVIDIA and they look to the future and talk about millions of agents being co-workers, you know, when you take it to that extreme, it really stretches the mind to think, "wow, that's how much compute this is going to use."

So I think those two, call them inflection points, call them what you will, but in my mind, those are two massive things that just mean that so much more compute is going to happen. So let's look at the compute side. Now, we know the cost per token is going down. Fortunately, it's going down dramatically.

And we've seen DeepSeek and so on launching in late January. So I think, Sanjay, to your point, we shouldn't have to worry about the training costs anymore. I think the training costs are going to go down. The massive thing is going to be the inference-time compute. That business of using them, right, indiscriminately, that's where all the cost is going to be.

That's where all the usage is going to be. And that's why all three big tech companies, Google, Amazon and Microsoft, are investing in nuclear power plants. And you've heard about Three Mile Island in Pennsylvania; I think it was Microsoft, I can't recall which of them, that bought that disused, mid-sized nuclear power reactor.

So big tech companies have got it. They are betting that this is going to happen. There's going to be an unsustainable amount of compute. The grid can't satisfy the demand, so they're going to generate their own electricity. Fortunately, it's almost carbon-free, right? So it's good. However, that's just one part of how to deal with it.

The other part is going to be cooling. Cooling data centers predominantly uses water. I mean, there are technologies now that are trying to reduce that, but we really need to pay serious attention to it, because even training GPT-4, there were water crisis issues there. So we have to really be careful about cooling for inference too.

And I think the other one is carbon emissions. Look, we know big tech companies still have data centers that use fossil-fuelled electricity. Let's put it that way. And big tech companies use renewable offsets so that they have a better story when it comes to reporting climate emissions.

But actual location-based climate emissions are still growing. It's still a problem. So in different geographies around the world, you'll find, Sanjay, as you well know, that there are heavy uses of carbon-intensive electricity sources. So I think we all have to look at that, and organizations like banks and large organizations that are going to be big users of these tools need to think, "how do we use copilots?

When do we use copilots? How can we be efficient in our use of copilots?" Because I don't think it's just a big tech problem to solve. I think, just like climate, all over the world it's everyone's responsibility. We, you know, we have electric cars now. We're careful about how we recycle.

We don't use, you know, plastic bags indiscriminately at supermarkets. You know, we're careful about how we go about our daily life. We use packaging that's eco-friendly. I think the same mindset shift needs to happen in the use of copilots in organizations. Everyone needs to think, "what role do I play in ensuring that I don't just use them for absolutely everything just because I can?

But I use those technologies wisely for when I need to use them. And when I don't, I'll use something else." Standards bodies have a role to play. I think governments have a role to play. I think organizations do. I think individuals have a role to play. I think model developers have a role to play. And there are all sorts of things in flight on that: making models more efficient, distillation of models, quantization of models.
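To make the quantization thread concrete: below is a minimal sketch of symmetric post-training int8 weight quantization, the basic idea behind one of the efficiency techniques mentioned. The random weight matrix is a stand-in; production toolchains are considerably more sophisticated (per-channel scales, calibration data, and so on).

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(0, 0.02, (256, 256)).astype(np.float32)

# Symmetric quantization: one scale factor maps float32 weights onto int8
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

storage_ratio = quantized.nbytes / weights.nbytes  # int8 vs float32 storage
max_error = float(np.abs(weights - dequantized).max())
print(f"storage ratio: {storage_ratio}, max rounding error: {max_error:.2e}")
```

The storage drops to a quarter of the original, and the worst-case rounding error is bounded by half the scale step, which is the accuracy-for-efficiency trade that quantization makes.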

So there are many threads that we can pull on. Fortunately, there's GreenOps and FinOps, and, as you know, Sanjay, through the Green Software Foundation, amazing work that the Foundation has done. And I'm a big promoter and big believer that we should continue down that track. I'll stop there because I've probably talked a lot about that.

Sanjay Podder: Great insights. Thanks for sharing Paul, you know, and Paul, my other question would be regarding regulations and regarding, you know, recently the UK government's AI white paper advocating for pro-innovation approach to AI regulation. Now, given your experience advising policymakers, what do you believe are the key policy shifts needed to ensure AI innovation while maintaining trust and ethical safeguards, as well as reducing the impact AI is having on the environment by decarbonizing the software industry?

Let me hear your perspective here. 

Paul Dongha: That's an interesting question. I think when it comes to policies, they're really important. So I certainly believe organizations should increase and continue their dialogue with regulators. The need to keep talking to regulators is of paramount importance.

So when we look at financial services as an industry, financial services has the luxury of rewarding employees well, especially in technology. And we create a lot of models. We're close to our customers. We have a good sense of how to create models and use AI to benefit our customers. And I think keeping the dialogue open with regulators means saying, "look, this is how we're using AI.

These are the issues we see. This is how we're tackling issues. This is how we're doing risk management for AI, using generative AI," and so on and so forth. I think that dialogue has to continue so that we can help those regulators understand our pain points much more. The take-off of AI, this proliferation of generative AI, I think demands that we continue to have that dialogue, and probably more frequently than we're currently having it.

And I'll give you, I guess, one example. The regulators are keen to say, "look, banks and organizations should treat their customers fairly." So fairness, absolutely, 100%, that's at the forefront of our mind as a bank. And we want to promote fairness. But fairness is a contested concept. There's no single agreed-upon definition of fairness.

And how do you define bias and how do you define fairness? And there's concepts like unfair bias. And there may be things like fair bias. So I think it's a really nuanced conversation how you go about approaching fairness and how you define fairness thresholds. And those kind of conversations need to happen because the regulatory guidelines do not prescribe how to be fair or what constitutes fairness.

And that's right, they shouldn't have to be prescriptive. But I think organisations should talk internally about how they are fair, and should talk to regulators about how they are being fair and about their approaches to fairness. Because there is a classic trade-off between accuracy and fairness.

So how good a model is, how accurate it is, compared to notions of fairness, there's usually a trade-off. You can try and be fairer, but you might not have such an accurate model. Or you can have a model that's very accurate, but that actually might not be fair. And managing that trade-off is quite nuanced and difficult to do.
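This trade-off can be seen on a toy example. The sketch below uses entirely synthetic data, not any real lending model: moving a single approval threshold changes both overall accuracy and the approval-rate gap between two groups (one simple fairness metric, the demographic parity difference), and tuning one generally disturbs the other.

```python
import numpy as np

# Illustrative only: synthetic scores for two groups, to show how one
# decision threshold trades accuracy against demographic parity.
rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)                  # a binary protected attribute
# Synthetic "true" repayment probability, slightly correlated with group
prob = np.clip(0.6 + 0.1 * group + rng.normal(0, 0.2, n), 0, 1)
repaid = rng.random(n) < prob                  # realized outcomes
score = prob + rng.normal(0, 0.1, n)           # model score tracks prob

def evaluate(threshold):
    approved = score >= threshold
    accuracy = np.mean(approved == repaid)
    # Demographic parity difference: gap in approval rates between groups
    dpd = abs(approved[group == 1].mean() - approved[group == 0].mean())
    return accuracy, dpd

for t in (0.5, 0.6, 0.7):
    acc, dpd = evaluate(t)
    print(f"threshold={t:.1f}  accuracy={acc:.3f}  parity_gap={dpd:.3f}")
```

Printing the sweep shows there is no threshold that maximizes accuracy while driving the parity gap to zero, which is exactly the nuance being discussed: the operating point has to be chosen, and defended, deliberately.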

So I think we need to elevate the conversation both in risk management and to our policy experts and governments that talk about how we manage that trade-off. And there are similar trade-offs that I think we'll see around sustainability, where you might want the most accurate model. And you might say, okay, in a high stakes use case, accuracy is super important, but sustainability is less important.

So I'll implement a model that might be very compute intensive because I'm very concerned about having a hallucination free, superbly well curated answer, that's compute intensive to generate. Because it's a high stakes scenario and I don't want to make an error. Or on the other hand, you might say, actually some uses of gen AI, maybe a kind of a back office, low risk process,

maybe I can sacrifice a little bit of accuracy and therefore have a solution that is not so compute intensive, because it's not a high stakes use case. And I think we're seeing trade-offs there that will probably have to be equally managed in the right way. So I think being open and talking to regulators about that is super important.

And let's go back to agents, you know, the use of agents. This is a really important point, because if we imbue agents, and we now talk about AI agents, how are they different from normal generative AI? Well, people often say two or three things. They say agents have reasoning.

So that means that an agent does far more on its own by creating a plan of things it does, and we've seen it in GPT-4o and in DeepSeek. And secondly, it has autonomy. So it has more autonomy than we would otherwise give AI systems. So when you combine autonomy and reasoning together, you have quite sophisticated AI systems.

How do you deal with that sophistication? What is the agent accountable for, and what is the human accountable for? Has that line shifted? And really it shouldn't shift, right? I have a belief that AI should always be a tool that augments knowledge workers and the work they do. So accountability stays with people. But when you do give agents an increased role and more autonomy, where has that accountability gone?

Has it shifted? In a multi-agent system, where is accountability? When you've got agents talking to each other, asking for things to be done, and then there are agents doing it. Well, you know, I think we're just starting to scratch the surface on these kinds of problems, and I think we will need the help of organizational theorists, psychologists and sociologists to talk to us about how we think about organizations that are comprised of things like that.

I think we're just scratching the surface, Sanjay, on multi-agent systems, for example.

Sanjay Podder: Yeah, absolutely. And that's a wonderful note to, you know, conclude our podcast today. We're entering the brave new world of humans plus machines, right? And, you know, how are we going to navigate it? You know, that was very insightful, Paul, absolutely insightful. So, Paul, with all the insights that you have shared with us, especially around responsibility and financial services, all of us will be thinking about the future a lot. I'm sure of it. Your work is shaping the future of AI ethics, sustainability and innovation in banking. So before we wrap up, what's your final message for technology leaders looking to integrate responsible AI into their organizations?

Paul Dongha: I think there are two messages here, Sanjay. Some people I've worked with and alongside think of responsible AI or AI ethics as something that slows down innovation, right? Like, we're the guys that say, no, you can't do something. That mindset needs to change. I really believe it has to.

And I'll tell you why. To get generative AI models to do the right thing, right? To be accurate, you kind of need people like AI ethics professionals in the room with the development teams. We together need to make AI systems accurate and behave the right way. So you can't just leave it as a technology exercise in and of itself.

There have to be other people involved. And AI ethics people are technical as well, right? I'm very technical as well. So we need to be part of making systems more competent. This thing about competency is so pivotal. Generative AI is probabilistic; it's in some ways non-deterministic. So actually, responsible AI techniques are there to make the models better and more competent.

We're not there to stop things from happening. This is a real mind-shift that needs to happen in organisations: to get boards to understand that actually we're part of developing models, not part of slowing down innovation. And I think the second takeaway, if I was talking to senior folk in an organisation, is: have AI in the boardroom, right?

Have a knowledgeable AI strategy and responsible AI person in the boardroom, and equip your boards with the level of knowledge that they need to deal with this. People talk about it as a transformational technology; some people talk about it as the new electricity or something. I don't know if it's that, but what I do know is I really do believe it's going to be literally everywhere. So organizations need to have this as a priority on their board: to understand how they can adopt it, how they can use it, and how they can keep up with their peer group of organizations, who will undoubtedly be trying to adopt it. So I think those are my two takeaways. 

Sanjay Podder: Great, Paul. And where can the audience find more about your work and what NatWest is doing as well? 

Paul Dongha: Oh, well, you can follow me on LinkedIn. I do post from time to time. I have particular views on technology. I want to point out that very recently, I think it was literally a week ago, we published our AI ethics code of conduct on the NatWest website. So if people go to www.natwest.com, you will find on our sustainability pages, our AI ethics code of conduct that talks about AI ethics and data ethics and what we as an organization are doing internally to promote that and to make sure our risk management processes are there.

So I really encourage people to do that. And there are many, online resources to learn about more about responsible AI that you can find online. There's loads of them. 

Sanjay Podder: Thank you so much, Paul. This was really great. Thanks for your contribution. And we really appreciate you coming on to CXO Bytes.

Paul Dongha: Great. Thank you, Sanjay, it's been a pleasure and thank you very much. Bye bye for now.

Sanjay Podder: Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now. 

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.


Hosted on Acast. See acast.com/privacy for more information.

7 months ago
36 minutes 41 seconds

CXO Bytes
Green Manufacturing and Supply Chains and the Role of Green IT and Responsible AI with May Yap
CXO Bytes host Sanjay Podder is joined by May Yap, Senior Vice President and CIO of Jabil, to talk about the intersection of green IT, responsible AI, and sustainable manufacturing. May shares how Jabil integrates renewable energy, circular economy principles, and AI-driven solutions into its global operations, contributing to its recognition as one of America's Most Responsible Companies. The discussion delves into Jabil's ambitious sustainability goals, including achieving carbon neutrality by 2045, and highlights initiatives such as energy-efficient manufacturing, water conservation, and e-waste management. May also emphasizes the importance of responsible AI and green IT practices like desktop-as-a-service, no-code platforms, and energy-efficient algorithms in driving sustainable innovation across Jabil's manufacturing and supply chain ecosystems.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • May Yap: LinkedIn | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • America's Most Responsible Companies 2025 - Newsweek Rankings [07:11]
  • Data centres & networks - IEA [11:08]
  • Jabil Makes Meaningful Sustainability Progress, Releases Fiscal Year 2022 Report [15:59]
  • Electronic waste (e-waste) | WHO [19:27]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, GitHub and LinkedIn!


9 months ago
24 minutes 39 seconds

CXO Bytes
HBR Türkiye Business Summit: Sustainable Technology with Sanjay Podder

In this episode of CXO Bytes, Sanjay Podder is hosted by Beliz Kudat to talk about the dual role of technology in driving sustainability while also contributing to environmental challenges. They explore how businesses can integrate sustainable strategies into their technology operations to minimize carbon footprints, optimize data center energy consumption, and leverage tools like AI and cloud solutions responsibly. Sanjay highlights actionable techniques such as carbon-aware scheduling, efficient coding practices, and emerging tools to measure the energy impact of AI. The discussion also emphasizes the business value of sustainability, including improved ESG scores, employee attraction, and outperforming competitors in shareholder returns, making sustainable technology a critical strategic imperative for organizations.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Beliz Kudat: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Key Findings Data Centres Metered Electricity Consumption 2023 - Central Statistics Office [10:55]
  • Carbon Aware SDK [12:54]
  • CarbonCloud [15:40]
  • Impact Framework [15:56]

If you enjoyed this episode then please either:

  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, GitHub and LinkedIn!

TRANSCRIPT BELOW: 

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and a big green move that's helped drive results for business and for the planet.

I am your host, Sanjay Podder.

Beliz Kudat: Okay, Sanjay, welcome to our business summit. 

Sanjay Podder: Thank you so much for having me today. My pleasure. 

Beliz Kudat: It's a pleasure having you. So, you know, amid today's rapidly developing digital technologies and this digital transformation, a significant dilemma arises, especially for sustainability. On one hand, these technologies offer substantial, huge potential to address environmental issues.

And on the other hand, they entail substantial resource consumption. So first, we'd like to start by asking your perspective on this: how can technologies both solve and exacerbate environmental problems? 

Sanjay Podder: Great question. And there's a duality here between technology and sustainability. You know, when you look at sustainability, and if you look at the sustainable development goals that we have, the 17 sustainable development goals, one thing that strikes you is that they are exponential in nature.

The impact is huge. You know, we are not talking about small things; we are talking about scale. And you cannot do anything at scale without technology. And in this case, if we talk about information and communication technology, if we talk about artificial intelligence, for example, these are precisely the kinds of tools we need today to address the sustainability challenges we are facing, whether it is climate change, building a more inclusive society, or the biodiversity destruction that is happening. In each of these areas, you need technology, you need AI, you need blockchain, you need digital, right? There is no second thought about it. In fact, we did a survey of companies and found that 70 percent of the companies we surveyed that were able to reduce carbon emissions in their production and operations were able to do it because they used artificial intelligence.

Now, so there is absolutely no question about the role of technology in sustainability. But what we miss is that if we are not using this technology in the right way, technology itself has a carbon footprint. Technology can cause a big environmental impact. For example, technology can amplify issues of bias.

Or privacy. So, we have to make sure that while we use this technology, we use it in a very sustainable and responsible way. And the data points are very interesting. For example, take the same AI that is going to help us so much. If you look at a large language model like Bloom, which is open source, some of the data is public, so we know it.

It's a 176 billion parameter model. When they trained it, the carbon emission from it was somewhere around 24.7 metric tons of CO2 equivalent. And if you look at its whole life cycle, including the embodied carbon of the hardware on which it was trained, it goes up to around 50 metric tons of CO2 equivalent, for example.

And if you take larger models, the more popular large language models, they may go as high as 500 metric tons of CO2 equivalent. So the same technology that is helping us on one hand is also causing carbon emissions. And the impact is not just restricted there, as we know. It is also on other resources, like water.

You know, you might make some very harmless queries to a large language model, like "which cities should I visit on my next trip to Turkey?" Right? You ask 20, 30 questions, and behind the scenes, that's half a liter of water that was used.

For cooling the data centers, for generating the electricity. And we also know about the other dimension, energy use. So that's the whole picture. Now, the good part is, we don't necessarily have to have such a severe impact. There are tools, techniques, and methods whereby we can design, develop, and deploy these systems in a way that has a much lower impact on the environment.

For example, so that they safeguard privacy and give you much safer responses, with less bias. So overall, it is very much possible to build a culture such that the software you write is more sustainable and more responsible. So that's the silver lining, right? So to your first question: yes, a big duality.

If you are in business, therefore, your technology and sustainability strategies need to be integrated. And you have to look at it very holistically: not just sustainability by technology, "how do I use tech to do sustainability," but also sustainability in technology, "how do I make sure that the technology is being used in a much more sustainable and responsible way?"

Beliz Kudat: Yeah. This is the crucial question, as you said. And technology is crucial, as you mentioned, in all those sustainability efforts as well. We also know that software is at the core of all these technologies, and companies need to adapt the way software is designed, developed, deployed, and used, as you said, to minimize its carbon footprint.

So how can they achieve this? 

Sanjay Podder: Well, you know, there are many decarbonization levers in the software stack. When you talk about a software stack, there's obviously the code itself, which has to be written in a manner that makes less demand on the underlying hardware, for example, right?

So you need to bring in the right design patterns, architectures, choice of programming languages; all of that has a bearing on the energy use and emissions. For example, there is a whole body of study about interpreted languages versus compiled languages. If you write a piece of code in a language like C++, and a similar piece of code doing the same thing in Python, it is found that the C++ code will need less energy and will emit less carbon.

Now, that's not to say that people have to write in C++; it's just a data point. Are you even thinking about which language you are selecting? Then there are choices around architectures, for example. And then a very interesting decarbonization lever is the migration of your workloads to hyperscalers, for example, to the cloud.

And why does that reduce emissions? Because the hyperscalers, because of their scale and investments, invest a lot in renewable energy. They have the right technology; they use AI, for example, to make sure that their data centers run with a relatively lower power usage effectiveness, what we call PUE.

So they have elasticity because of economies of scale. Their utilization is higher, so the idle time of hardware is less. And now, as you see, there is a lot of investment in what they call custom silicon chips. That's the next big thing, where you write software with the underlying hardware in mind, optimizing for the capabilities of the underlying chips.

And when you do all this, the code you write, the system you build, needs less energy. And with cloud data centers, you can typically select where you want to put your workload. You can select a location, if your business strategy permits, where the carbon intensity of electricity is lower; in other words, where the electricity is generated more from renewable energy, for example. As a result, not only are you using less energy, you're also emitting less carbon. And there are similar decarbonization levers in the field of AI. You don't need to take the biggest of the large language models.

You don't need to use models with billions and trillions of parameters. You have to use the model that is fit for purpose, the model that gives you the required accuracy. And there are a lot of startups coming up in this field that allow you to do, for example, dynamic routing to a large language model that has lower emissions, right?

And in the field of AI there are a number of different techniques: you can do pruning, quantization. You can write your prompts in a way that lowers the overall emissions; these are called green prompting techniques, for example. Probably that's a whole session I could take, but...
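As an editorial illustration of the dynamic routing idea described here, the sketch below picks the lowest-emission model that still meets a quality bar. The model names, quality scores, and per-token emission figures are invented placeholders, not data from the episode or from any real routing product.

```python
# Hypothetical registry: (model name, quality score 0-1, gCO2e per 1k tokens).
# All values are made up for illustration.
MODELS = [
    ("small-2b", 0.72, 0.4),
    ("medium-13b", 0.85, 1.8),
    ("large-70b", 0.93, 7.5),
]

def route(required_quality: float) -> str:
    """Pick the lowest-emission model whose quality meets the requirement."""
    for name, quality, _grams in sorted(MODELS, key=lambda m: m[2]):
        if quality >= required_quality:
            return name
    return MODELS[-1][0]  # nothing qualifies: fall back to the most capable model

print(route(0.80))  # → medium-13b: a mid-size model suffices
print(route(0.99))  # → large-70b: fallback, no model meets the bar
```

A production router would of course estimate quality per task rather than per model, but the fit-for-purpose principle is the same.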

Beliz Kudat: But I really would like to come to AI and what companies can do about it, especially regarding energy consumption.

But first I want to dig in a little bit more on data centers, because we've been talking about data centers and everybody knows how they impact global energy consumption. So what innovative technological solutions can be applied in data centers? And can we specifically discuss software-based solutions here?

Sanjay Podder: Yeah, you know, a number of different things can be done when it comes to data centers, and you're right, data centers are mushrooming thanks to the widespread adoption of generative AI. In fact, some data points I was looking at: in Ireland, for example, data centers' share of metered electricity consumption quadrupled from 2015 to 2023, from 5% to 21%.

There are cities like London that are not allowing new housing because there is a challenge with power; the power is being consumed by data centers. Now, what kinds of solutions can one think about? First of all, not all data centers are the same, right? As I also touched on in my earlier response, we did a very detailed study for one of the hyperscalers to understand, if you move a workload from your own data center to a hyperscaler, how much emission reduction is possible: anywhere from 50 to 90%, for example. Again, there are several different factors, based on which hyperscaler, which location, and so on and so forth. But typically you will see that the PUE of hyperscalers, because they run at scale and for all the reasons I've mentioned, is far better, right?

That is one. Now, from a software-based approach perspective, when you design workloads to be run in a data center, you can make them much more carbon-aware. What do I mean by carbon-aware? Your backup jobs, for example, will run when there is renewable energy: they're scheduled at a time of day, or run in a location, where there's a lower carbon intensity of electricity, right?

So, you know, I'm also the co-founder and chairman of the Green Software Foundation. One of the things that we built and defined is the Carbon Aware SDK. You can, for example, use the Carbon Aware SDK to figure out how to make your systems run at a time when the carbon intensity of electricity is lower.

That is one thing. You can build systems that are more cloud native: serverless architectures, for example. That is another thing you can do. And among the software-based solutions that more advanced data centers use, they use AI, for example, to predict how they can lower the energy used for non-IT purposes, cooling, for example, right?

So they are able to optimize and distribute that energy. So there's a lot of use of AI there. 
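The carbon-aware scheduling idea described above can be sketched in a few lines: run a deferrable job (such as a backup) in whichever hour of a forecast window has the lowest grid carbon intensity. The forecast numbers below are invented; in practice a tool such as the GSF Carbon Aware SDK or a grid-data provider would supply them.

```python
def best_hour(forecast: dict) -> int:
    """Return the hour with the lowest forecast carbon intensity (gCO2e/kWh)."""
    return min(forecast, key=forecast.get)

# Hypothetical six-hour forecast of grid carbon intensity in gCO2e/kWh
forecast = {0: 420, 1: 390, 2: 310, 3: 280, 4: 350, 5: 430}

print(best_hour(forecast))  # → 3, the cleanest hour in this invented window
```

The same comparison works spatially: replace hours with candidate regions and the scheduler picks the lowest-intensity location instead of the lowest-intensity time.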

Beliz Kudat: I'm sorry I interrupted you, but since you just mentioned AI, I also want to ask my other question; maybe you would like to combine your answers, because we would really like to learn about the tools and methods that can be used to measure the energy consumption of AI and machine learning models, too. So maybe you can... 

Sanjay Podder: Yes, and, you know, this is again an evolving area, but I can tell you the state of the art, because a lot of new things are happening as we speak. When it comes to AI, you have to look at it very holistically, across its life cycle, right? In traditional AI, people were more worried about training, whereas in generative AI, people are now more worried about inferencing, because that's where more emission is happening.

Now, in each of these cases, how do you really measure the emissions happening, or the energy use, right? If you are deploying AI on cloud infrastructure, the first thing you can use is the carbon accounting tools that each of the hyperscalers gives you, right? And then you can use those to figure out how much emission is happening and how you can lower it.

Because you can only reduce what you can measure. And at the end of the day, AI is also a workload, right? So that's on the cloud side. And then there are techniques more on the software side that have come up, like the very recent ISO standard by the Green Software Foundation, called the Software Carbon Intensity specification, that can also be used.

Then there are a whole host of open source tools and Python libraries that can be used. There is the Cloud Carbon Footprint (CCF) framework, and again, the Green Software Foundation has created the Impact Framework. And I'm also coming across a lot of open source APIs that can tell you how much carbon was emitted for every prompt you just made, right?

And there are a lot of startups coming up in this space. So this is a very evolving field, but it's producing a lot of open source solutions, a lot of solutions from big tech players and from the startup community. It's a big opportunity for the startup community. So that's what you have:

a whole host of tools. At the GSF, which I chair, we are also currently focused a lot on SCI for AI. That's the version we are working on. 
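The Software Carbon Intensity specification mentioned here defines a simple rate: SCI = ((E × I) + M) per R, where E is energy consumed (kWh), I is the grid carbon intensity (gCO2e/kWh), M is embodied emissions (gCO2e), and R is the functional unit, for example API calls served. A minimal sketch, with invented input numbers:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """SCI = ((E * I) + M) per R, in grams of CO2e per functional unit."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Hypothetical service over one reporting period: 12 kWh consumed,
# grid at 400 gCO2e/kWh, 1,200 g embodied carbon amortized to the
# period, and 10,000 API calls served as the functional unit.
print(sci(12, 400, 1200, 10_000))  # → 0.6 gCO2e per call
```

Because SCI is a rate rather than a total, it stays comparable as a service scales: halving the energy per call halves the score even if traffic doubles.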

Beliz Kudat: Okay. So, of course, there's the issue of ESG goals, especially when we're talking about sustainability. How can promoting the sustainable use of technology contribute to companies achieving their ESG goals, and to attracting talented employees at the same time?

Sanjay Podder: I think this is the best question, right? Why should we even do it? You know, there's the bigger climate change and sustainability cause, so it does appeal to all the talented youngsters who are entering the field, right? The fact is, they want to work for organizations that are serious about sustainability issues.

So, that's the talent side. I'm aware of businesses that are weaving sustainability messages into their corporate communication because they want to tell their employees and stakeholders what they are doing about it, and employees want to work for such organizations.

There's a lot of research in that area. The other important research, which in fact we did at Accenture, was on the correlation between sustainable technology, ESG scores, and business performance. What we observed in our study was that organizations that have a strategy on sustainable technology correlate with better ESG scores compared to their peers in the market.

And the other interesting fact was that businesses with better ESG scores outperform their competitors by 2.6 times in the total shareholder value they return, right? So, even if you are not as concerned for the planet as you should be, you still have a real, tangible reason: your business benefits when you have a better ESG score, and your ESG score benefits when you embrace sustainable technology.

So there's a clear correlation, and obviously your employees are asking for it. I think that is the other big driver for decision makers. These things cannot happen unless they come from the top. It's a culture change, right? All the things that we discussed. It is a very strategic imperative, and that's what sustainable technology is all about.

Beliz Kudat: Sanjay, thank you very much. It was a pleasure having you in our summit. And thank you for all these valuable insights. 

Sanjay Podder: Thank you. The pleasure is all mine.

Sanjay Podder: Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.


Sustainable IT and Supply Chains with Niklas Sundberg
Host Sanjay Podder is joined by a guest who embodies what it means to lead with purpose in the digital age. Niklas Sundberg is the Senior Vice President and Chief Digital Officer at Kuehne+Nagel, one of the world’s leading logistics companies, with a mission to drive sustainable change across the supply chain industry.

Niklas is a trailblazer in sustainable IT, author of Sustainable IT Playbook for Technology Leaders, and a respected voice on the role technology plays in building a sustainable future. His work goes beyond the logistics sector to shape the conversation on how technology leaders can achieve climate goals and navigate the challenges of data and energy efficiency.

They explore how Kuehne+Nagel’s Vision 2030 aligns with sustainability initiatives and the Green Software Foundation’s Climate Commitments. From data storage practices to carbon-aware computing, they uncover what it takes to create a truly sustainable digital ecosystem. 

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Niklas Sundberg: LinkedIn | Book | Website

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Tackling AI’s Climate Change Problem | MIT Sloan Management Review [16:00]
  • E-waste challenges of generative artificial intelligence | Nature [23:40]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:
Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that have helped drive results for business and for the planet.

I am your host, Sanjay Podder.

Hello, welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development from the view of the C suite. I am your host, Sanjay Podder.

Today, we are joined by a guest who embodies what it means to lead with purpose in the digital age. Niklas Sundberg is the Senior Vice President and Chief Digital Officer at Kuehne+Nagel, one of the world's leading logistics companies, with a mission to drive sustainable change across the supply chain industry.

Niklas is a trailblazer in sustainable IT, author of the Sustainable IT Playbook for Technology Leaders, and a respected voice on the role of technology in building a sustainable future. His work goes beyond the logistics sector to shape the conversation on how technology leaders can achieve climate goals and navigate the challenges of data and energy efficiency.

Today we will dive into how Kuehne+Nagel's Vision 2030 aligns with sustainability initiatives and the Green Software Foundation's climate commitments. From data storage practices to carbon aware computing, we'll uncover what it takes to create a truly sustainable digital ecosystem. Niklas, thank you for joining us on CXO Bytes.

Welcome to the show. Please can you introduce yourself?

Niklas Sundberg: Thank you, Sanjay. Very happy to be here. Yes, I'm Niklas Sundberg, Senior Vice President and Chief Digital Officer at Kuehne+Nagel, and I'm very much looking forward to our conversation here today. I'm also a member of the board of SustainableIT.org, which is a sister organization to the Green Software Foundation, and we do some work together as well to advance the field of sustainability

within technology. So really looking forward to our conversation to hear and also share your journey into this.

Sanjay Podder: Thanks Niklas. So my first question Niklas, you have just come off the back of Kuehne+Nagel's first ever Tech Summit, Beyond Boundaries; where a lot of focus was on the role of AI and innovation in logistics. Can you share some of the AI driven solutions Kuehne+Nagel is implementing and how they are transforming the logistics landscape?

Niklas Sundberg: Sure, happy to. It was a great internal event where we discussed not only AI, but also how we unlock data, traceability, asset tracking, real-time ETAs, and so forth. If I can just share some examples we are working on that I can talk about publicly: one would be how we work with customer service, for example, using an agent to respond to customer inquiries, both internally and externally. Here we have scaled that out to a number of agents, but now we're looking to take the next step and scale it out to a population of about 10-11,000 people.

So really a mass adoption at scale, which I think is quite tremendous. And this type of use case can also be scaled across other types of functions like HR, finance, and other types of business units. Another one, which is maybe not that obvious, but extremely powerful in our business, where data is extremely important.

Data quality is paramount when you speak to our customers, because everybody wants to automate the whole supply chain flow as much as possible. What we talk about is e-touch, where we want to make the processes as streamlined as possible and run without human intervention. And here we actually see that we can use gen AI to clean up our data and keep it clean.

And we see that we actually get better results than a human would: maybe we get about 70 percent quality with a human correcting and cleaning data, but with a gen AI agent we get north of 95 percent data quality, and it also helps us stay clean. And obviously this is very cost efficient as well.

So we see a cost improvement of 95 percent on this use case. Another one I think is quite exciting is estimated time of arrival. Our customers share data with us so that we can give them better ETAs for when things will arrive at a port, at a warehouse, and so forth.

This means customers get better staff planning, for example; they don't have to pay for excessive overtime, and they get a better flow of goods into their warehouses. So this is something that really benefits our customers, so to say.

A fourth one, not as related to AI, would be how we do asset tracking across the world, because I think this is extremely important. Where are my goods? Have they arrived at the airport? Have they passed customs? So not only looking at it from a wide perspective, whether it has arrived at the airport, but also looking at the opportunity of geofencing, for example: a really precise identification of where the goods are, so to say.

So, really excited about what we're doing with our digital ecosystem, and a lot of our customers are quite excited about this too. Just to put this a little bit into perspective: when I looked at the numbers last year, we were exchanging roughly one and a half billion messages per year with our customers and partners.

So that's an extreme number, and we're actually growing continuously by 30%. The digital ecosystem is extremely powerful: we can integrate seamlessly with our customers and make their operations run more smoothly.

Sanjay Podder: Excellent. And I think the last point you made, about the numbers, shows how you're scaling up your whole digital ecosystem, right? It makes me wonder about the sustainability implications, because I know Kuehne+Nagel's Vision 2030 is to build a trusted and sustainable supply chain. The question that pops up in my mind, Niklas, is: when you use all these wonderful technologies, generative AI, you spoke about customer support, accuracy of information;

some of the challenges of technologies like gen AI are, for example, hallucination. How do you make it bias-free and explainable when you have to say exactly where a good is in the supply chain? So there is a lot of risk that comes with gen AI, what we also call responsible AI risk. I would like to hear a little bit more from you on how you are making this wonderful new transformation responsible, so that there is less bias and more accuracy. You spoke about the data; is the data free of bias? So, can you educate the audience, and me, a little bit more on how you're thinking about this dimension?

Niklas Sundberg: Yeah, for sure. So I think it's important, regardless of the technology that you work with, that you're not trying to go out and find a problem to fit the technology, so to say. It's important to identify the problem that you're trying to solve. So, within our responsible AI policy, we have adopted nine principles internally that we are really targeting and communicating widely.

It's obviously about data privacy; it's about making sure that we put a human in the loop; it's that we build AI in a sustainable way, that we are conscious about energy consumption, water, and so forth. The key thing in adopting this technology is really to think about the problem. What problem are we trying to solve?

And then, secondly, the people. We always need to make sure that we have a person in the loop regarding the technology when we buy, because I think we are also at a nascent stage with gen AI. We talked about hallucination. We also need to make sure that we continue to build trust in this, and I think this will take some time.

So we really enforce that point strongly: we also need a people aspect to this. And the third thing is that we need to be principle-driven, coming back to the nine principles we have derived as part of our responsible AI policy. So to really sum up, it's three P's, which is quite easy to remember:

Problem, people, and principle.

Sanjay Podder: Wonderful. Easy to remember the three P's, and I will probably put the S outside the P's, which is about the next question, something you are so passionate about. I love the Amazon bestseller that you have written, the Sustainable IT Playbook. Very often when we talk about responsible AI, we forget the environmental impact, and that's something both SustainableIT.org and the Green Software Foundation champion a lot, right? The demand on energy, the demand on water resources, the emissions, which are obviously going to snowball with the widespread adoption of AI, as we see. But we all believe there are steps one can take to bring these issues under control.

So, going back to your book, and to the first of the three P's, the big problems you are trying to solve: how are you bringing in the sustainability dimension, the environmental impact, and putting it at the center? Also, given you're in Europe, with the EU AI Act and a lot of new regulations coming up, I would really like to see how this is translating into practice, from the perspective of a practitioner right at the top.

Niklas Sundberg: Yeah, I think when I wrote the book a couple of years back, that was in the nascent state of gen AI. At that point, the narrative around sustainable IT was a bit easier: if you can run your code more efficiently, use your hardware more efficiently and for a longer time, use data centers powered by renewable energy, and so forth, then that's also a positive case on IT cost. So if you are efficient in reducing IT cost, you can also be quite efficient in improving the sustainability parameters, if you are aware of the different levers, so to say. I think the challenge now, if you fast forward a couple of years, is that the gen AI race is really powered by three or four powerhouses in the space.

And that forces everyone to put more pressure on these larger organizations, the Microsofts, Googles, and Amazons of this world. Unfortunately, what we see is that Google has increased its emissions by 48%, and Microsoft has increased its emissions by 40%.

So consider the promise that Microsoft, for example, made back in 2020: by 2030 we're going to be carbon negative, and by 2050 we're going to remove all of the CO2 that we have emitted since the founding of the company in the 1970s. I think they are really struggling to meet this commitment now, which is becoming a bit of a problem.

And also, to take another example with water, we see that Microsoft, in the last two years, with the build-out of OpenAI infrastructure, has increased water usage by 14 million cubic meters. Just to put that into perspective, 14 million cubic meters is the same amount that Reykjavik, the capital of Iceland, with a population of 300,000 people, uses in one year. So obviously water is becoming a big issue as well. We already see that in the U.S., in Iowa, for example, where Microsoft has put a lot of their data centers, there is a competition: do we build out agriculture and farming, or data centers? So I do think we have some major challenges ahead of us. But then on the positive side, we have also seen some black swan events, like the energy crisis in Europe. That wasn't really planned, so to say, but it came because of the war in Ukraine, and everybody had to rethink their energy security. So within a couple of years, you really saw the build-out of a lot of renewable energy sources. A lot of companies were rethinking their energy security and moving toward renewable energy sources. And I think that needs to continue to happen, because otherwise we're really going to have a massive problem on our hands.

When I wrote the book two years ago, nobody in the U.S. was really talking about energy consumption there, but now there are numbers stipulating that if we're not careful, by 2030, 25 percent of the energy in the U.S. is going to go to data centers.

So I think we are starting to build up a massive problem, and we really need to find cross-industry solutions to start building these things out, so to say, and a good way to build this in harmony. Because I think the genie is out of the bottle when it comes to AI; you're not going to be able to stop it.

But I think we need to find more sustainable ways to build the infrastructure around this. I do think we also, unfortunately, need some legislation, and I think we need to bring some more awareness. Last year I wrote an article in MIT Sloan Management Review where we put out a number: one ChatGPT call is equal to 100 Google searches, for example. So obviously it's massively consuming, not only when you train the model, but also when you're actually consuming the model, so to say. So I think we need to bring awareness, and we need some more legislation around how we build out sustainable infrastructure, not only selling the promise of what we can do with AI. I think we have a very great responsibility to make sure that we build sustainable digital infrastructure.

Sanjay Podder: Absolutely. It looks like a big reason for you to write the next version of your book. You know, you can have a whole chapter dedicated to sustainable AI, sustainable AI training, inferencing, fine tuning, and this is indeed a big challenge, you know, without any doubt, gen AI is going to transform our business in a very positive way, but we have to manage this risk at the same time.

Going back to your book, Niklas, you spoke at that time about three pillars in the context of IT strategy: sustainability in IT, sustainability by IT, and IT for society, right? Do you want to talk a little bit more about them, especially given some of the recent challenges you spoke about?

You know, how do organizations, how do the chief information officers and chief digital officers, look across these pillars as they craft sustainable IT strategies for their organizations? Any thoughts?

Niklas Sundberg: Sure. So I think, you know, what I also mentioned in my book is the EU CSRD, which now comes into effect. By 2025, you need to start reporting on your scope one, your scope two, and also your scope three, and, for example, for a company like Kuehne+Nagel, 98 percent of our emissions are in scope three, which also means that we are reliant on our providers, our vendors, to provide us with reliable data. And I think here it also needs to mature within the IT realm to advance this, so to say. I don't think that reporting will be perfect, but I think it's a good starting point to really start putting the headlights on these topics. But what I still would recommend is that you at least establish a baseline within the context of sustainability in IT, in terms of your own footprint, and look at, okay, what are the bigger levers that you can pull to become more sustainable? Is it in your hardware? Is it how you develop your software? Is it how you leverage cloud versus data centers? Can you leverage more automation? Is there an opportunity to relocate your workload to renewable data centers or cloud provider locations where they have renewable energy?

So I think it's important to not get overwhelmed, because you can easily find 15 different great initiatives, so to say. But pick the three to four, based on your baseline, that really make a difference, and then you will probably see that these can make a 70 to 80 percent impact on the total scope of your emissions, so to say.

Sanjay Podder: In your present role, as well as earlier as CIO of Assa Abloy, what are those top three levers you found very helpful, you know, where people should start? It may vary by organization, but are there any particular ones you would like to highlight, like the low-hanging fruit we often miss?

Niklas Sundberg: I think, you know, it's important to also understand that every company has a different starting point, so to say. But if you work with large companies like the Assa Abloys or Kuehne+Nagels of this world, you're always in a mix of having cloud and being on premise. So I think that's a great opportunity to look at your landscape and say, okay, how can we optimize this? For example, what makes sense to put in the cloud?

Where can we put this more into containers? Where can we re-architect in terms of function as a service, and so forth, where the code only consumes energy when it's run, rather than having a monolith application that idles for a very long period of time but is still consuming energy, so to say?
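The intuition behind that function-as-a-service point can be made concrete with a toy calculation. All the power and traffic figures below are illustrative assumptions, not numbers from the conversation: a small always-on service drawing a steady 20 W, versus the same logic run as a function that only draws power while it executes.

```python
# Toy comparison: always-on monolith vs. function-as-a-service.
# All power and traffic figures are hypothetical, for illustration only.
HOURS_PER_MONTH = 730

# Monolith: a small VM drawing a steady 20 W whether or not it serves traffic.
monolith_wh = 20 * HOURS_PER_MONTH                 # ~14,600 Wh/month

# FaaS: 1 million invocations/month, 200 ms each, at 40 W while executing.
invocations = 1_000_000
exec_hours = invocations * 0.2 / 3600              # ~55.6 hours of actual compute
faas_wh = 40 * exec_hours                          # ~2,200 Wh/month

print(f"Monolith: {monolith_wh:,.0f} Wh/month")
print(f"FaaS:     {faas_wh:,.0f} Wh/month")
print(f"Energy saved: {1 - faas_wh / monolith_wh:.0%}")
```

At this hypothetical load the function-based design uses a fraction of the energy; the crossover point depends on real traffic, per-invocation overhead, and the platform's own idle infrastructure.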

So, I would really recommend looking into your data centers versus your cloud, where the opportunities are. I would start measuring if you have a lot of internal software development. I would look to measure your internal product teams and make it a little bit of gamification here, to really show how efficiently your code is running, how you can optimize it, and really bring that awareness, to really put the power in the hands of the software engineers. I think that's a very important message. And then the third thing is obviously the e-waste, because for any company of a larger size, a company of 30,000 people with 30,000 assets and a refresh cycle of three years, only that hardware for laptops is 50,000 tons of CO2 over a 10-year period. So here, obviously, there's a lot to do as well in terms of working in a more circular way: work with reputable partners that can help you refurbish and upgrade, and then either you can sell that hardware or you can actually donate it.

And also to instill that third principle, IT for society, or tech for good. At the moment, for example, we are running one of those programs in Portugal, where we see a huge need for teenagers who go to school but don't have access to computers at home, for example. So I think it's also important to think about the democratization of technology.

So that ChatGPT, the technology that we are developing, is not only for the privileged few; we also need to make sure that we bring it in a democratized way. I think the IT for society piece is also important to think about, to see if we can donate hardware, for example, because if we keep the hardware alive for another year, if we calculate on a three- to four-year life cycle, it's another 25 percent reduction in CO2 spent per hardware asset.
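Both of Niklas's hardware figures, the 50,000 tons over 10 years for a 30,000-laptop fleet and the 25 percent saving from a one-year life extension, are consistent with a simple model. The per-laptop embodied-carbon number below is an assumption chosen to match the quoted total; it is not stated in the conversation.

```python
# Back-of-envelope check of the embodied-carbon figures quoted above.
# Hypothetical assumption: ~500 kg embodied CO2e per laptop.
EMBODIED_T = 0.5        # tonnes CO2e per laptop (assumed)
FLEET = 30_000          # laptops
REFRESH_YEARS = 3
HORIZON_YEARS = 10

# Total embodied CO2 purchased over the horizon (~3.3 refresh cycles).
total_t = FLEET * (HORIZON_YEARS / REFRESH_YEARS) * EMBODIED_T
print(f"10-year embodied CO2: {total_t:,.0f} t")    # 50,000 t

# Annualized embodied CO2 per asset on a 3-year vs. 4-year life cycle.
reduction = 1 - (EMBODIED_T / 4) / (EMBODIED_T / 3)
print(f"Saving from extending life 3 -> 4 years: {reduction:.0%}")  # 25%
```

Note that the 25 percent figure is independent of the assumed per-laptop number: stretching any fixed embodied cost from three years to four cuts its annualized share by exactly a quarter.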

So we really have a great benefit in keeping the hardware alive, because this is also a massive, massive challenge, with 60 million tons of electronic waste every year. It's comparable to the weight of the Great Wall of China, when we talk about 60 million tons of electronic waste, and it's the fastest-growing waste stream in the world.

So we really need to curb this. And since we're talking about AI, I recently saw a study in Nature as well, which suggests that by the end of this decade, by 2030, the hardware just from AI could contribute 5 percent of that electronic waste. So I think we also need to make sure that we think not only about the energy and the water, but also about circularity in terms of how we manage the hardware and its life cycle.

Sanjay Podder: Excellent. And I'm glad you brought out the whole point of, you know, gamification and building a culture among developers, because, you know, this is not just about writing efficient code or using less hardware. This is about organizations embracing a culture of sustainable IT, of green software practices, and that needs to come right from the top.

And that's the purpose of this podcast as well. I also like the fact that you highlighted e-waste, because very often we lose sight of embodied carbon. We are all thinking about operational carbon emissions, and the embodied carbon in some cases may be larger than even the operational emissions. And therefore everyone needs to take care of how long you use that hardware and, you know, how you make your procurement practices much more sustainable, and stuff like that. So these are all great, great insights. Finally, thank you, Niklas. You know, I'm sure for all practitioners, these are going to be very good insights to get started on this journey, or even to fine-tune the journey.

I'm looking forward to the next version of your book. I was very delighted to write a small piece in your earlier book, but would love to contribute to the sustainable AI part of the next version. And thanks for everything you're doing to accelerate the adoption of sustainable IT with the playbook that you wrote, which for me is one of the finest books I have read on the topic.

Thanks for joining us today. I hope to see more progress in this area with your thought leadership and SustainableIT.org, which is again a very fine organization that the Green Software Foundation loves to work with. Thank you so much. So, until next time, this is the end of this episode, but I look forward to meeting you all in the next episode of CXO Bytes.

And, just as a reminder, everything that we discussed will be linked in the show description below the episode. So see you again in the next episode of CXO Bytes.

Niklas Sundberg: Thank you.

Sanjay Podder: Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.

 





Hosted on Acast. See acast.com/privacy for more information.

11 months ago
27 minutes 7 seconds

CXO Bytes
The Future of Green Payments with George Maddaloni of Mastercard
In this episode of CXO Bytes, George Maddaloni, CTO of Operations at Mastercard, joins Sanjay Podder to discuss how Mastercard is driving innovation in sustainable technology through green software practices. George shares insights on the company's approach to reducing energy consumption in software development, the role of AI and data in enhancing sustainability, and the importance of fostering a culture of green software from the top down. He also highlights Mastercard’s collaboration with the Green Software Foundation and how the organization is helping to shape their ESG goals. From edge computing to responsible AI, George provides a comprehensive look at how Mastercard is balancing technological advancement with environmental responsibility.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • George Maddaloni: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • Quantum cyber threats are likely years away. Why — and how — we're working today to stop them | Mastercard 
  • Events  | Mastercard
 

If you enjoyed this episode then please either:

  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
  • Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder:
Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting Chiefs of Information, Technology, Sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Hello. Welcome to another episode of CXO Bytes, where we try to get into the world of sustainable software development from the perspective of the C-suite. Today, I am extremely delighted to have a special guest, George Maddaloni. George is the CTO of Operations at Mastercard.

I would like to talk to George today about how Mastercard, a leading payments giant, is navigating the intersection of technology and sustainability. Also, Mastercard has been a member of the Green Software Foundation, so what has been the role of the foundation in helping shape Mastercard's approach to sustainable technology? And finally, we'll also talk a little bit about the future of sustainable technology in the context of financial services. George, welcome to the podcast.

George Maddaloni: Thanks for having me. Really appreciate it and looking forward to it. 

Sanjay Podder: Absolutely. And I'm so delighted about the contribution George is making in the field of sustainable technology.

It's very rare to find a CIO who is so passionate about this topic, so this is going to be a great conversation. Before we dive further, I would like to give a reminder that all the things we discuss will be linked in the show notes below the episode.

So George, why don't we start with a few words from you about Mastercard and about yourself?

George Maddaloni: Yeah, sure. So, George Maddaloni, CTO of operations for Mastercard. And maybe I'll start with some things that probably most people know about Mastercard and then lead into a little bit that, from a technology perspective, you don't.

Mastercard connects billions of individuals and businesses to the digital economy and makes them an equal, inclusive part of that economy, and really views its mission as empowering people as well as powering economies, so that anybody out there, whether they're paying their bills or buying their groceries, has that capability. And from a technology perspective,

that means a lot in terms of what Mastercard does on a day-to-day basis. Every product we offer is a technology product. And for my team, that goes back to the mission that we have, which is to provide reliable, scalable, secure, and sustainable technology platforms to continue to transform the payments industry.

The team itself runs a vast network that connects those billions of people to hundreds of millions of acceptance points, thousands of financial institutions, and back to many data centers around the world, to make all of that happen on a global basis in a very low-latency manner, because time matters in our business. And I really use this phrase within Mastercard that we're running the cardiovascular system of the company,

and it's why our team is one of the larger technology divisions in the company and has a really front-row seat to all the innovation, all of the product development that occurs. It also helps enable all of the employee technology across the globe for the company. So it's a great job. Something I'm really passionate about is technology, and it's great to have the opportunity to lead such a great team.

Sanjay Podder: Fantastic, George. And there cannot be a better use case for sustainability, given what you are trying to do at enterprise scale. And I'm curious. I have seen you as a person who deeply cares about sustainable technology. You have recently joined the board of SustainableIT.org, and Mastercard has been a member, a very important member, of the Green Software Foundation. What makes you feel that this is an area that is important, and that you are particularly passionate about? What drives that passion?

George Maddaloni: Yeah, I think, first off, the impact that technology has is great, but you can absolutely see the growth that is occurring in the technology landscape; at no time has technology been more important to everybody's day-to-day life than now. And as we think about our ESG goals as a company, all across ESG, Mastercard is a place that really deeply cares about those goals. We've actually put both executive and our own compensation goals around that. And it's important, as I said, that we're not just here for the business, but we're here for the world, and for the impact that we're making on the world across those goals.

So as a technologist, it's kind of natural to understand what's happening from an energy perspective.

And for me, this became a little bit of "how are we managing consumption in a more efficient way as things are growing?" And a little bit of a platform to help my team understand that consumption and make sure that we're making the right choices.

So I think it was both: "hey, at a macro level, this is having an impact from an energy perspective," and even at a micro level, in day-to-day decision making, how can we use that lens to think about the consumption we're about to put forward for a project or a refresh or software development, and what is that going to mean in terms of our overall footprint?

Sanjay Podder: Fantastic. George, it just reinforces the conviction we had in the Green Software Foundation that while we focus a lot on developers and how we enable them to write greener code, what is actually required is a culture of green software, or sustainable tech. A culture that comes right from the top, and you kind of reinforce that, because unless it comes from the top, sustainability will never be a first-class concern in your software development process. So that, in some sense, is the essence behind this podcast series: to articulate what leaders like you are doing to make this real. It should not be academic; it should be actionable, you know? And this is great to hear from you. Very recently, I read a nice article from Mastercard, your Technology Trends 2024, extremely insightful. And I like the three areas that you have articulated: AI, computing, and data. You also explored the confluence of AI, computing, and data and how that's going to reshape commerce, and that's very powerful with all the examples. Now, as a consumer, I'm also a user of Mastercard payments, and I'm thinking, "how does that translate into the kind of innovations you foresee happening in the way we do payments as consumers?" Right? And there was a very interesting statement in the report which I really liked.

It said, in the context of computing, that it's not about how computing will get more powerful; it's about how you distribute that power in an intelligent, trustworthy, and sustainable way.

And that's again the interplay of sustainability and technology. So the question that I had in my mind is, as a consumer, how do I see that innovation playing out in the payment process? And how do you balance, therefore, that innovation with the sustainability dimension, given you're thinking about AI, data, and computing?

George Maddaloni: Yeah, I'll start actually where you started, back to the culture,

and it does start at the top. The company was one of the first payments organizations to really put forward an aggressive net zero goal, and that really started the organization rallying around this particular topic. But with every piece of technology and those investments that we're thinking about, we're back to this principle of "are we empowering people?"

And a lot of times we have to think about, especially in the world of AI and data, what are our principles and how are we going to approach this specific innovation? You can't enable AI without a key focus on data; I think everybody knows that, especially in the generative AI world: the more data you have, the more effectively that model is going to run. So we established a set of data-handling principles: one very focused on eliminating bias, another very focused on making sure it's used from an inclusive perspective, and of course consumer protection, which is at the forefront of a lot of the regulation that we're subject to, and that we also help influence. So when you're swiping your Mastercard, as you were gracefully articulating, the power of the network includes AI capabilities. Those have been around for over a decade now, and we've actually won awards for a particular AI tool that we've developed that focuses on fraud detection.

And, you know, we've seen the improvements that we're able to make with new techniques in the AI space, specifically generative AI techniques that we can employ in those models, and they were already effective. I think everybody over the past 10 years has experienced some "hey, is this fraud, is this not?" Or actually something that was detected and you get a call saying someone is using your card.

So those capabilities have been there for a long time, but using these generative AI techniques, we've been able to see, on average, a 20 percent plus improvement in fraud detection, and an even better improvement in the elimination of false positives, and we all want that time back.

So from our perspective, that's sending billions of dollars back into the economy, into people's pockets, and out of criminal hands, and also preventing, or at least discouraging, people from stealing identities and stealing that data.

So, when we're approaching these topics, sure, there's a sustainability topic, but there's also an "are you doing the right thing?" perspective.

Are you holding to your principles as you're deploying this new technology? And I think that area specifically has been a core example of where that mission really bears fruit.

Sanjay Podder: Fantastic. And I did read about your solution for detecting fakes, scoring and approving billions of transactions. And that caught my attention, because the moment you start using generative AI, and you also, I think, use recurrent neural networks, and you use it at scale, you are actually scoring billions of transactions.

George Maddaloni: Yeah, 140 billion last year alone, with a growth rate on top of it.

Sanjay Podder: But what did catch my attention was the fact that we all know there was a time when we were all concerned about the energy use and emissions during the training of generative AI and AI, but now, post generative AI, we're also concerned about the inferencing part, because of every inference you make. There have been a lot of interesting studies, one from Hugging Face that says 30 to 40 inferences, for example, consume half a liter of water, and things like that, right?

So there is an environmental aspect when we talk about training, refining, or inferencing the models. So clearly, in addition to bias, in addition to ensuring data privacy and various such responsible AI practices, a new concern is sustainability, which is also being highlighted in the EU AI Act.

So I was wondering, how does one address that aspect, especially when you're talking about large volumes, billions of transactions?

George Maddaloni: Great question, and actually, you hit on some things that are core to how we train our models. Large data models, especially language models, consume a lot very quickly. Payments language is not quite like the English language, first and foremost. So one, I think you've got to think about, from a training perspective, what you need to focus on, and not bring in too much information. Two, inferencing is absolutely more intense, rightfully so. I'd even double down on that and say inferencing in a real-time business is an order of magnitude different. So this comes back to a topic we talk about a lot in the practices, you know, with the Green Software Foundation and other industry partners that we've been discussing with: how are you engineering this stuff, and what decision are you making around what you're about to consume and bring into the fold? Because A, it could be very power hungry, but B, it could be very costly too. So our business wants us to make that decision at the same time.

So these things usually go hand in hand. But the practices that we focus on first and foremost from a data center perspective the traditional PUE and how we're engineering the efficiency of our data center or where we're running that workload is absolutely critical and we're constantly looking at that and improving on that.

But two, there are these practices around engineering for sustainability. For our technology team, we've created an ESG guide for technologists that covers the things they need to think about when considering a new technology. So from a procurement perspective, what are the things we want to see from a new supplier, be it hardware or software? What kind of information can we glean from what we're about to use? Because for a lot of these tools that we're referring to, there's some heavy software underpinning them that's making that consumption happen. And then number two, from a software practices perspective: development of code, use of data. Back to the thing I was referencing before: you don't need to bring the whole model, right? You need to be focused on your actual deployment. And then think about right-sizing the workload for either training or inferencing, and make those decisions so that you can create a responsive application and something that's not going to over-consume CPU and storage, these things that we care about when it comes to actually deploying the technology itself.

Sanjay Podder: Absolutely. Some great ideas. And, George, I also read in the report about your views on edge computing, and you referred just now to real-time, low latency. So how do you see edge computing playing a role in your operations, and when you design with edge computing, how do you make sure that the processes are therefore getting more energy efficient, for example? Because with edge computing, and with cloud computing at the other end, you can now distribute the workloads in a very interesting way. So how do you bring all these ideas together, with sustainability in the center, to make the systems more energy efficient, you know, emit less carbon?

George Maddaloni: It's the nature of Mastercard's technology footprint. Edge computing is a big part of it, and it's something that I realized when I first got here, four years ago. You look at those thousands of endpoints that we run, which connect the largest and the smallest, all financial institutions around the world.

And it can, by and large, be thought of as a large edge computing network. We make some decisions there, and we make some back in a data center or in a cloud, depending on the capability that we're looking for, the product that we're running. And so I think there are two things that we think about in this landscape.

One is, can we make that edge computing more efficient? And we've actually explored that very carefully over the past year, because it is such a critical part of the whole overall technology, that cardiovascular system that I referenced. And via the advances that you can now make in a smaller footprint of equipment, we've easily been able to run that 25 percent more efficiently from a power perspective, using a new generation of processors and capabilities on those servers. You think about a small rack-mounted server, and we can do more with it. So you can get 25 percent more capability out of something that's 25 percent more efficient, right? That is huge capability that you can now enable at the edge, to put more decisions there. And the better part of that, as you think about it, is you're not bringing all of that decisioning and forcing that CPU back home. So you're eliminating network, you're eliminating data center CPU cycles, and things of that nature.
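Taken together, those two 25 percent figures compound. A quick sketch, normalizing the old hardware to 1.0 for both throughput and power (the absolute numbers are not given in the conversation):

```python
# Energy per unit of work when throughput rises 25% and power drops 25%.
# Old hardware is normalized to 1.0 throughput and 1.0 power (hypothetical units).
old_throughput, old_power = 1.0, 1.0
new_throughput = old_throughput * 1.25   # 25% more capability
new_power = old_power * 0.75             # 25% less power draw

energy_per_txn_old = old_power / old_throughput   # 1.0
energy_per_txn_new = new_power / new_throughput   # 0.6

saving = 1 - energy_per_txn_new / energy_per_txn_old
print(f"Energy per transaction: {saving:.0%} lower")   # 40% lower
```

So the combined effect per transaction is larger than either headline number alone, before even counting the network traffic and data center CPU cycles avoided by deciding at the edge.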

So I absolutely think getting that right, I mean, it is an engineering equation that you need to solve, but getting that right is something that we focused on, and I think it is a bit of the core of Mastercard technology too.

Sanjay Podder: Wonderful. George, moving to a slightly different topic: Mastercard has always been known for its programs for financial inclusion, you know, empowering communities. I'm sure these initiatives are backed by intelligent digital solutions. When you build these solutions, how is sustainability factored in? Any thoughts there that you'd like to share?

George Maddaloni: Yeah, sure. I think everybody globally is starting to get interested in their own personal consumption. So at a payments level, we've actually begun to enrich data: not only do we enrich the data that we share in our network around fraud, as I mentioned earlier, but we also have a service that some financial institutions take on, in "hey, what is the sustainability information for this particular purchase people are making?" And we've partnered with various data sources that are out there to provide that information, because, as you know, we're very good at providing the technology as we think about that sustainability scoping, or the CO2 emissions aspect and all that. So we're bringing that data through as best we can, and it's been a successful launch of that sustainability product; certain customers and financial institutions are using it.

A lot of our capabilities continue to focus on "what are we doing for small and micro businesses to give them competitive capabilities and help them grow?" There are countless examples of that throughout the world, giving small, micro businesses that ability; great examples going on in India as well, in terms of Community Pass programs, things of that nature.

Sanjay Podder: Great. And, George, I think it has been some time, maybe a year, since you joined the Green Software Foundation. I'm very curious to know: has the foundation been able to influence some of your thinking as you are trying to bring sustainability to the center of the way you do technology at Mastercard? Anything you would like to share?

George Maddaloni: Yeah, the partnership with the Green Software Foundation has certainly helped us. I referenced before this ESG guide for technologists, and we point people to the training program that the Green Software Foundation created with the Linux Foundation.

I've received the certificates; I was one of many people at Mastercard who went through that training, and I think it's great because it helps people understand the impact of the technology that they're deploying. There were several great examples provided from other organizations that are doing great work.

And I think any company gets very focused on what they're doing, so it's great to have that outside-in exposure. And we get that through a lot of the events that we have; we've hosted some before.

We have a software engineering guild, or practice, within Mastercard that has been participating in the Green Software Foundation. They have a special interest group within Mastercard where people can ask questions, and the members who have participated in the sessions with the Green Software Foundation can help answer those questions. So while we have, I think, over 6,000 software engineers at Mastercard, all of them can be, you know, feeding off one another in terms of the knowledge that they're getting.

And then by hosting some of the events, we have some of those players show up and hear from the other people that come in. I think those are great examples, and I know we've got one coming up in October.

Sanjay Podder: Absolutely, we are all looking forward to the summit in October. George, it has been a wonderful discussion, but there's one question I like to ask all my guests.

What would be your bite-sized advice to tech leaders as they try to balance innovation and sustainability?

George Maddaloni: I think three things; I'll try to keep them bite-sized. You know, first of all, it is extremely important that we continue to collaborate within communities like the one we're talking about here, and others, to understand how to work on sustainability and green practices.

This is a complex problem that I don't think anybody has figured out out of the box.

So that's number one: keep collaborating. Number two is get focused on your data. You know, like we just talked through, our tech footprint is a lot different from everybody else's tech footprint. You've got to dive into your own data, really get focused on that, and understand what's happening. And then number three, I go back to consumption; focus on consumption. This is a great parallel to what you're spending, what you're producing, what you're consuming. And I've seen that when you bring that to the table, the culture in your organization can really benefit from it.

Because that's something that everybody wants to understand and can start to make decisions off of. So it's: collaborate, focus on the data, and then bring it to the culture.

Sanjay Podder: Thank you, George. Thank you for your contribution to this area of sustainable tech, and for joining this CXO Bytes podcast.

So, that's all for this podcast. But again, a reminder: everything we discussed will be linked in the show notes below the episode. I would also ask you to go to podcast.greensoftware.foundation and listen to other episodes of CXO Bytes. So until then, I hope to meet you in the next podcast. Bye for now.

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we are doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.

Hosted on Acast. See acast.com/privacy for more information.


CXO Bytes
AWS Summit AI-Ready Infrastructure Panel with Prasad Kalyanaraman, David Isaacs & Neil Thompson
CXO Bytes host Sanjay Podder is joined by Prasad Kalyanaraman, David Isaacs and Neil Thompson at the AI-Ready Infrastructure Panel at the AWS Summit in Washington, June 2024. The discussion featured insights on the transformative potential of generative AI, the global semiconductor innovation race, and the impact of the CHIPS Act on supply chain resilience. The panel also explored the infrastructure requirements for AI, including considerations for sustainable data center locations, responsible AI usage, and innovations in water and energy efficiency. The episode offers a comprehensive look at the future of AI infrastructure and its implications for business and sustainability.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Prasad Kalyanaraman: LinkedIn
  • David Isaacs: Website
  • Neil Thompson: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • CHIPS and Science Act - Wikipedia [06:46]
  • IMDA introduces sustainability standard for data centres operating in tropical climates [12:54]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW: 

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting Chiefs of Information, Technology, Sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development. I am your host, Sanjay Podder. Today we are excited to bring you highlights from a captivating panel discussion at the recent AWS Summit in Washington held in June 2024. The AI-Ready Infrastructure Panel featured industry heavyweights including Prasad Kalyanaraman, VP of Infrastructure Services at AWS, David Isaacs from the Semiconductor Industry Association, and renowned researcher Neil Thompson from MIT, and it was chaired by Axios Senior Business Reporter Hope King.

During this panel, we take a look at the transformative potential of generative AI, the global race for semiconductor innovation, and the significance of the CHIPS Act in strengthening supply chain resilience. Together, we will hopefully have a better picture of the future of AI infrastructure and the innovations driving this field forward.

And before we dive in here, a reminder that everything we talk about will be linked in the show notes below this episode. So without further ado, let's dive into the AI-Ready Infrastructure Panel from the AWS Summit.

Prasad Kalyanaraman: Well, well first, for the avoidance of doubt, generative AI is an extremely transformative technology for us. You know, sometimes I liken it to the internet, right, the internet revolution. So, I think there's, we're very early in that journey. I would say that, at least the way we've thought about generative AI, we think about it in three layers of the stack, right?

The underlying infrastructure layer is one of them, and I'll get into more details there. And then there is the frameworks. We build a set of capabilities that makes it easy to run generative AI models. And then the third layer is the application layer, which is where, you know, many people are familiar with like chat applications and so on.

That's the third layer of the stack, right? Diving into the infrastructure layer, it always starts from, you know, obviously, finding land and pouring concrete and building data centers. And then, on top of it, there's a lot more that goes on inside a data center in terms of the networks that you build, in terms of how you think about electrical systems that are designed for it, how you land a set of servers, what kind of servers you land. It's not just the GPUs that many people are familiar with, because you need a lot more in terms of storage, in terms of network, in terms of other compute capability.

And then you have to actually cluster these servers together, because it's not a single server that runs these training models. You can broadly think about generative AI as training versus inference, and they both require slightly different infrastructure. 

Hope King: Okay, so talk about the generative, what that needs first, and then the inference.

Prasad Kalyanaraman: Yeah, so the training models are typically large models that have, you know, you might have heard the term number of parameters, and typically, think of them as, and there are billions of parameters. So, you take, the content which is actually available out there on the internet and then you start, the models start learning about them.

And once they start learning about it, then they start associating weights with different parameters, with different parts of that content. And when you ask the generative AI models to complete a set of tasks, that's the part which is inference. So you first create the model, which requires large clusters to be built, and then you have a set of capabilities that allows you to do inference on these models.

So the outcome of the model training exercise is a model with a set of parameters and weights. And then inference workloads require these models. And then you merge that with your own customer data, so that customers can actually go and look at it to say, okay, what does this model produce for my particular use case?

Hope King: Okay, let's, you know, let's go backwards to just even finding the land. Yeah. You know, where are the areas in the world where a company like Amazon AWS is first looking? What are the areas that are most ideal to actually build data centers that will end up producing these models and training and all the applications on top of that?

Prasad Kalyanaraman: There's a lot of parameters that go into picking those locations. Well, first, you know, we're a very customer obsessed company, so our customers really tell us that they need the capacity. But land is one part of the equation.

It's actually also about the availability of green, renewable power, which I'm sure we'll talk about through the course of this conversation. Being able to actually provide enough power from renewable sources to run these compute capabilities is a fairly important consideration.

Beyond that, there are regulations about, like, what kind of content you can actually process. Then the availability of networks that allows you to connect these servers together, as well as connect them to users who are going to use those models. And finally, it's about the availability of hardware and chips that are capable of processing this. And, you know, I'd say that this is an area of pretty significant innovation over the last few years now.

We've been investing in machine learning chips and machine learning for 12 years now. And so, we have a lot of experience designing these servers. And so it takes network, land, power, regulations, renewable energy and so on. 

Hope King: David, I want to bring you in here because you know obviously the chips are a very important part of building the entity of AI and the brain and connecting that with the physical infrastructure.

Where do you look geographically, you or your body of organizations, when you're looking at maybe even diversifying the supply chain to build, you know, even more chips as demand increases? 

David Isaacs: Yeah, so I think it's around the world, quite frankly, and many of you may be familiar with the CHIPS Act, passed two years ago here in the US, something very near and dear to my heart.

That's incentivizing significant investment here in the US, and we think that's extremely important to make the supply chain more resilient overall, and to help feed the demand that AI is bringing about. I would also add that the green energy that was just alluded to will also require substantial semiconductor innovation and create new demand.

So we think we need to improve the diversity of chip output. Right now it's overly concentrated in ways that are subject to geopolitical tensions, natural disasters, and other disruptions. You know, we saw during the pandemic the problems that can arise in the supply chain, most prominently illustrated in the automotive industry. We don't want that holding up the growth in AI.

And so we think that having a more diversified supply chain, including a robust, manufacturing presence here in the US is what we're trying to achieve. 

Hope King: Is there any area of the world, though, that is safe from any of those risks? I mean, you know, we're in the middle of a heat wave right now, right?

And, you know, we're gonna talk about cooling because it's an important part. But do you, just to be specific, see any parts of the world that are more ideal to set up these systems and these buildings, these data centers, for resiliency going forward? 

David Isaacs: No, probably not. But just like, you know, your investment portfolio, rule one is to diversify.

I think we need a diversified supply chain for semiconductors. And, you know, right now the US, and the world for that matter, is 92 percent reliant on leading edge chips from the island of Taiwan, with the remaining 8 percent from South Korea. You don't need to be a geopolitical genius or a risk analyst to recognize that is dangerous and a problem waiting to happen. So, as a result of the investments we're seeing under the CHIPS Act, we projected, in a report we issued last month with Boston Consulting Group, that the US will achieve 28 percent of leading edge chip production by 2032. That's, I think, good for the US and good for the global economy.

Hope King: Alright, I'll ask this question one last time in a different way. Are there governments that are more proactive in reaching out to the industry to say, please come and build your plants here, data centers here? 

David Isaacs: I think there's sort of a global race to attract these investments. There are counterparts to the CHIPS Act being enacted in other countries.

I think governments around the world view this as an industry of strategic importance. Not just for AI, but for clean energy, for national defense, for telecom, etc. And so there's a race for these investments, and, you know, we're just glad to see that the US is stepping up and implementing policy measures to attract some of these investments.

Hope King: Sanjay, can you plug in any holes that maybe Prasad and David haven't mentioned in terms of looking just at the land and where are the most ideal areas around the world to build new infrastructure to support the growth of generative AI and other AI? 

Sanjay Podder: Well, I can only talk from the perspective of building data centers which are, for example, greener. Because, as you know, AI, classical and now gen AI, consumes a lot of energy, right? And based on the carbon intensity of the electricity used, it causes emissions. So wearing my sustainability hat, you know, I'm obviously concerned about energy, but I'm also concerned about carbon emissions. So to me, a recent study we did with AWS in fact points to the regional variability of what, for example, AWS has today in various parts of the world.

So if you look at North America, that's the US and Canada, and the EU, you will see that the data centers there, thanks to the cooler weather, have a PUE, Power Usage Effectiveness, that is much better, lower, right? Because you don't need a lot of energy just to cool the data centers, for example. Whereas you will see that in AsiaPac, for example, because of warmer conditions, you might need more power not only to power your IT, but also to keep the data centers cool.

Right? So, purely from a geography perspective, you will see that there are areas of the world today where the carbon intensity of electricity is lower, because the electricity largely comes from renewable energy, like the Nordics. But at the same time, if you go to certain parts of the world, even today a lot of the electricity is generated from fossil fuels, which means the carbon intensity is high.

So purely from that perspective, if I see, you know, some of the locations like the EU, North America, Canada, even Brazil, for example, a lot of their grid has renewable energy, and the PUE factors are more favorable. But having said that, I have seen, for example, the government in Singapore creating new standards for how you run data centers in tropical climates.

In fact, one of the interesting things they have done is raise the accepted temperature level in the data center by one degree Celsius, because that translates to a lot of energy savings.

So, wearing my sustainability hat, if you ask me where the data centers should be, I would say they should be in locations where, you know, the carbon intensity of electricity is lower, so that we can keep the emissions low. That is very important. And obviously there are various other factors, because one also needs to remember that these data centers are not small.

They take a lot of space. And where will this space come from, you know? Hopefully they don't cause a trade-off with other sustainability areas like nature and biodiversity preservation. So the last thing I would like to see is, you know, large pieces of forest making way for data centers, right? Hopefully good sense will prevail.

Those things won't happen. But these are some of the factors one needs to keep in mind, you know, if you bring in the sustainability dimension. How do I keep emissions lower? How do I make sure the impact to water resources is less? One of the studies shows that 40 to 50 inferences translate to half a liter of water.

So how do I make sure natural resources are less impacted? How do I make sure the forests and biodiversity are preserved? These are the things one has to think about holistically when you select a location. And obviously there are other factors that Prasad will know, like proximity to water supply for cooling the centers. So it's a complex decision when you select a data center location.
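As a rough illustration of how the two factors Sanjay mentions, PUE and the grid's carbon intensity, combine into operational emissions, here is a small Python sketch. All numbers are invented for illustration and are not figures from the episode.

```python
# Illustrative sketch: total facility energy = IT energy * PUE,
# and operational emissions = facility energy * grid carbon intensity.
# The regional PUE and intensity values below are made-up examples.

def operational_emissions_kg(it_energy_kwh: float, pue: float,
                             grid_intensity_kg_per_kwh: float) -> float:
    """Estimate operational CO2e in kg for a given IT load."""
    facility_energy_kwh = it_energy_kwh * pue
    return facility_energy_kwh * grid_intensity_kg_per_kwh

# Same 1,000 kWh IT load in two hypothetical regions:
nordic = operational_emissions_kg(1000, pue=1.1, grid_intensity_kg_per_kwh=0.03)
tropical = operational_emissions_kg(1000, pue=1.6, grid_intensity_kg_per_kwh=0.6)

print(f"Cool region, clean grid:     {nordic:.0f} kg CO2e")
print(f"Warm region, fossil-heavy grid: {tropical:.0f} kg CO2e")
```

The point of the sketch is that the two factors multiply, so a cool climate and a clean grid together can make the same workload far lower-emission.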

Hope King: I love the description and how detailed you went into it. I mean, I just, I think for all of us, you know, looking at our jobs as members of the press, right, we want to know where the future is going, what it's going to look like. And from what I'm putting together from what everyone has said so far, I'm thinking more data centers are going to be closer to the poles, where they're cooler, and maybe in more remote areas away from people, so that we're not draining resources from those communities.

And Neil, you know, I don't know if this is just, well, it's probably a personality thing. But like, I sit there and I say to myself, I could ask ChatGPT, because I've been dabbling with it, you know, to help me with maybe restructuring the sentence that I'm writing. Is it worth taking away water from a community?

Is it, is me asking the query of it worth all the things that are powering it? I mean, these are things that I think about, I'm like an avid composter, like, this is my life, right? What, are we ultimately doing, right? Like, is, this, what are we all ultimately talking about when we now say AI is going to be a big part of our lives and it's going to be a forever part of our lives,

but then, you know, you're hearing David and Sanjay and Prasad talk about everything that is required just to do that one thing, to give me a grammar check? 

Neil Thompson: Yeah, so, I mean, for sure it is remarkable the sort of demand that AI can place on the resources that we need. And it's been a real change, right?

You don't think of saying, like, well, should I run Excel? You know, am I gonna use a bunch of water from a community because I'm using Excel, right? You don't think about that. But, you know, there are things that you can do in Excel and you're like, oh, maybe I've put it on ChatGPT when I should have done it in Excel, or something like that.

So, yeah, so you absolutely have this larger appetite for resources that come in with AI. And the question is sort of what do you do about that, right? And so, I mean, one of the nice things is, of course, that we don't have to put everything on the data center, right? People are working very hard to build models that are smaller so that it would live on your phone, right?

And then you have, you know, you still have the energy of recharging your phone, but it's not so disproportionate to training a model that is requiring tens of thousands of GPUs running for months. So, I think that's one of the things that we can be thinking about here is the efficiency gains that we're going to get and how we can do that and that's happening both at the chip level and also at the algorithmic level and I'm happy to go into a lot more detail on that if folks would like.

But I think that's the trade off we have there is, okay, we're going to make these models more efficient. But the thing is at the same time, there's this overwhelming trend that you see in AI which is if you make your models bigger, they become more powerful. And this is the race that all of the big folks, OpenAI, Anthropic, are all in, which is scaling up these models.

And what you see is that scaling up does produce remarkable changes, right? Even the difference between, say, ChatGPT and GPT 4, if you look at its performance on things like the LSAT or other tests that it's doing, big, big jumps up as they scale up these models. So that's quite exciting, but it does come with all of these resource questions that we're having.

And so there's a real tension here. 

Prasad Kalyanaraman: Yeah, I would add that one of the things that's important for everyone to realize is that it is so critical to use responsible AI, right? And that is 

Hope King: me, using that for grammar? Is that responsible use of AI? Well, I mean, I think I should know. 

Prasad Kalyanaraman: Responsible AI.

Yeah. What that means is that we have to be careful about how we use these resources, right? Because you asked a question about, like, how much water is consumed when you use Excel or any other such application. The key is, you know, this is the reason why we looked at it and said, look, we have a responsibility for the environment, and we were actually the ones that came back and said we need to get to net zero carbon by 2040 with The Climate Pledge, 10 years ahead of the Paris climate accord.

And then we said we have to get to water positive. I'll give you a couple of anecdotes on this. We will be returning more water to the communities than what we actually take at AWS by 2030. That's quite impressive if you actually think about it. And that is a capability you can really innovate on, if you think about how you use cooling and what you actually need to cool, and so on, right?

The other day I was actually reading another article about Dublin, in Ireland, where we actually used heat from a data center to help with community heating, right? And so district heating is another one. So I think there are lots of opportunities to innovate on this, to try and actually get the benefits of AI and at the same time be responsible in how we actually use it.

Hope King: So, talk more about the water. How exactly is AWS going to return more water than it takes? 

Prasad Kalyanaraman: Yeah, so I'll tell you a few things there. One is just the technology allowing us to look at leaks that we have in our pipes. A pretty significant amount of water gets leaked and wasted today.

And we've done a lot of research in this area, trying to use some of our models and some of the technology that we built to go look for these leaks when municipalities transfer water. That's one area. Renewable water sources is another area. So there's a lot of innovation that has happened already in trying to get to water positive.

We took a pretty bold stand on getting to water positive because we have a high degree of confidence that this research will actually get us there. 

Hope King: Neil, is this the first time you're hearing about the water? I mean, this sounds like an incredible development. 

Neil Thompson: It is the first time I'm hearing it, so without more details, I'm not sure I can say more about it specifically.

Hope King: But it's almost like, you know, we need AI to solve these big problems. It's a quagmire. I mean, yeah, you have to use energy to save energy. It's a paradox. 

Neil Thompson: Sure, well, so I guess let me say two things here. One is to say that it is absolutely true that there are a bunch of costs associated with using AI, but there are a bunch of benefits that come as well, right?

And we have this with almost all technologies, right? When we produce, you know, concrete roads and things like that, I mean, there's a bunch of stuff that goes into that, but it also makes us more efficient and the like. And in some of the modeling that my lab and others have done, the upside of using AI in the economy, and even more so in research and development to make new discoveries, can have a huge benefit to the economy, right?

And so the question is, okay, if we can get that benefit, it may come with some cost, but then we need to think carefully about what we are going to do to mitigate the fact that these costs exist, and how we can deal with the things that come with them. 

Hope King: All of these things are racing at the same time, right?

So you've got the race to build these models, to use them to get the solutions, but then you've got to build the thing at the same time, and then you've got to find the chips. And, I mean, obviously it's not my job, my job is to ask the questions. I don't understand how we can measure which of these branches of this infrastructure is actually moving faster, and does one need to move faster than the other in order for everything to kind of follow along?

I don't know if anybody in here 

Prasad Kalyanaraman: Yeah. Look, there's sometimes a misconception that these things started over the last 18 months. That's just not true. Of course. Right. Yeah. Data centers have been around for a long period of time. Right now, if I talk about cloud computing, you know, many of us are very familiar with that, but ten years back, a decade back, we were very early on that.

So, it's not something that is a change that has to happen overnight, right? It's something that has evolved over a long period of time. And so, the investments that we've been doing over 15 years now, are actually helping us do the next set of improvements that we have. So, you know, we say this internally, there's no compression algorithm for experience.

And so, you have to have that experience and you have to actually have spent time going all the way down to the chip level, to the hardware level, to the cooling level, to the network level, and then you start actually adding up all of these things, then you start getting real large benefits. 

Hope King: So, on that point though, is it about building new or retrofitting old when it comes to... because I've seen reports that suggest that building new is more efficient in another way because, you know, rack space or whatever.

So just really quickly on that, I know Neil wants to jump in. 

Prasad Kalyanaraman: It's a combination. It's never one size fits all. 

Hope King: Generally speaking, as you look at the development data. 

Prasad Kalyanaraman: I would say that there's obviously a lot of new capacity that's being brought online. But it's also about like efficiencies in existing capacity.

Because when we design our data centers, we design them for 20 plus years. But hardware that lands is typically useful for about six to seven years or so. After that you have to refresh, right? So you have an opportunity to go and refresh older data centers. 

Hope King: Yeah, it's not an overnight update. Yeah. You know, what were you going to say? 

Neil Thompson: So you asked specifically about the race. I think that to me is one of the most interesting questions and something we spend a lot of time on. And it certainly is the case there's been an enormous escalation. Since 2017, if you look at large language models, there's been something like a 10x increase in the amount of compute being used to train them each year, right?

So that's a giant increase. Compared to that, some of these other increases in efficiency have not kept pace. So what you're talking about is the most cutting edge, the resources required are going up. But it's also the case that if you look at generally the diffusion of technology that's going on, right?

Maybe some capacity already exists, but other firms want to be able to use it. That's just spreading out. Well, there you get to take advantage of the efficiency improvements that are going on. And at the chip level, for example, if you look at the floating point operations, the improvement is between 40 and 50 percent per year in terms of flops per dollar.

That's pretty rapid, but even that is actually not that rapid compared to algorithmic efficiency improvements. So efficiency improvements, for those who don't think about it this way, think of it as: I have some algorithm that I have to run on the chip, and it's going to use a certain number of operations. The question is, can I design something that achieves the same goal while asking for fewer operations, right?

It's just a pure efficiency gain. And what we see is that in large language models, the efficiency is growing by 2 to 3x every year. Which is huge, right? So if you think about the diffusion side of things, efficiency gains there are actually very high, and we should feel a little reassured that as that happens, the demands will drop.
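Neil's growth figures can be put side by side with a quick back-of-envelope calculation. The way the three rates are combined below is an assumption for illustration, not a model from the conversation.

```python
# Back-of-envelope sketch of the tension Neil describes: frontier training
# compute grows roughly 10x per year, while chip cost-efficiency improves
# roughly 1.45x per year (40-50 percent) and algorithmic efficiency roughly
# 2.5x per year (2-3x). Combining them multiplicatively is a simplification.

def compound(rate_per_year: float, years: int) -> float:
    """Total growth factor after compounding a yearly rate."""
    return rate_per_year ** years

years = 3
compute_growth = compound(10.0, years)   # compute demand at the frontier
hardware_gain = compound(1.45, years)    # flops per dollar
algorithm_gain = compound(2.5, years)    # fewer ops for the same result

# Net resource need at the frontier if all three trends hold:
net = compute_growth / (hardware_gain * algorithm_gain)
print(f"Frontier compute demand after {years} years: {compute_growth:.0f}x")
print(f"Combined efficiency gain: {hardware_gain * algorithm_gain:.1f}x")
print(f"Net resource growth at the frontier: {net:.1f}x")
```

On these rough numbers, efficiency gains absorb a lot of the growth, but the frontier still gets more expensive, which is exactly the cutting-edge-versus-diffusion split Neil draws.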

Hope King: Yeah, but then you have to contend with the volume. Because, you know, even if you're making each one more efficient, you're still multiplying it by other needs. So what does that offset look like, net? 

Neil Thompson: Yeah, so, I mean, the question there is exactly how fast is AI growing? Sure. And that actually turns out to be a very deep question that people are struggling with.

So, you know, unfortunately, many of the early surveys that were done in this area were sort of what you might call a convenience sample: you called people you cared about because they were your customers. But that was not representative of the whole country. So a couple of years ago, the Census did some work and found that about six percent of firms had actually been operationalizing AI.

Not that many. So we know that's going to be growing a lot, but exactly how fast, we're not sure. I think what we can say, though, is that we could have a moment where it's happening faster, but as long as these efficiency increases continue over the longer term, and so far we've seen them to be remarkably robust, that suggests that as that diffusion happens, demand will go down, so long as we don't all move to the biggest cutting edge model in order to do it. 

Hope King: David, I think you have probably some insight into how quickly the chip makers themselves are trying to design more efficient chips, you know, to this end.

David Isaacs: Yeah, let me just start with the caveat that I'm not a technologist. I'm a scientist, but I'm a political scientist. But, you know, I'm glad the conversation has turned to innovation and efficiency gains, because some of the questions ten minutes ago were assuming that the technology would remain the same, and that's not the case.

Many people in the room are probably familiar with Moore's Law and the improvements in computing power, the improvements in efficiency, and the reduced costs that have been happening for decades now. That innovation pathway is continuing. It's not necessarily the same in terms of scaling and more transistors on silicon, but it's advanced packaging.

It's new designs, new architectures. And then there's the software and algorithm gains and the like. So we believe that will result in efficiency gains and resource savings. There have been a lot of third party studies. We know for the semiconductor industry that the technologies we enable have a significant multiplier effect in reducing climate emissions and conserving resources, whether it's in transportation or energy generation or manufacturing. So we think there's a very substantial net gain from all these technologies. And I guess the other thing I would add is, you know, the CHIPS Act's formal name is the CHIPS and Science Act, and there are very substantial research investments.

Some of which have been appropriated and are getting up and running. Unfortunately some of the science programs have been, and this is Washington speak, authorized but not yet appropriated. We need to fund those programs so that this innovation trajectory can continue. 

Hope King: Yeah, I mean, just by nature, you know, we're the skeptics in the room, and we're looking at just the present day, and the imagination that is required for your jobs, is one that's not easily accessible when a lot of us are concerned about the here and now.

So, I appreciate that context, David. Sanjay, I want to talk about what it is going to take, though, in terms of energy, right? That is something that will remain the same in terms of the needs. So what are you seeing in terms of how new data centers are being built, or even maybe systems, clusters of infrastructure that support these data centers, that you see emerging? And what is the sort of best solution as, you know, more companies are building and looking for land to actually grow all these AI systems?

Like, what are the actual renewable sources of energy that are easy and sort of at hand right now to be built? 

Sanjay Podder: Well, I'm not an expert on that topic, so I won't be able to comment much on that, but I can talk about the fact that, as you were discussing efficiencies, for the same energy you can do a lot more with AI. And what I mean by that is, in the way we build AI today, whether it's training or inferencing, there are a number of easy things to do which have a huge impact on the amount of energy you need for that AI. I think there was a reference to, for example, large gen AI models.

They are typically preferred because they probably give more accuracy. But the reality is, in different business scenarios, you don't necessarily have to go to the largest of the models, right? And, in fact, most of the LLM providers today are giving you LLMs of different t-shirt sizes: 7 billion parameters, 80 billion parameters.

One of the intelligent things to do is fit for purpose. You select a model which is good enough for your business use case. You don't necessarily go to the largest of the models. And the energy need is substantially lowered in the process, for example, right? And those, to me, are very practical levers. The other thing, for example: I think Prasad referred to the fact that these models, at the end of the day, are deep learning models.

You know, there are techniques like quantizing and pruning, by which you can compress the models, such that smaller models will likely take less energy, for example. And then, of course, you know, inferencing. Traditionally, we have always been thinking about training with classical AI. But with generative AI, the game has changed.

Because with gen AI, given how pervasive it is and how everybody is using it, millions of queries, the inferencing part takes more energy than training. Now again, simple techniques can be used to lower that, like one-shot inferencing, where you batch your prompts. Now, when you do these things, you come to your answers quicker.

So, you know, you don't have to go and query the large model again and again. So if you want to, you know, design your trip itinerary to San Francisco, instead of asking 15 prompts, you think about how to batch it: this is my purpose, this is why I'm going, this many days I'll be there. And, you know, it is very likely you'll get your itinerary in one shot, and in the process, a lot less energy will be used.
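As a toy illustration of the batching idea above (the model endpoint here is a hypothetical stand-in, not any specific provider's API), combining the purpose, dates, and all related questions into one prompt reduces how often the model has to be fired up:

```python
class CountingModel:
    """Hypothetical stand-in for an LLM endpoint; it just counts invocations.

    Each .ask() call represents one round trip that fires up the model,
    so fewer calls is a rough proxy for less inference energy.
    """
    def __init__(self):
        self.calls = 0

    def ask(self, prompt):
        self.calls += 1
        return f"answer to: {prompt}"

questions = [
    "Which days in May are best for visiting San Francisco?",
    "What is a good 3-day itinerary focused on food?",
    "How do I get around without a car?",
]

# Naive: one round trip per question.
naive = CountingModel()
for question in questions:
    naive.ask(question)

# Batched: fold the purpose and all questions into a single prompt.
batched = CountingModel()
batched.ask("Planning a 3-day San Francisco trip in May, no car. " + " ".join(questions))

print(naive.calls, "calls vs", batched.calls, "call")
```

The point is only the call count: three separate invocations collapse into one, at the cost of composing the prompt up front.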

So the point here is, and of course data centers, right? All the chips that we are talking about, all these custom silicons, can lower the energy needs. So while I may not be able to comment on what the best sources of energy are, because renewable energies are of various types, solar, wind, now people are talking about nuclear fusion and whatnot, I'm seeing that even within the energy that we have, we can use it in a very sustainable way, in a very intelligent way, so that you get the business value you're seeking without wasting that energy unnecessarily. 
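The quantization technique Sanjay mentions can be sketched in a few lines. This is a simplified, framework-free illustration of symmetric 8-bit quantization; production toolchains (PyTorch, ONNX Runtime, and others) do this per layer with calibration, so treat it as a sketch of the idea, not a real pipeline:

```python
def quantize_int8(weights):
    """Toy symmetric quantization: map float weights to int8 plus one scale.

    Storing weights in 8 bits instead of 32 shrinks a model roughly 4x,
    which also cuts memory traffic, and with it energy, per inference.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in quantized]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
print(f"int8 storage: {len(q)} bytes vs float32: {len(weights) * 4} bytes")
```

The recovered weights differ slightly from the originals; the bet behind quantization is that this small accuracy loss is acceptable in exchange for the energy and memory savings.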

Hope King: It sounds like you want companies to be more mindful of their own data needs and not overshoot, but be even more efficient in how they're engineering what the applications can do for them.

Sanjay Podder: Absolutely right. And you know, on that point, about 70 to 80 percent, maybe more, of the data is dark data. And what is dark data? This is data that organizations store with the hope that one day they will need it, and they never actually require it. So you are simply storing a lot of data, and that data also corresponds to, you know, energy needs.

Right? So, there is a lot of the rebound effect that you mentioned some time back: because the cost of compute went down and storage went down, the programming community became lazy programmers. So what happened as a result of that is, you know, you are not bringing efficiency into the way you're doing software engineering.

You are having all these virtual machines which are hardly utilized. They all require energy to operate. So there's a lot of housekeeping we can do, and that can lower energy needs, you know, in a very big way. Right? And then, even if there's a lot of renewable energy, IT is not the only place, or AI is not the only place, where the renewable energy should be used.

There are other human endeavors. So, we have to change our mindset, be more sustainable, more responsible in the way we do IT today, and we do AI today. 

Hope King: Did you want to add to that? 

Prasad Kalyanaraman: Yeah, I would say that, you know, what he said is 100 percent right, which is that, as I said, like, you kind of have to actually think through the entire stack for this.

And you have to start going off for every stack and thinking about how you optimize it. Like, I'll give you an instance about cooling, and I know you wanted to talk about cooling. 

Hope King: Yes, air conditioning for the data centers, and now the liquid cooling that's coming in to try to... Exactly. Yes, go ahead. 

Prasad Kalyanaraman: So over the years, what we've actually done is, we have figured out that we don't need liquid cooling for a vast majority of compute.

In fact, even today, pretty much all our data centers run on air cooling. And what that means is we use outside air to actually cool our data centers. Okay? And none of our data centers are nearly as cool as this room, by the way, just to be very clear. 

Hope King: It's not as cold as this room?

Prasad Kalyanaraman: Not even close.

Hope King: What's the average temperature in a data center? 

Prasad Kalyanaraman: It's well above 80 degrees. 

Hope King: Really? Yeah. Okay. What's the range? 

Prasad Kalyanaraman: It's between eight, between 80 to 85. Now, the thing is that you have to be careful about, cooling the data centers too much because you have to worry about relative humidity at that point as well.

And so, we've spent a lot of time on computational fluid dynamics to look at it, to say, do we really need to actually cool the data centers as much? And it's one of the reasons why our data centers run primarily on air, and outside air actually. Now, there's a certain point where you cannot do it with just air; that's where liquid cooling comes in.

But liquid cooling comes into effect because some of these AI chips have to be cooled at the microscopic level and air cannot actually get there fast enough. But even there, if you think about it, you only need to liquid cool a particular AI chip. But as I said, AI does not require just the chip, you need networks, you need like the storage and all that.

Those things are still cooled by air. So, my estimate is that even in a data center that has primarily AI chips, only about 60 to 70 percent of that needs to be liquid cooled. The rest of it is just purely air cooled. 
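Prasad's 60-70 percent estimate can be made concrete with a quick back-of-envelope split; only the share figures come from his estimate, and the 10 MW hall load below is purely hypothetical:

```python
# Hypothetical AI-heavy data center hall. Only the 60-70% liquid-cooled
# share comes from the conversation; the 10 MW load is made up.
hall_load_mw = 10.0

for liquid_share in (0.60, 0.70):
    liquid_mw = hall_load_mw * liquid_share   # AI accelerators, cooled at the chip
    air_mw = hall_load_mw - liquid_mw         # networks, storage, and the rest
    print(f"{liquid_share:.0%} liquid: {liquid_mw:.1f} MW liquid-cooled, "
          f"{air_mw:.1f} MW air-cooled")
```

In other words, even in an AI-dense hall, a meaningful slice of the load (networks, storage, supporting systems) stays on outside-air cooling.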

Hope King: I think that's pretty fascinating, because it's been discussed more recently that liquid cooling is actually more crucial.

I don't know if you saw Elon Musk tweeting a picture of the fans that he has. He made a pun about how his fans are helping. Whatever, anyways, go check it out. It's on X. Thank you for circling back on the cooling. I think that was definitely, you know, a big question for energy use, because those systems require a lot of energy.

As we come to the last couple of minutes, you know, I want to turn the conversation forward-looking. The next two to five years: if I were to be back here with the four of you sitting on this stage, what would we be talking about? And where, at that point, do you think there are still gaps that haven't been filled since sitting here now? Especially because, as you mentioned, infrastructure is not easily upgradable like software is, and there will need to be those investments, physical investments, whether it's labor, whether it's land, physical resources. So, in two years, what do you think we're going to be talking about?

Prasad Kalyanaraman: I'll start. We're already starting to talk about that, so I expect that we'll talk more about those things. There's going to be a lot more innovation; generative AI will spur that as well, in terms of how to think about renewable sources and how to actually run these things very efficiently.

Because one of the realities is that generative AI is actually really expensive to run. And so, you're not going to actually spend a lot of money unless you actually get value out of it. And so there's going to be a lot of innovation on the size of these models, there's going to be a lot of innovation on chips, we're already starting to see that, and of course nuclear will be a part of the energy equation.

I think the path from here, at least in our minds, our path to get to net zero carbon by 2040, is going to be very non-linear. It's not going to be one-size-fits-all; it's going to be extremely non-linear. But I expect that there will be a lot more efficiency in running it. It's one of the reasons why we harp so much on efficiency in how we actually run our infrastructure.

One, it actually helps us in terms of our costs, which we actually translate to our customers. But it's also a very responsible thing for us to do. 

Hope King: David, I actually want to jump over to you just for a second on this, because, you know, a lot could happen in the next couple of months when it comes to the administration here in the US. Is there anything that you see in the next two years, politically, that could change the direction or the pace of development when it comes to AI: infrastructure building, investments from corporations, maybe even pulling back, right, pressures from investors to see that ROI on the cost of these systems?

David Isaacs: Yeah, I'm hesitant to engage in speculation on the political landscape, but I think things like the CHIPS Act, US leadership in AI are things that enjoy bipartisan support and I think that will continue regardless of the outcome of elections and short term political considerations. You know, I think getting to your question on what we'll be talking about a few years down the road, I'm an optimist, so I think we'll be talking about how we have a more resilient supply chain for chips around the world.

I think we'll be enjoying the benefits of some of the research investments that propel innovation. One additional point I'd like to raise real quickly is soft infrastructure and human talent. I think that's an important challenge, and, at least in the US, we have a huge skills gap among the workforce.

Whether it's, you know, K-through-12 STEM education or, you know, retaining the foreign students at our top universities. And that's not just a semiconductor issue; that's all technology and advanced manufacturing throughout the economy. So I think that will be a continuing challenge.

Hope King: Are you seeing governments willing and interested to increase funding in those areas? 

David Isaacs: On a limited basis, but I think we have a lot of work to do.

Hope King: What do you mean by limited?

David Isaacs: I think there's a strong interest in this, but, you know, I'm not sure governments are willing to step up and invest in the way we need to as a society.

Hope King: Sanjay, two years from now, we're talking again. What's going to be on our minds?

Sanjay Podder: So we have not seen what gen AI will do to us. We are still like in kindergarten talking about infrastructure. It's like the internet boom time, right? You know, and then we know what happened with that.

You know, our whole lives changed. With gen AI, enterprises will reinvent themselves. Our society will reinvent itself, and all that will be possible because of a lot of innovation happening at the hardware end as well as the software layer. What we need to keep in mind, however, as we transform:

None of us in this room knows how the world will look. It will look very different; that's what's certain. But one thing that we need to keep in mind is that this transformation should be responsible. It should keep a human at the center of this transformation. We have to keep the environment in mind, the E, the S, and the G, all three very important, so that, you know, our AI does not disenfranchise communities, people.

I think that is going to be the biggest innovation challenge for us, because I'm very certain that human ingenuity is such that we will build a better world than what we have today. There'll be a lot of innovations in all spheres, but in the journey, let's make sure that sustainable and responsible AI become a central theme as we go through this journey, right?

So I'm as eagerly looking forward to it as all of us here, how this world will look. 

Hope King: Yeah. No digital hoarding. Lastly, Neil, we didn't talk about the last-mile customization problem today, but I don't know if that's something you're looking for in the next two years to be solved. What other things?

And you can speak to the last mile too, if you want. 

Neil Thompson: Sure. So, for those who don't know, the idea of the last-mile problem in AI is that you can build, say, a large language model and say, this works really well, in general, for people asking questions on the internet, but that might still not mean that it works really well for your company, for some really specific thing.

You know, if you're interacting with your own customers, right, on your own products, with those specific terms, with those things, that system may not work that well. And you have to do some customization. That customization may be easy; you just need a little prompt engineering, or you feed it a little bit of information from your company.

Or it could be more substantial; you could actually have to retrain it. And in those cases, that's going to slow the spread of AI. Because it's going to mean that we're going to get lots of improvement in one area, but then you say, okay, well, think about all of the different companies that might want to adopt it.

They have to figure out how they can adapt the systems to work for the things they do, and that's going to take time and effort. And so that last mile is going to be really important, because I think it's very easy to say, I read in the newspaper, or I saw a demonstration that said, boy, AI can do amazing things, much more than it could do even three months ago.

And that's absolutely true. But then there's also this diffusion process, and that's going to take a lot longer. And so I think what we should expect over the next two years is more of this sort of wedge: there are going to be some folks who are leading, for whom these resource things that we've been talking about are incredibly salient, and there are going to be people who are behind, for whom the economics of customization still don't work, and they're going to be in the model that they were in ten years ago.

And so that divide, I think, is going to get bigger over the next two years. 

Hope King: Alright. David, thank you for joining us, Sanjay, Neil, Prasad. And for all of you, hopefully you found this as informative as I did. Thanks, thanks everyone. 

Prasad Kalyanaraman: Thank you. 

Sanjay Podder: Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.




Hosted on Acast. See acast.com/privacy for more information.

1 year ago
45 minutes 46 seconds

CXO Bytes
Greening Digital Sustainability with Dr. Ong Chen Hui
Welcome to the first episode of CXO Bytes! Join host Sanjay Podder as he talks to leaders in technology, sustainability, and AI in their pursuit of a sustainable future through green software. Joined by Dr. Ong Chen Hui, Assistant CEO of Singapore's Infocomm Media Development Authority (IMDA), the discussion focuses on Singapore's comprehensive approach to digital sustainability. Dr. Ong highlights IMDA's efforts to drive green software adoption across various sectors, emphasizing the importance of efficiency in data centers and the broader ICT ecosystem. So join us for an intriguing and thought-provoking conversation about the critical role of government and industry collaboration in achieving sustainability goals amidst the growing demand for digital technologies.

Learn more about our people:
  • Sanjay Podder: LinkedIn
  • Dr. Ong Chen Hui: LinkedIn

Find out more about the GSF:
  • The Green Software Foundation Website 
  • Sign up to the Green Software Foundation Newsletter

Resources:
  • IMDA [01:20]
  • Government Technology Agency | Singapore [01:54]
  • Singapore Green Plan 2030 [02:55]
  • IMDA and GovTech unveil new initiatives to drive digital sustainability | IMDA - Infocomm Media Development Authority [10:19] 
  • Your Guide to the Gartner Top Strategic Technology Trends in Software Engineering [10:56]
  • Asia Tech x Singapore [23:37]
  • Software Carbon Intensity (SCI) Specification Project | GSF [25:11] 
  • Welcome to Impact Framework [33:31]
  • Green Software Foundation [34:00]
  • Digital Sustainability Forum | ATxSummit [37:33]

If you enjoyed this episode then please either:
  • Follow, rate, and review on Apple Podcasts
  • Follow and rate on Spotify
  • Watch our videos on The Green Software Foundation YouTube Channel!
Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and the big green moves that help drive results for business and for the planet.
I am your host, Sanjay Podder.
Hello everyone. Welcome to CXO Bytes. This is our inaugural podcast on how you use green software for building a sustainable future. This is a new podcast series, and the whole idea behind it is that, you know, embracing a culture of green software needs to come from the top. And we therefore want to talk with decision makers, with business leaders, with leaders who are running nation states like Singapore, for example, at C-level.
You know, how are they driving this culture change when it comes to digital sustainability and green software, for example? 
Today I am super excited to invite Dr. Ong. She is the Assistant CEO of IMDA, which is the Infocomm Media Development Authority of Singapore. And we are going to chat on how IMDA is championing digital sustainability as well as green software. Welcome, Dr. Ong. 
Dr. Ong Chen Hui: Thank you for having me on your inaugural podcast on green software.
Sanjay Podder: And you know, I had my own selfish reason for inviting you because while the Green Software Foundation has been interacting with many, many large businesses across the world, IMDA and Singapore GovTech, these are two members of Green Software Foundation who represent the government, right?
And we all know the very important role that government will play in sustainability in general. So I wanted to understand from you, you know, how are you looking into this space? So we will talk a lot about that. The other aspect is probably to begin with, for our audience, a perspective on what is IMDA.
You know, what is your specific remit, what you are trying to do in Singapore, if you can give us, you know, a few insights into that.
Dr. Ong Chen Hui: Okay, so here in Singapore, of course, climate change is actually something that is a bit of an existential thing for us, us being a small nation state, and we're also an island; to us, climate change and the associated rising sea level is a matter of concern. Right? So, as a result, we have put in a Green Plan that states our sustainability goals by the time we reach 2050. And this is actually a whole-of-government effort. So, I don't think it is a case where one ministry or one agency is responsible for the whole effort. It is about the whole of government working together in order to make sure that we meet the goals of our Green Plan.
Now, what are some of the things that we are doing? Many things, for example, the National Environment Agency is actually rolling out some of the regulations. We have things like e-waste management, for example. Just now you mentioned GovTech, which is our sister agency. GovTech is also rolling out green procurement when they're actually procuring software solutions. Within IMDA, we are responsible for some of the industry development. We're also what we call a sectoral lead of the ICT sector. So, our own green strategy, comprised broadly of three different strokes. The first is about greening ourselves as an organization.
The second is really about greening the sector that we are responsible for, that we are leading. So, in that case, there will be things like the telecommunications sector, the media sector. And the third thing we want to do is to enable our ICT solution providers to provide green solutions to the broader economy so that we can scale the adoption, we can ease the friction out there in the ecosystem.
So essentially, that's greening ourselves, greening the sector, as the lead. And the third is really to kind of provide solutions through the ecosystem so that the wider community can actually benefit.
Sanjay Podder: Now this is really a full 360 degree kind of approach and it is phenomenal. And, I was, I was wondering, you know, and you mentioned briefly on Singapore being an island state. I was thinking, why digital sustainability? What will happen if Singapore decides not to do it, for example, right? Do you have a point of view, say, because, you know, there are many different levers of, sustainability, you know, I can understand the larger sustainability, but what is the importance of digital sustainability?
Do you think it's an important enough lever or maybe you can look at nature biodiversity or something else, right? So specifically for digital sustainability. What is it that triggers IMDA that this is a important initiative? And I'm, I'm seeing this is my second year in Asia Tech that, you know, this is something you give a lot of importance to.
Bringing in leaders from various organizations. Doing deep deliberation. I also remember last year, you brought out your new data center standards, I think increasing the temperature by one degree that has an implication. If you could throw a little bit more light on digital sustainability in particular,
Dr. Ong Chen Hui: Mm hmm.
Sanjay Podder: why do you feel that's a very important lever for a country like Singapore and maybe for many other countries around the world?
Dr. Ong Chen Hui: Yeah. Well, I think you're actually exactly right that when we are trying to drive sustainability, actually there are many different strokes. Some of it includes looking at energy sources and all that, which actually is also very important for Singapore because we are small. We do, have to look at, different kinds of energy sources and how we can potentially actually import some of them, right?
Now, when it comes to digital sustainability, actually our journey, I would say started many years ago. Maybe more than a decade ago, when we started looking at, some of the research work within the research community about, making sure that our data centers, can operate more efficiently in the tropical climate.
Now, data centers comprise almost a fifth of the ICT carbon emissions. And because they are such a huge component of the carbon emissions, of course, their efficiency has always been top of mind. Now, in a tropical climate like ours, a large part of the energy is sometimes attributed to the cooling systems, right?
The air conditioning that's actually needed to bring the temperatures down. So as you rightly pointed out, what we found out is that actually, if you were to increase the temperature by one degree, that can lead to a savings of between two and five percent of carbon emissions. So, as a result, we have been investing in research within our academia, funding some of the innovation projects with our ICT players, in order to look at what actually works and what doesn't.
Because I think in Singapore, regulations always need to be balanced with innovation. So that has kind of led to what happened last year, which was that we released the first standards for tropical data centers. But we wanted to go a lot further, right? Because some of those standards around cooling and all that, that's kind of like looking at how efficient the radiators are in a car.
But we also need to look at how efficient the engines are. And the reality is that if you look at the trends of ICT usage, of software applications... I mean, so much of our lives, whether it is watching videos, watching TikTok, right, our education, all of that, most of this has moved to be enabled by digital technologies.
And when we look at the consumption of data centers and the kind of workload in them, it is increasing year by year. Now, with the explosion of AI, we know that the trend is probably that there will be more consumption of digital technologies. And those are the engines that sit within the data centers.
And we need to make them efficient. And as a result of that, we have decided that we need to also get onto this journey of greening the software stack. And greening the software stack means a few things. The first is, of course, and I think this is still a fairly nascent area, how do we make software more measurable, so that there's a basis of comparison, so that we can identify hotspots? That, I think, is important.
The second part that I think is important is also, given all the trends today, GPUs, CPUs all needing to work together, how do you make them work efficiently? How do you process data efficiently? How do you make sure that the networks and the interconnects within the data centers are efficient.
I think all of these are worthy problems to look at. Some of it will rightfully stay in the research stage. So we'll be funding research programs, called the Green Computing Funding Initiative, around it. But at the same time, we also think that there are some practices that may be a bit more mature already, and we should encourage companies to actually innovate on top of them.
So we're also conducting green software trials.
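The one-degree setpoint figure Dr. Ong cites lends itself to a quick back-of-envelope. The facility size below is purely illustrative (not an IMDA figure), and energy is used here as a rough proxy for the carbon savings she describes:

```python
# Illustrative only: a hypothetical 10 MW tropical data center running
# year-round, applying the 2-5% savings per degree of raised setpoint.
facility_mw = 10.0
baseline_mwh = facility_mw * 24 * 365      # annual energy in MWh

for saving_rate in (0.02, 0.05):
    saved_mwh = baseline_mwh * saving_rate
    print(f"{saving_rate:.0%} per degree -> about {saved_mwh:,.0f} MWh saved per year")
```

Even at the low end of the range, a single degree on a facility of this (made-up) size is on the order of a couple of thousand megawatt-hours a year, which is why the setpoint standard matters at national scale.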
Sanjay Podder: I've heard about that, you know, and that's so innovative. I myself try to engage with all my clients in embracing green software. It is not a trivial
Dr. Ong Chen Hui: Yes.
Sanjay Podder: challenge, you know, because I feel that it's a new way of doing things. In fact, I just read a Gartner report on top software engineering trends, in which they say green software engineering is one of the five trends.
10 percent of the organizations they surveyed have green software or sustainability as one of the non-functional requirements of software, but they believe that in the next three years, by 2027, 30 percent of organizations will have green software engineering as a requirement for software development.
So this is indeed growing very rapidly. But having said that, you know, there are a lot of adoption challenges, because even if people want to do it, there are a few challenges here. First of all, people may not want to do it, thinking, "Oh, this is a small problem, not worth solving. I will procure a lot of renewable energy, or I will buy offsets and my problem is gone," right?
And then there will be people who will say, "Oh no, this is... offset is not the answer. Renewable energy is not the answer. We have to inherently lower the emission or the energy required for software, make software carbon efficient." But then, where are the standards? How do we do it? Our people do not know how to do it.
And I'm talking about an organization. You're talking about a country. It's a very big problem. Now, the question to you, therefore, is: how are you getting people excited in Singapore? And you also mentioned the small and medium business ecosystem. So it's a diverse ecosystem. So what has been your approach to making people excited, enabling them?
And what would be your North Star? Like, what will make you super happy that "I've done my job, you know, this is what I wanted to do"? So, how do you look at it, Dr. Ong?
Dr. Ong Chen Hui: Okay, I guess maybe I'll answer the question of the North Star first. A country like Singapore, we do have a very limited carbon budget, right? To me, if we can create some carbon headroom for Singapore, so that we can have more options for different aspects of our economic growth, I think that will be the best outcome that we can actually aim for.
Sanjay Podder: That's a great point.
Dr. Ong Chen Hui: Yeah. Now, in order to be able to do that, you're right. It isn't just about the government waving a flag and saying that it's very important. It may not be sufficient for just a few companies to go about doing it. What we want is to be able to drive this across the entire ecosystem.
Now, of course, there will be some companies that are very much more forward looking, and may have already embraced many of these practices. They may even have, taken stock of what is it that they have, where they can actually improve, in terms of their carbon emissions. And there will be others who are still a bit more tentative and on the fence, right?
And the question in my mind is, how can we help this first group? How can we help that group in the middle as well? I think for those in the first group, we have seen some which are very advanced, right? They are talking about the fact that maybe they already have a very large applications footprint.
E-commerce and all that, for example. And they are constantly looking at how to refresh their stack, because they need to perhaps drive down the cost of operations. And this particular group, they sometimes have very advanced needs. They may be talking about increasing the level of control so that they can dynamically schedule their tasks and bring up the efficiency of the entire system.
They may be talking about advanced partnerships with some of their vendors, in order to make sure that they continue to leverage the best and most efficient from their supply chain. And for some of these, what we have tried to do with them is to orchestrate, or we call it matchmaking, together with our academia, right, to look at what projects they can actually do together, so that they can create new IP in this area, so that they can continue to be at the forefront and leverage all these ideas and solutions that perhaps the researchers are better equipped to provide.
But there's a group in the middle, I think, who may want to see something a bit more concrete. They want, they may have read that there are things that they can implement, but they're not quite sure where to actually invest their time. And if we think a bit about it from an organization point of view, it's not like they can experiment indefinitely, right?
So I think they want to be a bit more targeted. I think for this particular group, the question is, "how can we actually encourage innovation?" Are there solution providers who may know a bit more in this area, who can increase awareness and bring a bit more focus to the innovation? Are there best practices, guides, frameworks that we can put out there that can actually encourage the innovation, and allow some of these companies to explain to their C-suite that this is, how should I say, a systematic approach to innovation? And so for the former, what we wanted to do with the green software trials is a little around that, right? To increase awareness, bring the solution providers to work together with the companies, let them see that in this green software world, this digital sustainability world, it's not just about doing good, but there's a possibility to do well as well.
Right? In that you're actually improving your bottom line and all that. And the other part is really to work with organizations like the Green Software Foundation, in order to make sure that things like best practices, guides, broader ecosystem awareness, as well as standards, are something that we can actually collaborate on together as a whole ecosystem, in order to make sure that over time, this being a journey of innovation, we'll be able to mature many of these practices.
And actually, maybe perhaps reduce some of the risk that organizations may perceive, that they want more clarity about how things ought to be done.
Sanjay Podder: I think that's a very nice, comprehensive approach to what you're doing. You mentioned a few things, Dr. Ong, that caught my attention. One was, you mentioned AI, right? And the whole world is talking about AI now. Which is good. It's almost magical what we are seeing with LLMs.
But then there is a dark side to it. And the dark side is, when you look at some of the reports around the impact of large language models on the environment. You know, there was a very recent study from Hugging Face that says every time you generate an image with an LLM, it consumes as much energy as a full charge of an iPhone, for example, right?
So, as consumers, we don't realize that. We are generating images which we may not even look at again, but we don't realize that behind the scenes so much energy was used. And these are, we are talking about 176-billion-parameter models, and there's a mad rush everywhere in the world to create these large models, the bigger the better.
But the bigger also means more energy needed.
Dr. Ong Chen Hui: Yeah.
Sanjay Podder: Every time you do an inference, the whole machine gets fired up. And then the interesting bit is the environmental impact, because you need so much energy. In many parts of the world, you know, there is still fossil fuel used to generate that energy.
Dr. Ong Chen Hui: Hmm.
Sanjay Podder: You need some of the biggest data centers to be built for our new AI world, right? And then there's the impact on water, for example. Another very interesting study pointed out that every 30 to 50 prompts consumes about half a liter of water for cooling, for example. So, while there is no doubt that generative AI is a magical technology that is going to change our world, I'm sure as a government body, as a regulator for infocomm and media, this is something you're probably watching very closely. You know, how do we respond to this
challenge that this magical technology brings? What has been your approach to Gen AI? I'm sure at Asia Tech X you're going to find out a lot of answers, but yeah.
Dr. Ong Chen Hui: I think this thing about greening of AI is a very important problem. When it comes to greening of AI, I think there are a few different dimensions to it. One is, can we actually design the AI a little bit differently? So that the training of it, doesn't take as much energy. Just like you mentioned about inference, right?
Generating an image. But I was reading some of the statistics about the training of AI models using the previous generation of transformer technologies, and already that may be equivalent to the carbon emissions that a few cars produce over their entire lifetimes. So when it comes to AI, perhaps some of the thinking that we are hearing from both the industry as well as academia around us is that we may need to look at different phases of AI.
So the training itself may be one kind, and it may, require a certain technology stack. Today, the inference technology stack is exactly the same as the technology stack for training. And perhaps that may need to specialize, so that you have a far more efficient kind of technology stack, that will be used for inference.
And if there can be a more customized, more targeted kind of technology stack, perhaps that will lead to some kind of energy savings as well as reduction in emissions. But this, I think, is still very early because we are talking about specialized AI chips, right? I think some of it may still be very much ideas in research or in early-stage startups.
Yeah. And then there is, of course, the other point, which is really about how much generative AI is actually consuming in terms of data. Because of the amount of data that needs to be consumed, and the efficiency of data processing and all that, that also actually accounts for quite a bit of emissions.
So around that, I think there may be a need to look at different kinds of architectures that can make do with less data. And as a result of making do with less data, the footprint may be a lot smaller and the amount of energy usage may be a lot smaller. But because of how nascent this is, we are looking at how the research community can actually participate and perhaps develop and mature some of the science around this.
So that can actually lead to perhaps more innovation down the road. I'm fully cognizant that, AI being so hot now, a lot of people are also talking about the kind of environmental impact of AI. So I would imagine that perhaps next year, when we meet again at Asia Tech X, we can then compare notes and see where the whole ecosystem is heading.
Sanjay Podder: Absolutely. There's so much action happening on the custom silicon side as well, right? As you rightly pointed out, specialized chips for inferencing, for training...
Dr. Ong Chen Hui: even for data processing.
Sanjay Podder: Yeah, this is going to be... you also spoke about the dark data problem and data itself, right? Because so much of data today is never used.
It's dark, but you're still storing it. And I sometimes suspect, with Gen AI, people will store even more. So some of the challenges get amplified in the process. It's a year since you joined the Green Software Foundation, I think. How has the experience been?
Dr. Ong Chen Hui: I think it's been great. Certainly, I think having that wider community that my team can actually tap on, bounce ideas off, and figure out what the new wave of innovation is has been very, very helpful. And it's something that we really want to be able to continue doing.
And we want to be able to bring some of our other partners within government, within academia in as well. Because I think in areas like this, where we all have a shared responsibility to protect the environment, we really should tap all the best ideas that are actually out there.
The second part, and I really want to congratulate you on this, is your ability to push the SCI into the standards. Actually, on this, perhaps can I pick your brain a little bit on what sparked the need for SCI, and what do you want for the standards next?
Sanjay Podder: I think, when we... in fact, last week was our third year of existence. We announced the Green Software Foundation at Microsoft Build 2021: Accenture, Microsoft, GitHub, ThoughtWorks, and a few of us, Goldman Sachs, came together and announced the Green Software Foundation. To be very candid, I didn't think that we'd get this kind of response.
Today we have more than 60 members, including some of the top companies from around the world. But right at the beginning, we were very clear that this area is so new that there was absolutely no standard. There was no language to express the challenge. There was no training for people, right? And no organization, unless, of course, it's a government, can say that this is a standard.
You know, you have to have a consensus. And, to me, the whole idea of the foundation was to build that consensus, to have a platform to deliberate, to share the challenges, to find an answer to the problem. The Software Carbon Intensity, I think, is an amazing way of expressing, in a very simple way, you know, what are the dimensions of a software system that you need to look into when you want to measure its carbon intensity?
And this was something that people were asking for, saying, how do I measure? How do I express? What is the language? The whole idea of embodied carbon is very important. People forget about it. People only think about the carbon emissions during usage. But, I think, earlier you also pointed out e-procurement, for example.
You know, the whole idea of looking at it holistically through the factor of embodied carbon, so that you don't lose sight of it, because by the time a laptop comes to you, the bulk of its lifetime emissions has already happened during the manufacturing stage. So, you know, how do you look at it holistically?
Embodied carbon. Then, things like, it's not a carbon offset discussion or a renewable energy discussion. It's about making software inherently carbon efficient, which means looking into the language stack, the architecture stack, the whole, you know, aspect of a software system. So if you look at SCI, it brings in, for example, embodied carbon. It brings in the carbon intensity of electricity, so that your software is more carbon aware, right? So the same software run in Singapore and run somewhere else will have very different emission levels. And then, you know, per functional unit, what we call the R. So I think it's a very actionable way of expressing it, and what we therefore saw was rapid adoption of SCI.
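The ingredients Sanjay lists here map directly onto the published SCI equation, SCI = ((E × I) + M) per R, where E is operational energy in kWh, I is the grid's carbon intensity, M is embodied emissions amortized to the workload, and R is the functional unit. The following minimal sketch shows the calculation; the grid-intensity and workload numbers in the usage example are illustrative only, not real measurements.

```python
def sci(energy_kwh: float,
        grid_intensity_g_per_kwh: float,
        embodied_g: float,
        functional_units: float) -> float:
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E = operational energy (kWh)
    I = location-based grid carbon intensity (gCO2eq/kWh)
    M = embodied emissions amortized to this workload (gCO2eq)
    R = functional unit (e.g. API requests served)
    """
    operational_g = energy_kwh * grid_intensity_g_per_kwh  # E * I
    return (operational_g + embodied_g) / functional_units

# Illustrative only: the same workload scored against two different grids.
# 12 kWh consumed, 300 g of amortized embodied carbon, 10,000 requests served.
high_carbon_grid = sci(12, 700, 300, 10_000)  # e.g. a fossil-heavy grid
low_carbon_grid = sci(12, 50, 300, 10_000)    # e.g. a hydro-heavy grid
```

This mirrors the point above about running the same software in different places: identical code and identical energy use, but a different grid intensity I yields a very different score, which is what makes the metric carbon-aware rather than merely energy-aware.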
In fact, some startups have started incorporating SCI, saying "we are SCI compliant." Large organizations are embracing SCI, and the best thing that happened was when we were blessed with ISO standardization. So SCI is now an ISO standard.
Dr. Ong Chen Hui: Yeah.
Sanjay Podder: Looking into the future, I think SCI is just the right analytical approach to think about how you model emissions from LLMs and generative AI.
So that's going to be another area of research and exploration for us: how do we further build upon SCI to give a very actionable way of looking into emissions coming out of AI, both during training and inferencing. So I think it's one journey that we are super proud of, and all the members came together to contribute. And that's what we want to replicate, you know, creating more such actionable deliverables from the Green Software Foundation for people to make this space a reality, right?
So, so I agree with you. Congratulations to you as well as being a member of the GSF. It's something we all share and feel very proud about.
Dr. Ong Chen Hui: Yeah. Yeah, like I said, I think this is certainly a cause that IMDA is very passionate about. And I guess, from the Green Software Foundation, it's certainly something that you have been driving for the past three years, right? So beyond the SCI, right,
the Software Carbon Intensity, do you actually see a need to work together with regulators of the world in order to get this adopted? Or do you feel that perhaps there may be something else that may be the next priority for the Green Software Foundation?
Sanjay Podder: Yeah, I think there are a couple of things here. One is, as I mentioned at the start of the podcast, I think it's a culture change. Green software is a culture change. While the developers may want to do it, if there is no adequate support from the top, it is not possible for your organization to adopt green software in a very systematic fashion.
So one of the areas, and the podcast is a part of that, is to spread this awareness among organizations that they need to have digital sustainability and green software in their net zero goals or the larger scheme of things. I do recall, you know, just after you joined the Green Software Foundation, you created your digital sustainability policies and things like that.
I think that was great. So that, to me, is important. We obviously have to work with the regulators by giving them inputs, as and when we are called upon, as to what we think about, whether the greening of AI or what is possible. An example would be that I am personally here on, you know, invitation from IMDA to participate in Asia Tech X.
It's very important to be part of these conversations, to understand how this space is going to evolve. How can we contribute to solving some of these challenges? Because we all know that, as an example, generative AI is here to stay and adoption will only increase. What can we do? You know, we cannot just sit here and give a doomsday scenario.
We are here to solve the problem and say, "okay, so what can we do about the emissions? Is there a way to solve it? Is there a way to measure it? Are there best practices you can bring? Can we enable the ecosystem?" So I think, as I always say, we are the solution seekers, right? And to me, from a Green Software Foundation perspective, the members that we have, which is so diverse, not only governments, large organizations, nonprofits, academia, big businesses across industry.
That is our strength, because we are getting that unique support to find solutions to these very difficult problems. So, very excited. I think I would love to focus on green AI. That's what we are trying to do. I'd love to focus on how we give frameworks and enable organizations to embrace green AI and green software and transform themselves.
Those are some of the... very recently we did the Impact Framework. Oh, yes. Again, the idea was very science-based, complete transparency on how you measure emissions. We did a carbon hack around the world. We saw immense participation from, you know, the developer community coming up with unique solutions.
So there is a lot of excitement in this space, and that's what the Green Software Foundation will continue to champion with all the members' support. And look out for our GSF Summit. It's coming soon. In October, we will be in all major cities around the world, including in Asia. So we look forward to all your support to further champion this cause.
Dr. Ong Chen Hui: You mentioned cultural change, right? And that certainly is also something that's very much top of mind for us. When it comes to cultural change, perhaps let's exchange some notes around this. Do you find that it's more effective to have targeted conversations, let's say from the Green Software Foundation to the C-suite?
Or is it more important to equip internal teams, right, with the measurable changes that are possible when they adopt green software or green AI practices?
Sanjay Podder: Yeah, no, I think it's a good question. And I think the answer is both. So if you look at our own journey in the Green Software Foundation, we started first with, you know, actionable, measurable tools and practices, because we wanted to empower the developer community. Very important. But then, you know, unless the organization embraces this as a priority, your sustainability priority is always competing with some other priority in the organization, which means that, you know, it will not receive the value or encouragement.
So it is very important, given that most big organizations today have some kind of sustainability commitment, net zero or otherwise, to ask: how do we create, in a very systematic fashion, that intervention right from the top? Because when there's, you know, support coming from the top leadership, everything
Dr. Ong Chen Hui: Yeah, everything falls into place, right. And all the innovation that you actually need, right. Yeah. Both from an internal processes point of view, as well as providing visibility to your supply chain partners. All that happens.
Sanjay Podder: All that happens, right? Even beyond, as you have rightly said, beyond the four walls of your organization: supply chain, everything. This is important. Climate change is real. And as you know, I look at it as a triangle, right? What I have seen is there is the whole thing of climate change and emissions, which appeals to people; people are concerned.
And then there is the whole energy crisis. So much energy, and the energy bills. And then finally, once you embrace green software practices, green AI practices, that shows up in your bottom line. And you'll see, you know, at the heart of it, green software practices and green AI practices are great software engineering practices.
Which, for some reason, we have forgotten, given the era of abundance we have been in. Right, with the falling cost of compute and storage, people will store everything they have without even wondering, you know, what do I really need? So these are great practices when you bring them all together.
You know, we find the answer to some of the pressing problems. So this has been a great conversation, Dr. Ong, something that fits the very first podcast of the series. I hope some of the messages you gave today convey to other CXO leaders how important this topic is and how they can make it a reality.
How they can champion it. And every year I'm so delighted to see the Digital Sustainability Forum at Asia Tech take a very prime spot; that shows commitment from the top. And that's what we have to do in every organization, because digital, which is fast growing, and especially software, is becoming one of the major sources of greenhouse gas emissions.
We can control it right now rather than allowing it to snowball. So thank you so much for your time. Thank you for inviting me to Singapore. And as the Green Software Foundation, we want to be a place of action, right? In this case, in Singapore: talk to the leaders, understand how we can collectively solve the problem.
So, super excited about this conversation. And thanks for joining us on the CXO Bytes podcast. Thank you.
Dr. Ong Chen Hui: Thank you very much for having me. Thank you.
Sanjay Podder: Thank you. 
Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.



Hosted on Acast. See acast.com/privacy for more information.

1 year ago
39 minutes 7 seconds
