How can sustainability leaders navigate the risks and opportunities of using AI?

AI can help to solve sustainability problems. But when used unwisely, it risks stripping away context and outsourcing judgment in decision-making. Dr Louise Drake of the Cambridge Institute for Sustainability Leadership argues that critical thinking and systems insight are essential as business leaders navigate an AI-driven world.

University of Cambridge Institute for Sustainability Leadership postgraduate students during a workshop exercise. Image: CISL

Three years after the launch of ChatGPT, the environmental, social and governance (ESG) risks and opportunities of Artificial Intelligence (AI) are rapidly coming into focus for businesses. 

From better climate risk analysis and supply chain transparency to rocketing data centre emissions and exacerbated gender inequality, AI presents companies with what one Postgraduate Certificate student at the Cambridge Institute for Sustainability Leadership (CISL) describes as a “jagged frontier” in terms of its sustainability outcomes. The term, coined by Ethan Mollick, a professor at the Wharton School, captures how AI can be remarkably effective at some tasks and far less so at others.

CISL’s programme director for postgraduate education, Dr Louise Drake, said in an interview with Eco-Business that the use of AI by business leaders is now inevitable. In the United States, for instance, 82 per cent of big-company executives use generative AI every week, a 2025 study found.

But she urges both caution and wisdom in how it is deployed – particularly if it starts to replace rather than simply enhance human judgment.

“Judgement is where we ultimately take a stance and make decisions. If that process is handed over to systems whose internal workings are not transparent, it becomes difficult to maintain ownership, accountability, and critical oversight,” said Drake. 


Dr Louise Drake, programme director for postgraduate education, Cambridge Institute for Sustainability Leadership

Using AI to make sustainability decisions – or indeed any complex business decision – can be problematic if the datasets on which AI relies lack context: the essential background information that provides real-world grounding. Context is important because it helps reveal systems interdependencies – how environmental, social, and economic factors interact in specific places, Drake noted.

“Paying attention to context makes business leaders more systems-intelligent, allowing them to see how actions in one area can ripple through others,” she said, adding that ignoring it can result in “systems-foolish” decisions. “It provides the situational awareness for intuition and wise judgement.”

Drake has been testing the capabilities of generative AI in her own research and teaching, comparing its outputs with her own analysis and treating the process as a series of live experiments. But she has also seen the pitfalls, including what she calls a “conceptual smoothie” effect in some AI-generated writing and analysis.

“The words may all be there, but the texture of the knowledge has disappeared,” she said. “The nuance and the relationships between ideas can get flattened.”

The rapid pace of AI development and fear of missing out are pushing many organisations to adopt the technology quickly, often without reflecting deeply on what the technology should actually be serving.

Dr Louise Drake, programme director, postgraduate education, Cambridge Institute for Sustainability Leadership

In this interview, Drake discusses how AI is reshaping sustainability leadership and the mistakes businesses are making when using the technologies. She also talks about why business leaders will need stronger skills in critical thinking, systems insight and storytelling to navigate a business landscape turned upside down by AI.

How are you and your students using AI?

My colleagues and I have been trained to be critical thinkers, so we don’t adopt new technologies simply out of fear of missing out. But we recognise that AI adoption is inevitable and already having a profound impact. AI is not a single technology but a broad and rapidly evolving spectrum, spanning specialist applications and generative AI capabilities. My approach is to engage with generative AI proactively to understand where it can genuinely enhance thinking and action, as well as its potential blind spots and limitations.

In my professional work, I am testing the capabilities of AI in different aspects of teaching, research and writing, for example identifying relevant literature, accessing and analysing data, and producing summaries. I compare what generative AI can do in light of my own and colleagues’ domain expertise, and treat the process as a series of live experiments.

I’m very aware of the risks of AI in education. Students now have access to many AI tools that could enhance learning but can also undermine it. I sometimes see work that appears to rely heavily on AI, including references that are clearly hallucinated. In an educational setting, such outputs are not only heavily penalised and a route to academic failure; the quality of insight they produce is also much poorer.

I’ve noticed what some call a “conceptual smoothie” effect in AI-generated text when poorly conceived prompts are used and over-reliance is placed on AI ‘analysis’. The words may all be there, but the texture of the knowledge has disappeared. The nuance and sense of how ideas relate to one another can get flattened.

These risks are equally relevant for professional and business settings. AI offers fast and efficient access to a vast body of knowledge which, if thoughtfully deployed, has the potential to improve decision-making by time-pressed practitioners. The challenge is avoiding the tendency to accept at face value whatever emerges from the ‘black box’, and having the domain expertise to evaluate and use it effectively.

Our students recognise the importance of this expertise as well as critical thinking and leadership skills, which is why they join our programmes. A number of our students are directly researching the sustainability impacts of AI in a business context and are trying to avoid the binary debates we often see in public discussions, where people are either enthusiastic advocates of AI or deeply sceptical of it. Instead, they recognise that the challenge is to engage with it critically: understanding where AI can better augment human judgement and outcomes, and where its limitations mean we need to be more cautious and intentional.

They point out that while technological change is happening incredibly quickly, the surrounding systems – regulation, governance, economic systems, financing and organisational processes – are lagging behind. One student uses Prof Ethan Mollick’s concept of the “jagged frontier” to describe the ability of AI to contribute to sustainability outcomes, meaning there are areas where it clearly enhances decision-making and other areas where it may erode or undermine our capabilities – and we’re not always sure which is which yet.

Companies have been cutting back on junior- and mid-level sustainability talent as they tap into AI to perform automatable functions such as sustainability reporting. What is your take on this dynamic and the need for sufficient expertise in the talent pipeline?

In many sectors, role redesign is lagging behind technological change in AI-augmented workforces – a dynamic one of our Master’s in Sustainability Leadership students is researching. Organisations are understandably experimenting with what these technologies can do and where they can add value, as well as where human oversight is still needed. But the redesign of roles and career pathways hasn’t fully caught up yet.

I was at a gathering with C-suite leaders in Europe a few weeks ago where we were discussing the future of sustainability professional roles. The view emerging from that discussion was that, if sustainability professionals are going to have real impact, their work needs to shift toward enterprise-level thinking. Alongside AI literacy, that includes skills in futures thinking – identifying multiple plausible futures and interrogating business strategy against them – as well as stronger commercial acumen and “systems intelligence”, meaning the ability to see interdependencies between ecological, social, and economic trends.

Sustainability professionals have been quite caught up in the industry of reporting. This has sometimes left less time and space to develop the skills that would allow them to shape strategy and influence proactive value creation. In that sense, AI advances may actually create an opportunity to recast the sustainability role so that professionals spend less time producing information and more time interpreting it, connecting it to strategy, and supporting long-term decision-making.

They [students] recognise that the challenge is to engage with it critically: understanding where AI can better augment human judgment and outcomes, and where its limitations mean we need to be more cautious and intentional.

One idea I’ve heard discussed is reframing the sustainability function as an analytical capability, bringing in and training people who can ensure AI-generated outputs are robust, develop insights, and connect them to systems thinking, futures analysis, and business strategy. The challenge is where the training ground for those capabilities will come from, which is why targeted education plays a key role in accelerating that upskilling.

So there are opportunities here, but they will require intentional reflection about what sustainability functions are actually for, what skills are needed, and how individuals and organisations build those capabilities.

What are the most common mistakes business leaders are making when using AI to guide sustainability decisions?

One basic issue, particularly with generative AI, is that outputs are only as good as the inputs – the familiar “garbage in, garbage out” problem. At the same time, context – the background information and situational awareness that is often implicit but critical to human intuition – is frequently missing from the datasets used to train AI, because it hasn’t been codified in the data. Contextual insight therefore doesn’t always make its way into the analysis, which means that analysis can lack real-world grounding.

Another risk is outsourcing judgement entirely to AI. Judgement is where we ultimately take a stance and make decisions. If that process is handed over to systems whose internal workings are not transparent, it becomes difficult to maintain ownership, accountability, and critical oversight.

Specifically in sustainability, one of the biggest risks relates to how we define and pursue goals. Humans tend to be more comfortable setting sub-goals – such as eliminating child labour or reducing deforestation – than thinking carefully about ultimate goals and the wider systems those objectives sit within. If goals are pursued in isolation, we can lose sight of the broader web of interdependencies between ecological, social, and economic factors.

This is why I stress the importance of systems intelligence when it comes to understanding sustainability ambitions. A lack of systems intelligence is a risk whether we use AI or not, but because AI significantly enhances our analytical capabilities, it makes it even more important to be clear about what ambitions and aims those capabilities are being used for.

AI often produces content that is plausible but in some way flawed. How do you teach students to critically interrogate AI outputs rather than take them at face value?

Critical thinking skills are key to effective leadership. The habits you develop when using academic rigour can strengthen your ability to lead. At the heart of academic inquiry is a simple question: how do we know what we know? That means asking where knowledge comes from, how it is created, and what assumptions underpin it. Understanding those questions is crucial when using AI.

One way we develop this capability is through an interdisciplinary approach in all our Master’s level programmes. Students engage with diverse perspectives such as psychology, business, engineering, politics, and the natural sciences. Each discipline has different methods of generating knowledge, and learning how those approaches differ helps students better evaluate claims and evidence.

We also place a strong emphasis on understanding context. Context is essential because leadership is fundamentally about making meaning, and meaning can only emerge when information is understood within its specific circumstances. For sustainability leadership, context is key because it helps reveal systems interdependencies – how environmental, social, and economic factors interact in specific places. Paying attention to context makes business leaders more “systems-intelligent”, allowing them to see how actions in one area can ripple through others. Ignoring it can result in “systems-foolish” decisions, such as destroying natural capital for short-term gains, which then reduces long-term productivity or removes natural defences against environmental shocks.

Over the past few years we’ve been encouraging students to reflect more deliberately on questions such as: Where does this knowledge come from? Who produced it? Whose voices are included, and whose are missing? Those questions are especially important when working with AI. Generative AI draws on existing datasets without discernment, unless prompted otherwise, which means it reflects the biases and dominant perspectives embedded in those data. For example, there can be a bias toward Western viewpoints or toward the “average” perspective. As a result, AI outputs often overlook the diversity of experiences and contexts that exist at the margins – yet that diversity is often where the most valuable insights lie.

You have cited the American mathematician and philosopher Norbert Wiener, who warned that we must be sure the goals we give machines are the goals we want. What are the risks when sustainability leaders deploy AI without clear purpose?

The rapid pace of AI development and fear of missing out are pushing many organisations to adopt the technology quickly, often without reflecting deeply on what the technology should actually be serving. While some companies are exploring how AI might support human and planetary flourishing, much of its deployment is driven simply by the fact that it is possible, or the quest for short-term efficiency, rather than longer-term goals. This is resulting in growing concerns around impacts on jobs, inequality and concentration of power.

One of our students is working on enterprise scale adoption of AI, and argues that AI is fundamentally an optimisation technology. If it is deployed without care, it may simply optimise existing unsustainable systems rather than transform them. What matters is why, how, where and for whom we use AI capacity and for what ultimate goals.

What leadership capabilities will become essential as AI continues to reshape global systems?

I’ve mentioned already the importance of critical thinking, context awareness and systems insight. These are important because they ensure that AI supports rather than undermines wise leadership judgement.

Leadership is also a mechanism of influence – it’s the ability to shape how people think about who they are, what they belong to, and what they want to support. Some interesting work by Prof Alex Haslam at the University of Queensland and colleagues explores the importance of social identity in leadership – and how leaders create, advance and embed a sense of shared identity that people feel part of and want to support.

We’re seeing a lot of this play out in the world right now through identity politics, where questions of belonging and alignment strongly shape leadership dynamics. AI can amplify these dynamics significantly, because it can reinforce echo chambers and deepen divisions.

Because of that, sustainability leaders will need to be very thoughtful about how AI is used. There’s a risk that it simply reinforces blind spots, existing narratives, or “us versus them” divides. One important capability will be using AI intentionally in ways that build new patterns of connection and collaboration, rather than amplifying polarisation. That requires critical thinking and a clear awareness of how influence works.

I also think sustainability professionals sometimes assume the challenge is mainly technical or data-driven, when in reality leadership is often about relationships, influence, and understanding what people care about. That’s why storytelling is such an important leadership skill – crafting narratives that people feel part of, that connect with our everyday realities and priorities.

In a world being shaped by technology, we need to revisit the essential qualities that make us human. Providing the space and support to develop these capabilities must be a priority for every individual and organisation seeking to ensure that they, and the economies, societies and environments that they depend on, are thriving and fit for the future.
