AI access may not always be unlimited as ESG risks mount – are businesses ready?

Companies need to stress-test their plans to ensure business continuity in a disrupted world where access to Artificial Intelligence could be scarce or expensive.

Businesses and investors should not only focus on the great investment opportunity AI adoption brings. They should consider the obvious risks in doing so. Image: CommScope, CC BY-SA 3.0, via Flickr.

Businesses are placing massive bets on Artificial Intelligence (AI) on the assumption that the investment will surely pay off. Their leaders' decision-making conflates AI with clean air: always available, in whatever quantity and quality their business needs.

Businesses and investors should not only focus on the great investment opportunity AI adoption brings.

They should consider the obvious risks in doing so.

Chief sustainability officers and heads of government affairs have much to offer in this regard. Their experience in addressing climate risk, managing supply chain challenges and understanding how the downsides of globalisation impact their businesses offers a model for future-proofing AI adoption.

Today, leaders build strategies, restructure operations and make workforce decisions under the implicit assumption that AI will remain as accessible and affordable in the years ahead as it is now.

Companies undertake risk scenario planning to ensure there will be business continuity for many contingencies, from natural disasters to cyberattacks. This exercise allows them to test and improve their strategy and guide planning.

Yet few, if any, apply the same rigour to AI.

Ignoring the Responsible AI trilemma of environmental harm, job loss and rising inequality will lead to material constraints on access to AI.

These could include rising costs for electricity and/or water from environmental impact, growing public pressure from job losses or increasing income inequality, and explicit government regulations limiting access.

Boards and investors need to ensure management teams plan for a world without the AI access they have today. 

Undertaking the exercise of exploring plausible future scenarios will enable companies to build resilience by testing strategies against various circumstances.

Baseline scenario not the only scenario

The resilience companies build in for other significant risk factors also needs to be built for AI.

For investors, this exercise can open investment themes they had not considered.

Many boards undertake this analysis when it comes to climate scenario planning, where firms prepare for a range of outcomes. The exercise doesn’t ask business leaders to predict the future. It gives them a model to stress-test against futures they may not have considered. And it enables better business planning, providing information for a company to ensure more resilience.

Climate risks that can impact a business financially include physical risks, such as rising temperatures and more intense storms, and transition risks, such as the implementation of carbon taxes and shifting consumer and employee behaviour.

The risks most significant to a given business are then stress-tested against scenarios, which could be: present day (1.5°C–2°C); middle of the road (2°C–3°C); and high physical and transition risk (3°C+).

Similarly, businesses should consider three AI scenarios when future-proofing their operations: limitless and affordable; available but expensive; and rationed or sovereign.

The baseline scenario assumes today's operating environment persists: AI remains limitless and affordable. It may prove correct, but the political and societal forces building against it suggest otherwise. Businesses operating under this assumption are most in need of stress testing.

A second scenario: AI remains available but expensive. In this future, compute costs spike, and energy and water constraints bite. AI becomes more akin to a luxury good: big firms keep access while mid-market and small to medium-sized businesses get priced out. Planning for this scenario requires rethinking competitive positioning if or when costs multiply significantly.

The worst-case scenario: AI becomes rationed or sovereign. Governments step in with controls on availability for economic and/or national security reasons. Data localisation policies fragment the market. To avoid causing disruption under this scenario, businesses need to think critically about their AI supply chain and ensure that where they source their AI will remain available to them.

Businesses know how to conduct supply chain due diligence. They now need to extend that to AI, under any scenario.

Productivity at the cost of continuity

Businesses that optimise AI only for efficiency and productivity, while ignoring the potential costs of doing so, put business continuity at risk.

If AI becomes available but expensive, and thus out of their price range, businesses that counted on AI as part of their workforce will face a double hit: higher bills and no way to replace the institutional knowledge they let go.

If AI becomes rationed or sovereign, a business model built around always-on, ever-affordable AI may not be feasible at all.

The companies most aggressively adopting AI are the most exposed, and storm clouds are already forming. Data centres around the world are being paused over energy consumption and water scarcity. Projects are being blocked by local opposition. Unions oppose AI adoption that causes job losses, such as AI-enabled driverless vehicles.

All the above will converge to threaten operations and the long-term health of businesses.

Safeguarding the license to operate

Business leaders should be addressing the responsible AI trilemma within their own operations, and their investors should ensure they are doing so.

This will both de-risk businesses and deliver an immediate financial return by lowering costs. It can also insulate them against the growing backlash.

At the same time, they need to prepare for a different future.

Scenario planning does not demand a crystal ball. It requires identifying which parts of your business are most fragile under a plausible future.

Boards should be asking: What does our strategy look like if compute costs double? Which operations become unviable if we lose access to a country's models? How will our employees and customers view us if we adopt AI without any thought to its societal consequences?

Companies that treat AI access as a strategic risk variable – like carbon, water or supply chain security – will be better positioned than those that treat it as a given.

Firms that understand and mitigate the business and public interest risks of adopting AI will reap the rewards of better business while doing their individual part to blunt the brewing opposition to unchecked AI.

Otherwise, the worst-case scenario could soon be the base case.

Steven Okun is CEO of APAC Advisors, a Singapore-headquartered consultancy focused on geopolitics and responsible investing. Megan Willis is APAC Advisors’ senior advisor and Noemie Viterale is manager.
