Could text-to-video AI spell a new era for greenwashing?

High quality AI-generated productions will soon be a click away for conservationists – and polluters.

A pair of AI-generated mammoths. Image: Sora

Last week, OpenAI, the creators of ChatGPT, announced the arrival of a new text-to-video tool called Sora, which has since flooded social media feeds with ultra-realistic, high-definition clips created entirely by artificial intelligence (AI).

Sora, named after the Japanese word for “sky”, can create 60-second videos of exceptional quality from users’ text prompts. It can also bring still images to life as hyper-realistic video, and extend or recreate existing video footage.

The company announced it had opened access to Sora to only a select group of researchers and video creators, who are testing it against Sora’s terms of service, which prohibit “extreme violence, sexual content, hateful imagery, celebrity likeness, or the intellectual property of others”.

OpenAI shared reels of incredible videos, from woolly mammoths to podcasting dogs, a recreation of California during the gold rush, and golden retrievers playing in the snow.

This is a before-and-after moment for factual videos and films.   

Once Sora is fully released, we will enter an era in which real and AI-generated video and images are indistinguishable. 

Real versus imagined

We will not be able to tell the difference between real and imagined scenes of people, habitats, and animals – and this could have profound implications for content creation in the sustainability sector.

We will see a rise in imagined “fake-factual films” and simulated environmental and sustainability documentaries and videos. BBC Earth quality productions will be a few clicks away for conservationists and polluters alike.

While OpenAI announced that “the videos bear a watermark to show they were made by AI”, these watermarks will be easy to circumvent, either by cropping the video or by using another AI tool to remove them.

It’s only a matter of time before the internet is flooded with simulated full-length videos with complex editing and storylines.

Remember that this is the worst the technology will ever be — and the quality is already incredible.

While the first generative text-to-video models surfaced in 2022, early examples from Google, Meta and Runway were glitchy and identifiable by eye as AI-produced. Sora’s announcement ushers in a new era of hyper-real video, with complex camera movements and lifelike human and animal behaviour created from a few lines of prompt text.

AI and greenwashing  

A few text-to-video prompts from the Sora launch highlight the implications for the sustainability sector, specifically conservation.  

While animal habitats are diminishing worldwide, it is increasingly important for scientists, climate change experts and journalists to rely on accurate, verifiable reporting – which includes video documentation. Greenwashing is already on the rise, fuelled by polluters and their financiers. With tools like Sora, debunking individual cases of greenwashing will become increasingly difficult.

The flood of fake-factual videos onto the internet will create confusion among viewers, and a true picture of what is happening to our environment will become harder to discern. We are entering the age of unverified sustainability video.

AI arms race

A recent report by Europol estimates that as much as 90 per cent of online content may be synthetically generated by 2026.

As we have seen with the rise of deepfakes – recent examples include the UK Prime Minister, London Mayor Sadiq Khan, the late Indonesian dictator Suharto during Indonesia’s 2024 elections, and Ukrainian president Volodymyr Zelensky apparently announcing a surrender – legislators worldwide are struggling to keep up with the speed of AI’s proliferation.

A big problem is the speed at which fake videos spread. 

Video verification is, at present, cumbersome and slow, and requires human investigation – even for the New York Times Visual Investigations team and MIT researchers. As we’ve seen with recent deepfake examples, the damage is often done (reaching thousands or millions of views) before the videos can be debunked.

In 2018, members of the United States Congress raised fears over the impacts of deepfakes and “hyper-realistic forgeries” in a letter to the director of national intelligence, which stated: “By blurring the lines between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

A side effect of fake-factual videos will be a decline in public trust in legitimate journalism and sustainability reporting because the credibility of all video content will be called into question. 

Adapt or die

Fake-factual video, visual and written content will be so quick and cheap to produce that it will be hard to resist for budget-constrained sustainability teams.

Full articles, reports, data visualisations, videos, animations – even op-eds – can be easily produced by AI.

Executives tasked with producing sustainability content will begin to use Sora for stock footage, editing artificially generated scenes into real human stories.

In sustainability journalism, clear and evolving editorial guidelines over attribution and usage will be needed for ethical use. And they need to be written now. 

One key question we should be asking is: what can’t AI do, or do well – and should it replace traditional forms of sustainability communication?

Purpose-focused journalism and humanistic storytelling, with a strong ethical grounding, will be needed to separate fact from fiction, truth from simulation.

The public will want authentic human narratives that they connect with emotionally. It’s a time for filmmakers, journalists and those in the sustainability sector to leverage AI technology for repetitive tasks while doubling down on human-focused storytelling and reporting. 

While AI may be able to create stunning visual simulations, it cannot replicate our human need to express ourselves and share true human narratives.

Fraser Morton is a documentary filmmaker, visual journalist and educator. He is the founder of Far Features and executive producer at Eco-Business. 

