How education and social media regulation can combat science denial

It’s ironic that as scientific evidence on climate change and vaccines has become more compelling, public opinion on social media appears more divided.

A woman poses at a TikTok pop-up booth at a convention in Qatar. Image: CC BY-SA 3.0, via Flickr.

Five billion people use social media globally, and millions of posts are shared on Facebook every minute.

While a virtual life can be fun and informative, it can also be harmful. Misinformation about climate science is rife. So too are conspiracy theories and disinformation about vaccine science, as we witnessed during the Covid-19 pandemic.

This misinformation not only undermines trust in science but actively threatens public health: The World Health Organization has named vaccine hesitancy, along with air pollution and climate change, among the top 10 global health threats of our time.

The evidence shows that vaccines save lives, and that climate change is strongly linked to rising water temperatures, sea level rise, shrinking polar ice caps, loss of biodiversity and more extreme weather events. 2023 was the hottest year on record. Greenhouse gas emissions hit record highs. The Earth is sounding its distress call.

But ironically, as the scientific evidence has become more compelling, public opinion appears to be more divided.

This hesitancy and denial appear to stem from a misconception of how science works.

Vaccines save millions of lives every year. And with improved technology, the accepted view on climate change has changed, with unequivocal data now linking rising CO2 levels and higher temperatures to the burning of fossil fuels.

This is science in action.

But opponents on social media and other outlets claim the scientific evidence for both is undecided and conflicting. These oppositional viewpoints are often supported by a few vocal scientists in white coats spreading misinformation, uncertainty and doubt.

The fossil fuel industry also plays a part in undermining science, by emphasising this so-called “dissent” within the scientific community.

This tactic is not new: for decades, the industry has created shell organisations and companies to fund climate-science denial and muddy public understanding of climate science.

More recently, the fossil fuel industry and other major polluters have used social media to gain new audiences. One analysis found that 16 of the world’s biggest polluters were responsible for placing more than 1700 false and misleading ads on Facebook in 2021.

There will always be a few scientists who challenge the accepted view. However, more than 97 per cent of professional medical and climate scientists, having reviewed the existing evidence, agree that vaccines are safe, and that human activity is increasing CO2 and warming the Earth’s surface.

The key message the public needs to hear is: What matters is the preponderance of evidence, not one person’s opinion. 

Facts are considered true because they are based on the preponderance of evidence, whereas opinions may be true or not.

It is easy to see how social media — which platforms opinions and untruths, and does away with traditional gatekeepers of knowledge such as news editors — can facilitate an impression that the science around climate change and vaccines is unsettled or “controversial.”

This is especially the case because 80 per cent of social media platforms do not have a content moderation policy that includes a comprehensive, universal definition of climate misinformation, according to a September 2023 report by the Climate Action Against Disinformation Coalition.

Artificial intelligence (AI) can further bolster oppositional opinions, lending them a polished, credible veneer and offering new ways to deceive and cast doubt.

Recommendation algorithms can mean that someone who clicks on one anti-vax post is then fed a steady diet of anti-vax and other anti-science content. Similarly, AI can be used to target misinformation at vulnerable audiences. And AI programs trained on vast data sets to generate text have automated fake news production, with increased persuasive potential.

Change begins with education

Bridging the consensus gap between science believers and deniers requires education.

Learning about the scientific method is important, but — in an age of conspiracy theories, misinformation and “alternative facts” — so too are critical literacy and philosophical inquiry.

Broadening school curricula to help students navigate complex ideas and virtual digital realities, and to make sense of an increasingly complicated world, could help.

Since children engage with social media from a young age, it makes sense for our education system to help them develop the skills to assess truth and falsehood when confronted with new information.

Education from preschool to Year 12 could include age-appropriate exercises on conspiracies commonly discussed on social media, from vaccination scare tactics and climate denialism to flat Earth theory, claims that the moon landing was a hoax, and the idea that the Earth is 6,000 years old.

If started early enough, this education could align with children’s development, as their thinking transitions from concrete, representational learning to more abstract reasoning, a process that begins around age six.

A ‘World Social Media Organisation’ could be a solution

Social media platforms have been essentially permitted to self-regulate for decades. That’s a problem, because they can make money off companies that pay to promote “fake news” and other forms of anti-science misinformation.

There have been some moves to regulate social media platforms in recent years, with varying degrees of success.

In mid-2023, the Australian government proposed legislation that would boost the powers of the Australian Communications and Media Authority to pressure tech companies into fighting online misinformation. But the draft bill hasn’t passed, and Australia remains in the early stages of responding to fake news and disinformation.

Individual platforms have their own mechanisms to reduce misinformation.

For example, X (formerly Twitter) has introduced a “community notes” feature, which displays reader-added context and corrections beneath misleading posts.

But X was named the worst offender for spreading climate misinformation in the 2023 Climate Action Against Disinformation Coalition report. And Meta, TikTok and YouTube, despite commitments to address climate misinformation on their platforms, fall short on policy enforcement.

Rather than relying on governments to pass laws in isolation — or expecting social media platforms to effectively regulate themselves — some form of international regulation is urgently required to prevent the constant barrage of misinformation and harm pervading the online world.

Project Liberty, formed in 2021, has mobilised a global alliance of technologists, academics, policymakers and citizens to build a safer, healthier internet and social media. The organisation supports work on misinformation flagging, interventions to reduce deepfakes, and online news literacy education.

The European Commission has also tackled online disinformation with a high-level group of experts who made key recommendations, including developing tools for educating platform users. 

In 2018, an International Grand Committee on Disinformation, comprising the UK, Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia and Singapore, met for the first time. The committee has since met in Ottawa and Dublin, and co-hosted a series of seminars in the US.

These initiatives are a start. They show that cross-border collaboration on tackling the global problem of internet disinformation is possible.

But given the magnitude of the disinformation problem — especially with growing concerns around AI — the international community needs to go one step further. 

Social media companies should be no different from global drug companies, which are tightly regulated by bodies such as the FDA and the TGA to ensure the drugs they produce for humans cause no harm.

One solution could be the formation of a World Social Media Organisation, comprising partners from each country, to oversee and curb harmful content.

Just as the WHO supports global health, the organisation could be dedicated to advancing the responsible dissemination of knowledge, and to preventing harm and hate speech, to promote the wellbeing of society.

In today’s society, where digital information is ubiquitous and changing rapidly, we need to embrace robust debate, listen to the experts, verify what they say, understand the methods they used, and not be swayed by the uninformed or naysayers.

We need to be digital citizens protecting our rights, not digital subjects being manipulated by social media.

If society fails to regulate social media, historians 100 years from now may write: “The people of the early 21st century became so overwhelmed with digital information that they failed to develop the skills and systems to sufficiently process its content, to the detriment of their society.”

Dr Geoffrey Dobson is a Professor in the College of Medicine and Dentistry and Australian Institute of Tropical Health & Medicine at James Cook University in Townsville. He holds a Professorial Chair in the Heart, Trauma and Sepsis Research Laboratory.

Originally published under Creative Commons by 360info™.
