Asian innovators fight online hate, lies as tech giants fall short

Tech tools in Asia tackle online abuse and misinformation in local languages amid criticism of Facebook, Twitter and WhatsApp.

Tech firms such as Facebook, Twitter and YouTube face growing scrutiny for hate speech and misinformation. Image: Roberto Trombetta, CC BY-SA 3.0, via Flickr.

Fed up with the constant stream of fake news on her family WhatsApp group chats in India - ranging from a water crisis in South Africa to rumours around a Bollywood actor’s death - Tarunima Prabhakar built a simple tool to tackle misinformation.

Prabhakar, co-founder of India-based technology firm Tattle, archived content from fact-checking sites and news outlets, and used machine learning to automate the verification process.

The web-based tool is available to students, researchers, journalists and academics, she said.

“Platforms like Facebook and Twitter are under scrutiny for misinformation, but not WhatsApp,” she said of the messaging app owned by Meta, Facebook’s parent, that has more than 2 billion monthly active users, with about half a billion in India alone.

“The tools and methods used to check misinformation on Facebook and Twitter are not applicable to WhatsApp, and they also aren’t good with Indian languages,” she told Context.

WhatsApp rolled out measures in 2018 to rein in messages forwarded by users, after rumours spread on the messaging service led to several killings in India. It also removed the quick-forward button next to media messages.

Tattle is among a rising number of initiatives across Asia tackling online misinformation, hate speech and abuse in local languages, using technologies such as artificial intelligence, as well as crowdsourcing, on-ground training and engaging with civil society groups to cater to the needs of communities.

While tech firms such as Facebook, Twitter and YouTube face growing scrutiny for hate speech and misinformation, they have not invested enough in developing countries, and lack moderators with language skills and knowledge of local events, experts say.

“Social media companies don’t listen to local communities. They also fail to consider context - cultural, social, historical, economic, political - when moderating users’ content,” said Pierre François Docquir, head of media freedom at Article 19, a human rights group.

“This can have a dramatic impact, online and offline. It can increase polarisation and the risk of violence,” he added.

Local initiatives vital

While the impact of hate speech online has already been documented in several Asian countries in recent years, analysts say that tech firms have not ramped up resources to improve content moderation, particularly in local languages.

United Nations rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fuelled the violence against Rohingya Muslims in Myanmar in 2017, after a military crackdown on the minority community.

Facebook said at the time it was tackling misinformation and investing in Burmese-language speakers and technology.

In Indonesia, “significant hate speech” online targets religious and racial minority groups, as well as LGBTQ+ people, with bots and paid trolls spreading disinformation aimed at deepening divisions, a report from Article 19 found in June.

“Social media companies … must work with local initiatives to tackle the huge challenges in governing problematic content online,” said Sherly Haristya, a researcher who has written a report on content moderation in Indonesia with Article 19.

Indonesian non-profit Mafindo, which is backed by Google, runs workshops to train citizens - from students to stay-at-home mothers - in fact-checking and spotting misinformation.

Mafindo, or Masyarakat Anti Fitnah Indonesia, the Indonesian Anti-Slander Society, provides training in techniques such as reverse image search, video metadata analysis and geolocation checks to help verify information.
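One building block behind reverse image search is the perceptual hash: a compact fingerprint that stays nearly identical when a photo is resized or recompressed, so recycled hoax images can be matched against known originals. Mafindo’s actual tooling is not public; the sketch below is a minimal illustration of the idea, assuming the image has already been decoded and downscaled to an 8x8 grid of grayscale values.

```python
# Illustrative sketch of an "average hash", the kind of perceptual
# fingerprint reverse image search uses to match near-duplicate photos.
# Assumes the image is already decoded into an 8x8 grayscale grid (0-255);
# real systems handle the decoding and downscaling themselves.

def average_hash(pixels):
    """Return a 64-bit perceptual hash of an 8x8 grayscale grid."""
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        # Each pixel contributes one bit: 1 if brighter than average.
        bits = (bits << 1) | (1 if v >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")
```

Two hashes within a few bits of each other usually point to the same underlying photo, even after cropping-free edits like compression or resizing - which is why a doctored caption on a years-old image is often easy to catch.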

The non-profit has a professional fact-checking team that, aided by citizen volunteers, has debunked at least 8,550 hoaxes.

Mafindo has also built a fact-checking chatbot in the Bahasa Indonesia language called Kalimasada - introduced just before the 2019 election. It is accessed via WhatsApp and has about 37,000 users - a sliver of the nation’s more than 80 million WhatsApp users.

“The elderly are particularly vulnerable to hoaxes, misinformation and fake news on the platforms, as they have limited technology skills and mobility,” said Santi Indra Astuti, Mafindo’s president.

“We teach them how to use social media, about personal data protection, and to look critically at trending topics: during COVID it was misinformation about vaccines, and in 2019, it was about the election and political candidates,” she said.

Abuse detection challenges

Across Asia, governments are tightening rules for social media platforms, banning certain types of messages, and requiring the swift removal of posts deemed objectionable.

Yet hate speech and abuse, particularly in local languages, often go unchecked, said Prabhakar of Tattle, who has also built a tool called Uli - which is Tamil for chisel - for detecting online gender-based abuse in English, Tamil and Hindi.

Tattle’s team crowdsourced a list of offensive words and phrases commonly used online, which the tool then blurs on users’ timelines. People can also add more words themselves.

“Abuse detection is very challenging,” said Prabhakar. Uli’s machine learning feature uses pattern recognition to detect and hide problematic posts from a user’s feed, she explained.

“The moderation happens at the user level, so it’s a bottom-up approach as opposed to the top-down approach of platforms,” she said, adding that Tattle would also like to be able to detect abusive memes, images and videos.

In Singapore, Empathly, a software tool developed by two university students, takes a more proactive approach, functioning like a spell check when it detects abusive words.

Aimed at businesses, it can detect abusive terms in English, Hokkien, Cantonese, Malay and Singlish - or Singaporean English.
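Where Uli filters what a reader sees, a spell-check-style tool intervenes before a message is sent, flagging words for the writer to reconsider. Empathly’s code is not public; this is a minimal sketch of that pre-send pattern, with a placeholder term list.

```python
# Illustrative sketch of a spell-check-style pre-send check: flag
# abusive words before a message is posted, rather than filtering
# the reader's feed afterwards. The term list is a placeholder.

ABUSIVE_TERMS = {"abusiveterm", "insultexample"}  # placeholders

def flag_before_send(message):
    """Return (term, word_position) pairs the writer should reconsider."""
    flags = []
    for i, word in enumerate(message.lower().split()):
        cleaned = word.strip(".,!?")  # ignore trailing punctuation
        if cleaned in ABUSIVE_TERMS:
            flags.append((cleaned, i))
    return flags
```

A production system for Hokkien, Cantonese, Malay or Singlish would need far more than exact word matching - spelling variants, code-switching and context all matter - which is where local knowledge gives tools like this their edge.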

“We’ve seen the harm that hate speech can cause. But Big Tech tends to focus on English and its users in English-speaking markets,” said Timothy Liau, founder and chief executive of Empathly.

“So there is room for local interventions - and as locals, we understand the culture and the context a bit better.”

This story was published with permission from Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers humanitarian news, climate change, resilience, women’s rights, trafficking and property rights. Visit https://www.context.news/.
