Asia demonstrates least transparency on AI safeguards for workers: report

Only 7 per cent of more than a thousand firms surveyed in the region are using artificial intelligence with adequate safeguards to help workers adapt to the technology’s impact, according to new analysis by AI Company Data Initiative.

Corporate safeguards for workers using artificial intelligence include having a dedicated AI complaint mechanism, according to AI Company Data Initiative. Image: Tirachard via Deposit Photos

Asia-based companies are showing the weakest transparency on how they safeguard workers from artificial intelligence (AI) risks, with only 7 per cent of 1,279 firms disclosing any measures, according to a new responsible AI benchmark study.

Asia lags other regions in publishing information on how companies protect workers from AI-related harms such as surveillance, algorithmic bias or automated decision-making, according to the report by AI Company Data Initiative (AICDI), a global dataset backed by Thomson Reuters Foundation and United Nations Educational, Scientific and Cultural Organisation (UNESCO).

By contrast, a higher share of companies in Europe and North America report at least some safeguards, suggesting a widening transparency gap as global supply chains digitalise, said the report, which analysed publicly available information from almost 3,000 global companies, mostly from industrial, information technology, chemicals, metals, and mining sectors.

Similarly, the presence of an AI-related complaint mechanism is higher on average in the United Kingdom and the rest of Europe, it added.

“The absence of such dedicated mechanisms [in Asia] can limit organisations’ ability to detect and address early signs of harm, undermine employee trust in AI deployments and reduce transparency around how AI-related risks are escalated and managed. It might also increase the likelihood that any potential issues go unreported and unresolved if and when they occur,” said the study.

Asia demonstrates least transparency

Only 7 per cent of Asian companies disclose how they safeguard workers from AI risks. Image: AICDI

The survey also pointed to a structural factor behind the region’s poor showing, revealing a correlation between company size and how much data is publicly shared on AI governance, including safeguards for workers.

Large-cap companies are significantly more likely than small-cap firms to report that they have formal AI oversight bodies and dedicated AI governance resources in place, the study noted. In AICDI’s sample, half of the small-cap firms surveyed are headquartered or primarily located in Asia.

“This indicates that AI uptake is shaped not only by firm size but by local ecosystems and industry structure, with leadership in larger firms concentrated in North American technology-intensive sectors, and a more diffuse profile among smaller companies,” it said. 

Not enough AI reskilling in rest of the world

Although European companies demonstrated the most transparency on AI safeguards for workers, fewer than one in three companies worldwide are preparing their employees for the upheaval that AI is expected to bring to the workplace, the analysis added.

Under a third of firms said that they offer any AI-related training to staff, leaving many workers with only a piecemeal understanding of how AI tools work and how they may reshape roles in the future. Even in the 31 per cent of companies where training programmes exist, the report added, these are often limited to leadership roles.

“This non-standardised, unstructured approach to AI training can heighten risks for workers by leaving those most exposed to AI-driven change, particularly frontline and non-technical staff, without the baseline awareness, practical guidance and support they need to use AI tools safely,” it read.

The study also highlighted the ethical impacts of AI, which it says are poorly governed, as companies share limited information publicly. There is also a governance gap: fewer than half of companies report any formal AI strategy or guidelines.

AICDI’s researchers cited global consultancy PwC’s 2025 study, which underscored a growing demand for information on how companies are adopting AI, with 42 per cent of investors saying they want more transparency on companies’ AI investment and another 42 per cent wanting clearer information on AI returns and cost savings. 

“The same systems that can improve speed, cost and personalisation can also create new, scaled risks, often silently and unevenly, when governance doesn’t keep pace,” AICDI said.

“Responsible adoption is what turns AI from a short-term productivity lever into a sustainable capability, by ensuring systems are designed and deployed with clear purpose, human oversight, data protection, monitoring and documented accountability.”
