UN report urges stronger measures to detect AI-driven deepfakes

A new United Nations report is calling for significantly stronger measures to detect and combat AI-driven deepfakes, warning that manipulated multimedia poses growing risks to the integrity of information—especially on social media platforms.

Declining Trust in Social Media

The International Telecommunication Union (ITU), a key UN agency, highlighted a notable drop in public trust across social media channels due to the proliferation of realistic fakes made possible by generative AI tools. Bilel Jamoussi, ITU’s Chief of Study Groups, commented, “Trust in social media has dropped significantly because people don’t know what’s true and what’s fake.” Combating deepfakes has become a pressing challenge as increasingly sophisticated AI models allow for the fabrication of convincing false images, audio, and video[1].

Digital Verification Tools Recommended

The ITU recommended that content distributors—specifically social media platforms—implement digital verification tools designed to authenticate images and videos before they are shared online. These verification mechanisms could help confirm the origin and authenticity of digital content, thereby supporting users in discerning between genuine and manipulated media[1].

  • Content distributors are urged to adopt digital verification tools for multimedia.
  • Advanced AI detection tools should be employed to flag or block misinformation.
  • Users should be provided with clear provenance information to assess trustworthiness.
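The verification flow the report describes, binding a piece of content to a signed record of its origin so later tampering is detectable, can be sketched in miniature. This is an illustrative toy only: real provenance standards such as Adobe-backed C2PA use embedded manifests and X.509 certificate chains rather than a shared secret, and all names here (`sign_provenance`, `verify_provenance`, the key) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for demonstration; a real system would use
# public-key certificates so anyone can verify without the secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_provenance(content: bytes, source: str) -> dict:
    """Attach a provenance record binding the content's hash to its source."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"source": source, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the record's signature, then that the content still matches it."""
    expected = hmac.new(SIGNING_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # record was altered or signed by an unknown party
    stored = json.loads(record["payload"])["sha256"]
    return stored == hashlib.sha256(content).hexdigest()

image = b"...original image bytes..."
record = sign_provenance(image, "example-news-agency")
print(verify_provenance(image, record))            # True: content untouched
print(verify_provenance(b"edited bytes", record))  # False: content was altered
```

A platform applying this idea would verify such a record at upload time and surface the source field to users, which is the provenance display Rosenthol argues for below.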

Industry Solutions and the Importance of Content Provenance

Leonard Rosenthol, from the digital editing leader Adobe, emphasized the critical role of establishing provenance—that is, the origin and history—of digital content. Since 2019, Adobe has been at the forefront of addressing the deepfake threat. Rosenthol noted, “We need more of the places where users consume their content to show this information... When you are scrolling through your feeds you want to know: ‘can I trust this image, this video...’” [1]

Call for a Global Approach

Dr. Farzaneh Badiei, founder of the digital governance firm Digital Medusa, called for coordinated international action, pointing out that at present there is no single global watchdog focused exclusively on detecting manipulated content. The report suggested that companies, especially those hosting information with public impact, need to take greater responsibility by deploying advanced AI tools to tackle rampant misinformation and reduce deepfake content, which is especially crucial ahead of major elections and global events [2].
