
Meta will start identifying AI generated content

Ashan Marla
May 14, 2024

Image by Meta

Overview

In response to new AI regulations passed in Europe, we published our analysis predicting that leading companies would start to take action on AI transparency. Last month, we started to see that prediction come true: Meta announced in a blog post that it "will begin labeling a wider range of video, audio and image content as 'Made with AI'".

This announcement is hopefully a sign that other companies will begin to recognize the risk AI-generated content poses, especially as it relates to misinformation and deepfaked images and videos. In the post, the company announced several actions it plans to take:

Meta plans to build internal tools to identify AI-generated content at scale when it appears on Facebook, Instagram, and Threads.

The content to be labeled will include AI-generated content from Google, OpenAI, Microsoft, Adobe, Midjourney, and more.

It will add a way for users to voluntarily disclose when they upload AI-generated content, while penalizing accounts that share AI-generated content without disclosing it.

These actions are a great start, and they were informed by a mixed group of stakeholders through Meta's content oversight board. After consulting international policymakers and surveying over 23K users, Meta found that an overwhelming majority (82%) favored these AI disclosures, especially for "content that depicts people saying things they did not say".

Zooming Out

In advance of the 2024 elections, it’s clear that Meta has learned the right lessons from the US elections of 2016 and 2020, and is investing in the right systems to better protect its users against a growing wave of AI-scaled misinformation. It’s also no coincidence that Meta is looking to implement these policies ahead of when the EU’s AI Act goes into effect in May.

Meta’s decisions and leadership will influence how other companies think about their own platform risks. On a panel at the World Economic Forum, Meta’s President of Global Affairs has already called the effort to detect artificially generated content “the most urgent task” facing the tech industry today.

So how can other companies follow this guidance and both protect their platforms from misinformation abuse and stay compliant with new EU regulation? Here are a few ideas:

  • Platform leadership should set content policies, compliant with regulation, that support the identification and adjudication of AI-generated content.
  • Develop product features that let users disclose AI-generated content, or at the very least flag AI content to mitigate its spread and the threat it poses to other users.
  • Use Pangram Labs' AI-generated content detection products to proactively flag AI-generated content for review.
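To make the combination of these steps concrete, here is a minimal sketch of how an upload pipeline might tie together voluntary disclosure, automated detection, and labeling. The detector here is a stub standing in for a real detection API, and all function names, field names, and thresholds are hypothetical placeholders, not Pangram's actual API.

```python
# Illustrative sketch of a moderation pipeline that routes uploads through an
# AI-content detector before publishing. The detector is a stub; a real
# deployment would call a detection service in its place. All names and
# thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    label: str           # "published", "labeled_ai", or "held_for_review"
    ai_likelihood: float


def detect_ai_likelihood(text: str) -> float:
    """Stub for an AI-content detector; returns a score in [0, 1].

    Placeholder heuristic for demonstration only.
    """
    return 0.9 if "as an ai language model" in text.lower() else 0.1


def moderate_upload(text: str, user_disclosed_ai: bool,
                    label_threshold: float = 0.5,
                    review_threshold: float = 0.85) -> ModerationResult:
    score = detect_ai_likelihood(text)
    if score >= review_threshold and not user_disclosed_ai:
        # Undisclosed, high-confidence AI content is held for human review.
        return ModerationResult("held_for_review", score)
    if user_disclosed_ai or score >= label_threshold:
        # Disclosed or likely-AI content is published with a "Made with AI" label.
        return ModerationResult("labeled_ai", score)
    return ModerationResult("published", score)


print(moderate_upload("Hello from my vacation!", user_disclosed_ai=False).label)
print(moderate_upload("As an AI language model, I think...", user_disclosed_ai=True).label)
```

The design choice worth noting is that disclosure and detection are complementary: disclosed content is labeled without penalty, while undisclosed content that scores highly is escalated rather than silently published.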

Pangram Labs is building the best AI detection systems so that companies can operate safe and responsible online platforms. If you’re looking to take proactive steps and lead the way in platform integrity and regulatory compliance, get in touch with us at info@pangram.com!
