Last week, the European Parliament passed the Artificial Intelligence Act (AI Act), a comprehensive framework for EU member states to follow in regulating AI products and services.
The AI Act is intended to serve as consumer safety legislation, taking a “risk-based approach” to products and services that use artificial intelligence: the riskier an AI application, the more scrutiny it faces. The risk levels are broken down into different categories, including:

- Unacceptable risk: applications that are banned outright, such as social scoring systems
- High risk: applications in sensitive areas like hiring, education, and critical infrastructure, which face the strictest requirements
- Limited risk: applications such as chatbots and generative AI tools, which are subject to transparency obligations
- Minimal risk: everything else, which remains largely unregulated
Both providers (like OpenAI and Google) and deployers (companies that build those AI systems into their own products) will be required to meet accuracy and transparency requirements so that end-users (consumers) are aware they are interacting with AI. Companies that are unable to comply with these new requirements face fines of up to €35 million or 7% of their global annual revenue, whichever is higher.
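To put the penalty cap in concrete terms, here is a minimal Python sketch of the “whichever is higher” rule; the function name and the revenue figure are purely illustrative, not part of the Act.

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Cap on AI Act fines for the most serious violations: EUR 35 million
    or 7% of worldwide annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in revenue faces a cap of EUR 70 million,
# not EUR 35 million.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")
```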
Any business, regardless of where it is based, that has end-users in the EU and leverages AI systems in its product will be subject to the AI Act. While this is a broad reach with some ambiguity, it is clear that online platforms using or publishing AI content will need to comply with these new regulatory obligations.
With many user-generated content platforms seeing an increase in AI-generated content being posted, most online sites (e.g., publishers, marketplaces, and social media sites) are likely to have to build transparency and moderation controls, including:

- Detecting whether user-submitted content is AI-generated
- Marking detected AI content in a machine-readable format
- Disclosing to end-users that such content is artificially generated
Labeling, and by extension detecting, AI-generated content will now be a requirement if such content is disseminated on content platforms. This applies to every form of content: text, audio, images, and video. As the Act puts it:
"Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."
While we support the EU's legislative intention of promoting safe AI systems, we also recognize that businesses may find this law confusing to interpret and comply with. The obligations are rather ambiguous: the AI Act does not specify whether secondary types of content (like user reviews or comments) are subject to these requirements, or what compliance checks businesses will need to incorporate.
The good news is that companies will have time to develop solutions for compliance. The Act is expected to become law by May, and its provisions will take effect in stages, with many of the transparency requirements not applying until a year from now.
That said, we believe there are certain actions companies can take now to get ahead of enforcement of the law. Notably, companies should look to develop:

- Reliable detection of AI-generated text, audio, images, and video
- Labeling and disclosure workflows for content flagged as AI-generated
Already, we are seeing some companies take action: platforms such as YouTube and Instagram are asking users to self-report when uploading “realistic” AI-generated content. It’s unclear if this “trust system” will be sufficient to comply with the Act, but it is a strong signal that industry leaders are taking this legislation, and AI safety, seriously.
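To make that gap concrete, here is a minimal Python sketch of how a platform might back up user self-reports with automated detection. The detector here is a hypothetical stand-in, not a real API; a production system would call a trained classifier instead.

```python
def detector_score(content: str) -> float:
    """Hypothetical stand-in for an AI-content detector. This toy heuristic
    scores repetitive text higher just so the example runs end to end."""
    words = content.split()
    unique_ratio = len(set(words)) / max(len(words), 1)
    return 1.0 - unique_ratio

def should_label_as_ai(content: str, user_self_reported: bool,
                       threshold: float = 0.9) -> bool:
    # Honor the uploader's disclosure outright; otherwise fall back to
    # automated detection so undisclosed AI content is still caught.
    return user_self_reported or detector_score(content) >= threshold

# A self-reported upload is labeled regardless of the detector's opinion.
print(should_label_as_ai("a perfectly ordinary caption", user_self_reported=True))
```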
At Pangram Labs, we are working hard to build the best AI detection systems so that companies can operate safe and responsible online platforms. We are encouraged by the EU's goal of internet transparency and look forward to working with researchers and policymakers to flesh out these important standards.
Want to get in touch with us? Send us an email at info@pangram.com!