
Statement on Biden's AI Safety Executive Order

Max Spero and Bradley Emi, October 31, 2023

NOTE: We've changed our name to Pangram Labs! See our blog post for more details.

Today, the Biden administration issued new standards for AI safety and security, including this directive on AI content detection:

The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic — and set an example for the private sector and governments around the world.

Joe Biden, from ai.gov.

Checkfor.ai fully supports the Biden administration’s commitment to the safe and responsible deployment of large language models. We believe that developing guidelines and policy around AI content authentication is an important step in the right direction for AI safety, and one that is aligned with our mission of protecting the internet from spam and other malicious content generated by large language models. We will all be better off, as both consumers and users of AI technology, if we as a society invest more heavily in detecting AI-generated content. Today, we’d like to release a statement on our position as a company on content authentication, watermarking, and detection.

  • We believe that no current detection solution is sufficient to solve this problem, and that we must invest more resources specifically into AI detection.
  • We believe that watermarking is not the solution to this problem: it promotes a false sense of security and has proven limitations when a user has access to model weights or the ability to edit the output (https://arxiv.org/pdf/2305.03807.pdf, https://arxiv.org/pdf/2301.10226.pdf). Powerful and capable unwatermarked open-source models already exist, and bad actors will always be able to locally finetune their own versions of these models to evade detection. Moreover, we believe the detection problem is solvable even for unwatermarked models.
  • We support regulation on AI safety and development of large language models, with the key provision that all regulation must support the open source ecosystem. Open source provides checks and balances on powerful large tech companies, transparency for consumers, and allows the public to freely evaluate, critique, and benchmark these models.
  • We believe that the government’s role in promoting AI safety and investing in AI detection research should focus on funding both academic and industry projects in AI detection.
  • To advance effective research in AI detection, we must establish benchmarks and evaluation criteria so that consumers can understand the efficacy and limitations of available AI detectors. We are committed to developing and contributing to industry-wide standards in AI detection, so that larger and more powerful language models can be safely and responsibly deployed.

At Checkfor.ai, we are working hard to build reliable AI detection systems so that next generation language models can be deployed safely and productively. We are excited to see the Biden administration’s commitment to differentiating human- and AI-generated content and look forward to working with researchers and policymakers to develop these important standards.
