NOTE: We've changed our name to Pangram Labs! See our blog post for more details.
Today, the Biden administration issued new standards for AI safety and security, including this directive on AI content detection:
The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic — and set an example for the private sector and governments around the world.
Joe Biden, from ai.gov.
Checkfor.ai fully supports the Biden administration’s commitment to the safe and responsible deployment of large language models. We believe that developing guidelines and policy around AI content authentication is an important step in the right direction for AI safety, and it is aligned with our mission of protecting the internet from spam and other malicious content generated by large language models. The world, as both a consumer and a producer of AI technology, will be better off if we as a society invest more heavily in detecting AI-generated content. Today, we’d like to release a statement on our position as a company on content authentication, watermarking, and detection.
At Checkfor.ai, we are working hard to build reliable AI detection systems so that next-generation language models can be deployed safely and productively. We are excited to see the Biden administration’s commitment to differentiating human- and AI-generated content, and we look forward to working with researchers and policymakers to develop these important standards.