Summary of Key Points:
- Proposed bill seeks to combat misuse of AI deepfakes through watermarking
- Bill aims to provide transparency and creator protection for AI-generated content
- Federal Trade Commission to oversee enforcement of the proposed COPIED Act
- Rise in deepfake frauds and scams, particularly in the crypto space
- Scammers leveraging AI to impersonate prominent personalities and dupe users
Combating Deepfake Misuse with Watermarking
A bipartisan group of Senators has introduced a bill to address the misuse of AI deepfakes by mandating the watermarking of such content. Known as the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, the bill aims to establish standardized methods for watermarking AI-generated content, protecting creators and giving them control over whether AI can be trained on their work.
Enforcement of COPIED Act by FTC
If passed, the COPIED Act will be overseen by the Federal Trade Commission (FTC), with violations treated as unfair or deceptive acts. The bill seeks to offer transparency into AI-generated content and empower creators, such as journalists and artists, by putting them back in control of their content. AI service providers like OpenAI will be required to embed information about content origin in a machine-readable format that cannot be bypassed using AI-based tools.
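The bill itself does not specify a technical format, but the idea of machine-readable content-origin information can be sketched as a provenance manifest cryptographically bound to the content. The function names and fields below are purely illustrative assumptions, not taken from the COPIED Act or any existing standard:

```python
import hashlib

def make_provenance_manifest(content: bytes, generator: str, origin: str) -> dict:
    """Build a hypothetical machine-readable provenance record.

    Field names here are illustrative only; the COPIED Act does not
    define a concrete schema.
    """
    return {
        "origin": origin,                # who published the content
        "generator": generator,          # which AI tool produced it
        "ai_generated": True,
        # Binding the manifest to a hash of the content means any edit
        # to the content invalidates the manifest.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content is unchanged since the manifest was created."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

article = b"AI-generated text..."
manifest = make_provenance_manifest(article, generator="example-model",
                                    origin="ExampleAI Inc.")
print(verify_manifest(article, manifest))         # True: content untouched
print(verify_manifest(article + b"x", manifest))  # False: content was altered
```

A hash-bound manifest like this only detects tampering; making the record "impossible to bypass," as the bill requires, would additionally demand digital signatures or robust watermarks embedded in the media itself.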
Rise in Deep Fake Frauds and Scams
The proposed bill comes amid a significant surge in frauds and scams using deepfake content, particularly within the crypto space. Scammers are leveraging AI to impersonate well-known figures such as Elon Musk and Vitalik Buterin in order to deceive users. Recent incidents, including a sophisticated deepfake attack that led to a $2 million loss on a crypto exchange, underscore the need for robust measures against deepfake fraud.