Congress has passed a bill that forces tech companies to take action against certain deepfakes and revenge porn posted on their platforms.
In a 409-2 vote on Monday, the U.S. House of Representatives passed the "Take It Down" Act, which has received bipartisan support. The bill also received vocal support from celebrities and First Lady Melania Trump. The bill already passed the Senate in a vote last month.
The Take It Down Act will now be sent to President Donald Trump, who is expected to sign it into law.
First introduced by Republican Senator Ted Cruz and Democratic Senator Amy Klobuchar in 2024, the Take It Down Act requires tech companies to take swift action against nonconsensual intimate imagery. Platforms would be required to remove such content within 48 hours of a takedown request. The Federal Trade Commission could then sue platforms that don't comply with such requests.
In addition to targeting tech platforms, the Take It Down Act also carves out punishments, including fines and potential jail time, for individuals who create and share such imagery. The new law would make it a federal crime to publish, or even threaten to publish, explicit nonconsensual images, which would include revenge porn and deepfake imagery generated with AI.
Digital rights groups have shared their concerns about the Take It Down Act. Activists have said that the bill could be weaponized to censor legally protected speech, and that lawful content could be inaccurately flagged for removal.
Despite these concerns, the Take It Down Act even won support from the tech platforms it seeks to police, such as Snapchat and Roblox.
Congress isn't done addressing AI and deepfakes this year, either. Both the NO FAKES Act of 2025 and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 have also been introduced this session. The former seeks to protect individuals from having their voice replicated by AI without their consent, while the latter looks to protect original works and require transparency around AI-generated content.