If there’s any silver lining to Taylor Swift becoming the victim of A.I. pornographers, who began circulating explicit, so-called “deepfake” images of her on social media last week, it’s Swift’s remarkable ability to inspire pressure on Washington. On Friday, even White House Press Secretary Karine Jean-Pierre weighed in, between questions about the war in Ukraine and the crisis at the border, calling on Congress to pass legislation that would crack down on abusive or unlawful A.I.-generated images. Jean-Pierre’s outrage was quickly echoed by SAG-AFTRA, the actors’ union, which fired off its own statement arguing that individuals’ likenesses need to be protected.
Swift has no easy way out of her predicament. She is reportedly contemplating legal action over those deepfakes, but her options are limited. While she possesses viable claims against the anonymous perpetrators, a lawsuit probably wouldn’t achieve much. Sure, she might win money damages from the pornographers, if she could find them. But Section 230 of the Communications Decency Act, which generally immunizes tech platforms that host user content, poses major roadblocks to her ability to compel takedowns and halt the dissemination of the images.