Fake explicit Taylor Swift images show victims bear the cost of big tech’s indifference to abuse

As usual, Taylor Swift was the subject of some of the biggest stories in the world last week. Unfortunately, it wasn't just for her appearance on the sidelines as her boyfriend's NFL team clinched a spot in the Super Bowl.

AI-generated sexually explicit images of Swift went viral on X, formerly known as Twitter, last week. Photorealistic images of the world's most famous singer in compromising positions at an NFL game remained up for the better part of a day, garnering tens of millions of views.

With Elon Musk's company sitting on its hands despite widespread media attention, Swift's own fan army took matters into their own hands and tried to bury the images by posting their own content under the keywords and hashtags that had promoted them. Eventually, X took down the images and blocked the search term "Taylor Swift" altogether.

It marked the first time an AI-generated non-consensual image had broken through to the mainstream like this, but it has certainly been a long time coming. As US-based misinformation researcher Renee DiResta noted on Threads, this exact scenario was predicted years in advance.

What made last week's fake Swift images possible was a confluence of factors. Tools for creating realistic images of people have grown more powerful over the past few years: easier, quicker and more convincing. Photoshop has existed for decades, deepfakes were first reported in 2017, and generative AI has now supercharged the trend even further. Anyone with a computer can generate completely original images of real people doing anything. Earlier this month, we reported on a thriving online community facilitating the creation of non-consensual sexual imagery of real people, from celebrities to members of the public.

The second factor is X's dereliction of its responsibility to moderate the content it distributes on a mainstream, popular platform. Whether by firing 80% of the staff working on site moderation or by rolling back policies aimed at protecting users, Musk has actively worked to make the platform a more toxic place, one that's become home to neo-Nazis. Like frogs in slowly boiling water, many users, including much of the news and political class, remain on the platform, where they've been exposed to content such as the Swift images. And while not to the same degree, other tech companies have begun to loosen their moderation too.

If you squint hard, a silver lining could be a renewed push in the United States for legislation to stop fakes like this. The reality is that laws against image-based abuse already exist in many US states and in Australia. It's also heartening to see Swift's fans step in, but ultimately demoralising to know that even someone of her stature can't get timely action from a platform, not to mention that the rest of us don't have stans to protect us.

The key problem is enforcement. In a world where hurting people with technology is as easy as pressing a few buttons, offering victims justice through the slow and arduous criminal justice system isn't enough. In Australia, the eSafety commissioner has powers to respond to image-based abuse, but these depend on individual reports, handled case by case. Tackling the problem this way is like playing Whac-A-Mole.

Trying to regulate the technology itself is also unrealistic. Although the Swift images were produced using a generative AI product from Microsoft, the world's most valuable company (an under-covered aspect of the story), similar tools are freely available online thanks to open-source technology.

A significant focus needs to be on the platforms that allow the distribution of this content. While these images first emerged on Telegram, a less popular platform with near-zero moderation, it was X's size combined with its poor moderation that made last week possible. The eSafety commissioner has put out draft rules that would require platforms such as X to act quickly on problems like this; it remains to be seen whether they will have a material effect.

One final note: X's blocking of the "Taylor Swift" search term was a crude solution, presumably aimed at limiting the already incredible damage done to Swift because the platform wasn't capable of anything more targeted. While being blocked on X is unlikely to affect Swift's standing, it's a reminder that victims are often the ones who bear the brunt of efforts to protect them.

Another misinformation researcher, Nina Jankowicz, has said she can't promote events, post photos or share her real-time location because of the abuse she's faced for trying to combat online bullshit. It's a reminder of how easy it is to abuse others, and how it's the targets who ultimately pay the price.