Battling the Deepfake Epidemic: Will the UK’s Online Safety Bill go far enough?

“Victims of this kind of abuse have to consider changing their entire identity, their literal name, in order to escape or cope with the fact that their agency has been robbed from them.” The Monitor discusses deepfakes, the UK’s new Online Safety Bill, and the lessons we can learn from Australia with deepfake porn activist, policy researcher, and lawyer Noelle Martin.

Since their inception in 2017, “deepfakes” have exploded across mainstream media, in both volume and quality. The term may bring to mind viral videos of a hyper-realistic Morgan Freeman, Mark Zuckerberg claiming he controls all Facebook users, or President Zelensky surrendering to Russia. But whilst a deepfake technically refers to any AI-manipulated video of someone that is deliberately misleading or false, a 2019 study suggested that around 96% are pornographic in nature. Once a relatively fringe danger lurking in the confines of Reddit and 4chan, deepfake porn has since become endemic across platforms, found everywhere from adult sites to more mainstream platforms such as Instagram, Twitter, and Facebook. What’s more, it is currently completely legal in the UK to create and distribute such content without the depicted individual’s consent. To tackle this issue, the UK Online Safety Bill, which is due to become law later this year, will introduce a “false communication offence” that is specifically concerned with the sharing of non-consensual deepfakes. But how effective is this legislation likely to be?

Since the start of 2023, there have been several high-profile deepfake scandals. Last month, Kat Tenbarge of NBC News reported 230 different sexually explicit images advertising an AI face-swap app across Meta platforms, displaying realistically manipulated images of celebrities such as Emma Watson and Scarlett Johansson. Yet, whilst famous women are typically the victims of deepfake abuse (owing to the high demand for sexualised content of them and the large pool of pre-existing media to pull from), it is certainly not a problem reserved for celebrities. The growing accessibility and ease of use of deepfake creation apps mean that “normal” people are increasingly becoming victims. Indeed, a 2020 study across Australia, New Zealand, and the UK found that 1 in 3 participants (n = 6,109) had been victims of revenge porn or deepfakes, known collectively as “image-based sexual abuse”.

Six years ago, Australian activist Noelle Martin was a “normal” 18-year-old when she discovered that her social media pictures were being used without her consent to create deepfake pornography. She was then subjected to years of further cybersexual abuse as a result of speaking out on the issue in an attempt to get the content removed. In an interview with The Monitor, Ms. Martin explained that whilst the emotional distress of deepfakes is the same for celebrities and “ordinary” victims like her, the latter face additional problems and barriers:

[Image: Noelle Martin headshot]

“If people don’t know who you are, they’re potentially going to question your credibility and your character. People know who Emma Watson is, they know this could come with the territory, they know this isn’t something she would likely do. Ordinary people don’t have the benefit of that established identity or profile, I guess mitigating the harms of this. Celebrities are also more likely to have the platform to debunk it.” 

Australia had no laws in place back in 2017 to protect victims like Noelle. However, after years of campaigning, her work helped change state and federal laws to include a specific provision on deepfakes within a broader bill criminalising “revenge porn”. Australia was a pioneer in this regard, and the UK’s upcoming bill emulates its model by introducing prison sentences and fines for those caught distributing non-consensual deepfake content. Ms. Martin, now a researcher at the UWA Tech and Policy Lab, is, however, sceptical as to how successful the legislation has been:

“Where you have perpetrators that are known to the victim and in the same jurisdiction as the victim, the laws are much more effective. In circumstances where the perpetrators are not known to the victim, and they could be halfway around the world, then I would say there’s next to no effectiveness of these laws.”

Only a fraction of deepfake perpetrators have been prosecuted since the Australian laws came into force in 2021, suggesting that the UK may well face similar issues in translating policy into action. This speaks to the persistent problem of policing the internet: it is very difficult for individual states to enforce their laws on global, nebulous platforms. Consequently, the burden of cracking down on deepfakes falls on the online platforms themselves.

Deepfake content has been banned by most major platforms since 2020, including Facebook, Instagram, Twitter, Reddit, TikTok, and Pornhub. In 2021, Meta revealed that it was creating “reverse engineering generative models” in collaboration with Michigan State University to “facilitate deepfake detection and tracing in real-world settings”. However, whilst the algorithm is admittedly only in its initial stages, it appears unable to consistently and accurately detect more advanced deepfakes, which are specifically trained to evade such detection models. Differentiating between harmless filters and malicious AI-manipulated content has also posed a challenge for algorithms. In the previously mentioned incident involving Emma Watson and Scarlett Johansson, for instance, it was only after a student uploaded a screen recording of the ads to her Twitter that they were finally removed from Meta’s platforms, suggesting that, in certain cases, the human eye remains a more precise instrument for detecting manipulated content than Meta’s algorithm. But whilst the major players in the tech and media industries purport to be working hard on this issue, Noelle Martin believes that there is little substance behind the promises being made:

[Image: Pornhub warning message]

“Consider how long they’ve been able to operate with impunity. We have left them to their own devices to self-regulate for two decades and they’ve not self-regulated. They’ve not implemented the safeguarding tools or the moderating that they need to; yet this phenomenon is happening, and it’s enabled by their platforms.”

Instead, the policy researcher advocates for tough external regulators to enforce compliance from platforms like Meta. Indeed, under the new UK Online Safety Bill, Ofcom will be given a host of regulatory powers to hold large tech companies accountable for the content shared on their platforms, referencing their “duties of care” to users. Mirroring the Australian model, the new legislation allows Ofcom to impose fines of up to £18 million, block websites from operating in the UK, and even seek prison sentences for executives of consistently non-compliant companies. Still, what might look like a promising step on paper has been sorely underutilised in the Australian context. Ms. Martin explains, “To my understanding, they’ve never issued a fine to any perpetrator or any social media company for perpetrating or failing to respond to these issues, instead choosing informal warnings.”

Why regulators have been slow to make use of their authority can only be surmised, but the vast financial costs and commercial risks involved in taking on a major tech firm may play a role. Even when regulators have taken action to remove deepfakes, the scope of their capabilities is limited. Not only is the sheer volume of content difficult to contend with, but the process appears to be largely reliant on victims self-identifying and reporting the offending material. This is particularly problematic given that many victims, like Noelle, are not even aware that they are being targeted in the first place. She describes this uphill battle:

“There is a huge divide here in Australia between what they say they are doing and what they are actually doing. For example, they’ll say things like ‘we have a 90% success rate on taking down intimate images from the Internet.’ But what that means is that they can take it down at one time, at one location, at one URL that you have to send them, and then they will claim that as a successful takedown. But these images crop up a week later, months later, on 20 different websites.”


Despite Australia being ahead of the curve on policy, there is little indication that its legislation has had a significant impact on reducing the prevalence of deepfake pornography. Perhaps, then, the UK should not treat the Australian model as one to emulate, but as a test run suggesting how it might do better. A good place to start would be to ensure that “victims and survivors are at the table, consulting on the decisions that will directly impact them, that are about them, that are for them”.

Undoubtedly, it is encouraging that both the UK government and major tech platforms are taking steps to address the rapidly escalating issue of deepfakes, and these efforts may yet result in genuine change. But our conversation with Noelle Martin makes clear that the assertions made by regulators and the tech industry alike are simply not reflected in tangible results. If we are to learn anything from the Australian experience, it is not to be lulled into a false sense of security by the promise of reform and protection. Many key issues surrounding deepfake porn remain unresolved by the UK’s Online Safety Bill: how do we bring perpetrators in other jurisdictions under UK law? How do we ensure that tech platforms comply rather than seek loopholes? How can we improve detection algorithms quickly enough to keep up with ever-advancing deepfakes? How do we ensure that Ofcom fully utilises its regulatory powers? And, most importantly, how do we ensure that the voices of victims aren’t lost in a sea of tech talk?

Molly Golding & Robbie Boyd

This is a co-written piece by the Editors, Molly (she/her) and Robbie (they/them)
