‘Deepfake Porn’ Is Getting Easier and Easier to Make

What does it mean for consent as deepfakes become increasingly accessible?

Researcher Henry Ajder has spent the last three years monitoring the landscape of deepfakes, where artificial intelligence is used to swap people’s faces into videos. He spends a lot of time “going into the dark corners of the internet,” as he puts it, in hopes of keeping tabs on the malicious uses of synthetic media. Ajder has seen a lot of disturbing things, but a few months ago, he came across something he’d never seen before. It was a site that allowed users to simply upload a photo of someone’s face and produce a high-fidelity pornographic video, seemingly a digital reproduction of that person. “That is really, really concerning,” he says.

Ajder alerted Karen Hao, a journalist who covers artificial intelligence. Last month, she wrote about the site in the MIT Technology Review, bringing attention to the specter of free, easily created deepfake porn. “[T]he tag line boldly proclaims the purpose: turn anyone into a porn star by using deepfake technology to swap the person’s face into an adult video,” wrote Hao. “All it requires is the picture and the push of a button.” The next day, following an influx of media attention, the site was taken down without explanation.

But the site’s existence shows how easy to use and how accessible the technology has become.

Experts have warned that deepfakes could be used to spread fake news, undermine democracy, influence elections, and cause political unrest. While deepfakes of Russian president Vladimir Putin and North Korean leader Kim Jong-un have been produced—ironically, for political ads underscoring threats to American democracy—the most common application is porn, and the most common target is women. Deepfakes first stepped onto the public stage via videos mapping celebrities’ faces onto porn performers’ bodies and have since impacted non-celebrities, sometimes as a doctored version of “revenge porn,” where, say, exes spread digitally created sex tapes that can be practically indistinguishable from the real thing. Resources like the website Ajder discovered make sexualized deepfakes far easier to produce, and even when individual domains are taken down, the technology lives on.

Jezebel spoke with Ajder about the implications of what he calls “deepfake image abuse,” and the complex challenges in addressing this genre of nonconsensual porn. Our conversation has been edited for clarity.


JEZEBEL: You prefer to use the phrase “deepfake image abuse” instead of “deepfake porn.” Can you explain why?

HENRY AJDER: Deepfake pornography is the established phrase for the use of face-swapping tools or algorithms that synthetically remove women’s clothing from images, and it comes from Reddit, where the term first emerged exclusively in this context of nonconsensual sexual image abuse. The term itself, deepfake pornography, seems to imply, as we think of with pornography, that there is a consensual element. It obscures that this is actually a crime, this is a form of digital sexual harassment, and it doesn’t accurately reflect what is really going on, which is a form of image abuse.

In layman’s terms, how is deepfake image abuse created?

When this form of deepfake image abuse first emerged in late 2017, it was spawned by software that used deep-learning algorithms to swap women’s faces into pornographic footage. The community sprang up on Reddit, but as time has gone on, the technology has evolved rapidly and new tools have emerged for different kinds of deepfake image abuse.

There are still tools for swapping women’s faces into pornographic footage, and they’re becoming more accessible and less data-intensive, so you need fewer images to train the algorithm. But then you also have apps that are almost gamifying this form of image abuse by allowing you to synthetically strip women of their clothes from a still image with just the press of a button.

Before, you’d have to have good computer hardware and knowledge of programming to make the software work well. Now, many tools are emerging which make it very accessible, and that makes it a much bigger problem. More people can use it who don’t have that expertise. You start seeing more everyday women being targeted. That’s roughly how it’s evolved from a quite technically restrictive form of technology to one which is increasingly accessible and gamified through friendly user interfaces, with increasingly realistic results.

Can you tell me about how you discovered the deepfake site—or “Y,” as you call it—that was recently taken down?

Since early 2018, I’ve been doing landscape mapping around deepfakes to understand how they’re being used maliciously, who’s being targeted, and how the tools themselves are changing. Back in 2019, I wrote a report which provided the first proper mapping of that landscape. That is where I found, quite shockingly, that 96 percent of deepfakes were this form of intimate image abuse and almost all of the victims were women.

Since doing this research, I’ve continued to map that landscape. Last year, I discovered a bot on the messaging app Telegram which allowed people to do this synthetic stripping of women’s images. It was really disturbing because it made it so much more accessible. The women being targeted weren’t the celebrities who were targeted back in 2017; they were everyday women whose photos were taken from their social media pages.

I found this website a few months ago and have been monitoring it; I reported it when I found the functionality was evolving and becoming increasingly accessible. It was the first of its kind to provide a library of footage already. All you had to do was upload a picture of someone’s face, choose from their pre-selected videos, press the button, and it would generate the output.

What were your concerns when you found that site?

What I look for when I’m evaluating whether something is a significant new threat or development are things like: How realistic is the output? A lot of deepfake image abuse isn’t very realistic. In this context, it doesn’t really matter, because if it looks enough like you, it can still be hugely embarrassing and traumatizing. Nonetheless, people are trying to develop more realistic models for image abuse.

I also look at how efficient it is. I look at how much data the user needs to provide, such as images or videos, to create an output. Before, you’d normally need hundreds, if not thousands, of images to generate something, whereas now there are pre-trained models which only require a few images, or even one image. Then I’m looking at accessibility. How easy is it to use? Is it something that has a really friendly user interface, like a smartphone app? Is it something that automates a lot of the heavy lifting of creating this content? And then it’s how it’s being used and who is being targeted.

Do these sites ever truly disappear? Can their coding or technology really disappear once they’ve been shared?

It’s a great question and, unfortunately, it’s one where the answer isn’t so optimistic. The bot on Telegram that I discovered was a kind of Frankenstein mutation of the same tool that was released in June of 2019. That tool went down because it got so much traffic after some reporting on it, and people just cloned the software, so it sprang up in many different forms. It’s still easily accessible: you can access it as the raw code, as a super user-friendly web tool, or as a website.

The problem is you can’t regulate mathematics. People know how to replicate this now. In many cases, the software used to create these tools comes from perfectly legitimate applications and is being perverted by bad actors. Unfortunately, it is very difficult, near impossible, to ever really remove this stuff entirely. When one goes down, others spring up to take its place.

There are things we could do to help drive it underground. Internet service providers could help, potentially. Responsive action from hosting services, for example. Making sure app stores are all aligned. Ultimately, if someone wants to find the techniques and tools, they’re gonna find them somewhere. We can make a difference by making it as hard to find as possible. Friction is a big thing.

Are there effective legal approaches to trying to stem the spread?

There is quite a lot of action going on around the world right now, in different countries, to think about what we can do about deepfake image abuse, and also on the state and federal level in the US. Many states are introducing legislation to criminalize the use of nonconsensual fake pornography. The key thing there is that it’s criminal: the state would prosecute the offender, and the individual would not have to bring that case themselves and pay for it as a civil trial.

In the UK, there’s a review going on into intimate image abuse laws, with deepfakes in the crosshairs. In South Korea, you’re seeing a big push from fans of K-pop girl groups, who are one of the biggest groups targeted. That was one of the most surprising findings of my report back in 2019: 25 percent of the victims were South Korean K-pop singers. There’s a lot of social action in South Korea trying to get this explicitly outlawed.

There is still a good chance that if someone created something like this and was identified and reported to police, they could be charged with harassment or indecent communications, but we probably do need specific laws that acknowledge the specific harms this new technology can cause. If you can identify who is creating this material on the internet, where anonymity is ubiquitous, there may be recourse in the legal system. But identifying who they are is a big challenge. They may not be in the same jurisdiction, which makes it even harder. The law can do very little to stop the proliferation of the tools for people who really want to find them.

Is there any space for consensual deepfake porn? Are there creative consensual applications? In 2018, the porn company Naughty America announced that it would provide custom, consensual deepfakes for customers.

The question isn’t so much whether consensual deepfake pornography is possible. Of course it’s possible. More pressing: Is it possible to create the tools for consensual deepfake pornography without them being inevitably misused in a way that causes more harm than good? The tool that I discovered framed itself as a way to put yourself into pornographic footage, but obviously it had no guardrails, and I think it was disingenuous. I think they knew exactly what it was going to be used for.

The Naughty America thing was a PR stunt, I think. Maybe there’s a way to have that service, but then do you need to have a know-your-customer-style verification service? How do you confirm that consent has been granted? Who do you need the consent from? The performer whose body you’re being swapped onto? If you want to scale that technology, is it possible to do that in a way where women or men are not going to be targeted and harmed? I think it will be very hard, unless you’re doing a very bespoke service with contracts being signed and passports and video calls. There are a lot of layers of security that will be needed to make sure that everything is OK.

There’s a really interesting question of whether making deepfake pornography without sharing it should be considered bad. Obviously, sharing this stuff, to many people, is the primary offense, right? But there’s a debate: Should there be a simple making offense? The idea is this: If you just make a piece of deepfake intimate imagery for your own consumption and have no intention of sharing it, should that still be considered a criminal act?

There’s a philosopher of technology called Carl Öhman who coined a term for this: the pervert’s dilemma. One way of looking at it is saying, well, you’re trying to regulate and police fantasy. People fantasize about other people all the time. The other side of it says: By bringing into existence a piece of footage of that nature, even just in the act of making it, you’re violating someone’s dignity in a way. What’s more, you’re bringing into existence something that could do a great amount of harm. I definitely am inclined to fall into the latter camp: if not explicitly made criminal, it should certainly be highly discouraged, and it is ethically very dubious.

I’ve seen dire warnings about the implications of deepfakes on, say, American democracy. What has been overhyped and what has been under-hyped, in terms of the implications and dangers?

This is something I work on consciously. I try to build that nuance into the discussion of synthetic media and deepfakes. There is a lot of promise for this technology, but it needs to be done responsibly. We also need to get our facts straight on malicious uses.

There has not been a significant case that I’m aware of where deepfakes have been used in that smoking-gun way, the realistic fake video of someone saying or doing something they never did. That’s never happened. Headlines talk about deepfakes causing World War III, or say “you can never believe what you see anymore, this is gonna break democracy.” Maybe those hypotheticals could become more tangible at some point in the future, but right now it’s just not the case. The biggest impact we’ve seen on disinformation in the democratic sphere is in places like Myanmar and Gabon, where real videos have been dismissed as fake using the idea of deepfakes. It almost poisons the well, introducing this plausible deniability. It’s not just that fake things can look real; real things get dismissed as fake. That concept is referred to as the liar’s dividend.

In terms of the understated side of things, one of the frustrating things for me is that the immediate problems, the millions of women being targeted right now, are often not treated with the same urgency in the political realm as hypothetical concerns in the medium term. People are talking about the impact on disinformation and cybersecurity, which is a growing area, but they often don’t seem to acknowledge that this is already a problem for a certain group of people, and that group of people is mostly women.
