
AI ‘nudify’ sites are being sued for victimising people. How can we battle deepfake abuse?

Last week, the office of the San Francisco City Attorney filed a landmark lawsuit. It accuses a number of AI “nudify” websites of violating United States laws on non-consensual intimate images and child abuse material.

Author


  • Nicola Henry

    Professor & Australian Research Council Future Fellow, Social and Global Studies Centre, RMIT University

“Nudify” sites and apps are easy to use. They let anyone upload a photo of a real person to generate a fake but photorealistic image of what they might look like undressed. Within seconds, someone’s photo becomes an explicit image.

In the first half of 2024, nudify sites were visited more than 200 million times. One of the sites says: “imagine wasting time taking her out on dates, when you can just use [redacted site] to get her nudes”.

These sites are also easy to find. Since the start of this year, there has been a sharp rise in advertising of nudify apps and sites on social media.

What can victims do?

Even if the images look fake, deepfake abuse can cause serious harm. It can damage a person’s reputation and career prospects. It can have detrimental mental and physical health effects, including social isolation, self-harm and a loss of trust in others.

Many victims don’t even know their images have been created or shared. If they do, they might successfully report the content to mainstream platforms, but struggle to get it removed from private personal devices or from “rogue” websites that have few protections in place.

Victims can make a report to a digital platform if fake, non-consensual intimate images of them have been shared on it.

If they’re in Australia, or if the perpetrator is based in Australia, the victim can report to the eSafety Commissioner, who can work on their behalf to have the content taken down.

What can digital platforms do?

Digital platforms have policies prohibiting the non-consensual sharing of sexualised deepfakes. But enforcement of those policies is patchy.

Although most nudify apps have been banned from mainstream app stores, some are still around. Some “only” let users create near-nude images – say, in a bikini or underwear.

Tech companies can do a lot to stop the spread. Social media, video-sharing platforms and porn sites can ban or remove nudify ads. They can block keywords, such as “undress” or “nudify”, as well as issue warnings to people using these search terms.

More broadly, technology companies can use tools to detect fake images. Companies developing generative AI models can also build in safeguards to prevent the creation of harmful or illegal content.

Detection and labelling tools are important – but not very effective once images have been shared. Hash-matching technologies, which flag known images so they cannot be re-uploaded, can also prevent the future sharing of non-consensual content.

Some platforms already use such tools to address deepfake abuse. They’re part of the solution, but we shouldn’t rely on them to fix the problem.

Search engines play a role, too. They can reduce the visibility of nudify and non-consensual deepfake sites. Last month, Google announced updated policies on deepfake abuse. When someone reports non-consensual explicit deepfakes, Google can prevent the content appearing in search results and remove duplicate images.

Governments can also introduce regulation to address deepfake abuse. This can include blocking access to nudify and deepfake sites, although users can get around such blocks with VPNs.

What does the law say?

In Australia, there are criminal laws against the non-consensual sharing of intimate images of adults, and against making threats to share such images.

There are also federal offences for accessing, transmitting, soliciting or possessing child abuse material. This includes fictional or fake images, such as drawings, cartoons or AI-generated images.

Under Australian state and territory laws, an “intimate image” of an adult is defined broadly to include digitally altered or manipulated images. Currently, it is only a crime to share, or threaten to share, non-consensual synthetic intimate images; producing them is not itself an offence. An exception is Victoria, where there is a separate criminal offence for producing intimate images, including digitally created ones.

In June, the federal government introduced a bill to create a standalone offence for the non-consensual sharing of private sexual material. The maximum prison sentence would be six years. The bill expressly states it is irrelevant whether the photos, videos or audio depicting the person are in “unaltered form” or have been “created or altered using technology”.

The bill also includes two aggravated offences, with a maximum of seven years’ imprisonment where the person who shared the images also created or altered them.

Laws are helpful, but can’t fully solve the problem. Police often have limited resources for investigation. Working across jurisdictions, particularly when perpetrators are based in other countries, can also be difficult. For victim-survivors, pursuing the criminal justice path can take a further emotional toll.

Another option is civil remedies under the federal Online Safety Act. Administered by the eSafety Commissioner, the civil penalties scheme includes formal warnings and hefty fines for users and tech companies that share, or threaten to share, non-consensual images.

We must improve our digital literacy

It’s getting increasingly difficult to tell real and fake images apart. Even when images look “fake” or are labelled as such, people can still be led to believe they are real.

Investing in digital literacy is therefore crucial. Digital literacy means fostering critical thinking skills so people can assess and challenge misinformation.

Other measures include raising awareness of the harms of deepfake abuse, better education on respectful relationships and sexuality, and improving people’s critical thinking about this kind of content.

Perpetrators who engage in deepfake abuse, tech developers who enable the tools, and tech companies that allow this content to spread must all be held accountable. But detecting, preventing and responding to this abuse will ultimately require creative solutions across the board.

If this article has raised issues for you, or if you’re concerned about someone you know, call 1800RESPECT on 1800 737 732, or visit the eSafety Commissioner’s website for helpful online safety resources. If you are in immediate danger, call 000.

The Conversation

Nicola Henry receives funding from the Australian Research Council (ARC), Google, and the Victorian Attorney General’s Office. She is also a member of the Australian eSafety Commissioner’s Expert Advisory Group.

Courtesy of The Conversation.