Taylor Swift’s treatment at the hands of deepfake pornographers has spotlighted the ugly, misogynistic practice of humiliating women by creating and distributing fake images of them performing sexual acts.
If you haven’t heard of them, deepfakes are images, videos or audio that falsely present real people saying or doing things they did not do.
Deepfakes are created with an artificial intelligence (AI) technique called deep learning, which manipulates existing material (or even uses generative AI to create ‘new’ footage) to depict things that never actually happened.
And the results can be quite convincing.
One of the deepfake images targeting Taylor Swift totalled almost 47 million views before it was taken down. Even the White House got involved, describing the spread of the images as alarming.
AI has been used to create non-consensual pornographic and sexualised images of celebrities, politicians and ordinary individuals. It’s wrong on every level and can cause severe psychological, relationship and career harm.
And the growing availability of generative AI products just makes this easier.
Deepfakes can be used to blackmail and intimidate the person falsely represented. Australia’s eSafety Commissioner has warned that school-aged children are being bullied using deepfake images.
Deepfake images may also be used for disinformation or to undermine trust in public figures and democratic institutions.
But are they illegal? And does the legal system offer any redress to targets in the increasingly Wild West of the internet?
It has been reported that Taylor Swift is considering legal action in response to the non-consensual and sexually explicit deepfake images of her recently released online, and that US lawmakers are considering new legislation to tackle the issue.
In Australia, state and territory criminal law (other than in Tasmania) contains specific offences for intimate image abuse, which may capture deepfake images.
In Victoria, for example, it’s an offence to intentionally produce, distribute or threaten to distribute an intimate image depicting another person where the image is “contrary to community standards of acceptable conduct”.
Of course, criminal law often fails to provide justice to the victims of deepfake images because the people who created and shared them – the perpetrators – cannot be found or traced.
Another response is to ensure the images are removed as quickly as possible from social media or websites. This was done in the Taylor Swift case, when X (formerly Twitter) temporarily blocked searches using her name following pressure from thousands of her fans.
But most victims aren’t Taylor Swift, and even those who are famous are unlikely to command that kind of response.
In Australia, victims of offensive deepfake images can ask that platforms and websites remove the images, although they will not usually have a massive public movement behind them.
The eSafety Commissioner also has powers to demand they be taken down. But by then it may be too late.
Notably, even in the Taylor Swift case, the offending images were purportedly shared millions of times before this happened.
The Online Safety Act 2021 (Cth) imposes civil penalties on those who fail to comply with takedown orders or who post intimate images without consent, including deepfake images. The penalty is up to 500 penalty units, or $AU156,500.
Civil penalties are a kind of fine – money paid by the wrongdoer to the Commonwealth. The payment is aimed at deterrence, simply by making the cost of the conduct prohibitively high.
But the penalty does not get paid to the victim, and they may still wish to seek compensation or vindication for the harm done.
It is unclear if Taylor Swift will sue or who she will sue.
In Australia, deepfake victims have limited causes of action through which to seek damages in civil proceedings. Again, in most cases, the victim will not be able to find the wrongdoer who created the non-consensual pornographic image.
This means the most viable defendant will be the platform that hosted the image, or the tech company that produced the technology to create the deepfake.
In the US, digital platforms are largely shielded from this kind of liability by Section 230 of the Communications Decency Act, although the limits of that immunity are still being explored.
In Australian law, a platform or website can be directly liable for defamatory material it publishes. Non-consensual sexualised deepfake images may be classed as defamatory if they would harm the reputation of the person being shown or expose them to ridicule or contempt.
There is, unfortunately, still a question around whether a deepfake which is acknowledged as a ‘fake’ would have this effect in law, even though it may still humiliate the victim.
Moreover, Australia is now introducing reforms that would give digital platforms an immunity from defamation liability in these scenarios.
This immunity is subject to conditions, including that the platform have an “accessible complaints mechanism” and take “reasonable prevention steps”.
In cases where deepfake images of celebrities are used to promote products, particularly scams, the conduct occurs in ‘trade or commerce’.
Victims of this fraud may be able to claim compensation for the harms caused to them by misleading conduct under the Australian Consumer Law or the Australian Securities and Investments Commission Act.
Of course, as we have already seen, the perpetrator is likely to be hard to find. Which again leaves the platform.
A test case brought by the Australian Competition and Consumer Commission (ACCC) is currently exploring the possibility of making digital platforms, in this case Meta, liable for misleading scam advertisements featuring celebrities.
The ACCC argues Meta itself engaged in misleading conduct because it actively targeted the ads to possible victims. The ACCC is also arguing that Meta should be liable as an accessory to the scammers because it failed to promptly remove the ads, even after it was notified that they were fakes.
And what about the technology producer who put the generative AI tools used to create the deepfake on the market?
The legal question here is whether they have a legal duty to make those tools safe.
These kinds of ‘safety by design’ measures might include technical interventions to prevent the tool responding to prompts for creating deepfake pornography, more robust content moderation, or tools that help distinguish fake images from authentic ones.
Some producers may be doing this voluntarily. There is talk of introducing mandatory ‘safety’ obligations in Australia and new AI rules in the EU. However, at present, the producers of generative AI are unlikely to owe a legal duty of care that would oblige them to take these actions.
None of these methods is foolproof, and they may introduce new problems of their own.
We should remember that the core harm of sexually explicit deepfake images arises from a lack of consent and social beliefs that tolerate the weaponisation of intimate images.
Sure, people are entitled to create and share sexualised images for their own interest or pleasure. But this should never be confused with the use of non-consensual explicit deepfake images to threaten, exploit and intimidate.
Right now, Australia’s laws offer victims little in the way of genuine and accessible redress through the legal system. There needs to be a multifaceted response – embracing technical, legal and regulatory domains, as well as community education, including about the offence of intimate image abuse.
It is not just celebrities who are the victims of deepfake pornography, but the Taylor Swift case may be the wake-up call needed for the law to catch up.
Swiftposium is an academic conference for scholars discussing the impact of Taylor Swift. It runs at the University of Melbourne from 11-13 February 2024, with public events on Sunday 11 February and recordings of the keynote presentations available online after the conference.