Reports of image-based abuse to the eSafety Commissioner increased by 55 per cent in 2021-22 to more than 4,000, with significantly more males making complaints than females.
eSafety Commissioner Julie Inman Grant said there has been a surge in both the number and proportion of males reporting image-based abuse: from around 1,300 reports (49 per cent of all reports) in 2020-21 to almost 2,500 reports (59 per cent of all reports) in 2021-22.
“This jump is being driven by a surge in sexual extortion reports. In 2021-22, 55 per cent of image-based abuse reports concerned some form of sexual extortion, with most of these – 76 per cent – from men, typically aged 18 to 24 years,” Ms Inman Grant said.
“We know from our international partners that this type of blackmail is gendered and mainly operated by offshore organised crime, rather than by solo opportunists.
“These criminals use proven emotional tactics and fake profile pictures of attractive women to ensnare young men in online chat which quickly turns sexual, then elicit intimate images before flipping the conversation into a nasty game of chicken. They’ll flood their target’s phone with aggressive messages, threatening to share their intimate images unless they pay up.
“We’ve heard from people who’ve been blackmailed into paying more than $10,000. But these are professional criminals, so once they get a bite, they will keep coming back.
“Therefore, we really need to get the message out there that the best response is to not engage. Do not pay or negotiate – it will lead to more threats and demands. Instead, screenshot the evidence, including the blackmailer’s social media usernames, bank account details and any URLs, then report in-app and block them.
“If your intimate content has been shared, whether that’s part of a sexual extortion scam or by someone you know, report it to eSafety. Last financial year, our investigators had an 88 per cent success rate in having URLs taken down.”
When a person under 18 years is targeted, this becomes a criminal issue designated as child sexual exploitation and should be reported to the Australian Federal Police. In consultation with law enforcement, eSafety assists with the removal of such content.
“We’re working closely with social media services to alert them to accounts that are being used to elicit, share or threaten to share intimate content, either as part of a sexual extortion scam or other types of image-based abuse. Last financial year, we identified over 3,500 accounts and 75 per cent of them were subsequently deleted.”
The new figures are part of eSafety’s 2021-22 Annual Report, which also reveals that complaints about illegal and restricted online content continue to rise. This type of content includes child sexual exploitation material and pro-terror content; complaints increased by 45 per cent in 2021-22 compared to 2018-19.
“The pandemic supercharged demand for this horrific content, taking it from the dark web onto sites any of us might stumble across. During 2021-22, we took action that resulted in the removal of more than 11,000 URLs for prohibited online content,” Ms Inman Grant said.
“Almost all of these web pages – 99 per cent – provided access to child sexual exploitation material and were referred to our global partners for removal, and to law enforcement. But the volume of reports we’re dealing with reinforces the urgent need for systemic, industry-wide change to get this content proactively taken down.
“We’re currently reviewing information from industry on the measures they’re taking to stem the flow of this material, as part of our new powers under the Online Safety Act.”
eSafety received 1,542 cyberbullying reports in 2021-22, a 65 per cent increase, with the majority concerning girls (63 per cent) and those aged 12 to 16 years (78 per cent). eSafety’s requests to online platforms to remove serious cyberbullying material were successful in 88 per cent of cases.
“We’re pleased that more and more young people feel confident to seek help and advice from eSafety. It reinforces the value of our education programs, which reached more than 1.2 million educators, students, parents and community members last financial year,” Ms Inman Grant said.
This annual report is the first time that eSafety has publicly reported on the Adult Cyber Abuse Scheme, which came into effect on 23 January 2022.
“The majority of reports did not meet the deliberately high threshold outlined in the Online Safety Act, which requires that the material is both intended to cause serious harm, and is menacing, harassing or offensive in all the circumstances. However, we did make 212 notifications to platforms for terms of service breaches, and 82 per cent of that content was removed,” Ms Inman Grant said.
“eSafety’s image-based abuse, cyberbullying and adult cyber abuse schemes are the only legislated complaints mechanisms dealing with these types of harms anywhere in the world. Australians come to us in great distress, and we know the faster we remove the content, the more relief we provide the victim-survivor. We’re proud of our harm minimisation approach and our high rate of success.
“We also encourage anyone who has had an upsetting or stressful experience online to talk to family, friends and loved ones about the impact and how you feel. If you need extra support, talk to your doctor or call