Some of the world’s biggest technology companies are not doing enough to tackle child sexual exploitation on their platforms, according to a new report from Australia’s eSafety Commissioner, which highlights inadequate and inconsistent use of technology to detect child abuse material and grooming, and slow response times when this material is flagged by users.
In August, eSafety sent the first legal notices issued under Australia’s new Basic Online Safety Expectations to Apple, Meta (Facebook and Instagram), WhatsApp, Microsoft, Skype, Snap and Omegle, requiring them to answer detailed questions about how they were tackling the issue.
eSafety has compiled their answers in a world-first report which will be used to lift safety standards across the industry.
“This report shows us that some companies are making an effort to tackle the scourge of online child sexual exploitation material, while others are doing very little,” eSafety Commissioner Julie Inman Grant said.
“But we’re talking about illegal content that depicts the sexual abuse of children – and it is unacceptable that tech giants with long-term knowledge of extensive child sexual exploitation, access to existing technical tools and significant resources are not doing everything they can to stamp this out on their platforms. We don’t need platitudes, we need to see more meaningful action.
“As a regulator, we’ve previously sought information from these online service providers about how they are tackling child abuse material without getting clear answers. With the introduction of the Online Safety Act and the Basic Online Safety Expectations, we are able to compel companies to provide that information. This means we finally have some answers – and what we’ve learned is very concerning.
“However, they say that sunlight is the best disinfectant, and we believe that compelling greater transparency from these companies will help lift safety standards and create collective will across the industry to meaningfully address this problem, at scale.”
eSafety’s report includes confirmation from Apple and Microsoft that they do not attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services, despite the wide availability of PhotoDNA detection technology. PhotoDNA was developed by Microsoft and is now used by tech companies around the world to scan for known child sexual abuse images and videos, with a false positive rate of 1 in 50 billion.
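PhotoDNA’s exact algorithm is proprietary, but scanners of this kind follow the same broad pattern: compute a robust perceptual hash of an image and compare it against a database of hashes of known material, flagging near matches. The Python sketch below illustrates that general pattern only, using a simple "average hash" and an arbitrary Hamming-distance threshold; the hash function, the threshold and all names are hypothetical stand-ins, not PhotoDNA’s actual method.

    # Generic perceptual-hash matching, illustrating the hash-and-compare
    # pattern used by PhotoDNA-style scanners. The average hash and the
    # distance threshold are illustrative stand-ins, not PhotoDNA itself.
    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to a small greyscale grid, then set one bit per pixel
        depending on whether it is brighter than the image's mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (1 if px >= mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Count the bits on which two hashes differ."""
        return bin(a ^ b).count("1")

    def matches_known(path: str, known_hashes: set[int],
                      max_distance: int = 5) -> bool:
        """Flag an image whose hash falls within max_distance bits of any
        hash in the known-material database (a hypothetical set here)."""
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in known_hashes)

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or re-encoded, which is what allows known material to be matched after modification; the trade-off is a non-zero false-positive rate, hence figures like the 1-in-50-billion rate cited above.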
Apple and Microsoft also reported that they do not use any technology to detect live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams or FaceTime, despite the extensive use of Skype, in particular, for this long-standing and proliferating crime.
The report also unearthed wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services, ranging from an average of four minutes for Snap to two days for Microsoft, and 19 days when these reports require re-review.
“Speed isn’t everything, but every minute counts when a child is at risk,” Ms Inman Grant said.
“At least Microsoft offers in-service reporting. There is no in-service reporting on Apple’s services or on Omegle; users are required to hunt for an email address on their websites, with no guarantee their reports will be responded to.
“Fundamental to safety by design and the Basic Online Safety Expectations are easily discoverable ways to report abuse. If it isn’t being detected and it cannot be reported, then we can never really understand the true scale of the problem.”
eSafety’s report also identifies problems in preventing recidivism – where a user banned for sharing child sexual exploitation and abuse material is able to create a new account and continue to offend.
Meta revealed that if an account is banned on Facebook, the same user is not always banned on Instagram, and that when a user is banned on WhatsApp, the information is not shared with Facebook or Instagram.
“This is a significant problem because WhatsApp report they ban 300,000 accounts for child sexual exploitation and abuse material each month – that’s 3.6 million accounts every year,” Ms Inman Grant said.
“What’s stopping all those offenders creating new accounts on Facebook or Instagram and continuing to abuse children?”
Snap and Omegle recently announced policy changes to their child safety tools, including a change by Omegle to the minimum age of participants in livestreamed video chat.
“While we welcome the policy changes, they need to be backed by effective age assurance and verification processes and technology. Without measures to detect and prevent under-age users from engaging in these co-mingled environments, these serious risks to children and young people remain,” Ms Inman Grant said.
The report contains responses to a range of questions from all seven companies, which were given 28 days to respond to eSafety’s legal notices or risk fines of up to $550,000 a day.
To read the report, visit