CHILD SEXUAL ABUSE MATERIAL SPREADING ON SOCIAL MEDIA

Child sexual abuse material is spreading “exponentially” over social media via networks of paedophiles who groom thousands of children, sometimes within seconds of contact.  Almost a million Twitter accounts were suspended for posting child sexual exploitation material last year, the company said.  Facebook and Instagram said they removed 21 million child nudity and exploitation posts in the nine months between July 2018 and March 2019. And YouTube reported taking down 1.9 million videos due to “child safety” issues between September 2018 and March 2019.

Twitter says it finds 96 per cent of child-abuse accounts before users report them, and Facebook boasts a 99.2 per cent rate of catching child-exploitation posts before users flag them.  But there are reasonable doubts about these detection rates, according to Dr Michael Salter, associate professor of criminology at the University of NSW.  “I’ve just come back from two months in North America discussing these issues with some of the key players who are taking these child-abuse material notifications from the public,” he said.  “And I find that number of 99 per cent detection very difficult to believe.

“We need a clear set of eyes on the scale of this problem, the origin of this problem and the level of oversight and independent scrutiny,” Dr Salter says.  “We have exponentially increasing notifications of child-abuse material in every known agency and authority that takes these reports that are independent of industry.”  Instagram, which is owned by Facebook, reportedly became the leading platform for child grooming in the United Kingdom this year, with more than 5000 reports there traced to the platform.  The Australian Federal Police (AFP) deals with about 1000 reports of child exploitation material each month.

“The AFP Assessment Centre is now receiving more and more reports of children aged as young as four producing sexually explicit material and uploading this material to social media platforms,” an AFP spokesman said.  A child can be groomed in seconds, and some grooming operations use multiple platforms and multiple identities, the spokesman said.  Offenders often use apps as a gateway, then direct children to other platforms to elicit photos and facilitate meetings, the AFP said.  Police also report offenders targeting parents or carers through social media to gain access to children, using offers of free products or requests for children to model clothing.

Responding to claims its platform is used to facilitate child exploitation, YouTube said it disabled comments in February on “tens of millions of videos that could be subject to predatory behaviour”.  And last month YouTube banned live streaming by minors unless they are “clearly accompanied by an adult”.  YouTube is also reportedly being investigated in the United States for allegedly violating children’s privacy.  More than 400 hours of video are uploaded to YouTube every minute, and although the platform is not intended for anyone under 13, the company regularly “terminates” accounts it believes belong to under-13 users because holding one violates its policy.

University of Melbourne Research Fellow Dr Gemma McKibbin said it was common to hear about grooming on these platforms, but that children were also groomed while playing online games.  “We don’t give children cars and expect them to drive, but we do give them iPads and phones and expect them to negotiate adult content and potential grooming,” Dr McKibbin said.  Perpetrators often show pornography to children to desensitise them, then manipulate them into believing they are in a boyfriend/girlfriend relationship, Dr McKibbin said.  As a result, even if adults recognise the grooming, children may protect the perpetrator and refuse to make statements against them.

Removals of adult nudity posts on Facebook and Instagram reached their highest level since reporting began in 2017, with about 19 million posts removed in the three months to March 2019.  Messenger and other platforms are hoping to nullify the legal threat posed by illegal content on their services by moving to encrypted communications, Dr Salter said.  “Facebook claims that encrypted Messenger is in response to the Cambridge Analytica scandal and is linked to concerns for user privacy,” Dr Salter said.  “With encryption kids will be far less safe because Facebook is unable to tell whether an illegal activity is taking place on the Messenger platform,” he said.

“Once they are notified of abuse material, they remove it,” Dr Salter said.  Facebook employs about 15,000 people to review content and enforce “community standards”, but a bug in its detection software meant it found less child nudity and exploitation content in the first three months of 2019 (5.4 million posts) than in the previous quarter (6.8 million).  The social networks said they report all abuse to the relevant authorities and will continue to develop solutions to block and remove exploitative content, as well as to prevent grooming online.

Source: Compiled by APN from media reports
