Pioneering chatbot reduces searches for illegal sexual images of children
A major 18-month trial project has demonstrated that a first-of-its-kind chatbot and warning message can reduce the number of online searches that may indicate intent to find sexual images of children.
A new report published today by the University of Tasmania has found people looking for sexual images of children on the internet were put off, and in some cases sought professional help to change their behaviour, following the intervention of a ground-breaking chatbot trialled on the Pornhub website in the UK.
How does the chatbot work?
The reThink chatbot engages in conversation with users attempting to search for these images. Alongside a static warning page, the chatbot tells users these images are illegal and signposts them to Stop It Now where they receive help and support to stop their behaviour.
This is the first project of its kind to use chatbot technology to intervene when people use search terms that suggest an interest in finding sexual images of children. It then tries to help them stop, or not start, offending.
Sexual images of children are prohibited on Pornhub – one of the most visited websites in the world – and attempts to search for illegal material on the site yield no results. The Aylo-owned platform is therefore able to intercept people seeking out this content and redirect them to proven support channels.
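As a purely illustrative sketch, the snippet below shows how a search-interception flow of this kind might work in principle: a query is checked against a list of blocked terms, and a match returns no results along with a warning and a hand-off to the chatbot and support service. This is not the reThink implementation; the function names, term list and messages are hypothetical, and the real system uses a far larger multilingual database of banned terms and a conversational chatbot rather than a fixed message.

    # Illustrative sketch only: NOT the reThink chatbot's code. The term list,
    # messages and function names here are hypothetical placeholders.

    BANNED_TERMS = {"example banned term"}  # stand-in; the real database is large and multilingual
    SUPPORT_URL = "https://www.stopitnow.org.uk"  # Stop It Now, the service users are signposted to

    def run_normal_search(query: str) -> list:
        # Stand-in for the platform's ordinary search backend.
        return []

    def handle_search(query: str) -> dict:
        """Return normal results, or block the search and trigger the warning and chatbot."""
        normalised = query.strip().lower()
        if any(term in normalised for term in BANNED_TERMS):
            return {
                "results": [],  # searches for banned terms yield no results
                "warning": "Searching for sexual images of children is illegal.",
                "chatbot": True,             # hand the session over to the chatbot
                "support_url": SUPPORT_URL,  # signpost to help and support
            }
        return {"results": run_normal_search(normalised), "chatbot": False}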
The deterrence approach works
The independent review published today found there was a decrease in the number of searches for sexual images of children on Pornhub in the UK during the time the chatbot was deployed.
In addition, the vast majority of users (82%) whose searches triggered the warning and chatbot did not appear to search again for sexual images of children. Pornhub blocks searches using a constantly evolving, extensive database of banned terms in multiple languages.
Hundreds of people also clicked through to our Stop It Now website or called our helpline during the trial.
Together, these results show the deterrence approach working in two ways: for some people, being told that what they are doing may be illegal is enough to get them to change their behaviour, while others need more in-depth support from Stop It Now’s advisors and online self-help.
The chatbot resulted in 1,656 requests for more information about Stop It Now services, 490 click-throughs to our Stop It Now website and approximately 68 calls and chats to our Stop It Now helpline.
The results were hugely encouraging, and the next phase of the project will look to roll out the chatbot to more websites and apps to see if the same successes can be replicated. We are keen to hear from tech companies who would like to discuss interventions on their own platforms.
Donald Findlater, director of the Lucy Faithfull Foundation’s Stop It Now helpline, said:
“For the first time we’ve shown that this chatbot coupled with the warning page can deter potential offenders before they commit a criminal offence. It’s estimated that hundreds of thousands of people in the UK are viewing sexual images of children and this chatbot is a way to prevent crimes, reduce the demand for illegal material, and protect children.
“Unfortunately, these warnings won’t work for everyone and police activity and arrests will remain necessary. But we also need to intervene with people at the earliest stage to get them to stop offending, or not start in the first place. And these interventions must be in the places where they’re trying to offend.
“That means all tech platforms – social media, gaming, pornography and others – need to step up and show that they’re serious about keeping children safe. For too long they’ve been allowed to avoid backing up their words with actions. This chatbot can play a part in cleaning up the internet and we encourage any apps and websites to test it out and see if it works for them.”
Joel Scanlan, Senior Lecturer at the University of Tasmania and independent evaluator of the project, said:
“It is very encouraging to see a large tech company being this transparent, allowing an independent assessment of the effectiveness of deterrence messaging on their platform. Deterrence messaging has been used on several large platforms, but we haven’t ever seen evaluations of their effectiveness made public.”
Dan Sexton, Chief Technical Officer at the Internet Watch Foundation, said:
“The proliferation of child sexual abuse imagery on the internet is a major and growing threat. It is clear something needs to be done about the demand for this material, as well as tackling its availability.
“This trial shows us it is possible to influence people’s behaviour with technological interventions. If a small nudge, like that provided by this chatbot, can prompt someone who may be embarking on the wrong path to begin turning their life around, it suggests it is possible to begin addressing the issue of demand for this abhorrent material.
“I am pleased to see so many people have taken this vital first step. If we can prevent people becoming offenders in the first place, so much of the horrendous abuse we see taking place could be avoided, as there simply won’t be the appetite for it. Pornhub deserves credit for piloting this programme, and it shows what can be done, and the positives that can come from collaborative, innovative approaches like this.”
A spokesperson for Aylo said:
“We are proud of the work we are doing with Internet Watch Foundation and Lucy Faithfull Foundation to detect and deter bad actors who try to use our platform. Pornhub has zero tolerance for child sexual abuse material, and the reThink project has shown that technology like the chatbot has an important role to play in keeping websites free of illegal activity. This research and its findings help us better understand what is working and how best to limit bad actors’ ability to misuse platforms.
"We cannot turn a blind eye to this issue, and we are glad to see the increase in use of the life-changing services from those who triggered the message. The success of this project really shows what can happen when major players in the tech industry work with external experts and band together. We encourage other tech and social media platforms to follow our lead and implement tools like the chatbot as part of their deterrence strategy.”
Find out more
To find out more about Project Intercept, please contact: intercept@lucyfaithfull.org.uk