Guiding how and when we use artificial intelligence
To best prevent child sexual abuse, we continually innovate and embrace new technologies in our work. One of these is artificial intelligence (AI).
There have been huge recent advances in what AI can do, and there are many ways it could help our work, from improving our efficiency with repetitive tasks to creating greater capacity on our Stop It Now helpline. However, we also recognise the risks AI brings (such as the danger of spreading misinformation) and the importance of taking a principled approach to how we use it, particularly generative AI.
Earlier this year we set up an AI working group and began creating a set of principles to guide our staff when considering the use of AI. These principles have now been rolled out and are part of our everyday working practice, helping to keep our beneficiaries, staff and partners safe.
Privacy
- We will safeguard the privacy and confidentiality of individuals' data in any work involving AI.
- We will apply data protection regulations and industry best practices to ensure the highest standards of data privacy.
Transparency
- We will be open about the use of AI technologies within LFF.
- We won’t publish any AI-generated content without human oversight.
Accountability
- We take full responsibility for the outcomes of our AI use and are accountable for any consequences.
- We commit to identifying and addressing any biases, errors and ethical dilemmas when using generative AI, and to taking corrective action to avoid harm to marginalised or vulnerable groups.
- We have a clear process for staff to report concerns or problems related to AI use, with a designated person responsible for addressing these issues.
Empowering staff
- We will provide staff with the necessary training to understand and utilise AI technologies safely.
- We will foster a culture of AI literacy and continuous learning, making sure our staff stay informed about emerging trends and their implications.
Stakeholder engagement
- We will engage with diverse stakeholders, including staff, beneficiaries and partners, to communicate our AI principles and gather feedback where relevant.
Ethical considerations
- Ethical implications of any AI work will be included in our project initiation documents as standard practice.
Ongoing assessments
- We will regularly monitor the use of AI within LFF through internal staff communication channels.
- A six-monthly staff survey will be conducted to gather insights and report on AI usage within LFF.
AI and the tools available are evolving all the time, and our AI working group will continue to research and monitor the risks and opportunities that new technologies bring. We will, of course, thoroughly research and test any new technology before adopting it, and be guided by experts in the field.
If you would like to know more, please email intercept@lucyfaithfull.org.uk
Your support helps us keep children safe
Want to support our work to prevent child sexual abuse? Donate now.
Sharing our posts and information helps us reach more people and protect more children. Sign up to receive our emails and find us on Twitter/X, Facebook, LinkedIn, Instagram and YouTube.