Misogyny, AI and Online Safety: what schools need to know about emerging safeguarding risks

The UK is introducing training for teachers to spot early signs of misogyny as part of a wider strategy to reduce violence against women and girls. The plans for schools in England should empower teachers to recognise harmful patterns early on, shifting from reactive to preventative action. Will schools now become the first line of defence for gender respect and safety?

The role of schools and teachers

Teachers, amongst their many roles, will soon be expected to identify these harmful patterns, with experts brought in to pilot new approaches to tackling misogyny in schools. Multiple new measures have been announced, including a £20m training package and a dedicated helpline for teenagers concerned about their relationships. Success will, however, depend on sustained and meaningful work around relationships, behaviour, and values across the whole school community.

Multi-agency approaches

Alongside schools, police and social services are expected to receive updated guidance relating to adolescents, including on relationships and domestic abuse. In some cases, children displaying harmful behaviours may be referred to behaviour-change programmes.

At present, it remains unclear how these measures will operate in practice, including whether referrals would come directly from schools or whether programmes would be delivered within educational settings.

Balancing safeguarding and school capacity

Equipping schools to identify and address early signs of misogyny in a rapidly changing digital landscape is a welcome development. What is clear is that social media and artificial intelligence are influential factors. Social media algorithms can shape the visibility and framing of content, while AI technologies can lower barriers to misuse or escalation.

AI has added a particularly concerning dimension, with some applications posing risks to children’s safety, dignity, and privacy. As AI technologies develop at extraordinary speed, their impact on school communities is expected to be significant (see our earlier article on AI for further discussion).

With the emergence of image-generation tools such as Grok, the risks are only likely to increase. There does, however, already appear to be some recognition of this risk, with some countries taking action in the hope of preventing harm.

In the UK, the Internet Watch Foundation is reported to have discovered criminal imagery created by AI, and there have been calls from government figures, including Liz Kendall (Secretary of State for Science, Innovation and Technology), to block access to these AI tools if they fail to comply with online safety laws. The regulator, Ofcom, has confirmed it is urgently considering its response. The government, meanwhile, has announced that forthcoming criminal legislation will address the creation of intimate deepfake images and strengthen regulatory expectations on technology companies and online platforms, reflecting growing concern about the misuse of AI-generated content.

It is, therefore, welcome that schools will be equipped to spot and address signs of misogyny in this fast-changing technological landscape, where more harm can be done online than ever before.

However, this risks adding further pressure to teachers’ already heavy and ever-increasing workload. As such, any expansion of schools’ safeguarding role, although needed, must be accompanied by appropriate training, resources, and sustained support from government and partner agencies.

Risk management

From a risk and compliance perspective, the increasing focus on misogyny, online harm, and AI-enabled abuse has significant implications for schools’ statutory duties. Schools are already required to safeguard and promote the welfare of children, and emerging government guidance in this area is likely to increase expectations around early identification, intervention, and record keeping. We would expect further amendments in this area to be made to the Keeping Children Safe in Education guidance for 2026.

Schools will need to consider whether their existing safeguarding policies, staff training programmes, and reporting procedures adequately address additional concerns such as (but not limited to) online misogyny and gender-based harassment; the misuse of AI tools and image-generation technology; and peer-on-peer abuse occurring in digital spaces beyond the school environment, including social media platforms.

Governing bodies and trustees should ensure that safeguarding arrangements are regularly reviewed, risk assessments are updated to reflect technological developments, and staff are clear on thresholds for concern, referral pathways, and information-sharing obligations. Clear documentation and consistent application of policy will be essential in demonstrating that reasonable and proportionate steps have been taken. Effective risk management will, however, also depend on clear national guidance, coordinated multi-agency working, and meaningful regulatory oversight of online services.

Please note that at the time of writing, the relevant provisions relating to the creation of intimate deepfake images have been announced but are not yet in force.
