Learning outcomes:
Understand your GDPR obligations when using AI tools and where data sovereignty risks lie
Build safeguarding routines to respond to AI-generated harm and inappropriate content
Ask the right questions to evaluate vendor claims and protect your school community
Create transparent consent and accountability processes for AI use
Prepare for Ofsted with clear documentation of AI governance and oversight
Free resource for module 5
AI Evidence Planner – Ofsted Ready
Meet the host:
Laura Knight:
Teacher, Digital Education and AI Specialist, TechWomen100 Award Winner 2025
Laura is an experienced teacher and leading voice on AI in education. She combines classroom expertise with deep technical knowledge to help school leaders navigate AI adoption thoughtfully. Laura has trained thousands of educators across the UK and internationally on responsible AI use, always grounding her work in what actually works for teachers and pupils.
More info on this module:
Safeguarding, data protection and governance
Welcome to module five, where we're going to be talking about safeguarding, compliance and data.
In this module, we'll consider what AI literacy means for responsible leadership and compliance.
We'll cover safeguarding, data protection and GDPR, ethical frameworks, transparency and consent, vendor evaluation, and inspection readiness.
You'll get some practical ideas too, including questions to ask your data protection officer and elements worth featuring in your AI policy.
Let's get started.
Inappropriate content and safeguarding
You'll be aware that AI can produce manipulative or offensive content, such as deepfakes, bullying messages or spoofed communications, and that no filter can fully prevent it.
AI-generated harm can be subtle: bullying in someone else's voice, manipulated photos, or synthetic news stories all present unique challenges.
Staff across the school community require regular practical training in spotting, reporting and responding to this kind of harmful content.
Pupils need clear information about what to do and who to speak to. The speed at which AI can create and spread harmful content means responses must be fast, coordinated and focused on proportionate care, peer support and evidence collection.
Schools that have experienced incidents share that pre-prepared response plans, visible reporting routes and regular communication with families help reduce harm and speed recovery.
Action step:
- Review your existing safeguarding reporting and communication routes
- Update policy and run whole-staff training on specific AI scenarios each term
- Consider the subtle, cumulative and manipulative ways AI can be used, and how these might affect different people in your school
GDPR, data privacy and data sovereignty
AI is shaking up the world of school data. AI literacy in the context of data protection is about leadership responsibility, ethical oversight and informed decision-making.
Schools must comply with GDPR, protect student privacy, and keep control of information — this is essential to earning the trust of families and the wider community.
AI can process and share huge amounts of personal data in seconds. Even simple uses like lesson planning or feedback can inadvertently push sensitive data outside safe boundaries.
Under UK GDPR and the Data Protection Act, personal data must be handled lawfully, openly and securely. Most AI tools weren’t built for schools or these strict rules.
Staff can easily share more than they should, often without realising.
Common mistakes
A common mistake is entering names, work examples or pupil details into open AI platforms. This may send data outside the UK or allow it to be stored and used for model training.
Many free tools store conversation histories by default, which can put schools in breach of the data minimisation and purpose limitation principles.
Be proactive with vendors, choose tools with clear privacy safeguards, and make sure staff, parents and pupils understand what data is used, why and with what protections.
What good practice looks like
Third Space Learning designed Skye with data protection built in. They collect only the minimum data necessary — GDPR’s data minimisation principle.
Crucially, no student data is used to train their AI model. Skye is trained solely on content created by their academic team.
Pupil responses do not feed back into the system or get reused beyond educational purpose.
Sessions are recorded and transcribed, and schools retain access for safeguarding review.
This doesn’t mean every “GDPR-compliant” tool is suitable. It shows what good practice looks like: built for education, transparent, and accountable.
Leadership responsibility
With generative AI, data protection is a leadership responsibility, not just a technical one.
Leaders must build policy and practice that reflect real-world AI use across the school.
They must understand whether information is truly anonymised (it rarely is), maintain a data map showing how data flows through AI tools, and ensure Data Protection Impact Assessments (DPIAs) are completed before deployment.
No AI tool should process sensitive data without robust educational justification, clear accountability, and support for individual data rights.
Action step:
- Book 20 minutes with your DPO and ask:
  1. What AI tools are staff currently using (including shadow IT)?
  2. Which of these tools process pupil data?
  3. Do we have a DPIA for each one?
Governance models and ethical frameworks
Sound AI governance is rooted in fairness and human oversight. Automated systems must not be allowed to override professional judgement or undermine equity.
Frameworks like the Ethical AI Governance Framework for Adaptive Learning emphasise human-in-the-loop oversight.
Addressing bias
Algorithms can unintentionally reproduce gender, racial or socioeconomic bias when drawing on historic data.
Ethical frameworks demand bias audits and transparent, explainable AI.
Students and staff must know why recommendations are made and have clear channels for appeal.
Data governance
Ethical frameworks require secure, ethical management of student records.
Schools should demand clarity on storage, retention and access, and provide families with ways to opt out of intrusive data collection.
An AI-literate professional understands not just how tools work, but what principles and checks underpin them.
Action step:
- Draft or update your school’s AI policy to include ethics, oversight and bias checking
- Schedule termly stakeholder-led reviews of new AI deployments
Transparency, consent and accountability
Trust in modern AI systems can only be built through transparent, comprehensible explanations of how they are used and why.
Families and staff need clear explanations of data use, storage and purpose.
AI use should be included in privacy notices and in routine conversations with parents and staff.
Consent and accountability
Explicit consent should be gathered for new or sensitive uses, with options to opt out.
Accountability must be clear: who oversees AI use, who handles concerns, and how human review can be requested.
Action step:
- Update privacy notices to describe each AI use clearly
- Revise consent forms so parents and pupils can make informed choices
- Provide a clear route for raising concerns
- Hold open discussions to improve understanding and dispel myths
Evaluating vendor claims
Solid AI procurement requires scrutiny beyond marketing claims.
Leaders must interrogate vendor transparency, privacy, data flows, and impact.
Five essential questions
1. Where is student data stored, and who can access it?
   Seek specific answers, not vague "in the cloud" responses.
2. Is student data used to train the AI model?
   Look for: "No student data trains our models."
3. What happens to data after the contract ends?
   Ensure clear deletion timelines and certified deletion.
4. How do you address bias and accessibility?
   Beware anyone claiming their AI is "unbiased."
5. Who is liable if the AI causes harm?
   Watch for clauses that indemnify vendors entirely.
Action step:
- Require vendors to answer these questions before any trial
- If they can’t answer clearly, don’t proceed
- Create a one-page vendor evaluation checklist and share with budget holders
Inspection and audit readiness
Ofsted does not inspect AI use directly, but it does evaluate the impact of AI on safeguarding, curriculum quality, data protection and educational provision.
Leaders must show robust governance over any AI system used.
Other bodies are expected to increase their scrutiny in 2026, so schools should treat AI literacy as whole-school development, not a compliance tick-box.
Creating an inspection-ready overview
Create a single overview of AI systems with your DPO and safeguarding lead. Include:
- A current list of AI-enabled tools, their purpose and data flows
- Confirmation of DPIAs or risk reviews, with gaps noted
- How each tool is monitored for impact, accuracy or bias
- Clear boundaries showing which tasks are human-only
Store this with safeguarding, curriculum and data protection documentation as a living map of AI use.
Closing reflections
Before moving to implementation, consider these questions:
- Could you clearly explain to a parent where their child’s data goes when you use AI tools?
- Who in your school has the authority to pause or stop AI use if serious concerns arise?
- Would you confidently recommend your current AI vendors to another headteacher?