Risks and Limitations

How to spot and mitigate the key risks from AI integration: accuracy, integrity, equity and overreliance

Module 4 of 5

Time: 15 mins

Audience: Senior Leaders


Learning outcomes:

Understand why AI hallucinations happen and how to build fact-checking routines

Protect academic integrity by redesigning assessment rather than relying on detection

Identify and address equity, bias and inclusion issues in AI tools

Prevent overreliance by embedding 'honourable struggle' in your curriculum

Decide which tasks must remain strictly human to protect relationships and trust

Free resource for module 4

AI Risk Assessment Framework

Download module completion certificate

Celebrate your progress and share your achievement!

Meet the host:

Laura Knight:

Teacher, Digital Education and AI Specialist, TechWomen100 Award Winner 2025

Laura is an experienced teacher and leading voice on AI in education. She combines classroom expertise with deep technical knowledge to help school leaders navigate AI adoption thoughtfully. Laura has trained thousands of educators across the UK and internationally on responsible AI use, always grounding her work in what actually works for teachers and pupils.

More info on this module:

Understanding where AI can fail


Welcome to module four, where we're talking about risks and limitations.


In this module, you're going to learn about four key themes relating to risk: accuracy, integrity, equity, and overreliance.


In modules one and two, we looked at leadership and culture. In module three, we dug deep on AI in the classroom. By the end of this session, you'll have a clear understanding of how to frame the risks and limitations of AI in your leadership context. Let's dive right in.


AI use in education is inevitable, and responsible leadership means understanding not just what AI can do, but where its risks and limitations lie for our schools.


Leaders who recognise and manage those risks will safeguard both the learning quality and the well-being of pupils and staff.


Accuracy and hallucinations


Large language models are prediction engines: they assemble the most probable next words to follow a prompt.


Unfortunately, they can blend solid facts with fragments and fill gaps with invented details, then present the results in highly polished prose. That's known as a hallucination.


In schools, this shows up as invented references, muddled causal claims, or confident summaries that miss the crux of a topic entirely.


Generic AI tools hallucinate because they predict plausible text. Skye, in contrast, can't hallucinate lesson content because it doesn't generate it.


Every lesson, question and explanation was written and quality-checked by teachers. The AI adapts delivery — pace, hints, responses — but can only access pre-approved content. This removes AI’s ability to invent content and therefore eliminates hallucination risk.


Action step:


  • Whenever you use AI-generated content for teaching or communication, build in one safety step

  • Require staff to select one important fact or reference from the output and verify it against a trusted source

  • This maintains speed, builds accountability, and reinforces a culture where AI supports — not overrides — truth standards

  • Just because something looks slick doesn’t mean it stands up to scrutiny

Academic integrity and intellectual honesty


Generative tools make it easy to produce polished outputs that look accomplished without requiring intellectual struggle.


Detection tools will always lag behind new generative tools — it's a losing race.


The better solution is to redesign assessments so they reveal thinking, process, and reasoning rather than relying solely on final outputs.


Encourage staff to collect planning notes, working steps, or short viva-style explanations.


Use localised context in tasks — class texts, school data, or community examples — to reduce over-reliance on generic tools.


Teach pupils how to use AI well as a study partner where appropriate, rather than leaving them to develop poor habits.


When AI is allowed, make its use explicit to support transparency and accountability.


The goal is not to catch pupils out, but to build originality, honesty, and attribution — the values that matter in education and beyond.


The risk to professional judgement


Automation introduces a subtler risk: teachers may become deskilled.


Relying too heavily on machine suggestions can erode confidence and essential pedagogical skills.


Leaders must model good practice, clarify expectations, and protect the heart of the teaching craft.


Trusted tools like Skye can support teaching while minimising this risk, because teachers remain firmly in control of the pedagogy.


Action step:


  • Adopt process evidence as a non-negotiable for key assessed work

  • Require students to submit a reflective statement or recording describing their process, decisions, and any AI use

  • This surfaces genuine intellectual effort and supports authentic insight into student thinking

Equity, bias and inclusion


AI can personalise learning and support SEND and EAL learners — but it can also reinforce bias.


If training data lacks diversity, outputs may unintentionally marginalise certain groups.


Speech recognition may misinterpret dialects or non-standard grammar.


Assessment algorithms aligned with narrow curricular frameworks may fail to represent SEND or lower-income students.


Socioeconomic bias emerges when systems favour resourced schools and overlook digital poverty.


Building vigilance into practice


Teachers must remain vigilant and retain ownership of how AI is used.


SEND and EAL leads should be involved in testing tools, and teachers should be empowered to flag inappropriate outputs.


Any provider should be able to explain how they address bias and equity.


Skye’s design intentionally focuses on evaluating mathematical thinking, not accent or fluency — equity was built in from the start.


AI should level the playing field, not deepen hidden disadvantages.


Action step:


  • Form a diverse working group including SEND/EAL leads and classroom staff

  • Audit AI tools by reviewing outputs across subjects and pupil groups

  • Gather feedback from pupils and teachers and check for bias or exclusion

  • Document findings and adjust guidance or tool choices accordingly

  • Repeat regularly as part of ongoing digital equity practice

Overreliance and intellectual offloading


Overreliance on AI undermines AI literacy.


When students avoid academic struggle, they lose chances to practise persistence, reflect, and build confidence.


Metacognition — awareness of one's thinking — is essential for learning.


Skye supports metacognition by never giving answers and by prompting pupils to reason instead.


Action step:


  • Design a school-wide initiative to embed “honourable struggle”

  • Require one routine task per unit that pupils complete without AI

  • Have pupils reflect on the strategies used and how the struggle felt

  • Celebrate persistence in assemblies or briefings

When should we not use AI?


Some tasks require human contact, trust, and emotional judgement.


Human-only tasks include pastoral conversations, mental health check-ins, safeguarding referrals, exclusion decisions, sensitive parent communication, and performance feedback.


A decision framework


Before adopting AI for any task, ask:


  • Does this task require reading emotional cues or building trust?

  • Is the decision high-stakes or irreversible?

  • Does trust depend on a human making the decision?

AI used unwisely can make learning transactional and erode trust. School leaders must define boundaries and articulate which tasks remain human-only.


Scenario for reflection


A head of year uses AI to draft a summary for a multi-agency meeting. The AI muddles attendance data from different periods, invents an explanation for the pupil's behaviour, and adopts a cold tone. The SENCO notices that communication needs are missing from the summary. A colleague admits relying on the tool because it feels faster and more professional.


Questions to consider:


  • What are the risks, and which concern you most?

  • What routine could you introduce to prevent similar issues?

Closing reflections


Consider these questions as you move forward:


  • Where are the risks of accuracy, integrity, equity or overreliance most likely to appear?

  • Which workflows benefit from AI, and which require a fully human lead?

  • How confident are you that staff understand boundaries?

  • What’s one conversation you will start this week to reduce risk and build understanding?

Related Resources