How the right to education is undermined by AI
I was inspired to submit a thinkpiece to this UNESCO call by a group of people whose own submissions (I happen to know) are wise, original and inspirational - so I’m truly grateful to have had the opportunity of their support and conversation. We will be continuing that conversation in public and with other participants soon! Meanwhile, this is not my submitted piece (we are asked not to publish them) but some reflections on its development.
The ‘right to education’ is enshrined in the Universal Declaration of Human Rights (UDHR, Article 26). But there are many cultures and definitions of education, as other Declarations and Conventions emphasise. ‘Education 4.0’, according to the World Economic Forum, ‘places the responsibility for skill-building on the learner’, with teachers as ‘facilitators’ of this self-determining enterprise. This doesn’t sound like something ‘rights’ apply to, except perhaps the right of consumer redress if your facilitation-as-a-service teacher fails to satisfy. Nor has the AI industry been slow to promote its own version. Elon Musk encourages students at his privately funded Astra Nova school to address problems such as ‘How should Tesla build out a supercharger network in South America?’ - so Musk’s money makes him free to determine the contents of at least some children’s education. But how many people in the world does this ‘right’ extend to, and with what consequences for other people’s rights? Meanwhile JD Vance, the US Vice-President, rails against ‘teaching’ as ‘brainwashing’ and ‘indoctrination’ and calls for ‘our kids’ to be ‘defended’ from it. In the world of Musk and Vance, every self-governing homestead should be free to decide what its young people need to know: that is, after all, what patriarchs have always done.
But the liberal-democratic version of education that Musk and Vance dislike so much is exactly what the UDHR and associated Conventions intend: education is a right of all people as equals and a responsibility of the public bodies who represent them - states, nations and regions. The context can only be a democratic one. And despite the difficulty of accommodating so many different epistemic cultures and pedagogic traditions, the UN’s International Covenant on Economic, Social and Cultural Rights (ICESCR, Article 13) does have something to say about the contents of that education: ‘education shall be directed to the full development of the human personality and the sense of its dignity… to enable all persons to participate effectively in a free society, [and] promote understanding, tolerance and friendship among all nations and all racial, ethnic or religious groups’. The UN Convention on the Rights of the Child (CRC, Article 29) prefers: ‘the development of the child’s personality, talents and mental and physical abilities to their fullest potential’ so that the child may be ‘fully prepared to live an individual life in society’. The Convention Against Discrimination in Education (CADE) adds a host of further Articles reinforcing the equality and universality of these rights.
That expression ‘an individual life in society’ expresses a tension in these universal goals, one that every educator has to confront at some point in their career. Education aims at the fullest possible realisation of the individual, but on terms given by society: what that society holds to be a life worth living, knowledge worth having, and work worth doing. By equipping young people with what is collectively valued and known, education shapes their structures of thought and developing personalities. But learners must also be granted the agency to respond uniquely, to transform their conditions as well as themselves, and to think differently to the mainstream about what matters.
‘Education changes people; people change the world’ (Paulo Freire 1972)
There is another tension here: that while a child is a full human being from birth, endowed with human rights, still s/he is not yet a full person. It is the task of education - in the broadest sense - to hold that space of immanence and vulnerability, to give shape and expression to what is immanent, and care to what is vulnerable.
‘Education’ is not the only such ritualised space but it is how the United Nations have designated the support that children and young people need for their emergence into full personhood and social participation. Primary education at least must be ‘free’, secondary and higher education ‘freely available’. This collective investment means that education must be accountable to the rest of society: the purposes of education are held publicly, even if private interests might be involved in realising them. And there is a final tension. While respecting national and cultural determinations about what matters for young people to learn, the demand that education should ‘promote tolerance and friendship’ between ‘all nations and… groups’ is a strong sign that it should be outward-facing, inclusive of difference, and orientated towards tolerance and justice.
Data and analytics platforms, including so-called predictive AI, have been criticised for undermining the rights of young people in education through the capture and unregulated use of their data, and through opaque decision-making that exacerbates discrimination in educational settings and amplifies inequality of outcomes (Global Connectivity Report (2022) Ch.9; Williamson et al. (2023) for the National Education Policy Centre).
UNESCO’s report An Ed-Tech Tragedy? (2023) found that during the Covid-19 pandemic ‘education became less accessible, less effective and less engaging when it pivoted… towards technology’ (p.34); that this was accompanied by ‘a concerning transfer of authority away from teachers, schools and communities and towards private, for-profit interests… the censorship, data extraction, advertising, top-down control, intimidation and surveillance that so often characterize current models of digital transformation have made education less free’ (p.35).
On behalf of the Council of Europe, a Briefing on AI in Education through the lens of Human Rights, Democracy and the Rule of Law (2022) concluded that ‘We need appropriate, robust regulation, addressing human and child rights, before AI tools are introduced into classrooms’ (p.75) and ‘We need to ensure that children are not forced to accept being compulsory research subjects or being compulsorily involved in product development simply by exercising their right to education’ (p.76). Recognising these risks, the UN’s Global Digital Compact calls on ‘digital technology companies and developers to respect international human rights and principles, including through the application of human rights due diligence and impact assessments throughout the technology life cycle’.
None of these regulations, due diligence processes or impact assessments have been carried out. Far from submitting to new requirements, AI corporations have bribed or bullied governments to weaken their existing rules and tax regimes. And so the threats to the individual rights of young learners - which are the focus of most existing legislation - must be set alongside the systemic risks of an industry that has grown vastly more wealthy and powerful than most nation states, let alone their education sectors. At current valuations, OpenAI, Nvidia and Microsoft are all worth as much as the GDP of the entire African continent.
Ricaurte (2022) summarises how digital platforms produce epistemic and cultural as well as economic and environmental inequities. Like others viewing AI from the perspective of (de)coloniality, she concludes that a rights-based discourse must extend beyond classroom infringements of children’s rights to concern itself ‘with the preservation of life and the co-responsibility of AI harms to the majority of the planet’ (n.p.).
The risks outlined in these earlier reports were immediately amplified by OpenAI’s release in 2022 of untested, unreliable, unsustainable data architectures in the GPT series and the ChatGPT interface. Beyond building and commanding an immensely profitable new market, the intent was to capture educational and cultural assets and to do so partly by causing immense disruption to education and to knowledge cultures. Many experts consider general models like these to be unreliable in principle (Luciano Floridi, Jaron Lanier, Ragnar Fjelland, Iris van Rooij for example). Certainly they remain unreliable in practice. But while the technical capabilities of these models are overblown, their capacity to concentrate data, computation, know-how and capital in the businesses that own them is unprecedented. The AI investment surge has further consolidated economic power in a handful of global corporations, power they have used to challenge international laws and human rights frameworks (Amnesty International 2025: pp.23-25).
Generative AI presents a number of immediate threats to learners’ rights, amplifying the known threats from so-called predictive AI (though the two are increasingly used together in educational systems and dealt with together in educational policy). Channelling the knowledge available to young people through anglo-centric, proprietary and normative data platforms is a risk to their cultural and epistemic rights. Releasing AI-generated content into digital platforms and attacking public sites with AI crawlers, agents and deepfakes deprives young people of free access to digital information and culture, while diminishing their own opportunities for cultural production. These harms affect young people of minority languages and cultures to a greater degree. And there is increasing evidence (see below) that the use of chatbots in school work and pedagogic interactions is harmful to young people’s intellectual and social development, and so undermines their right to the fullest development of their potential as laid out in the UN CRC.
Young people in education are therefore subject to three distinct threats to their educational rights:
While many risks remain speculative, there are already numerous actual, documented harms from generative AI to the rights of young people in education. These include:
There is also evidence of harm to the knowledge systems that education and educators rely on.
Children and young people are unequally and inequitably harmed by:
This table of potential risks AI presents to the right to education is a work in progress - you can read and comment here.
So-called ‘AI’ is antithetical to the UN goals of free, equitable access to learning and cultural opportunity. Education leaders should take a human-rights-based approach to AI, not only as a class of technologies with known impacts on learners’ data rights, but as a crisis for education systems and their role in global peace and democracy. This crisis has been engineered by a small number of the world’s most powerful corporations in alliance with their state militaries.
The global response should be led by those most immediately affected: people of minority languages and cultures; people suffering from epistemic injustice (particularly at the hands of the digital and AI industries); teachers and education workers threatened with poorer conditions of work; and young people who aspire to the full development of their personalities and intellectual powers.
The AI crisis both amplifies and draws energy from other crises afflicting education and learners around the world, and should be tackled alongside them. Only with solidarity among students and educators, and across education sectors and systems, can the power of AI corporations be resisted.
The ten principles of UNESCO’s (2021) Recommendation on the Ethics of Artificial Intelligence are admirable but have not been adopted into law, let alone enforced as law, by any of its member states. The AI industry has attacked its own ethicists and bullied governments into abandoning reasonable attempts at regulation. Without enforceability, good principles only amplify the harms by appearing to have them under regulatory control.
Similarly, UNESCO’s AI Competency frameworks for Teachers and Students are worthy aspirations that can only be achieved with the use of safe, ethical, reliable, non-extractive, non-exploitative AI systems. Framing these aspirations as individual competences puts an unfair burden of responsibility onto educators and learners, while absolving from blame the powerful corporations that released rights-violating technologies into educational and cultural ecosystems.
Educational leaders should develop a shared action plan, drawing on a wide range of experts (independent of the AI industry) and oriented not towards finding new use cases for AI in education (the AI industry will take care of those) but towards prevention, mitigation and resilience to rights-based harms. Expertise will not be enough: only international solidarity can prevent governments and educational organisations being picked off one by one.
Responses might be grouped into three areas: ‘repurpose, rebuild and refuse’.
Repurpose AI technologies, where possible, for projects of authentic learning and human flourishing, e.g.:
Rebuild cultural systems, practices and archives with resilience to synthetic media, e.g.:
Build and sustain communities of open AI practice with shared norms, standards and values.
Refuse the impoverished, unjust and unsustainable educational future being offered by the AI industry, e.g.:
Refuse to use systems or architectures that may be harmful, biased, unsustainable or unjust;
Refuse to provide or collect data that may be used to build harmful data architectures or to make harmful discriminations concerning learners and educators;
Only collect learner data that can genuinely be used by learners and teachers to support their development; ensure it remains under the control of an accountable educational organisation.
Refuse to adapt educational workflows and conditions of work to the use of AI, particularly where this will render education workers overworked, more precarious or less skilled;
Refuse professional tasks that require the use of or contribution to a proprietary data system unless alternatives are available;
Refuse personal and organisational partnerships with or investments in the AI industry.
As every digital platform is ‘enhanced’ with AI co-pilots and agents, refusing and rebuilding are acts of ingenuity as well as resistance. Far from being a sign of ignorance, maintaining spaces of non-mediated dialogue and cultural expression is rapidly becoming a sign of technical and epistemic skill. Educators can foster these skills, confident that their learners will not be ‘missing out’ on AI, because AI will always be making itself more useable: indeed, it is already compulsively useable, and it is not use but non-use that needs to be actively developed. These approaches also call for solidarity among people, organisations and linguistic and cultural communities.
The outcomes of ‘embedding’ AI into education remain uncertain but they will not be produced by a merging of common interests. Rather they will be the outcomes of a struggle between incompatible goals: mass intellect versus mass thoughtlessness; diverse cultures and ways of knowing versus the normative power of data; participation in communities of learning versus the consumption of bite-sized answers from chatbot companions. In this struggle, UNESCO should weigh in unequivocally on the side of children’s and young people’s right to the full development of their humanity.
With thanks to Bryan Alexander, Doug Belshaw, Laura Hilliger, Ian O’Byrne, and Karen Louise Smith for conversations that helped motivate and inform the writing of this piece.