Schooled by AI: The future of EdTech
Over the years, digital technology has brought about significant positive transformations in education. The advent of generative AI, large language models and related technologies, however, heralds a distinct and profoundly transformative chapter in EdTech, with some experts claiming they could ‘spell the end of the traditional school classroom’. Simply ignoring or prohibiting these technologies in the classroom is no longer a viable option for educators. If a fundamental purpose of schools and universities is to prepare students for their futures, then they must equip students with the technological skills needed to thrive not only in future workplaces but in all aspects of life. While acknowledging the potential benefits of embracing AI in education, it is vital to confront the ethical, social and philosophical questions that arise. The incorporation of AI into the classroom could take various forms, some more extreme than others, but we must avoid an overreliance on these systems whilst holding the tech companies that create them to account. Understanding the ramifications of AI for the UK’s education sector is particularly important at this highly vulnerable time, amidst landmark strikes, a recruitment and retention crisis, and the aftermath of the Covid pandemic.
One role that AI could take within education is that of the personalised ‘AI tutor’. This model was recently demonstrated by education nonprofit Khan Academy, which partnered with OpenAI to create ‘Khanmigo’, an AI-powered personal tutor and teacher’s assistant. In a recent TED talk, founder Sal Khan extolled the virtues of the new platform, which can act as a personalised writing coach and Socratic debater, emphasising how it could be used alongside teachers, helping rather than replacing them. Speaking to The Guardian, Prof Stuart Russell gestured towards a more radical vision of the future of schooling, with each child receiving a personalised AI tutor and ‘fewer teachers being employed – possibly even none’. You don’t have to believe the robots are coming to destroy us all to feel wary of this prospect. These developments come at a particularly fraught time for the UK education sector, which has been left severely underfunded after a decade of cuts by the Conservatives, and whose students and teachers were hit hard by the pandemic. An AI-led educational system is a particularly concerning prospect for teachers who are already underpaid, undervalued and unsupported.
In her opening address at the Education World Forum in London, UK Education Secretary Gillian Keegan spoke of AI’s potential to transform teachers’ lives by taking much of the ‘heavy lifting’ out of their workloads. It is easy to feel cynical about Keegan’s intentions: she has failed to address the key factors behind teaching strikes and shortages, and has instead shown a lack of empathy and respect for striking teachers, refusing to negotiate further and accusing them of failing their students. The government must first address the immediate issues plaguing the education sector and enter into meaningful dialogue with its struggling teachers, instead of hiding behind the shiny distraction of AI. AI systems performing administrative tasks could certainly benefit overworked teachers, but any strategy will take considerable time to finalise and implement, and, as Geoff Barton, general secretary of the Association of School and College Leaders (ASCL), stated, the current government’s approach to education has thus far been ‘piecemeal and lacklustre’. Meanwhile, teachers’ lives urgently need to be materially improved, and in this climate, technology that makes them less essential is, in the hands of the Conservatives, a disquieting prospect. Ultimately, the priority must be to use AI as a supplemental, supportive tool that drives innovation and accessibility in education, rather than as an instrument of displacement.
The use of AI in schools could also exacerbate existing social divides, with access to AI-powered education potentially limited to certain privileged groups. A digital divide still exists in the UK — 10 million people lack the most basic computer literacy skills, and 20% of young people aged 8–24 have no online access. These digital inequalities were laid bare by the pandemic: as remote online learning became the only option, students without devices or WiFi were left behind, and they continue to struggle today. Any discussion around AI in education must acknowledge this first and foremost. Clearly, the potential benefits will only reach all learners once the digital divide has been closed; until then, these technologies would almost certainly amplify existing inequalities. Any realistic implementation of AI in education would therefore require an unprecedented level of governmental funding and support, to ensure all students have equal access. As The Institute for Ethical AI in Education’s 2019 report stated, ‘reforms will not deliver benefit to all learners if the digital divide is not closed decisively and quickly’; the Institute urges ‘all governments to guarantee that every single learner has adequate access to a device and an internet connection [...] Only then will all learners be able to benefit optimally from AI in education.’ After years of the Conservatives’ underfunding of the education sector, it is hard to believe they will suddenly re-prioritise an area they have historically neglected.
Attention also needs to be paid to who is constructing these large language models (LLMs), and how. Their development will most likely be outsourced to private companies such as Microsoft, Google and Meta, which already hold a dangerous monopoly over the digital economy, engage in anti-competitive behaviour and have a poor track record on data privacy. It is crucial, at this nascent stage, to establish strong regulations that hold these mammoth tech companies to account and question the interests at play. Amba Kak, executive director of the AI Now Institute, calls for strong regulation specifically around ‘algorithmic accountability, algorithmic transparency, and data privacy… Everybody’s talking about futuristic risks, the singularity, existential risk. They’re distracting from the fact that the thing that really scares these companies is regulation’. If these companies are to develop the infrastructure for classrooms, they will need to provide total transparency and be subject to rigorous regulatory scrutiny. They will have to provide satisfactory answers to a plethora of questions — What data is used to train the LLMs? How are companies moderating against discrimination and bias in that data? How will they ensure users’ data won’t be exploited or used without consent? — to ensure young people and their personal and social growth are protected. Rather than evil robots coming to destroy civilisation, it is the profit-obsessed tech CEOs hiding behind them that we need to worry about.
It is also crucial to consider the pedagogical methods on which these systems will be based, and how those methods may reinforce the marginalisation of certain groups. The current vision for AI in education appears to draw on teaching methods born of Western knowledge systems — think of Sal Khan’s focus on Khanmigo’s ‘Socratic debate function’ and individualised learning — which overlook alternative epistemologies and can exclude marginalised communities. Instead of being an innovative force in the education space, EdTech could cement a ‘one size fits all’ pedagogical approach and homogenise what is taught, and how, by discounting styles of learning valued in other cultures. Western pedagogy tends to privilege individualism and universalism over collectivism and cooperation. Whilst one-on-one, personalised teaching has many benefits, so does communal, people-centred learning, where students collaborate and learn from each other. This point was raised during a roundtable discussion held by The Institute for Ethical AI in Education in 2019 — but the Institute’s subsequent report brushes it off as a minority view:
One participant argued that there are risks to the “hyper-individualisation of learning” as this could undermine the peer-to-peer aspects of education. Another noted that approaches centred on individual learners represents an educational philosophy that is not universally agreed upon; in some parts of the world, for instance, “communitarian” learning is seen as a higher priority. [...] This was not, however, the majority view.
Rather than being dismissed, this perspective should be valued within the discussion around AI in education, reminding us of the dangers of homogenisation and exclusion, and of the lessons we stand to learn from diverse cultural approaches to teaching.
Proactive steps also need to be taken to ensure that students’ abilities to think independently and critically are not weakened by an overreliance on these technologies. In today’s media-saturated and ostensibly ‘post-truth’ world, blurred by fake news and extreme polarisation, it has become more crucial than ever for individuals to approach information critically. Especially at this early stage, LLMs are prone to delivering dangerous misinformation and promoting biased perspectives, inadvertently inherited from biased training data. This issue partly stems from a lack of diversity in the field — for example, women account for only 10–15% of the research staff at two of the biggest tech companies, and Black workers represent only 2.5–4% of the workforce at those same companies. When students rely solely on AI-generated content, they are simply delivered answers, without engaging fully with the topic or with the information’s sources. As a result, genuine intellectual development and independent thinking may suffer. In a controversial 2021 paper highlighting the possible risks of LLMs, Dr Emily Bender labelled LLMs ‘stochastic parrots’ — good at form, but bad at meaning. With these risks in mind, students must learn to scrutinise a text’s source, recognise its biases, and creatively problem-solve, rather than passively absorbing answers from chatbots like ChatGPT.
Ironically, these are the exact skills fostered by studying the arts and humanities, the very subjects which will be impacted by PM Rishi Sunak’s recent schemes — from ‘Maths to 18’ to his intention to ‘crack down’ on ‘rip-off degrees’. Said ‘rip-off degrees’ appear to be defined as those that do not deliver a ‘decent job’ at the end of them. His narrow definition of both ‘decent degrees’ and ‘decent jobs’ disproportionately favours STEM degrees, and would penalise universities and courses with a high proportion of working-class students. This sort of financially focused approach to education undervalues the broader benefits of arts and humanities studies and, coupled with the emergence of generative AI, spells frightening times for the creative industries. A significant concern around generative AI is that it could devalue human creativity and jeopardise artists’ livelihoods, if industries choose to supplant human artistry with AI-generated content for reasons of cost and convenience. This possibility is a key driver of the current WGA strike, with writers specifically demanding that ‘AI can’t write or rewrite literary material; can’t be used as source material; and WGA writers’ material can’t be used to train AI’. This is the first large-scale attempt by a labour union to get an industry to regulate, or even ban, the use of AI as a replacement for workers. The humanities and the creative industries are being chronically underfunded and undervalued at a time when the need for analytical, independent thinking is stronger than ever. Generative AI will not make these practices obsolete; it renders them all the more necessary and valuable, especially as a lens through which the next steps and recommendations regarding AI in education can be assessed. When we reduce education to its financial outcomes and restrict access to certain disciplines, we do a disservice to future generations, who should be able to benefit from a well-rounded education that fosters creativity, collaboration and equality.
Any plan for AI in education must first prioritise teachers and the invaluable roles that they hold in society, as role models, nurturers and protectors of their students. A teacher’s genuine, contagious passion for their subject cannot be replicated by generative AI. Many of us can attest to how the personal connection between teacher and pupil, fuelled by a shared enthusiasm for the subject matter, played a crucial role in our educational development. As we navigate this new era of EdTech, the UK government’s priority must be to address the immediate material conditions that are harming students and driving people away from teaching. AI could be an empowering educational tool in a rapidly changing world, but its integration will require a collaborative effort among educators, policymakers and technology developers to uphold thoughtful regulation and inclusive implementation.