In 2019, artificial intelligence will start continuously assessing students, making exams increasingly unnecessary, say Toby Baker and Laurie Smith.
Each summer, schools and students dread the onset of exam season. Across the country, learning is paused as gymnasiums fill with tables and chairs, and cafeterias and common rooms play host to tears, stress and panic.
The problems with exams are well-known to anyone who’s ever sat through one. Students’ futures are often decided by how they perform in a single exam for a couple of hours on a single day. The concept of coursework was an attempt to tackle this, but is seen as vulnerable to cheating and has been largely replaced in GCSEs in recent years by exam-based assessment.
But what if pupils could be tracked continuously over their school careers - from homework assignments and weekly quizzes, to the speed with which they absorb new material? The need for an exam season, and all the anxiety, stress and mental health issues associated with it, would be gone.
As we look back on the 30th birthday of the GCSE, it’s time for a new approach. In 2019, artificial intelligence will lead to a serious rethinking of the necessity of exams.
Normally we hear about AI in the context of driverless cars or Amazon warehouses, but AI in schools is already a reality. Adaptive learning platforms, such as Century Tech, use algorithmic decision-making to deliver lesson content based on a student’s ability and interests. Teachers use ‘AI teaching assistant’ tools to optimise seating plans for behaviour and learning. Even Ofsted, the education regulator, is trialling AI algorithms to predict which schools are likely to fail inspections.
Recent advances make continuous assessment by AI not only possible, but practical on a large scale - even for subjects without binary “right” or “wrong” answers. Advances in natural language processing mean that AI can analyse the content, structure and style of prose in an essay. Instead of examiners spending hours marking one essay at the end of the year, all essays throughout the year could be marked quickly and independently by AI. Already, 60,000 schools in China (a quarter of the country’s total) are part of an ongoing government-sponsored trial in which essays have been quietly marked by an AI algorithm.
What’s more, AI tools can assess in real-time how students are progressing through a particular subject and provide useful feedback that highlights areas for improvement, rather than just a number describing how much of a topic a student hasn’t mastered. Not only is AI-enabled assessment more flexible, it can be more helpful to students too.
As well as how we assess, AI tools could transform what we assess. School leavers need a wide range of knowledge, skills and attributes. Exams measure only a narrow range of those things - how are parents, colleges, universities or employers supposed to know if students are resilient, independent thinkers? Or if they can collaborate with colleagues, debate or create?
Although still in their relative infancy, such tools can analyse data collected by virtual learning platforms to measure students’ abilities in skills such as problem solving, teamwork and even tolerance. For example, last year’s PISA test results (the OECD’s global ranking of education systems) assessed students’ collaborative problem-solving abilities based on how they responded to questions when working alongside ‘Abby’, a chatbot.
It’s hard to overstate the importance of exams to our current education system. Not only do they define whether a student has done well at school; for government, teachers and headteachers, exam results are also key to defining whether our schools have done well.
A rigorous accountability system has many benefits, but the narrow scope of exam-led accountability has negative side-effects too - from teaching to the test and a narrowing of the curriculum on offer in many schools, to prioritising whole-school results over the needs of individual students and removing or excluding poorly-performing pupils entirely.
Ofsted recently announced that its new school inspections will downgrade the importance of exams in favour of broader measures, including ‘personal development’ and ‘behaviour and attitudes’. This is a golden opportunity for AI tools to prove their value. Just as these tools can transform how we think of individual assessment, they can change how we assess our school system as a whole. We can judge schools on how their students develop, rather than how they perform in exams.
If done right, a shift to continuous AI assessment could allow us to reorganise schools around learning, not league tables. But that opportunity comes alongside great uncertainty about the exact nature of the changes that AI will bring and their wider implications.
There are a host of ethical and practical considerations that’ll come alongside any shift to greater use of AI in assessment. How do we ensure that AI makes decisions fairly and accurately? Who will own and control student data? How will this affect social mobility and inequality? What does this mean for the role of teachers (and their training)? What kind of tech infrastructure improvements do we need?
Nesta’s ongoing research to explore the future of AI in our school system will attempt to navigate this uncertainty in 2019, providing recommendations for policymakers, teachers, academics, technology companies and parents. But one thing’s for sure: in 2019, exams will look increasingly like a relic of the past.
Toby Baker is an Assistant Programme Manager within the Innovation Lab's Education Team.
Laurie Smith is a Senior Researcher in Nesta's Explorations team.