Date of Publication: 04 March 2019
Most of us have a funny relationship with artificial intelligence.
Hear those words and our minds skip to science fiction. Some unimaginable future where robots are taking over the world. Fear. The unknown.
And yet most of us engage with AI every day without even noticing.
Every time Gmail tries to finish your sentence for you. Every time you ask Alexa to turn the music up. Every time Amazon recommends the next product you should buy.
Probably sounds familiar. But it’s unlikely that any of these moments prompts the same fear or futuristic thinking that the words ‘artificial intelligence’ can evoke. There’s a clear tension between the everyday reality of AI and our fears about the future, fears that become heightened when AI and education are mentioned in the same sentence.
‘…there are real risks and hazards with AI in Education… There are hazards too in driving a car, or taking exercise or playing sport. But this does not and should not stop us from driving or playing. The benefits of AI, if we go into it with eyes wide open… far outweigh the downsides, and we will be much better placed to mitigate for the drawbacks if we start thinking and planning now. A glorious new world of deep education awaits.’
AI is scary in parts. It does come with risks. But AI could also bring a ‘glorious world of deep education’ if we let it; if we pursue it in the right directions with the right parameters around it. Too often we let panic or fear become our first response. Students protest against AI in the classroom – perhaps quite rightly – when they think computers are going to replace their teachers or affect the quality of their education.
In reality, the shift towards AI in the classroom is in most cases much gentler, much more down-to-earth. In the same way that Spotify playing us music we like is, actually, quite helpful, AI in education is likely to make learning gradually more effective. To give a few examples from Nesta’s report:
- Students who crumble under exam pressure might no longer be disadvantaged; instead they could be assessed continuously, their future success resting on their overall performance rather than one sweaty, stressful hour;
- Administrative burdens could be automated and lifted from over-stretched, under-resourced teachers, freeing them up to give the best of themselves to students;
- Students could have access to some of the benefits of personalised learning, tailored to their own needs, while still learning in a socially stimulating environment alongside their peers.
The machines are not taking over. Not yet, anyway. Instead, AI helps to solve some tricky educational problems; helps educators flourish; helps students reach their potential.
There are, of course, risks. We don’t know what the future holds. At the end of last year, the same Sir Anthony Seldon quoted above led the founding of the Institute for Ethical Artificial Intelligence in Education. Initiatives like this are needed. We need to think about ethics. Here at Oxford Summer Courses we take any potential risks to our students seriously.
And, as with all teaching practices, if AI in the classroom is to work well, it must be well designed and underpinned by evidence. Students will walk away when their learning is compromised. There must be robust pedagogical plans. Effectiveness must be proven. We can’t get carried away with the tech without asking questions. Nesta’s report is clear on this, and we wholeheartedly agree.
But here at Oxford Summer Courses we believe the future could be glorious. We’re excited about ed-tech. We want to embrace change.
We’re about learning and students first, everything else second. But if tech can help our students reach their potential, we’re all in.
T. Baker and L. Smith, Educ-AI-tion Rebooted? Exploring the future of artificial intelligence in schools and colleges, Nesta, London, 2019, p. 4.