“Since we do not know what the job market will look like in 2030 or 2040, already today we have no idea what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they’re forty,” writes Dr. Yuval Harari in his critically acclaimed book, Homo Deus: A Brief History of Tomorrow.
Harari’s observation echoes what teachers have been saying for years: while the rest of the world evolves, classroom learning remains much the same as it was 50 years ago, putting students at a significant disadvantage.
Education technology (EdTech), artificial intelligence (AI) and other technologies of the Fourth Industrial Revolution have the potential both to bridge and to widen this gap.
Artificial intelligence in education
Industry experts and academics predict that AI can:
- overhaul student assessment systems
- automate administrative tasks, such as grade-keeping
- analyse the steps students take to learn subjects and solve problems
- create bespoke lessons to address individual learning needs
- instantly identify students who lose motivation or focus
- personalise both individual and classroom learning
This is just the start of a laundry list of possibilities, and schools and universities around the world are catching on.
Cornell and MIT are among the household names in higher education offering AI and robotics summer programmes to middle and high school students.
Pennsylvania’s Montour School District recently added an AI programme to its middle school curriculum.
India’s Central Board of Secondary Education added AI as an optional subject for students in classes eight through ten.
Codemao, a coding education company based in China, recently launched the AI Double Teacher Class project, which supplies an online, AI teacher in programming classrooms to reduce teachers’ workloads and help correct student work.
A high school in Hangzhou, China, is also using facial recognition technology to monitor students’ focus and motivation levels.
But such schools are the exception rather than the rule. Few institutions have fully implemented AI in the classroom, perhaps in part due to the fears and misconceptions surrounding this emerging technology.
The dark side of the machine
AI and similar technologies may be the key to modernising our classrooms and preparing students for a new type of workforce. Ironically, AI could also push humans out of that new workforce entirely, perhaps undoing any potential progress.
The Future of Employment, published in 2013 by Oxford researchers Carl Frey and Michael Osborne, estimates that 47 percent of US jobs are at high risk of computerisation.
“The crucial problem is creating new jobs that humans perform better than algorithms…Very soon this traditional model will become utterly obsolete,” writes Harari.
AI also has the potential to widen, rather than close, the inequality chasm. In terms of education, some fear that affluent students will receive more human interaction in the classroom, while underprivileged students will be offered a largely automated learning experience.
In more general terms, it’s important to remember that control of AI development already rests in the hands of billionaires. The corporations that own the algorithms will only continue to gain power and wealth, “creating unprecedented social and political inequality,” according to Harari.
Don’t fear the algorithm
Grim predictions such as these, though relevant, are only part of the bigger picture of a shared future with AI.
In Homo Deus, Harari posits that humans themselves are “organic algorithms” which aren’t so different from artificial intelligence: “…exactly the same mathematical laws apply to both biochemical and electronic algorithms.”
Besides, most of us interact with and rely on AI daily. Learning a language with Duolingo? Surfing Netflix for a new show to binge? Googling an answer to a question? You’re using artificial intelligence.
Still, when most people hear the term “AI”, they picture self-aware robots terrorising an obsolete human race.
Artificial intelligence is interdisciplinary
Much of the fear surrounding artificial intelligence stems from misunderstanding. Depictions of AI in popular culture only fuel that fear. Take 2001: A Space Odyssey, for example, or the discrimination toward synthetic humans in dystopian, futuristic video games like Fallout 4.
There are actually four main types of AI, which Elon Musk places in two distinct categories: “general machine intelligence” and “case-specific machine intelligence”, the latter of which he believes is not a “species-level risk.”
Case-specific machine intelligence, including reactive and limited memory AI, interacts with the world in the present moment using short-term memory systems. Deep Blue, AlphaGo and Google’s self-driving cars are examples of these.
General machine intelligence, including theory of mind and self-aware AI, can understand human emotion and form representations of other organisms and objects. In 2014, Stephen Hawking warned this type of AI, once fully developed, “could spell the end of the human race.”
Resistance is futile
Though some of the biggest names in tech, including Hawking and Musk, make good points about advanced AI, don’t go conjuring doomsday images of a robot apocalypse just yet.
Even general machine intelligence is still rudimentary at best, and hundreds of AI researchers remain sceptical that general AI will be perfected within the next 50 years. This means we have some time to establish ethical boundaries for this emerging technology.
A key component of that involves teaching students about AI early on. “Young people need to be prepared to engage in dialogue about technology that is shaping their future,” write Tom Vander Ark, tech expert and co-founder of Getting Smart, and Kyle Barriger, a math and statistics teacher at Castilleja School.
Like all things, AI has its pros and cons, but it’s here to stay. Spreading fear of a future that may never materialise ignores the ways AI can benefit, and already has benefited, society.
Brett Becker, assistant professor at University College Dublin’s School of Computer Science, nicely sums up this immensely important ethical debate:
“We have a new responsibility to ensure that society as a whole has sufficient AIEd literacy – that is, enough to ensure that we use these new technologies appropriately, effectively, and ethically.”