Researchers from North Carolina State University have designed an artificial intelligence (AI) model that can predict how much students are learning in educational games.
Their paper, Predictive Student Modeling in Educational Games with Multi-Task Learning, was co-authored by James Lester, Distinguished University Professor of Computer Science and director of NC State's Center for Educational Informatics (CEI), and Roger Azevedo, Professor in the Department of Learning Sciences and Educational Research at the University of Central Florida.
The paper’s authors note that in recent years, there has been growing interest in modeling student knowledge in adaptive learning environments.
Predictive student modeling is “the task of predicting students’ future performance on a problem or test based upon their past interactions with a learning environment”.
It is important for tailoring student experiences in a range of adaptive learning environments, such as intelligent tutoring systems and educational games.
AI learns how students learn
According to the university, the researchers’ improved AI model makes use of an AI training concept called multi-task learning, and could be used to improve both instruction and learning outcomes.
Multi-task learning is an approach in which one model is asked to perform multiple tasks.
“In our case, we wanted the model to be able to predict whether a student would answer each question on a test correctly, based on the student’s behaviour while playing an educational game called Crystal Island,” said Jonathan Rowe, co-author of the paper and a research scientist in NC State’s Center for Educational Informatics (CEI).
Crystal Island is a game-based learning environment for middle-grade science and literacy that teaches students microbiology.
“The standard approach for solving this problem looks only at overall test score, viewing the test as one task. In the context of our multi-task learning framework, the model has 17 tasks – because the test has 17 questions.”
The researchers had gameplay and testing data from 181 students.
The AI could look at each student’s gameplay and at how each student answered Question 1 on the test. By identifying common behaviours of students who answered Question 1 correctly, and common behaviours of students who got it wrong, the AI could predict how a new student would answer Question 1.
This function is performed for every question at the same time; the gameplay being reviewed for a given student is the same, but the AI looks at that behaviour in the context of Question 2, Question 3, and so on.
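As a rough illustration, a setup of this shape can be sketched as one set of shared gameplay features feeding a separate binary output head per test question, so all 17 predictions are produced from the same input at once. The class, feature count, and weights below are hypothetical placeholders for illustration, not the authors' actual deep learning architecture:

```python
import math
import random

NUM_QUESTIONS = 17  # one task per test question, as in the paper's setting
NUM_FEATURES = 8    # hypothetical count of gameplay-derived features


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


class MultiTaskSketch:
    """Shared gameplay features feeding one logistic head per question."""

    def __init__(self, num_features, num_tasks, seed=0):
        rng = random.Random(seed)
        # One small weight vector (plus bias) per task; the input
        # feature vector is shared across every task.
        self.weights = [
            [rng.uniform(-0.1, 0.1) for _ in range(num_features)]
            for _ in range(num_tasks)
        ]
        self.biases = [0.0] * num_tasks

    def predict(self, features):
        # Returns P(answers correctly) for every question, all computed
        # from the same gameplay feature vector.
        return [
            sigmoid(sum(w * x for w, x in zip(ws, features)) + b)
            for ws, b in zip(self.weights, self.biases)
        ]


model = MultiTaskSketch(NUM_FEATURES, NUM_QUESTIONS)
gameplay = [0.5] * NUM_FEATURES  # placeholder feature vector for one student
probs = model.predict(gameplay)  # 17 probabilities, one per question
```

Training such a model jointly on all 17 tasks lets behaviours that matter for one question inform the shared features used by the others, which is the intuition behind the multi-task framing.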
The researchers found that the multi-task model was about 10 percent more accurate than other models that relied on conventional AI training methods.
The study’s first author, Michael Geden, said this type of model could potentially be used in multiple ways to benefit students.
“It could be used to notify teachers when a student’s gameplay suggests the student may need additional instruction. It could also be used to facilitate adaptive gameplay features in the game itself. For example, altering a storyline in order to revisit the concepts that a student is struggling with,” he said.
“Psychology has long recognised that different questions have different values,” Geden said. “Our work here takes an interdisciplinary approach that marries this aspect of psychology with deep learning and machine learning approaches to AI.”
“This also opens the door to incorporating more complex modeling techniques into educational software – particularly educational software that adapts to the needs of the student,” said Andrew Emerson, co-author of the paper and a PhD student at the university.