The Programme for International Student Assessment (PISA) is a global standardised test for 15-year-olds in OECD and non-OECD nations, designed to evaluate their academic performance. Using the test results of more than 500,000 students from 72 countries, it aims to provide nations with comparative data to improve their education policies.
The assessment, which takes place every three years, tries to find out whether students can apply what they learn in school to real-life situations. Subjects covered include science, mathematics, reading, problem-solving and financial literacy.
It’s since gained a reputation as the ‘Olympics of education’, with countries like Singapore, Japan and Finland emerging as the main victors in 2015, the most recent year for which results are available.
But to what extent can we rely on these results as a measure of educational development? Controversy has dogged the PISA tests since 2000 – here are some of the top criticisms put forward by academics and educators alike:
1. Limited measurement of educational achievement
PISA measures only maths, science and reading skills and, more recently, collaborative problem-solving and financial literacy. Then there's the format of the test itself: it lasts just two hours and relies heavily on multiple-choice and rating-scale questions. It cannot capture a student's understanding of literacy, democratic participation, soft skills such as teamwork and communication, or aesthetic and athletic talent – elements that are needed for a well-rounded education.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) defines education goals more holistically:
“Education that promotes economic growth alone may well also lead to an increase in unsustainable consumption patterns. The now well-established approach of Education for Sustainable Development (ESD) empowers learners to take informed decisions and responsible actions for environmental integrity, economic viability and a just society for present and future generations,” according to its report, Education for Sustainable Development Goals: Learning Objectives.
2. Problematic methodology
A focal point of criticism aimed at PISA is the statistical uncertainty of its methodology: it tests a small sample of students, then adjusts their results to represent the entire population of 15-year-old students. Critics say it's dangerous to judge an entire country based on PISA test score averages, which can obscure the huge variations that occur regionally within each country.
Writing for The Conversation, Associate Professor Sarfaroz Niyozov and EdD student Wendy Hughes of the University of Toronto said the sample size and the limited number of questions should make us sceptical of PISA rankings.
“The scores therefore include a measure of statistical uncertainty and PISA can only report a range of positions (upper rank and lower rank) where a country can be placed.” Furthermore, the test relies on a small number of questions, which means scores are highly impacted by completion rates.
3. The scores are practically worthless
Diane Ravitch, Professor of Education at New York University, wrote in Huffington Post that despite all the alarm over PISA scores in the US, “Never do they explain how it was possible for the US to score so poorly on international tests again and again over the past half-century and yet still emerge as the world’s leading economy, with the world’s most vibrant culture, and a highly productive workforce.”
Another paper by Keith Baker, a retired researcher at the US Department of Education, entitled Are International Tests Worth Anything? found, “…the nations that scored at the PISA average generally outperformed those scoring above or below average….Mediocre test scores correlate with better, more successful countries than do top scores (or lower scores)”.
Finally, the countries with the highest PISA scores are not those with the highest GDP or the most impressive Nobel Prize records either – if anything, the reverse is true.