Ignoring PISA Results Could be a Mistake
by Stuart Singer, The Teacher Leader
In an essay published in the Outlook section of the Washington Post, John Sener concludes that by making 18 of 20 free throws in a gymnasium he has, using the same criteria applied by analysts of standardized testing, proven himself a better basketball player than Michael Jordan. By similar logic, he dismisses the poor performance of American students on the PISA test as irrelevant.
When in doubt, ridicule
When my former school posted outstanding scores on the state’s standardized tests, I was surprised by the number of questions other educational leaders raised about our “ethics.” Mel Riddile once explained to me that when you have poor outcomes you have two options: work harder and smarter, or find a way to bring the competition down. Unfortunately, the latter approach appears to be the one favored by Mr. Sener.
His argument is that standardized tests in general, and the international PISA test in particular, are inaccurate indicators of the quality of a country’s educational system. He begins with sarcasm and then drifts into the surreal.
“Once you truly understand the awesome power of test scores, you will embrace them, as I have done — especially after realizing how standardized testing proves that I am a better basketball player than Michael Jordan.
“Don’t laugh; I have the test results. I read something in a blog somewhere about how MJ recently made 16 out of 20 free throws in a friendly shooting contest. Pretty good, but I thought I could do better. So I went to my local gym and practiced and practiced until I achieved my aim: 18 out of 20 free throws! I’ll send you the video, if you like. (Or you could do what most people do with PISA scores and simply take my word for it.)”
Making the basket; missing the point
Based on his free-throw shooting (real or otherwise), Mr. Sener reaches several conclusions about the PISA test scores in the United States.
“You may argue that it’s not a fair comparison, but that’s what’s so great about this — simply use the same rules we apply to judging PISA scores, and it’s perfectly fair. So what if it’s not a head-to-head competition? PISA’s not a head-to-head competition. The students take the tests at different times in different places under different conditions. Heck, they take the reading test in different languages.”
His second explanation of the poor performance of U.S. students is their lack of interest.
“…what makes you think that American students take PISA seriously? When I tested my teenage son’s knowledge of the PISA exam, he just looked at me quizzically, since he’d never heard of it…Do you really believe that every student who takes the PISA has the same amount of practice?”
To assess for yourself whether increased practice would affect the outcome of U.S. students’ scores on the PISA tests, go to http://pisa-sq.acer.edu.au/.
Not all air balls
Mixed in with the misguided basketball analogies, Mr. Sener does make some excellent points that should be emphasized.
“Standardized tests don’t measure most skills, yet opinion leaders and policymakers constantly tell us how America’s education is going down the toilet based on those scores...There is no place in standardized tests for creativity...You would be wise to ask these questions, even though standardized tests don’t care about curiosity, either.”
Ignored problems do not go away
There is no question that standardized testing fails to answer every question about how to measure learning and good teaching. I have long argued that the Standards of Learning (SOL) exams given in my state (VA) did not indicate mastery of a subject and that the method of administering the tests was poor. But I also knew that, though imperfect, this new accountability was a step in the right direction. Prior to such tests there were virtually no quantitative measures of the relative performance of students from classroom to classroom, school to school, or district to district. The results revealed discernible patterns that, if used correctly, could be of great value.
While this standardization fell short of optimal precision, it did offer critical insights into the quality of teaching. In every school the staff forms subjective conclusions about which teachers are effective and which are not. During the ten years I observed SOL testing in Virginia, the results of these exams closely matched those informal evaluations. Backed by substantial data, the students of certain teachers routinely outperformed those of others. While such statistics can be and were misused, they provided a limited amount of quantitative evidence of student comprehension, of student weaknesses, and of the quality of instructors’ work.
These outcomes alone were not enough. Testing methods need to be improved to better reflect actual knowledge acquisition and to demonstrate legitimate understanding of a wide range of material. The process is still in its infancy and far from a finished product, but the potential for improvement is there if the willingness to keep an open mind is maintained.
But simply ignoring any measurement that indicates a serious problem in American education is reckless. A country where more than three of every ten students drop out of high school and only 30% attain a college degree is hardly in a position to dismiss a poor global performance with sarcasm and ridicule.
Note: At the high school level, Virginia administers eleven end-of-course (EOC) exams, which are used both as barriers to graduation and to calculate adequate yearly progress (AYP). Only a few states use EOC exams for accountability purposes and as barriers to graduation.