Friday, February 25, 2011

The Perils of Standardized Testing

I was recently speaking with a career college professor who described to me his frustration with a class of students he's teaching. Having given them their first exam, a take-home assignment, he had found that only a couple of the students had written adequate papers. So he returned the students' papers, having duly marked them up with comments, and asked them to revise their submissions and turn them in the following week.

This brought to mind something I read recently, that scarcely more than a third of college graduates can successfully write an essay comparing two newspaper articles. "That's what I asked them to do," the professor said. Not two articles, but two pieces of poetry. "I can't simply ask them to analyze one poem, because most of them will search the Internet and copy what they find. This is the only way I can be sure I'm getting their own work."

A few months ago I read somebody's short-sighted retort to critics of high-stakes standardized testing. He gave an example of a question from a standardized test and asked, "Isn't this exactly what students are supposed to be learning?" But no, although I'm willing to allow for much greater emphasis on basic comprehension in the early grades, reading a few paragraphs and answering a series of multiple choice questions pertaining to that block of text is not a fair test of what students should be learning. High school students should be able to write essays comparing two newspaper articles or poems. College students should not be stumbling over such a basic task. For most college graduates to still struggle? Something is very wrong.

Two things, perhaps. First, it suggests that standardized tests are focused on the easily measurable, which means concrete tasks and concrete thinking, not more complex, abstract tasks or abstract thinking. If students are unfortunate enough to be in a school district that spends a considerable amount of time "teaching to the test", they may become masters at reading a few paragraphs of text or a simple math problem and then "ruling out the two unlikely answers and choosing the most likely of the two remaining answers", but they aren't being prepared for more complex analysis, problem solving, critical thinking, or reading longer-form materials.

Second, it reinforces the fact that not all students are college material. Some may become college material after a year, two years, or perhaps ten or more years in the working world. Some will be much better served by vocational training, a modern form of apprenticeship, or by simply entering the job market. Even if we ascribe some of the poor analytic performance of college grads to the colleges themselves, if that statistic is accurate it still suggests that about half of the students who presently attend college aren't likely to get much out of the experience - four years of expensive life-postponement. Nobody should view that as acceptable.


  1. No, no. It's four years of partying, not life-postponement. ;)

  2. And let's face it, somebody has to translate those Sumerian tablets. ;)

  3. Testing isn't teaching, but it certainly shows how badly we fail at teaching.

  4. The issue being addressed is whether the manner of testing can make or makes that problem worse.

  5. "The issue being addressed is whether the manner of testing can make or makes that problem worse."

    Which should make for a short discussion. Any time you create (successful) incentives you impact behavior. The message you send when you place all (or nearly all) of your emphasis on these tests is that teachers had better show improvement on the tests.

    The teachers/administrators aren't bad people (necessarily), but they are rational. If you tell me my job depends on the kids doing well on the test - I'm teaching the test.

    The problem isn't the idea of testing. It's what you are testing and how. We don't put essays on the tests. Therefore, we don't emphasize teaching our kids to write. Is that because we don't think composition is important? No, it's because it's too hard to grade essays on a standardized test.


  6. We don't put essays, or even sentence-sized answers, on the bulk of these "high stakes" tests. Or even "fill in the blank".

    It actually would be possible to have students read two short articles or poems and ask multiple choice questions that required a comparison. No, nowhere near as good as having them write an analysis, but better than "read this single article and answer the questions". But we don't do that on the standardized tests, either.