Wednesday, March 14, 2012

The False Premise of Standardized Testing

In the 2008 CAPT test, one editing and revising question asked the following:
What is the best change, if any, to make in the sentence in line 8 (The showdown occured in a local tavern.)?
a. Insert a comma after showdown.
b. Change occured to occurred.
c. Change local to locally.
d. Make no change.
Some grammatical knowledge—or an eye trained by lots of previous reading—would enable a student to knock off A and C very quickly. However, unless he happens to know the spelling of "occurred," a student is reduced to guessing between B and D. Why is this the kind of measurement we want to make more important in our schools?

I heard several students complaining that this year's CAPT had multiple questions that ultimately came down to spelling; is this not a sign that the disconnect between real education and the standardized measurement of it is widening by the day?

My argument isn't that students don't need to know how to spell. They certainly do. One of the hallmarks of a skilled, effective writer is that he or she uses the spellchecker as a tool rather than simply following its guidance blindly. However, do we really believe that the spelling of "occurred" is an effective benchmark of a 10th grade student's learning? And are we ready to double down on this premise and others of its ilk?

If we have 10th graders who lack reading and writing skills, the answer is not to drill that population with spelling exercises. I can think of few things less productive in the attempt to take an illiterate student and make him literate. A student whose background, experience, and schooling have left him insufficiently literate needs creativity, resilience, and independence, combined with a good book taught by a passionate teacher, not an extra lesson on when to double consonants in past tense verbs. But if you tell that teacher that his salary and job security are going to be tied to the test scores of this student, which do you think the student is going to get?

There are certainly parts of CAPT that are focused more on writing and thinking skills than arbitrary knowledge, but they cause a similar problem because they lead teachers to overemphasize test preparation. For example, there's a writing section on CAPT in which students analyze a short story. As preparation, you could simply teach your students how to write analytically and how to use literary evidence to advance and deepen an idea. We already teach that because we believe that looking deeply at a story helps improve a student's analytical and creative skills while also teaching students to create honest viewpoints based on evidence. Hopefully, that focus on crucial skills would also lead to higher test scores.

However, you can't trust that teaching the real skills will be accurately measured by that section, since it offers the kids a mediocre short story and four short-answer prompts in which to demonstrate their skills. Kids who don't read or write quickly enough won't have a chance to develop their best ideas, and the stories typically don't offer enough nuance and depth for a truly interesting piece of writing. So if you want your kids to have high scores, you'll teach them strategies for gaming the maximum number of points on that section. Teaching these test-taking strategies offers the greatest possible chance for overall score improvement but the least fruitful educational experience.

Which do you think is already winning in Connecticut? Emphasis on the importance of tests pushes teachers away from focusing on challenging literature and creative thought about it and towards reductive, gather-the-points practice exercises. Sadly, these practices are already far too common in Connecticut schools, and the lower a school's performance on CAPT, the more pressure there is to shift more of the curriculum towards this reductive test preparation. In this way, I really believe that the CAPT and CMT system is already making the achievement gap worse by teaching our at-risk kids how to take tests rather than the skills necessary for the most interesting and rewarding careers.

As a teacher, I could assign my kids Macbeth and develop assignments that nurture their creativity, presentation skills, and analytical writing skills. Or, I could assign a series of lower reading level short stories of the sort that are used on CAPT and have my students spend class time writing timed responses that hit all the scoring areas on the CAPT rubric.

Right now, I do a whole lot of the first kind of thing and very little of the second, because I have the luxury at this school of knowing that the vast majority of my kids will do just fine on CAPT even if I put a whole lot of emphasis on things CAPT ignores, like creative thinking, presentation skills, and the real-life applications of the literary study of human nature. But if I'm forced into a situation in which I need to show year-over-year test score improvement in order to keep the highest level of teacher certification and 5-10% (or more) of my salary, I'll have little choice but to shift even more of my kids' class and homework time toward reductive test preparation.

Design a test that measures the real life skills we want our kids to have, and I'll happily teach to it. New York Mayor Michael R. Bloomberg said, “This business of teaching to the test is exactly what we should do, as long as the test reflects what we want them to learn.” He'd be right if a standardized test could really reflect what we want kids to learn. Show me a test that really measures the most important, hardest to teach things, and I'd love to teach to it. The reality is, though, that our tests mostly don't measure those things, and even in the limited ways that they do, it's easier and more time-efficient to teach kids how to take the test than to teach them real-life skills that only partly help them in a testing situation. Our politicians don't seem to understand that, and that failure of understanding leads them to take actions that will hurt our students in the long run.

They act as if our tests provide a legitimate measurement of the most important kinds of student learning. If that premise were true, the rest of their actions would be admirable. However, as my students learn (even though no standardized test measures it), a false premise typically carries the rest of the chain of logic to a false conclusion.

Using these tests in teacher evaluations would require an expansion of the CMT and CAPT system to measure every kid, every year, because the only way to know whether a teacher has improved test scores is to track the year-over-year growth of that teacher's students. We already blow seven instructional days over the course of two weeks for the 10th graders just to take the test itself, not counting the instructional time wasted on test prep in the preceding months. If our problem is that kids aren't learning enough, devoting more time to testing across the entire high school moves us away from our goal. It's worse than useless.

The larger the role standardized test scores take in our teachers' and schools' evaluations, the more reductive and alienating the school experience becomes, particularly for the students at the most risk. We need to evaluate effective teaching so we can help struggling teachers improve and weed out the teachers who can't or won't teach effectively. But let's stop pretending that a year-over-year improvement on test scores can tell us whether real learning has taken place.