What’s the Story?
Several years ago, I was watching the evening news and some reporters were covering a story about a new gasoline tax or surcharge. I wish I could recall the details now, but what struck me was the reporting. The main story included a summary of the facts – who did what, when, where and why, followed by some expert analysis by people in government and industry. That whole report lasted about a minute. The basic analysis seemed to minimize fears that people might have about the negative impacts of the new tax.
Then, a second reporter picked up the story, and what do you suppose was happening? Interviews with random motorists. Where? At the gas station, of course. How many times have you seen that report in your life? (Sort of like the clichéd shots of overweight people viewed from the chest down whenever there are news reports about obesity). For the next two or three minutes, they ran a package that had been edited together to show a series of apparently uninformed motorists, who didn’t know the details of the report and hadn’t heard any of the analysis, offering their opinions about what this new tax or surcharge was going to mean for drivers. It’s almost needless to say, but the average person on the street was more inclined to take a negative, worried view, and to offer predictions that did not match the experts.
Well, there you have it. Information, expert analysis, and random uninformed guesswork – the complete report. I offer this little anecdote as a reminder that sometimes the story we see or read in the news is the story the media want to tell; for their purposes, that story doesn’t always need to be subjected to deeper analysis or filled in with enough background to make sense of the present or future. Simply by standing at the gas pump at the right time, you can get as much air time as the economist and the policy analyst.
Now, bringing this back to education reporting, the story of the day is about using value-added measures for teacher evaluation. How often have you seen a story that runs like this:
“Value-added measurement is a technique that some people say will allow for better evaluation of teachers by using their students’ test scores. Here are some parents and politicians who like the idea. Teachers and unions don’t like it. Some studies say VAM can be a useful measure of teacher quality, and some experts disagree.”
It’s certainly balanced, and accurate, but is it complete? I found myself quoted in one of those stories this week, and was reminded once again of the gas station report. Sharon Noguchi, the reporter who interviewed me, was very professional and gave me ample opportunity to describe my experience and my views, by phone and by email. She asked me follow-up questions, and even pushed back at some of my generalizations and forced me to clarify my views. Her story summarizes various viewpoints on the issue, and at least where I’m included, I can say that my views were presented accurately (in part because Ms. Noguchi double-checked before going to print). We spoke for about half an hour to produce those two little parts of the story, so naturally I would have liked more of that conversation to make it into print.
But what’s still bugging me is that I offered a number of sources of better information, and a number of important findings about value-added measurement. While I think I know teaching better than the folks at the gas pump know energy and fiscal policy, I was actually encouraging Ms. Noguchi not to rely on me too much. I suggested that she look into the joint NCME/APA/AERA statement (often cited on this blog). It offers warnings about test misuse, warnings reiterated with more specific reference to VAM by the National Academies last year and the Economic Policy Institute this year. The consensus is clear: VAM based on state tests is too flawed to serve as a reliable measure of individual teacher quality.
I practically begged Ms. Noguchi to report findings from the real experts on educational measurement and statistical analysis, and to her credit, she did talk to one researcher to add that perspective. Still, I keep wondering why I see so little reporting on the weight of the evidence, so little investigation into the claims of VAM advocates. Why is the story about everyone’s opinions, instead of focusing on the ones grounded in a deep understanding of research, psychometrics, and statistics?
Would it be too much to ask for a reporter to present the facts this way: “While some parents say they’d like to see teacher evaluations based on VAM, all of the leading educational research organizations say that such an analysis would likely be flawed or misleading.” Too much news reporting frames our story as a debate between contrary but equally valid opinions, rather than a debate in which one side is factually challenged while the other has the support of far more expert opinion.
“Well, there you have it, Bob. Dr. Jane Smith of the Energy and Economics Institute has just explained why this surcharge will have minimal impact on motorists. Now, let’s send it out to Nick at the gas station, where he’ll ask this guy in the black Chevy what he thinks of today’s news.”