
NBPTS on Teacher Evaluation: Getting it Right

October 5, 2011

On Monday, October 3rd, the National Board for Professional Teaching Standards (NBPTS) produced a live webcast to launch its new guide to teacher evaluation, titled Getting It Right.  As a National Board Certified Teacher and someone who has worked on producing a similar policy guide on teacher evaluation (see Publications, above), I tuned in to see what the National Board had to say.  After all, no organization has a clearer picture of what quality teaching really should look like: NBPTS has produced comprehensive and richly detailed teaching standards that provide the foundation to offer twenty-five different board certifications.  The standards are developed and updated by teams of teachers from around the country, who put in many days of work individually and collectively to provide the best available standards for professional teaching practices.  NBPTS has certified over 91,000 teachers and counselors who have demonstrated their effectiveness to trained evaluators, and whose effectiveness has been borne out in research commissioned by the U.S. Congress.

Linda Darling-Hammond and Gov. Bob Wise

Governor Bob Wise, with Linda Darling-Hammond, at the NBPTS 2011 Conference, July 29, 2011. (photo by the author)

The webcast featured the following individuals:

  • Governor Bob Wise, President, Alliance for Excellent Education, Chair, NBPTS Board of Directors (moderator)
  • Joan Auchter, NBPTS Chief Innovation Officer (presenter)
  • Brenda Welburn, Executive Director, National Association of State Boards of Education, NBPTS Certification Council
  • Peggy Brookins, National Board Certified Teacher (Florida), member, NBPTS Board of Directors (respondent)

I took some notes as the webcast proceeded, and want to share here some of the more important take-away points.  (The statements in bold at the beginning of each paragraph represent my paraphrases of the speakers and may vary slightly from their exact words).

75% of teachers report that they receive no feedback after a positive evaluation.  I’m not sure what the source for this statement was, but it sounds about right to me (and I’m sure they have a source!).  This statistic reveals a key problem around which I think there is a good chance to build consensus: evaluation should lead to improvement for all teachers.  Right now, there are many people who agree with this idea in principle, but then advocate for evaluation reform as if its main purpose were basic quality control.  If we design systems to identify bad teachers, we may or may not succeed at that goal – but it’s the wrong goal.  Evaluation that helps everyone improve will help identify which teachers need more help (and hopefully lead to them receiving that help) before they become the “bad teachers.”  If we have an evaluation system that produces growth, then over time, the good will crowd out the bad.  I don’t mean that just in terms of numbers of teachers, but even within our own practice.  I would expect that a more robust ongoing evaluation system would help me to displace the weaker parts of my teaching by building more strengths.  If you want to see what happens when evaluation systems are more about compliance, witness Tennessee, where, I learned today, teachers and administrators are leaving the profession thanks to the new evaluation system the state rushed to implement as part of Race to the Top.

It is student learning, not student achievement, that should drive evaluation.  Too often, we all let the term “achievement” serve as a proxy for “learning” – but in fact, learning is a process and achievement is the measured outcome of some of that learning.  When Governor Wise said this, I think (I hope) he was not even referring to the excessively narrow “achievement” measurement of state tests, but more broadly, to all sorts of student achievement.  However, the ultimate level of achievement is not the whole story of student learning.  The core propositions that guide the National Board are focused on learning, and good teachers should be as well.  What a student demonstrates in terms of achievement does not always have a direct, linear connection to teaching and learning (as it occurs in the confines of a classroom and arises out of a certain curriculum).  Yes, teacher evaluation policies should look at evidence of student learning, but only teacher evaluation policies that recognize the difference between learning and achievement will capture the real complexity of the work we do, and therefore serve a useful purpose.

Have the right people at the table from the start – if you have teachers, you’ll have buy in.  Joan Auchter made this statement with regard to teacher evaluation reforms, but it would be true regarding other education policy as well, and at every step in the process.

Teachers want administrators to know what they’re looking at, judge it fairly, focus on student learning, provide timely and honest feedback and helpful professional development, with a reasonable timeline for correction or improvement.  Peggy Brookins gave this concise and clear statement that sums up the view of so many teachers I’ve known and worked with around this issue.  She later added that when teachers deal with student learning, we have a complex understanding of our students, classrooms, content area, and pedagogy.  If administrators, policy makers, and governments want to engage us in a drive for educational improvement, they must understand that it’s not just about data.  I was using Twitter during the webcast and that particular statement led to this exchange:

(screenshot of Twitter exchange)

When it comes to using data, we need more “drilling down.”  Now, don’t forget what Peggy Brookins said above – it’s much more than data that we’re interested in.  But, when we’re using data, let’s be smart about it.  Joan Auchter gave an example of a school where analysis of schoolwide data revealed evidence of the struggles of immigrant students.  “Drilling down” allowed the school to see that the underlying issues in these struggles were not necessarily matters for the classroom teacher to address, and so the response was school intervention with the parents.  A less astute response might have been that low scores are the problem (rather than a potential indicator of an unknown problem), and that the cause and solution are both a matter of teaching.  Some have taken a liking to the phrase “data-informed” as an antidote to the ubiquitous and unfortunate phrase “data-driven.”  Too often, I see and hear sloppy thinking about correlation and causation (and I’m sure I’ve done it myself).  If evaluation becomes an exercise in uncritical viewing of data, replete with unchecked assumptions about the meaning of data and responses to data, then we’re sunk.

Mobility and high performance also create “accountability” problems for the data-driven.  More sharp thinking and important insights from Peggy Brookins.  Her district has a 40% student mobility rate.  I’m not certain how that’s calculated, but I think it means that in a given school year, 40% of the students will have spent only part of the school year in the district, a combination of late arrivals and early departures.  If you have a simplistic idea about using data to hold teachers accountable in circumstances like that, good luck.  (Tangent alert!)  At their very, very best, test-based accountability or evaluation systems suffer from huge gaps in information: they have a crippling inability to identify and control for all of the factors that affect students, and they are saddled with tests that do a poor job of measuring a narrow and over-simplified sliver of what students know and what teachers teach.  Now we’re going to add in 40% changeover in the teacher’s classroom, and for each change that contributes to the 40%, no viable way to measure or distribute the effects of different schools, teachers, curricula or classmates.  Furthermore, teachers know that changing the make-up of a class, sometimes even by a single student, can affect the entire class.  “Data-defenders” will dismiss these concerns, either by claiming that these variables can be controlled for, or are not statistically significant (though of course there’s no study to confirm that).  Or, they might try claiming that test-based data would only be a portion of the evaluation (though they cannot defend the misuse of the test or their arbitrary selection of a percentage weight of testing in evaluations).

Another challenge mentioned by Peggy Brookins involves using test data to measure growth in the highest performing students.  If standardized tests are your measuring stick, you can’t “add value” for a student who’s at the top already.  We can certainly add real value by meeting the students where they are and finding the appropriate challenges for them.  We can even demonstrate student learning in these cases – but not with state tests.  So, if evaluation reforms aren’t flexible enough to handle all sorts of evidence of student learning, then these high performing students and their teachers will be ill-served by the change.  Joan Auchter agreed: “If they’re going to hold teachers accountable for growth of students at high end of spectrum, there must be a measure for those students.”

Getting It Right is a “tight/loose” report.  Tight/loose and loose/tight are descriptions of management styles or strategies for organizations.  The former is an approach that sets strict controls on the inputs, conditions, or the starting point of work, but allows for flexibility and variability in the final product.  The latter approach allows flexibility on the starting point, inputs, and processes, as long as the final result conforms to certain expectations.  I think tight/loose is the best approach for education policy recommendations, and that’s what we have here.  The report is tight on beginning points: evaluation is essential, and should be focused on improving all teaching, with a focus on student learning, and appropriate use of data (using the term “data” to mean any observable information about student learning).  It is vital that teachers be full partners in the design and implementation of evaluation systems.  In all of these regards, there is little room for compromise.  The recommendations become “loose” when applied to the specific context.  Schools, districts, and states all have significant variables in their population, challenges, infrastructure, and resources.  An evaluation system that works well for a small rural district or a suburban elementary school district might not be the right tool for Los Angeles Unified School District, and vice versa.  The final output then is “loose” while the guiding principles are “tight.”  Coincidentally, I’ve found over the years that my best teaching (in high school English classes) occurs when I take the same approach.  Give students tight expectations and requirements up front, then get out of the way and see how creative they can be in showing their understanding within the given framework.

I hope that many districts and states will look into Getting It Right for teacher evaluation.  And of course, they’re all welcome to study our 2010 Accomplished California Teachers policy report on teacher evaluation, too.
