Time to re-examine state teacher evaluation laws, following legislative vote

How much should test scores count?

The emerging trends in public education these days are hard to miss: widespread teacher shortages, low enrollment in teacher education programs and even teacher walkouts across entire states.

The teaching profession is in a tough spot.

So how did we get here?

Just over a decade ago, Time magazine published its now-infamous cover featuring then-Chancellor of District of Columbia Public Schools Michelle Rhee standing in an empty classroom holding a broom, with the accompanying headline: “How to Fix America’s Schools.”

Sunil Joy

A few short years prior, Rhee had gained national attention for her hard-nosed personnel policies, including the use of student test scores in evaluating, and ultimately firing, poor-performing teachers. The image conjured the idea that our educator ranks were riddled with far too many bad apples: teachers who needed to be swept out of the profession.

It was around this same time that state legislatures across the nation, including Michigan’s, began adopting educator evaluation laws. These laws were largely spurred by federal initiatives like the $4 billion Race to the Top grant program, which encouraged states to adopt evaluation systems that incorporated student test performance as a key factor.

While the intent of these laws was to support the development and growth of teachers, many educators remain skeptical of the notion that a single set of test scores should matter so much in their overall effectiveness ratings (i.e., highly effective, effective, minimally effective, ineffective). This makes sense, as educators often see a disconnect between these scores and tangible steps toward improving their day-to-day practice. In a recent poll of Tennessee educators, for example, fewer than 30 percent believed the information received from statewide standardized exams was worth the investment of time and effort. In Michigan, the figure is even lower: only 20 percent of educators believe this to be true.

Over the last few years, both the federal government and the states have slowly begun scaling back the emphasis on student test scores in educator evaluation. In 2017 alone, 10 states modified the test score requirements in their educator evaluation laws. This included House Bill 7069 in Florida, which eliminated the requirement to incorporate student growth data using state assessments, leaving these measures to the discretion of local school districts.

Senate Bill 122 Honoring Teacher Professionalism

Michigan may soon join this national trend, given the state Legislature’s passage of Senate Bill 122, sponsored by Sen. Ken Horn (R-Frankenmuth). The bill is on its way to Gov. Gretchen Whitmer’s desk, where she is expected to sign it.

Under Michigan’s current educator evaluation law, Public Act 173 of 2015, 25 percent of an educator’s final evaluation rating must be based on student test score data. The remaining 75 percent is derived from classroom observations, in which evaluators, often school principals, examine the daily practices of teachers. Trained evaluators are required to use research-based, nationally recognized observation tools that clearly define the standards and expectations of quality teaching, covering everything from successfully managing classroom behaviors to shaping engaging and relevant lessons. These tools also provide meaningful action steps for educators to get better.

Beginning in the 2018-19 school year, current law requires the percentage of a teacher’s final evaluation rating based on test scores to increase from 25 to 40 percent, with the proportion derived from classroom observations correspondingly decreasing from 75 to 60 percent.
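To illustrate only, the effect of that weighting change can be sketched as simple arithmetic. The function and the 0-100 score scale below are hypothetical conveniences for the example, not part of Michigan’s law:

```python
def final_rating_score(growth_score, observation_score, growth_weight):
    """Combine a student-growth score and a classroom-observation score
    (both on a hypothetical 0-100 scale) into a weighted final score."""
    return growth_weight * growth_score + (1 - growth_weight) * observation_score

# The same hypothetical educator: growth score of 60, observation score of 90.
current = final_rating_score(60, 90, 0.25)    # 25/75 split under current practice
scheduled = final_rating_score(60, 90, 0.40)  # 40/60 split the law phases in

print(current)    # 82.5
print(scheduled)  # 78.0
```

The point of the sketch: for an educator whose test-score measure runs below their observation scores, shifting the weight from 25 to 40 percent pulls the final rating down even though nothing about their classroom practice changed.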

SB 122 delays the shift for one year—keeping the percentage of an educator’s final rating using test scores at 25 percent—as it had been since the 2015-16 school year. By delaying the shift in weighting, the intent is to not only revisit the role of test scores, but to open up a broader conversation on the state’s educator evaluation law as a whole.

There may be no better time than now to reexamine this law. In early 2019, a survey of nearly 17,000 Michigan educators revealed just one in three teachers believed the state’s evaluation process actually improved their teaching. The survey was commissioned by Launch Michigan, a broad education reform coalition of business, educator, labor and community organizations throughout the state.

We shouldn’t stop at examining our own educator evaluation system; we should also look to other states to see what is and isn’t working for them. And what better place to start than the nation’s top-performing state: Massachusetts.

Learning from Massachusetts’ experience

One of the ways Massachusetts sets its evaluation system apart is the flexibility afforded to local educators. The central tenet is that educators must be trusted and honored as professionals in improving their craft, which in turn creates ownership of the process. While the state provides a broad framework for evaluation, it does not prescribe a “one size fits all” model for its local districts. Among the key areas of flexibility in Massachusetts is the role of student academic data.

Unlike Michigan, Massachusetts does not require that test scores be rolled into final evaluation ratings using state-determined algorithms or weights. Rather, teams of local educators have autonomy in identifying the measures for evaluating student growth. In fact, under the Massachusetts framework, educators receive two distinct but linked ratings: one on educator practice and one on impact on student learning. The intention behind this approach is summed up best in a 2016 report from the Center for American Progress, which took an in-depth look at Massachusetts’ evaluation system:

“By keeping the summative performance and Student Impact Ratings separate, Massachusetts has taken a balanced approach. The Student Impact Rating is a check on the system, ensuring that educators do not feel that test scores wholly determine their effectiveness. The framework keeps student growth as a critical goal, but the focus is on other indicators of instructional effectiveness that are more connected to practice.”

While student assessment data may be helpful in pointing out areas for improvement, state tests in particular aren’t designed to tell teachers “how” they should improve. As Massachusetts demonstrates, this is not to say student academic outcomes play no part in the system. It is naïve to believe that any public policy decision is that black and white.

Passage of SB 122 and the broader discussion on evaluation would signal that our state is willing to learn from the past in order to create a system truly focused on educator improvement. As we’ve learned from Massachusetts, such a system must also honor and respect the professional judgment of educators.

After all, it’s the work teachers do with students that leads to learning. While test scores certainly can play a part, it is the daily hard work of teachers that makes all the difference.
