
Michigan’s NAEP Scores: What do they mean? Probably not what you think

32 Percent of Michigan’s 4th Graders Score at 7th Grade Level on the NAEP.

Did you miss that headline last week?  I did too.

That’s because it wasn’t reported. Instead, Michigan’s schools and students were slammed by pundits because just 32 percent were “proficient” on the NAEP, the National Assessment of Educational Progress, commonly called the nation’s report card. To be rated proficient at the fourth-grade level, students must answer seventh-grade questions. If you think that is confusing, or just doesn’t sound right, join the tens of thousands of educators who think the same thing.

It was reported – and is commonly accepted – that Michigan’s schools are failing and falling behind based on the NAEP scores released April 10.  What is not reported is what the NAEP scores measure and, more importantly, what the student performance measures represent.

Scores on the NAEP are reported on a three-part scale, according to the National Center for Education Statistics (NCES), the federal agency that administers the NAEP. Student performance is categorized as “basic,” “proficient,” or “advanced.” What do those words mean to you? With the release of the 2017 results, NCES associate commissioner Dr. Peggy Carr acknowledged that looking at whether students reach “basic,” rather than “proficient,” is a better indicator of whether they are meeting grade-level standards.

Unfortunately, the NAEP scores, which measure fourth- and eighth-grade math and reading, are never reported that way. Proficiency, and the percentage of students who perform “above” grade level, is the coin of the realm. Researcher James Harvey, in his Educational Leadership article “The Problem with Proficient,” reports that the bar is set so high that few nations in the world, like our highest-performing states, would see more than 50 percent of their students score proficient if they took the NAEP.

In Michigan, if performing at grade level were the measure, the percentage of students meeting the benchmark would be much higher. And those state rates don’t vary much from the nation’s:

Charts: Michigan students are near, yet trail, national averages in reading and math proficiency.
Source: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress

We are victims of what researchers have dubbed “misNAEPery”: the misuse of a test that is not intended to measure student achievement growth over time.

Here are some additional considerations regarding the recently released NAEP scores:

  • Small differences between states make rankings problematic: NAEP administers its assessment to a small sample of students in each state, much like public polling. And as with public polling, the results carry a margin of error. Because scores between states are often similar, those margins can make a big difference in the rankings.

For example, in fourth-grade reading, 19 states have results that are not statistically different from Michigan’s average. In fact, in three of the four subject-and-grade combinations, there is no statistically significant difference between the national public average and Michigan’s average. Nor is there a statistically significant difference between Florida’s and Michigan’s scores in eighth-grade reading and math, despite Florida having been heralded as a national “bright spot” by U.S. Secretary of Education Betsy DeVos. Given how many states are statistically indistinguishable from one another, we really shouldn’t be ranking states at all.

  • Improvement in scores over two or more years is not the same as student growth: Because NAEP is administered every two years, many misinterpret changes in scores as student growth. Student growth refers to the improvement or decline of a single student, or cohort of students, measured at two or more points in time. NAEP, however, does not administer the assessment to the same students over time. In fact, the students sampled don’t even take the full assessment; they complete only a portion of it, using a technique known as “matrix” sampling. This frustrates local school leaders, because NAEP’s design prevents individual schools from receiving their own results.
  • Correlation is not the same as causation: Differences in NAEP scores can be due to countless factors, ranging from variations in poverty levels to changes in policy. However, the temptation to jump to conclusions, particularly over short-term changes, is often too hard to suppress. This exact scenario may be playing out in Tennessee. In 2013, the state was lauded as the “fastest improving” in the nation as a result of its massive education reforms in previous years. Unfortunately, that momentum didn’t carry into subsequent years, with 2017 results being described as “disappointing” by local stakeholders.

Given these complexities, should we toss out NAEP altogether? No. NAEP provides states with a general sense of how their students are performing. It can also open the door for further investigation of trends and the areas worth monitoring.

But scoring states, and scaring policymakers and parents with scores and stories that make it appear as though their schools are failing, does a disservice to our students, educators, schools and districts. There is much improvement to be made, and Michigan’s schools must get better if we are to restore our mitten to the economic powerhouse it once was. That improvement is far more manageable if we are honest about our scores and our shortcomings.

Ron Koehler contributed to and edited this article.
