Let’s open with a quick quiz. Can you name a $13 billion enterprise that operates almost entirely without regular, ongoing and actionable customer research?
If you’re in the manufacturing, IT, entertainment, transportation, health care, food service or tourism industries, you may be scratching your head, asking yourself what business could grow to that size without the data necessary to refine and revise product and service delivery.
Not so in K-12 education. Schools process millions of information inputs: reactions and responses to student learning needs, to test results, to parent inquiries and community demands. But there is no standard research conducted among students, parents and teachers that asks, “How’re we doing, and what could we do better?”
We have an opportunity to fix that. Our state is now developing a new accountability plan to fulfill the requirements of the federal Every Student Succeeds Act, which governs the distribution of federal funds for at-risk and special needs students.
The ESSA demands accountability for the resources provided. But in contrast to the previous federal law, No Child Left Behind, it places less focus on student test scores and allows each state to craft its own accountability measures and corrective actions. This new act actually requires states to identify measures of success that are not related to student performance on standardized tests.
The Michigan Department of Education and educators throughout the state are working now to determine which measures to include. Many would like to include those we’ve used for years — grade-point average, graduation rates and supports for at-risk students.
While those are all important indicators of success, and measures that will never go away, they fail to provide objective and actionable performance data on the environment, the effectiveness and the culture of the school experience for students and their families. Nor do they provide teachers the opportunity to objectively evaluate the support they’re given to meet student needs, or the customer’s opinion of teacher effectiveness.
Survey Says: Something New?
School climate surveys aren’t new; many schools use them. But in most, the results are anecdotal, something of a fad that comes and goes with administrative change.
In 2010, Kent school districts conducted grant-funded student engagement surveys of all high school students. We learned a lot. We learned a lot of students are bored in school. We learned most do not know how they will use the content they are learning later in life. We learned most have many of the supports they need to be successful — strong support from staff and engaging activities that keep their interest even if they struggle with math.
The data gave building administrators the opportunity to discuss with staff their shortcomings. “Remember that survey we did last spring? How do you think students responded to the question about their enthusiasm for English 11? What would you say if just 27 percent said they were engaged and interested? What can we do to make that number higher?”
Harvard education researcher Tony Wagner says students can watch clips of teachers in action and, in just a few seconds of viewing, accurately identify those who are effective teachers and those who are struggling.
Virtually all of the educator evaluation programs are built on administrative and peer review. Why not the customer? Why not gather feedback from students and their parents on their level of engagement with the content, and the support they receive?
More important, the school climate is something within the control of administrators. School administrators do not control the state assessment. It is set by the MDE and the Legislature. They do not control the performance measures (cut scores) set to determine proficiency. So, while they do control teaching and learning, they do not control their measure of academic success on the state assessment, now the M-STEP and the SAT.
Moreover, school administrators do not control revenue. The state sets the amount of funding they receive. This makes it difficult for districts to fulfill non-assessment indicators such as student support services, or even the number of Advanced Placement courses offered in the high school.
Let Parents and Students Grade, Too
In addition to indicators of success beyond test scores, MDE is also considering a new A-F grading system for school buildings. This has long been sought by some legislators who believe a simple grading system would be easier for parents to understand, despite the complexity of bringing students of disparate backgrounds, developmental levels and parental support to a common standard of proficiency on a single test on a single day.
No health care system, hotel chain or manufacturer would ask customers to provide a simple A-F grade on their product or service without drilling down far more deeply into the customer experience with their product or service.
Whether we go to an A-F system or not, why not give parents the opportunity to grade their schools? Why not the students? Why not add in teachers, to determine if they feel they’re getting the support necessary to succeed?
Right now, we ask parents and students to vote with their feet, and provide ample opportunity to attend another traditional K-12 school or a charter school if they choose to do so. Why not start much earlier, before they feel their best option is to leave, and survey them annually on their experience in school?
ESSA identifies student engagement as a possible measure. It also provides states the opportunity to try something new. Engaged students learn better than those who are alienated, bored or confused.
Instead of guessing why students are disengaged — or why their parents move them from one school to another — maybe we should just ask, listen and respond.