What’s ‘Evidence-Based’ When It Comes to Practice?
Guidance to ensure you’re buying more than empty phrases to undo long-standing educational inequities
BY ELAINE M. RADMER/School Administrator, January 2022


Elaine Radmer, chair of the educational leadership and administration department at Gonzaga University in Spokane, Wash., believes educators need to gauge carefully what it means for an academic program to be evidence-based. PHOTO COURTESY OF ELAINE RADMER
 
In conversations with teachers and principals, you have likely heard statements such as these: “This program is evidence-based; let’s buy it for our English learners” and “We need to find evidence to guide our virtual learning choices.”

These days, the phrase evidence-based has become common in our pursuit of equitable school outcomes for different groups of learners. But few stop to consider what exactly the phrase means.

How do you define this phrase?

The phrase evidence-based was popularized by the Every Student Succeeds Act, or ESSA, the federal policy that replaced No Child Left Behind. Both policies promoted research evidence from scientific experiments. In an experiment, students are randomly sorted into two groups. One group uses the standard curriculum or practice, and the other group receives the experimental program or intervention. Everything else is kept as similar as possible, so any difference between the groups at the end of the experiment can be attributed to the program or intervention.

Experiments in schools are rare because students usually can't be randomly assigned to their class schedules. So instead, educational research frequently uses quasi-experimental designs. In a quasi-experiment, intact classes of students may be randomly sorted into two groups, or comparison groups may be formed in some other way.

For example, to study the impact of coaching on science teachers’ effectiveness, researchers examined the state test results of science teachers who worked with an instructional coach compared to teachers who received no coaching. Care was taken to match teachers on important traits, like years of experience. The previous test scores of the students were considered, as well, to ensure the two groups were similar at the start of the study.

Experimental and quasi-experimental research is intended to reveal cause-and-effect relationships between the program/intervention and student outcomes. Under the federal policies, the highest tier of evidence comes from these research designs. But causal research in education or other social settings is difficult. Schools are not sterile laboratory settings where scientists can control everything but the program/intervention they want to study.

Quite the opposite is true, as schools are highly complex social systems in dynamic settings. When students move or change schools, even a well-designed large-scale study is weakened. For this reason, the federal policies also allow evidence from correlational studies. A correlation shows a relationship: as one variable changes, another variable changes as well. But a correlation alone cannot establish that one variable causes the change in the other.

ESSA also went one step further, allowing a logic-based argument that an experiment or quasi-experiment might show an effect sometime in the future. Educators’ common use of the term evidence-based can be traced to ESSA, where the term appeared dozens of times. In the federal policies, evidence-based means that a program/intervention has shown a positive effect on student outcomes in a scientific study, ideally with causal research but perhaps with a correlation or even an appeal for future research.

However, the meaning is not consistent in everyday usage. One popular program has only one small study that provides evidence of its effect. But the program’s status as evidence-based is accepted as common sense by educators. Other programs may offer no details about the research conducted on the program, so there is no way of knowing whether the program is evidence-based or not.

What should leaders do?

Every time they hear the phrase evidence-based, school decision makers ought to clarify what is meant. Was the program’s impact studied with scientific research? If so, you should be able to access a research report for your school district to consider. By studying details about the research design and procedures, your district team can evaluate the validity of the findings and whether the program is likely to positively impact your setting.

Not all scientific research is created equal. The details of the procedures matter a great deal to the conclusions that can be drawn. Be sure to scrutinize how the research was conducted and judge whether the findings might have been influenced by something other than the program or intervention. A set of seven guideline questions (see box above) will help you evaluate the validity of research findings.

Likewise, even well-researched programs are not right for every setting. External validity refers to the likelihood that causal research findings would apply to different settings or people. A rule of thumb is that findings only extend to the population from whom the sample was drawn. If the participants were from only a single state, it might not be appropriate to generalize to the nation.

As you evaluate research, analyze the key features of the setting and the participants, then decide whether the findings would reasonably apply in your context. If you work in a rural area with migrant families, a study from an inner-city setting with place-bound students is likely not applicable. Lastly, as you seek to close racial, cultural and linguistic achievement gaps, look for research that is conducted in places and with populations that are similar to yours.

A recent book, Common-Sense Evidence by Nora Gordon and Carrie Conaway, has more tips on how to judge the relevance and credibility of research. The authors also outline how to establish a protocol for studying the data in one’s own district for evidence-based practice.

How can local evidence be helpful?

In addition to the research evidence the federal policies promoted, we need to understand our settings and monitor what is happening in them when programs are implemented. Ongoing inquiry with real-time data and observations can reveal what is working and where growth is needed.


Chuck Salina and colleagues at Gonzaga University have written books about how leaders transform schools with systemwide, real-time data monitoring. In their school improvement model, which addresses educators’ belief systems as well as their actions, leaders establish systems of supports to meet every person’s needs. Their Coaches’ Handbook, available at no cost, includes activities to help systematically map the programs and interventions being used in each school for students with different needs.

In their model, a schoolwide team of educators works together to continuously monitor data and match students with academic, social-emotional and behavioral supports. This team strategizes to help every classroom teacher with his or her students, thus avoiding lost instruction time and promoting trust among teachers and students.

What other evidence can be valuable?

Leaders who are planning systematic supports for students and teachers must consider not only the interdependent parts within a school system but also the environment surrounding the school. We can’t work toward closing achievement gaps while ignoring the neighborhoods that students walk through to reach school every morning.

Research from other fields, such as sociology, can help educators consider the larger context of society. Schools do not exist in a vacuum. They reflect socio-political histories and dynamics. Over time, some people have been allowed to live and accumulate wealth in upwardly mobile neighborhoods while other people were judged to be unacceptable and denied opportunities to accumulate wealth. 

Research has documented how Black veterans could not access the GI Bill benefits that helped create the white middle class after World War II. Even current studies of real estate practices show that people of different races are still steered into different neighborhoods.

Looking at research on these social patterns can help educators understand how children accumulate quite different experiences. Did you know that research influenced the Brown v. Board of Education decision? Shown a doll with white skin and another with black skin, children were asked which doll was good and which was bad. The children overwhelmingly chose the black doll as bad. Then the children were asked to pick the doll that looked like them. Video recordings showed the conflict African American children felt as they then chose the black doll. Did you know this research was replicated more recently, and children still show a bias toward whiteness? In our society, even young children are socialized to perceive people with different traits in certain ways.

Working toward equity in education requires awareness of these broader patterns across our society. Historical trends in other societal outcomes may lead us to see test-score achievement gaps in a new light and then reevaluate our response. 

Educators may recognize their own need to learn more about students’ lived experiences, including at school. Street Data: A Next-Generation Model for Equity, Pedagogy, and School Transformation by Shane Safir and Jamila Dugan suggests that administrators shadow a student through a school day (with permission, of course). The authors provide many more suggestions to help educators be informed by students and community members — important evidence for educators working to promote equity.

Toward Understanding

The ideals of the federal policies promoting evidence-based practice — to leave no child behind and to ensure every student succeeds — frame a moral imperative for district leaders. But evidence-based practice has to mean more than empty phrases or reliance on a program’s reputation if long-standing educational inequities are going to be undone.

Research evidence, local evidence and input from many voices all are valuable. The goal is for educators to learn together from multiple sources of evidence to better understand and serve all their students.

ELAINE RADMER is an associate professor and chair of the department of educational leadership and administration at Gonzaga University in Spokane, Wash. Twitter: @EvidenceEd


Additional Resources

Elaine Radmer suggests these related books.

»Common-Sense Evidence: The Education Leaders’ Guide to Using Data and Research by Nora Gordon and Carrie Conaway, Harvard Education Press

»Powerless to Powerful: Coaches’ Handbook by Chuck Salina and Suzann Girtz 

»Powerless to Powerful: Leadership for School Change by Chuck Salina, Suzann Girtz and Joanie Eppinga, Rowman & Littlefield

»Street Data: A Next-Generation Model for Equity, Pedagogy, and School Transformation by Shane Safir and Jamila Dugan, Corwin

»Transforming Schools Through Systems Change by Chuck Salina, Suzann Girtz and Joanie Eppinga, Rowman & Littlefield