
Understanding Educational Research

In a recent interview with Adrianne Meldrum, Nathaniel Hansford shared his journey as an educator, teaching across multiple countries and working with students with diverse needs. From classrooms in Korea to behavioral schools in England, Nathaniel’s experiences shaped not only his approach to teaching but also his understanding of how research informs educational practices.

Learning Across Borders: Nathaniel’s Teaching Journey

Nathaniel began his career teaching English as a second language in Korea, where he quickly discovered the power of engagement. He learned that even direct instruction can be made more effective with playful, interactive elements.

In Northern Canada, Nathaniel worked with communities deeply affected by poverty and historical trauma, reinforcing the importance of holding high expectations while providing additional explicit instruction and scaffolding.

“If a student is struggling in basketball, you don’t lower the net—you lift them up. The same is true for academic expectations.”

These experiences highlight a crucial connection: teaching practices are most effective when informed by research and evidence, adapted to the context and needs of the learners.

Research-Based vs. Evidence-Based: What’s the Difference?

Educators often hear terms like research-based and evidence-based tossed around, but what do they really mean?

Nathaniel explains that these terms, while widely used, are surprisingly ambiguous.

Research-Based: Inspiration from Research

  • Definition: A program, strategy, or method is called research-based when it is inspired by existing research.
  • Key point: Almost anything can be labeled research-based, even if it hasn’t been rigorously tested.

    Evidence-Based: Tested and Verified

  • Definition: An evidence-based practice has been scientifically tested, typically through studies that compare outcomes for a group using the method versus a control group that does not.
  • Key point: Evidence-based approaches are supposed to demonstrate measurable improvement in learning outcomes.
     
    However, Nathaniel notes several challenges and limitations:

    1. Varying standards: Different organizations use different criteria for labeling something evidence-based. For example:

  • Some require multiple experimental studies with large effect sizes.
  • Others, like the What Works Clearinghouse, may consider a single study with a small positive effect sufficient.

    2. Publication bias: Studies with positive results are far more likely to be published, while null or negative results often remain unpublished.

    3. Researcher bias: Researchers may be hesitant to publish findings that contradict their previous work or career focus.

    4. Data manipulation or misinterpretation: Even when studies are published, claims of “evidence-based” can be misleading if improvements are measured in ways that don’t truly reflect effectiveness.

    “Evidence-based, in theory, is better than research-based. But there’s no universally accepted definition, and it can still be misused.”

    Why This Matters for Educators

    Nathaniel’s explanation highlights why educators need critical thinking when evaluating educational programs and strategies:

  • Look beyond labels like “research-based” or “evidence-based.”
  • Examine study design, number of studies, and effect sizes.
  • Consider context: Does the research apply to your student population and classroom environment?

    Understanding these nuances allows teachers to make informed decisions, bridging the gap between educational research and practical classroom application.

    Making Sense of Research: The Power of Meta-Analyses

    After clarifying the nuances between research-based and evidence-based practices, Nathaniel Hansford delved into another key tool for educators: meta-analysis.

    “I want to credit John Hattie for this,” Nathaniel says. “He popularized the idea of studying studies, and it really opened my eyes to its importance in education.”

    What is a Meta-Analysis?

    A meta-analysis is essentially a study of studies. It combines the results of multiple rigorous research projects on a similar topic to produce a more comprehensive understanding of what works.

    Factors like individual teacher effectiveness, classroom environment, and student differences can dramatically influence outcomes. Evaluating a single study in isolation can be misleading.

    How Meta-Analyses Help

    Meta-analyses aim to address these inconsistencies by:

  • Aggregating results: Combining data from multiple studies to calculate an average effect.
  • Adjusting for sample size: Giving more weight to studies with larger, more representative samples.
  • Reducing bias: Offering a more objective overview than relying on one researcher’s interpretation.
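The averaging step above can be sketched in a few lines of code. This is a minimal illustration with made-up study data (not from the interview): each study contributes its effect size in proportion to its sample size, so a large study pulls the overall estimate more than a small one.

```python
# Minimal sketch of how a meta-analysis averages effect sizes,
# weighting each study by its sample size (illustrative data only).

studies = [
    {"name": "Study A", "effect_size": 0.55, "n": 40},
    {"name": "Study B", "effect_size": 0.20, "n": 400},
    {"name": "Study C", "effect_size": 0.35, "n": 120},
]

def weighted_mean_effect(studies):
    """Sample-size-weighted average effect size across studies."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["effect_size"] * s["n"] for s in studies) / total_n

unweighted = sum(s["effect_size"] for s in studies) / len(studies)
print(f"unweighted mean: {unweighted:.3f}")            # 0.367
print(f"weighted mean:   {weighted_mean_effect(studies):.3f}")  # 0.257
```

Here the small study with the big effect (Study A) no longer dominates: the weighted average (0.257) sits well below the naive unweighted mean (0.367). Real meta-analyses typically use inverse-variance weighting rather than raw sample size, but the principle is the same.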

     
    “It’s like a blurry image of what works,” Nathaniel notes. “But it’s the best tool we have because there isn’t a better one.”

    By understanding how meta-analyses work—and their limitations—educators can better navigate the complex landscape of educational research, using evidence to guide classroom practice while adapting strategies to their unique students and settings.

    Understanding the Limitations of Meta-Analyses

    While meta-analyses are powerful tools for synthesizing educational research, Nathaniel Hansford emphasizes that they are far from perfect.

    One of the biggest issues is assessment variability. Different studies often use different tests to measure student outcomes. In some cases, researchers even create their own assessments. While these custom tools are valid in context, studies that “teach to the test” often report significantly higher results than studies using standardized measures.

    “It’s very difficult to compare those results directly,” Nathaniel notes. “The criticism of meta-analysis is that you’re comparing apples to oranges. Each study is so unique.”

    Other factors that can complicate meta-analyses include sample size, age differences, and location—highlighting that educational research is rarely a one-size-fits-all solution.

    Using Moderator Analysis

    To address these differences, researchers use moderator analysis. This approach breaks down the meta-analysis across multiple variables, such as age, type of assessment, or class size, helping to identify trends under specific conditions.
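A moderator analysis can be sketched as a simple group-by over the studies in a meta-analysis. This toy example (invented data, not from the interview) splits studies by assessment type, echoing the point above that researcher-made, teach-to-the-test measures tend to report larger effects than standardized ones:

```python
# Minimal sketch of a moderator analysis: average effect sizes
# within subgroups defined by a moderator variable (made-up data).
from collections import defaultdict

studies = [
    {"effect_size": 0.62, "assessment": "researcher-made"},
    {"effect_size": 0.58, "assessment": "researcher-made"},
    {"effect_size": 0.31, "assessment": "standardized"},
    {"effect_size": 0.27, "assessment": "standardized"},
]

groups = defaultdict(list)
for s in studies:
    groups[s["assessment"]].append(s["effect_size"])

for assessment, effects in sorted(groups.items()):
    print(f"{assessment}: mean effect = {sum(effects) / len(effects):.2f}")
```

Reporting the subgroup averages side by side (here 0.60 vs. 0.29) makes the moderator's influence visible instead of letting it wash out in a single pooled number.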

    Nathaniel emphasizes that while education research can never be perfectly precise, meta-analyses provide the best available tool for evidence-informed decision-making.

    Finding Reliable Methods and Strategies

    So how can educators separate well-supported methods from trendy but unproven ideas? Nathaniel recommends:

    1. Look for meta-analyses: If a pedagogy or program has undergone a meta-analysis, it’s likely been rigorously studied.

    2. Check effect sizes: An effect size at or above John Hattie’s benchmark of 0.40 generally indicates a meaningful, replicable improvement.

    3. Stick to foundational practices: Well-researched strategies—explicit instruction, age-appropriate word problems, fluency-building, and teaching multiple approaches—tend to have strong evidence behind them.
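To make the effect-size recommendation concrete, here is a hedged sketch of how an effect size (Cohen's d, the standardized mean difference commonly compared against Hattie's 0.40 benchmark) is computed from two groups' scores. The score data is invented for illustration:

```python
# Hedged sketch: computing a Cohen's d effect size, the kind of
# statistic compared against Hattie's 0.40 benchmark (example data).
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups, using pooled SD."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

treatment = [78, 85, 90, 74, 88, 82]  # post-test scores, new method
control = [70, 75, 80, 68, 77, 72]    # post-test scores, usual teaching

d = cohens_d(treatment, control)
print(f"d = {d:.2f}, meets 0.40 benchmark: {d >= 0.40}")
```

In words: d expresses the gap between group averages in units of the pooled standard deviation, which is what lets effects from different studies and different tests be compared on one scale.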

    Be Skeptical of “Shiny New” Methods

    Nathaniel also cautions educators against blindly trusting new, highly specific strategies that claim to revolutionize learning:

    “If someone tells you ‘new science shows this is the best thing ever,’ be skeptical. These methods almost never have strong evidence. Often, there’s a financial motive—books, programs, or professional development—behind the hype.”

    Transparency and critical thinking are key when navigating the vast landscape of educational advice. Educators should focus on strategies backed by solid research, while remaining mindful of context, student needs, and practical classroom realities.

    Explore More: Continuing the Journey into Evidence-Based Education

    Throughout our conversation, Nathaniel Hansford has provided a clear and thoughtful roadmap for understanding educational research. From unpacking the differences between research-based and evidence-based practices, to explaining the value—and limitations—of meta-analyses, he emphasizes the importance of approaching teaching strategies with both critical thinking and practical awareness. Educators are encouraged to focus on well-supported foundational practices, remain skeptical of unproven “shiny new” methods, and look to rigorously studied approaches to guide their instruction.

    For those eager to dive deeper, Nathaniel co-founded Pedagogy Non Grata, a resource dedicated to making education research accessible, transparent, and applicable for teachers. The site offers articles on the science of math and reading instruction, reviews of educational programs, and links to research-backed teaching materials. His latest book, The Scientific Principles of Teaching, provides a concise overview of learning theories and meta-analyses to help educators make informed instructional decisions.

    Check out Nathaniel and Rachel Schechter’s article, Challenges and Opportunities of Meta-Analysis in Education Research.

    Exploring these resources can empower teachers, administrators, and parents alike to separate evidence-based practices from hype, and ultimately improve outcomes for all students.

    Curriculum Alone Can’t Rescue a Student Struggling With Math.

    We have the tools and strategies that help students with complex learning needs finally understand it. Let us show you how.

    MFM Authors

    Jennie Miller

    Marketing Assistant

    Jennie is our Marketing Assistant and content creator here at Made for Math. She loves being part of a company that is working to make mathematics accessible to children with dyscalculia.