ABOUT THE BEST EVIDENCE ENCYCLOPEDIA (BEE)


The Best Evidence Encyclopedia (BEE) is a free website created by the Johns Hopkins University School of Education’s Center for Research and Reform in Education (CRRE). It is intended to give educators and researchers fair and useful information about the strength of the evidence supporting a variety of programs available for students in grades K-12.

The BEE mostly consists of systematic meta-analyses of research on effective programs in reading, mathematics, writing, science, early childhood education, and other topics. It also contains articles on review methods and on issues such as special education policy and evidence-based reform. All articles are written by CRRE staff, students, associates, and collaborators, past and present. The articles are technical reports written in the course of preparing work for publication. Almost all entries have been published or will be; however, we provide the technical reports online to make our reviews available before publication, and we keep them online so that readers can access the information easily and at no cost.

Criteria for Reviews

The reviews in the Best Evidence Encyclopedia are meta-analyses that apply consistent, scientific standards to studies that both meet high standards of methodological quality and evaluate realistic implementations of practical programs.

Each meta-analysis does the following:

  1. Considers all studies in a given area and carries out an exhaustive search for all studies that meet well-justified standards of methodological quality and relevance to the issue being reviewed.
  2. Presents quantitative summaries of evidence on the effectiveness of programs or practices used with children in grades PK-12, focusing on achievement outcomes.
  3. Focuses on studies comparing programs to control groups with random assignment to conditions, or pre-established matching on pretests or other variables that indicate that experimental and control groups were equivalent before the treatments began.
  4. Summarizes program outcomes in terms of effect sizes (experimental-control differences divided by the standard deviation) as well as statistical significance.
  5. Focuses on studies that took place over periods of at least 12 weeks, to avoid brief, artificial laboratory studies.
  6. Focuses on studies that used measures that assessed the content studied by control as well as experimental students, to avoid studies that used measures inherent to the experimental treatment.

Interpreting Effect Sizes

Throughout the Best Evidence Encyclopedia, the term “effect size” (ES) is used. This is the difference between the mean of the experimental group and the mean of the control group (after adjustment for any pretest differences), divided by the standard deviation of the control group. When means or standard deviations are not reported, ES is often estimated from other information that is available.
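The definition above can be sketched in a few lines of code. This is a minimal illustration with made-up scores, not BEE's actual procedure: the encyclopedia's reviews also adjust for pretest differences, which this sketch omits.

```python
# Sketch of the effect-size definition given above: the difference
# between the experimental and control means, divided by the standard
# deviation of the control group. Scores below are hypothetical.
from statistics import mean, stdev

def effect_size(experimental, control):
    """Experimental-control mean difference divided by the control SD."""
    return (mean(experimental) - mean(control)) / stdev(control)

exp_scores = [82, 75, 90, 68, 77, 85]   # posttest scores, experimental group
ctl_scores = [74, 70, 81, 65, 72, 78]   # posttest scores, control group

es = effect_size(exp_scores, ctl_scores)
print(f"ES = {es:+.2f}")
```

Dividing by the control group's standard deviation (rather than a pooled one) keeps the yardstick unaffected by the treatment itself.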

What is considered a large effect size? There is no universally accepted definition. Larger is better, but the quality of the research design often matters more than the size of the effect. For example, an effect size of +0.20 from a large experiment with random assignment to treatments is more important than an effect size of +0.40 from a small, matched experiment. Small and matched studies are more likely to produce unreliable, possibly biased findings, whereas the positive effect size in the large, randomized study can be relied upon.

One way to interpret the size of the difference indicated by an effect size is to consider the improvement in percentile scores that would take place if a program with a given effect size were implemented. Another is to estimate "additional months of gain." The table below shows these estimates:


An effect size of…   Would increase percentile scores from:   Additional months of gain
+0.10                50 to 54                                 1-2
+0.20                50 to 58                                 3
+0.30                50 to 62                                 4
+0.40                50 to 66                                 5
+0.50                50 to 69                                 6
+0.60                50 to 73                                 7
+0.70                50 to 76                                 8-9
+0.80                50 to 79                                 9-10
+0.90                50 to 82                                 11
+1.00                50 to 84                                 12
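The percentile column in the table follows directly from the normal curve: a student at the control-group median (the 50th percentile) who gained ES standard deviations would land at the percentile given by the standard normal cumulative distribution. The short sketch below reproduces those values; it assumes normally distributed scores, which is the standard assumption behind such tables.

```python
# Sketch: deriving the "would increase percentile scores" column from
# the standard normal CDF, assuming normally distributed scores.
from math import erf, sqrt

def percentile_after(es):
    """Percentile reached from the 50th after a gain of `es` control SDs."""
    phi = 0.5 * (1 + erf(es / sqrt(2)))   # standard normal CDF at es
    return round(100 * phi)

for es in [0.10, 0.20, 0.50, 1.00]:
    print(f"+{es:.2f}: 50 to {percentile_after(es)}")
```

For instance, an effect size of +0.50 moves the median student to roughly the 69th percentile, matching the table's row for +0.50.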



