The impact of tutoring programs at scale
By Marta Pellegrini, University of Cagliari, Italy
Matthew Kraft and colleagues conducted a meta-analysis of 265 randomized controlled trials, covering 340 tutoring programs, to understand what impacts should be expected when tutoring is implemented at scale in the U.S. All studies used standardized tests as outcome measures, and most were conducted in elementary schools and focused on reading.
The results showed that effects tended to decline as the number of students served by a tutoring program increased. The average effect size was +0.44 for programs serving fewer than 100 students and +0.30 for those serving 100–399. In large-scale studies, the average effect size was +0.21 for programs with 400–999 students and +0.16 for those with more than 1,000. The authors noted that these effects remain substantial. However, effects similar to those observed across the full set of meta-analyzed studies should not be expected when tutoring is implemented at scale. The wide variability in effect sizes also suggests that individual programs differ considerably in effectiveness.
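The reported pattern can be summarized as a simple lookup. This sketch only restates the effect sizes given above; the band labels are paraphrased from the text, not taken from the original study's tables:

```python
# Average effect sizes (in standard deviations) by program enrollment,
# as reported in the summary above.
effect_by_enrollment = {
    "<100 students": 0.44,
    "100-399 students": 0.30,
    "400-999 students": 0.21,
    "1,000+ students": 0.16,
}

# The central finding: effects decline monotonically as programs scale up.
sizes = list(effect_by_enrollment.values())
assert all(a > b for a, b in zip(sizes, sizes[1:]))

for band, es in effect_by_enrollment.items():
    print(f"{band:>18}: +{es:.2f} SD")
```

Even the largest programs (+0.16) retain a meaningful effect, but it is roughly a third of the effect observed in the smallest ones (+0.44).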
The authors tested several hypotheses to explain this pattern. One possibility is that high-quality implementation becomes harder to maintain as the number of students grows. Another is that, in large-scale evaluations, program features are often adjusted to make tutoring more feasible, for example by assigning each tutor to larger groups of students.
The authors concluded that the effects of tutoring observed in their study remain meaningful and relevant for both practice and policy. But it is also important to maintain realistic expectations about the impact of tutoring when it is implemented broadly in real-world school settings.