Wednesday, December 4, 2019

Access to effective teaching did not materially affect dropout rates (or achievement) compared to schools that did not participate.

From Improving Teaching Effectiveness: Final Report - The Intensive Partnerships for Effective Teaching Through 2015–2016 by the RAND Corporation.
The Intensive Partnerships for Effective Teaching initiative, designed and funded by the Bill & Melinda Gates Foundation, was a multiyear effort to dramatically improve student outcomes by increasing students' access to effective teaching. Participating sites adopted measures of teaching effectiveness (TE) that included both a teacher's contribution to growth in student achievement and his or her teaching practices assessed with a structured observation rubric. The TE measures were to be used to improve staffing actions, identify teaching weaknesses and overcome them through effectiveness-linked professional development (PD), and employ compensation and career ladders (CLs) as incentives to retain the most-effective teachers and have them support the growth of other teachers. The developers believed that these mechanisms would lead to more-effective teaching, greater access to effective teaching for low-income minority (LIM) students, and greatly improved academic outcomes.
Three public school systems, one each in Florida, Pennsylvania, and Tennessee, along with four charter school systems, participated from 2009 through 2016. It looks like it was well intended, well designed, and well executed.
Key Findings

Sites implemented new measures of teaching effectiveness and modified personnel policies accordingly but did not achieve their goals for students
The sites succeeded in implementing measures of effectiveness to evaluate teachers and made use of the measures in a range of human-resource (HR) decisions.

Every site adopted an observation rubric that established a common understanding of effective teaching. Sites devoted considerable time and effort to train and certify classroom observers and to observe teachers on a regular basis.

Every site implemented a composite measure of TE that included scores from direct classroom observations of teaching and a measure of growth in student achievement.

Every site used the composite measure to varying degrees to make decisions about HR matters, including recruitment, hiring, and placement; tenure and dismissal; PD; and compensation and CLs.

Overall, however, the initiative did not achieve its goals for student achievement or graduation, particularly for LIM students.

With minor exceptions, by 2014–2015, student achievement, access to effective teaching, and dropout rates were not dramatically better than they were for similar sites that did not participate in the Intensive Partnerships initiative.

There are several possible reasons that the initiative failed to produce the desired dramatic improvement in outcomes across all years: incomplete implementation of the key policies and practices; the influence of external factors, such as state-level policy changes during the Intensive Partnerships initiative; insufficient time for effects to appear; a flawed theory of action; or a combination of these factors.
Well, that's disappointing.

And thank goodness there are the Gateses of the world who are willing and able to put up the money to rigorously test ambitious and plausible theories of education. Plausible theories which keep failing.

Regrettably, though, the field of education seems to be one of those few fields that have an even higher hypothesis failure rate than psychology and sociology. Given that all three fields have to do with understanding man, it might lead you to conclude that people are extraordinarily variable and unpredictable. You might even conclude that we haven't made much progress since Protagoras observed that:
Man is the measure of all things.
The choice of phrasing, "were not dramatically better than they were for similar sites that did not participate" is a red flag. This could cover:
1) The results were consistently positive but small.

2) The results were consistently negative but small.

3) The results were variable but with a small positive effect size on average.

4) The results were variable but with a small negative effect size on average.

5) There was no difference at all.
Going to the report, and not just the summary, it seems the actual results were three and four: variable but slightly positive (though not statistically significant) for reading, and variable but negative for mathematics.

That is actually more dismal than "were not dramatically better".
Our analyses of student test results and graduation rates showed that, six years after the IP initiative began, there is no evidence of widespread positive impact of the initiative on student outcomes. In 2014–2015, like in previous years, the estimated impacts in the IP sites were mostly not statistically significant across grades and subjects, although there were significant positive effects for HS ELA in PPS and the CMOs and significant negative effects in mathematics in grades 3 through 8 in the CMOs.


Quo vadis
