
2017-2019 Implementation Evaluation of the National Math and Science Initiative's College Readiness Program

Authors :
National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Phelan, Julia
Egger, Jeffrey
Kim, Junok
Choi, Kilchan
Keum, Eunhee
Chung, Gregory K. W. K.
Baker, Eva L.
Source :
National Center for Research on Evaluation, Standards, and Student Testing (CRESST). 2021.
Publication Year :
2021

Abstract

The National Math + Science Initiative (NMSI) is a nonprofit organization committed to improving educational outcomes; it traces its roots to the early 1990s. NMSI's College Readiness Program (CRP) is a long-standing program whose goal is to promote STEM education in high schools and improve students' preparation for college. The three-year program provides teacher, student, and school supports to promote high school students' success in English, mathematics, and science Advanced Placement (AP) courses, with a focus on students who are traditionally underrepresented in the targeted AP courses. Through a scale-up grant awarded to NMSI by the Investing in Innovation (i3) program, the CRP was implemented in 27 schools in the 2016-2017 school year (Treatment Schools) and in 21 schools in the 2017-2018 school year (Delayed Treatment Schools), collectively identified as Program Schools. CRESST conducted an independent evaluation of the impact of the CRP on students' AP outcomes using a cluster randomized trial with 48 CRP schools and 48 Comparison Schools in 10 states.

The evaluation of the CRP consisted of three parts: (1) measuring the program's impact on selected student AP exam outcomes; (2) determining the impact of the program on school perspectives and culture; and (3) assessing the fidelity of implementation of the CRP at the school level. AP exam data from the 48 Treatment Schools (8,778 exams in 2018 and 9,378 in 2019) and the 48 matched Comparison Schools (7,505 exams in 2018 and 2019, Year 3) were analyzed for this study.

Program impact was evaluated using a 2-level hierarchical generalized linear model (HGLM) with students nested within schools. The analyses revealed that in 2018 the probability of a student taking an AP exam was, on average, 7% higher in the Program Schools than in the paired Comparison Schools, a statistically significant difference. In 2019 the effect was even greater: the probability of taking an AP exam was significantly higher in the Program Schools (18%) than in the Comparison Schools (3%). Looking at the probability of an exam yielding a qualifying score, the HGLM analyses found no significant difference between the two groups in 2018. In 2019, however, exams taken at the Program Schools had a significantly higher overall probability of receiving a qualifying score (2%) than those taken at the Comparison Schools (0%). These analyses compared results against the total school population. A second set of analyses considered only those students who took AP exams. In 2018, the fitted probability of achieving a qualifying score among the exams taken was 8% in the Program Schools, compared to 22% in the Comparison Schools. In 2019, the difference between the Program Schools (7%) and the Comparison Schools (9%) was not statistically significant.
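As a minimal sketch of the kind of model described above (not the evaluation's actual code): a two-level HGLM for a binary outcome such as exam taking is a logistic regression with a school-level random intercept, logit(p_ij) = gamma_00 + gamma_01 * Treatment_j + u_0j, for student i in school j. The sketch below fits such a model with Python's statsmodels; the data file and the column names (took_exam, treatment, school_id) are hypothetical.

import pandas as pd
import statsmodels.api as sm

# Hypothetical student-level data: one row per student, with a school
# identifier, a school-level treatment indicator (1 = Program School,
# 0 = Comparison School), and a binary outcome (took an AP exam).
df = pd.read_csv("ap_outcomes.csv")  # hypothetical file

# Two-level model: students (level 1) nested within schools (level 2).
# The fixed effect of treatment estimates the program's impact on the
# log-odds of taking an exam; the variance component is a random
# intercept for each school.
model = sm.BinomialBayesMixedGLM.from_formula(
    "took_exam ~ treatment",         # fixed effects
    {"school": "0 + C(school_id)"},  # random intercept per school
    data=df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())

Fitted probabilities for Program and Comparison Schools, like those reported above, can then be obtained by applying the inverse logit to the estimated fixed effects.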
Fidelity of implementation was evaluated using a fidelity matrix approach (required as part of the evaluation of the i3 program), which showed that not all elements of the program were implemented with high fidelity. In 2018, 43 out of 48 schools (90%) achieved 80% or better implementation fidelity, for an average fidelity score of 89%, and four schools achieved a perfect 100% fidelity score. In 2019, 88% of schools achieved 80% or better implementation fidelity, and ten schools achieved a perfect 100% fidelity score. In 2018, in more than 75% of schools not all teachers attended the required teacher training sessions, so this component was not implemented with fidelity; in 2019 this picture improved somewhat, with 15 schools (31%) meeting the 80% threshold. Teacher stipends, administrator bonuses, and student qualifying-score awards were paid as expected.

Teacher survey data indicated that teachers found the training and professional development activities provided by the CRP to be the most beneficial program supports. Mentoring was chosen, across all years, as the least effective program component. When asked for the second most effective CRP component, equal numbers of teachers selected the funding of classroom and lab supplies and teacher training. [For "2016-2017 Implementation Evaluation of the National Math and Science Initiative's College Readiness Program," see ED615910.]
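For concreteness, a fidelity matrix score of the kind reported above can be read as the share of program components a school implemented as specified, judged against the 80% threshold. The minimal sketch below (hypothetical component data, not the evaluation's actual instrument) illustrates the arithmetic.

# Hypothetical fidelity matrix: for each school, whether each program
# component (e.g., teacher training, mentoring, stipends) met its
# implementation criterion (1 = met, 0 = not met).
fidelity_matrix = {
    "School A": [1, 1, 1, 0, 1],
    "School B": [1, 0, 1, 0, 1],
}

THRESHOLD = 0.80  # schools scoring 80% or better count as high fidelity

for school, components in fidelity_matrix.items():
    score = sum(components) / len(components)
    status = "meets" if score >= THRESHOLD else "below"
    print(f"{school}: fidelity score {score:.0%} ({status} the 80% threshold)")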

Details

Language :
English
Database :
ERIC
Journal :
National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Publication Type :
Report
Accession Number :
ED615919
Document Type :
Reports - Research