10 results for "Spybrook, Jessaca"
Search Results
2. PowerUp!-Mediator: Software for Designing Group-Randomized Studies of Mediation
- Author
- Society for Research on Educational Effectiveness (SREE), Kelcey, Ben, Dong, Nianbo, and Spybrook, Jessaca
- Abstract
The purpose of this study is to disseminate the results of recent advances in statistical power analyses for multilevel mediation and their implementation in the PowerUp!-Mediator software. The authors first focus on the conceptual and statistical differences among common asymptotic, component-wise, and resampling-based tests of mediation, as well as their performance in different contexts. They then introduce newly derived power formulas and delineate the statistical and substantive interpretation of the parameters that govern power in studies of multilevel mediation. Third, they outline reasonable values for these parameters across different education contexts using recent empirical compilations. Finally, the authors demonstrate the use of the PowerUp!-Mediator software along with the formulas and parameter values to plan studies. [SREE documents are structured abstracts of SREE conference symposium, panel, and paper or poster submissions.]
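The contrast the abstract draws between asymptotic and component-wise tests can be made concrete. The sketch below uses standard large-sample approximations, not the PowerUp!-Mediator formulas: given assumed (hypothetical) path estimates `a`, `b` and their standard errors, it compares the power of the Sobel test with the joint-significance test of the indirect effect a*b.

```python
# Hedged illustrative sketch (standard large-sample approximations, not the
# PowerUp!-Mediator derivations): Sobel vs. joint-significance power for an
# indirect effect a*b, with assumed path estimates and standard errors.
from math import sqrt, erf

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_z(z_effect, z_crit=1.96):
    """Two-sided power for a z statistic with expected value z_effect."""
    return phi(z_effect - z_crit) + phi(-z_effect - z_crit)

def sobel_power(a, b, se_a, se_b):
    """Asymptotic (first-order delta method) Sobel test of a*b."""
    se_ab = sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    return power_z(a * b / se_ab)

def joint_power(a, b, se_a, se_b):
    """Component-wise joint-significance test: reject only if BOTH paths are
    significant; treating the two tests as independent, powers multiply."""
    return power_z(a / se_a) * power_z(b / se_b)

p_sobel = sobel_power(a=0.3, b=0.3, se_a=0.1, se_b=0.1)
p_joint = joint_power(a=0.3, b=0.3, se_a=0.1, se_b=0.1)
```

For modest paths like these, the joint test comes out more powerful than the conservative Sobel test, which is one reason component-wise approaches are often preferred in practice.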
- Published
- 2017
3. A General Framework for Power Analysis to Detect the Moderator Effects in Two- and Three-Level Cluster Randomized Trials
- Author
- Society for Research on Educational Effectiveness (SREE), Dong, Nianbo, Spybrook, Jessaca, and Kelcey, Ben
- Abstract
The purpose of this study is to propose a general framework for power analyses to detect moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, the minimum detectable effect size (MDES), and its confidence interval for detecting moderation effects in two- and three-level CRTs, covering same- and cross-level moderation, binary and continuous moderators, and covariates; and (2) operationalize these formulas in the enhanced version of "PowerUp!" (Dong & Maynard, 2013) to create spreadsheets for calculating power, MDES, and other information. This study covers same- and cross-level moderation in two- and three-level CRTs. For cross-level moderation, i.e., when the moderator is at a level lower than the treatment level, there are two options: a fixed slope or a random slope for the moderator variable. The results for two-level CRTs with a treatment variable at Level 2 and a moderator at Level 1 are presented. The standard error formulas indicate that the standard error of the moderation effect estimate is not associated with the residual variance for the intercepts but is associated with the residual variance at Level 1. This suggests that adding more covariates to the intercept model would not reduce the standard error or improve power to detect the moderation effect, which differs from the main effect analysis. However, adding more covariates at Level 1 that further explain Level 1 variance would reduce the standard error and increase power. A table is appended.
- Published
- 2016
4. Power Calculations for Moderators in Multi-Site Cluster Randomized Trials
- Author
- Spybrook, Jessaca, Kelcey, Ben, and Dong, Nianbo
- Abstract
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in evaluation studies of educational programs. A specific type of CRT in which clusters are randomly assigned to treatment within blocks or sites, known as the multisite cluster randomized trial (MSCRT), is the most frequent in the literature. The primary question that often guides the design of an MSCRT is whether or not the program works; hence the MSCRT is designed with the goal of being powered to detect the main effect of treatment. The purpose of this paper is to extend the power calculations for MSCRTs beyond the main effect of treatment, allowing researchers to also consider the power for moderator effects in the design phase of the study. A table is appended.
- Published
- 2016
5. Strategies for Improving Power in Cluster Randomized Studies of Professional Development
- Author
- Society for Research on Educational Effectiveness (SREE), Kelcey, Ben, Spybrook, Jessaca, and Zhang, Jiaqi
- Abstract
With research indicating substantial differences among teachers in terms of their effectiveness (Nye, Konstantopoulos, & Hedges, 2004), a major focus of recent research in education has been on improving teacher quality through professional development (Desimone, 2009; Institute of Education Sciences [IES], 2012; Measures of Effective Teaching project [MET], 2012; Wayne, Yoon, Zhu, Cronen, & Garet, 2008). Notwithstanding widespread support for the development of teachers, there is a growing recognition of the lack of reliable empirical evidence concerning which features and programs of professional development are effective (Wayne et al., 2008). Consequently, there has been strong interest in supporting research that can inform the design of effective professional development programs (Desimone, 2009; IES, 2012; Wayne et al., 2008; Garet et al., 2011). For instance, across many different programs and topics, the Institute of Education Sciences (IES) has funded dozens of projects that targeted the professional development of teachers and has recently established an entire program devoted to research on effective strategies for improving teacher quality through professional development (IES, 2012). Despite the national emphasis on improving teacher effectiveness and development, there has been little research discussing how to effectively design and implement teacher professional development studies (Wayne et al., 2008). Perhaps because of this lack of research, examples of professional development studies with high-quality designs have been rare. For these reasons, the field has called for more studies that evaluate the effectiveness of professional development programs on valued outcomes using rigorous designs (Barrett et al., 2012). In this study, the authors empirically examined the comparative power and practical viability of several different types of cluster randomized trials in professional development studies.
They outline why such designs are well suited for studies of many professional development programs. They then report estimates for parameters needed to plan such studies and use the estimates to explore the comparative efficiency of several designs. Tables are appended.
- Published
- 2015
6. Power Calculations for Binary Moderator in Cluster Randomized Trials
- Author
- Society for Research on Educational Effectiveness (SREE), Spybrook, Jessaca, and Kelcey, Ben
- Abstract
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in the evaluation of educational programs, policies, and practices. The website for the National Center for Education Evaluation and Regional Assistance (NCEE) reveals that it has launched over 30 evaluation studies in the past decade, the majority of them utilizing a randomized trial. Clearly there are a large number of randomized trials of educational programs, policies, and practices either complete or currently in the field. The overarching goal of these randomized trials is to generate rigorous evidence of whether or not a program works. However, to learn for whom or under what circumstances a program works, studies must also be designed to detect moderator effects. The power to detect moderator effects at the student, cluster, or site level in CRTs has received much less attention in the literature than the power for the main effect of treatment. The purpose of this paper is to extend the work on power calculations for moderator effects to include moderator effects at any level for the following four types of CRTs: the 2-level CRT, 3-level CRT, 3-level multi-site cluster randomized trial (MSCRT), and 4-level MSCRT. In addition to providing the calculations and R code to do the calculations, we start to develop intuition around the minimum detectable effect size for moderator effects using sample sizes from CRTs in the field of education. This paper represents the next step towards building the capacity of researchers to design CRTs that move beyond the main effect of treatment. Designing studies to detect not only whether or not an intervention works, but for whom or under what circumstances, is critical.
The results from this study suggest that in many cases, if a study is powered to detect a reasonable main effect of treatment and it has a reasonable number of individuals per cluster, then it will also be powered to detect an individual level moderator (although not shown in this proposal).
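The intuition in that closing sentence can be sketched with back-of-the-envelope MDES formulas. These are assumed standard large-sample approximations for a two-level CRT with treatment at the cluster level (not the paper's R code), and all parameter values are hypothetical.

```python
# Hedged sketch (assumed large-sample MDES formulas, not the paper's code):
# compare the MDES for the main treatment effect with the MDES for an
# individual-level (Level-1) binary moderator in a two-level CRT.
from math import sqrt

def mdes_main(J, n, icc, P=0.5, M=2.8):
    """MDES for the main effect of treatment; M ~ 2.8 approximates the
    multiplier for 80% power at alpha = .05, two-sided, large sample."""
    return M * sqrt(icc / (P * (1 - P) * J)
                    + (1 - icc) / (P * (1 - P) * J * n))

def mdes_level1_moderator(J, n, icc, P=0.5, Q=0.5, M=2.8):
    """MDES for a Level-1 binary moderator (proportion Q) with a fixed slope;
    only the within-cluster variance enters the standard error."""
    return M * sqrt((1 - icc) / (P * (1 - P) * Q * (1 - Q) * J * n))

# With a reasonable number of individuals per cluster, the moderator MDES
# can fall below the main-effect MDES, consistent with the abstract.
main = mdes_main(J=40, n=20, icc=0.2)
mod = mdes_level1_moderator(J=40, n=20, icc=0.2)
```

The moderator formula lacks the between-cluster variance term, so with enough individuals per cluster the moderator test can be the easier of the two, as the abstract suggests.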
- Published
- 2014
7. Improving the Design of Science Intervention Studies: An Empirical Investigation of Design Parameters for Planning Group Randomized Trials
- Author
- Society for Research on Educational Effectiveness (SREE), Westine, Carl, and Spybrook, Jessaca
- Abstract
The capacity of the field to conduct power analyses for group randomized trials (GRTs) of educational interventions has improved over the past decade (Authors, 2009). However, a power analysis depends on estimates of design parameters, so it is critical to build the empirical base of design parameters for GRTs across a variety of outcomes and contexts. This study provides a first step towards building this base of design parameters specifically for science outcomes. Unlike reading and math, science is not typically tested each year. Preliminary findings from this study suggest that, although the comparisons are not direct, unconditional intraclass correlations (ICCs) for science outcomes are smaller than those reported for grade 3 math and reading by Bloom, Richburg-Hayes, and Black (2005; 2007) for five urban districts. Similarly, Hedges and Hedberg (2007) found larger ICCs for both reading and math using a nationally representative sample of students nested in schools. R-square values for school-level covariates have not been computed yet, but a one-year-lagged student pretest appears to be highly effective in reducing variance between and within schools, more so than in reading (R-squares less than 0.86) and math (less than 0.63) as reported by Bloom et al. (2005; 2007). The empirical estimates from this study will help improve the accuracy of power analyses for GRTs of science interventions. Tables are appended.
- Published
- 2013
8. Investigating the File Drawer Problem in Causal Effects Studies in Science Education
- Author
- Society for Research on Educational Effectiveness (SREE), Taylor, Joseph, Kowalski, Susan, Stuhlsatz, Molly, Wilson, Christopher, and Spybrook, Jessaca
- Abstract
The purpose of this paper is to use both conceptual and statistical approaches to explore publication bias in recent causal effects studies in science education, and to draw from this exploration implications for researchers, journal reviewers, and journal editors. This paper fills a void in the science education literature, as no previous exploration of its kind can be located. The studies in this publication bias analysis are taken from a larger meta-analysis that includes studies from the United States, Europe, and East Asia. The general research design is random effects meta-analysis. Specific tests of publication bias were performed within the meta-analysis context, including funnel plots, Galbraith plots, and Egger's test of asymmetry. The results of this analysis suggest that meta-analyses may not require significant work in the grey literature to produce unbiased results. The data from this 2010-11 study suggest that small-effect, small-sample studies are being "submitted" for publication and are being "accepted" for publication. One table and four figures are appended.
- Published
- 2013
9. Changes in the Precision of a Study from Planning Phase to Implementation Phase: Evidence from the First Wave of Group Randomized Trials Launched by the Institute of Education Sciences
- Author
- Society for Research on Educational Effectiveness (SREE), Spybrook, Jessaca, Lininger, Monica, and Cullen, Anne
- Abstract
The purpose of this study is to extend the work of Spybrook and Raudenbush (2009) and examine how the research designs and sample sizes changed from the planning phase to the implementation phase in the first wave of studies funded by IES. The authors examine the impact of the changes in terms of the changes in the precision of the study from the planning phase to the implementation phase. They explore trends in the changes that occurred in order to inform the planning and implementation of future studies. (Contains 3 figures and 1 table.)
- Published
- 2011
10. A Framework for Designing Cluster Randomized Trials with Binary Outcomes
- Author
- Society for Research on Educational Effectiveness (SREE), Spybrook, Jessaca, and Martinez, Andres
- Abstract
The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an intuitive parameter when the outcome is binary. In addition, they include sample power tables to give readers some intuition regarding sample sizes for CRTs with binary outcomes. (Contains 1 figure, 1 table, and 2 footnotes.)
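The proportion-based framing can be sketched roughly as follows. This is a hedged illustration using an assumed design-effect approximation for a simple two-arm CRT, not the paper's exact framework, and the parameter values are hypothetical.

```python
# Hedged sketch (assumed design-effect approximation, not the paper's exact
# framework): approximate power for a simple two-arm CRT with a binary
# outcome, parameterized directly by the control and treatment proportions.
from math import sqrt, erf

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def crt_binary_power(p_c, p_t, J_per_arm, n, icc, alpha_z=1.96):
    """p_c, p_t = outcome proportions under control and treatment;
    J_per_arm clusters per condition, n individuals per cluster,
    icc = intraclass correlation of the binary outcome. The variance of
    each arm's proportion is inflated by the usual design effect
    1 + (n - 1) * icc for clustered sampling."""
    deff = 1.0 + (n - 1) * icc
    var_diff = deff * (p_c * (1 - p_c) + p_t * (1 - p_t)) / (J_per_arm * n)
    lam = abs(p_t - p_c) / sqrt(var_diff)
    return phi(lam - alpha_z) + phi(-lam - alpha_z)

# Clustering (icc > 0) inflates the variance and lowers power relative to
# simple random sampling of the same number of individuals.
power_clustered = crt_binary_power(p_c=0.5, p_t=0.6, J_per_arm=20, n=30, icc=0.05)
```

Working directly with proportions, as here, keeps the inputs interpretable: a planner specifies the expected control rate and the treatment rate worth detecting, rather than a standardized effect size.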
- Published
- 2011
Discovery Service for Jio Institute Digital Library