Until recently, few valid and reliable assessments were available to measure young children's mathematics and science learning in a comprehensive way. A number of mathematics assessments have now been developed and subjected to testing (Klein, Starkey, & Wakeley, 2000; Ginsburg, 2008; Clements & Sarama, 2008), and progress has been made in science as well (Greenfield, Dominguez, Fuccillo, Maier, & Greenberg, 2008; Greenfield, Dominguez, Greenberg, Fuccillo, & Maier, 2011). Studying the relationship between classroom supports and instruction, on the one hand, and learning outcomes, on the other, requires valid and reliable measures of classroom quality alongside these measures of children's learning. Recent reviews of classroom quality assessments in science and mathematics have concluded that few comprehensive measures are yet available to researchers or educators and that "science is especially weak" (Brenneman et al., 2011; Snow & Van Hemel, 2009). Among the instruments identified as under development in the Brenneman et al. review was the Preschool Rating Instrument for Science and Mathematics (PRISM), the focus of the present paper.

The purpose of this work is to develop and validate a classroom observation instrument that objectively measures the presence of classroom materials and teaching interactions that support mathematics and science learning. The overall question motivating the effort is whether an observational tool can be developed that measures classroom supports for mathematics and science learning and that correlates with child learning outcomes and school readiness in these domains. Such a tool would make a significant contribution to the field: it would have both research and professional development applications, contributing to efforts to improve the teaching and learning of key STEM concepts and practices in the critical preschool years. The preliminary findings reported here encourage the authors to continue the development of the PRISM.
Correlations with the ECERS-R are moderate, as expected, suggesting that the PRISM captures general quality as well as information unique to the observed classrooms. The factor structure of the instrument is strong and matches theoretical predictions, although work remains. Specifically, the authors will work to better integrate supports for numerical operational thinking into the instrument, most likely by combining that item with another number item. Efforts will also be made to improve the internal consistency of the scale, especially for science interactions. The authors hope to undertake this work as part of a dedicated, full-scale study of the PRISM that allows for continued development, field testing, more extensive psychometric exploration, and final validation of the instrument. (Contains 4 tables.)