11 results on '"Scott Sandgathe"'
Search Results
2. Issues and Challenges with Using Ensemble-Based Prediction to Probe the Weather–Climate Interface
- Author
- Bruce D. Cornuelle, Scott Sandgathe, Steve Warren, James Hansen, and Benjamin Kirtman
- Subjects
Atmospheric Science, Computer science
- Published
- 2014
- Full Text
- View/download PDF
3. Three Spatial Verification Techniques: Cluster Analysis, Variogram, and Optical Flow
- Author
- Caren Marzban, Scott Sandgathe, Nicholas C. Lederer, and Hilary Lyons
- Subjects
Atmospheric Science, Computer science, Optical flow, Covariance, Variogram, Forecast verification, Displacement
- Abstract
Three spatial verification techniques are applied to three datasets. The datasets consist of a mixture of real and artificial forecasts, and corresponding observations, designed to aid in better understanding the effects of global (i.e., across the entire field) displacement and intensity errors. The three verification techniques, each based on well-known statistical methods, have little in common and, so, present different facets of forecast quality. It is shown that a verification method based on cluster analysis can identify “objects” in a forecast and an observation field, thereby allowing for object-oriented verification in the sense that it considers displacement, missed forecasts, and false alarms. A second method compares the observed and forecast fields, not in terms of the objects within them, but in terms of the covariance structure of the fields, as summarized by their variogram. The last method addresses the agreement between the two fields by inferring the function that maps one to the other. The map—generally called optical flow—provides a (visual) summary of the “difference” between the two fields. A further summary measure of that map is found to yield useful information on the distortion error in the forecasts.
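The variogram technique above summarizes a field's covariance structure as half the mean squared difference between grid points at a given separation, and compares the observed and forecast curves. A minimal sketch of that idea (not the paper's code; the synthetic fields, integer lags, and the simple mean-absolute-difference score are illustrative assumptions):

```python
import numpy as np

def empirical_variogram(field, max_lag=10):
    """Isotropic empirical variogram of a 2D field: for each integer lag h,
    gamma(h) = 0.5 * mean squared difference between grid points separated
    by h cells along rows or columns."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.empty(len(lags))
    for i, h in enumerate(lags):
        d_row = field[h:, :] - field[:-h, :]   # pairs separated along rows
        d_col = field[:, h:] - field[:, :-h]   # pairs separated along columns
        sq = np.concatenate([d_row.ravel() ** 2, d_col.ravel() ** 2])
        gamma[i] = 0.5 * sq.mean()
    return lags, gamma

# Synthetic "observation" and a slightly perturbed "forecast" of it
rng = np.random.default_rng(0)
obs = rng.normal(size=(50, 50))
fcst = obs + rng.normal(scale=0.1, size=(50, 50))

lags, g_obs = empirical_variogram(obs)
_, g_fcst = empirical_variogram(fcst)

# Small score -> the two fields share a similar covariance structure
score = np.abs(g_obs - g_fcst).mean()
```

A forecast with the right texture but displaced features would still score well here, which is the point: the variogram probes covariance structure, not placement.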
- Published
- 2009
- Full Text
- View/download PDF
4. An Object-Oriented Verification of Three NWP Model Formulations via Cluster Analysis: An Objective and a Subjective Analysis
- Author
- Hilary Lyons, Scott Sandgathe, and Caren Marzban
- Subjects
Atmospheric Science, Computer science, Mesoscale meteorology, Numerical weather prediction, Forecast verification, Spatial analysis
- Abstract
Recently, an object-oriented verification scheme was developed for assessing errors in forecasts of spatial fields. The main goal of the scheme was to allow the automatic and objective evaluation of a large number of forecasts. However, processing speed was an obstacle. Here, it is shown that the methodology can be revised to increase efficiency, allowing for the evaluation of 32 days of reflectivity forecasts from three different mesoscale numerical weather prediction model formulations. It is demonstrated that the methodology can address not only spatial errors, but also intensity and timing errors. The results of the verification are compared with those performed by a human expert. For the case when the analysis involves only spatial information (and not intensity), although there exist variations from day to day, it is found that the three model formulations perform comparably, over the 32 days examined and across a wide range of spatial scales. However, the higher-resolution model formulation appears to have a slight edge over the other two; the statistical significance of that conclusion is weak but nontrivial. When intensity is included in the analysis, it is found that these conclusions are generally unaffected. As for timing errors, although for specific dates a model may have different timing errors on different spatial scales, over the 32-day period the three models are mostly “on time.” Moreover, although the method is nonsubjective, its results are shown to be consistent with an expert’s analysis of the 32 forecasts. This conclusion is tentative because of the focused nature of the data, spanning only one season in one year. But the proposed methodology now allows for the verification of many more forecasts.
- Published
- 2008
- Full Text
- View/download PDF
5. Cluster Analysis for Object-Oriented Verification of Fields: A Variation
- Author
- Caren Marzban and Scott Sandgathe
- Subjects
Atmospheric Science, Computer science, Matching (statistics), Forecast verification
- Abstract
In a recent paper, a statistical method referred to as cluster analysis was employed to identify clusters in forecast and observed fields. Further criteria were also proposed for matching the identified clusters in one field with those in the other. As such, the proposed methodology was designed to perform an automated form of what has been called object-oriented verification. Herein, a variation of that methodology is proposed that effectively avoids (or simplifies) the criteria for matching the objects. The basic idea is to perform cluster analysis on the combined set of observations and forecasts, rather than on the individual fields separately. This method will be referred to as combinative cluster analysis (CCA). CCA naturally lends itself to the computation of false alarms, hits, and misses, and therefore, to the critical success index (CSI). A desirable feature of the previous method—the ability to assess performance on different spatial scales—is maintained. The method is demonstrated on reflectivity data and corresponding forecasts for three dates using three mesoscale numerical weather prediction model formulations—the NCEP/NWS Nonhydrostatic Mesoscale Model (NMM) at 4-km resolution (nmm4), the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) Weather Research and Forecasting Model (WRF) at 2-km resolution (arw2), and the NCAR WRF at 4-km resolution (arw4). In the small demonstration sample herein, model forecast quality is efficiently differentiated when performance is assessed in terms of the CSI. In this sample, arw2 appears to outperform the other two model formulations across all scales when the cluster analysis is performed in the space of spatial coordinates and reflectivity. However, when the analysis is performed only on spatial data (i.e., when only the spatial placement of the reflectivity is assessed), the difference is not significant. 
This result has been verified both visually and using a standard gridpoint verification, and seems to provide a reasonable assessment of model performance. This demonstration of CCA indicates promise in quickly evaluating mesoscale model performance while avoiding the subjectivity and labor intensiveness of human evaluation or the pitfalls of non-object-oriented automated verification.
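The CCA idea, clustering the pooled fields and reading hits, misses, and false alarms off the clusters, can be sketched roughly as follows (not the authors' implementation; single-linkage clustering with an assumed distance cutoff stands in for their cluster-analysis step, and the point sets are synthetic):

```python
import numpy as np
from scipy.cluster.hierarchy import fclusterdata

def csi_from_cca(obs_pts, fcst_pts, t=10.0):
    """Combinative cluster analysis (CCA) sketch: cluster the pooled
    observed+forecast points, then score each cluster as a hit (both fields
    represented), a miss (observation only), or a false alarm (forecast
    only).  CSI = hits / (hits + misses + false alarms)."""
    pooled = np.vstack([obs_pts, fcst_pts])
    source = np.array([0] * len(obs_pts) + [1] * len(fcst_pts))  # 0=obs, 1=fcst
    labels = fclusterdata(pooled, t=t, criterion="distance", method="single")
    hits = misses = false_alarms = 0
    for c in np.unique(labels):
        members = source[labels == c]
        has_obs, has_fcst = (members == 0).any(), (members == 1).any()
        if has_obs and has_fcst:
            hits += 1
        elif has_obs:
            misses += 1
        else:
            false_alarms += 1
    return hits / (hits + misses + false_alarms)

# Two matched "reflectivity objects" plus one spurious forecast feature
rng = np.random.default_rng(1)
obs = np.vstack([rng.normal((10, 10), 1, (30, 2)),
                 rng.normal((40, 40), 1, (30, 2))])
fcst = np.vstack([rng.normal((11, 10), 1, (30, 2)),
                  rng.normal((40, 41), 1, (30, 2)),
                  rng.normal((70, 10), 1, (30, 2))])
csi = csi_from_cca(obs, fcst)   # two hits, one false alarm
```

Because clusters are scored as whole objects, a slightly displaced feature still counts as a hit, which is what distinguishes this from gridpoint verification.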
- Published
- 2008
- Full Text
- View/download PDF
6. Cluster Analysis for Verification of Precipitation Fields
- Author
- Caren Marzban and Scott Sandgathe
- Subjects
Atmospheric Science, Computer science, Statistics, Forecast verification
- Abstract
A statistical method referred to as cluster analysis is employed to identify features in forecast and observation fields. These features qualify as natural candidates for events or objects in terms of which verification can be performed. The methodology is introduced and illustrated on synthetic and real quantitative precipitation data. First, it is shown that the method correctly identifies clusters that are in agreement with what most experts might interpret as features or objects in the field. Then, it is shown that the verification of the forecasts can be performed within an event-based framework, with the events identified as the clusters. The number of clusters in a field is interpreted as a measure of scale, and the final “product” of the methodology is an “error surface” representing the error in the forecasts as a function of the number of clusters in the forecast and observation fields. This allows for the examination of forecast error as a function of scale.
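The "error surface" idea, forecast error as a function of the number of clusters in each field, can be sketched under illustrative assumptions (Ward clustering to fix the cluster count and optimal centroid matching as the error measure; this is not the paper's exact procedure):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def centroids(points, k):
    """Summarize a field (given as points) by exactly k cluster centroids."""
    labels = fcluster(linkage(points, method="ward"), t=k, criterion="maxclust")
    return np.array([points[labels == j].mean(axis=0) for j in range(1, k + 1)])

def error_surface(obs_pts, fcst_pts, max_k=4):
    """Entry (i, j): observation summarized by i+1 clusters, forecast by j+1;
    the error is the mean distance between optimally matched centroids, so the
    surface shows forecast error as a function of scale in each field."""
    surf = np.empty((max_k, max_k))
    for i in range(1, max_k + 1):
        co = centroids(obs_pts, i)
        for j in range(1, max_k + 1):
            cf = centroids(fcst_pts, j)
            cost = cdist(co, cf)
            r, c = linear_sum_assignment(cost)   # optimal cluster matching
            surf[i - 1, j - 1] = cost[r, c].mean()
    return surf

# Two precipitation "objects"; the forecast is the same field displaced east
rng = np.random.default_rng(2)
obs = np.vstack([rng.normal((0, 0), 1, (40, 2)),
                 rng.normal((20, 20), 1, (40, 2))])
fcst = obs + np.array([1.0, 0.0])
surf = error_surface(obs, fcst)   # ~1.0 displacement error at every scale
```

A pure displacement error shows up as a roughly constant surface, whereas scale-dependent errors (e.g., a missed small feature) would make the surface grow with the cluster count.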
- Published
- 2006
- Full Text
- View/download PDF
7. Designing Multimodel Ensembles Requires Meaningful Methodologies
- Author
- Scott Sandgathe, Edward I. Tollerud, Barbara G. Brown, and Brian J. Etherton
- Subjects
Atmospheric Science, Computer science
- Published
- 2013
- Full Text
- View/download PDF
8. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
- Author
- R. Ferraro, Igor Aleinov, Steven E. Peckham, T. Whitcomb, A. da Silva, N. Zadeh, James D. Doyle, Peggy Li, Alan J. Wallcraft, Gerhard Theurich, Scott Sandgathe, Mariana Vertenstein, Venkatramani Balaji, R. Dunlap, M. Iredell, Thomas L. Black, Maxwell Kelley, Fushan Liu, David McCarren, Robert Jacob, Timothy J Campbell, Benjamin Kirtman, K. Saint, J. Chen, R. Oehmke, Francis X. Giraldo, Tom Clune, and Cecelia DeLuca
- Subjects
Atmospheric Science, Oceanography, Computer science, Interoperability, Weather and climate, Earth system science, Metadata, Software, Systems engineering, Architecture
- Abstract
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
- Published
- 2015
9. The Earth System Prediction Capability program
- Author
- Daniel P. Eleuterio and Scott Sandgathe
- Subjects
Earth system science, Engineering management, Test case, Meteorology, Environmental research, Predictability
- Abstract
The Earth System Prediction Capability (ESPC) interagency program was established in 2010 to coordinate the federally sponsored environmental research and operational prediction communities in developing and implementing improved national physical Earth system prediction. Toward these goals, five demonstration projects are under development, and researchers are invited to participate in their definition and execution. The demonstrations aim to provide unifying themes and common diagnostics, develop a common modeling environment, establish community datasets and test cases, assess predictability at subseasonal-to-interannual timescales, and begin developing guidelines for the future transition to operational forecasts.
- Published
- 2012
- Full Text
- View/download PDF
10. Verification-Based Model Tuning
- Author
- Scott Sandgathe, David W. Jones, and Caren Marzban
- Subjects
Mathematical optimization, Observational error, Mathematical model, Ensemble forecasting, Computer science, Weather forecasting, Forecast verification, Sensitivity analysis
- Abstract
All numerical models (e.g., numerical weather prediction models) contain parameters within their algorithms that affect forecasts to differing degrees, depending on the forecast quantity. The values of these parameters are determined either theoretically, from fundamental physical laws with the approximations needed to reduce computational cost, or empirically, from field-experiment observations in which observational error introduces uncertainty. In either case, the exact values of the parameters are often unknown a priori, so they are usually set to improve forecast quality through some form of forecast verification. Such an approach to model tuning, however, requires knowledge of the observations against which the forecasts are compared, and therefore a multitude of highly detailed experimental cases to fully resolve the parameter values, a dataset that is very difficult to obtain. Knowledge of the relationship between model parameters and forecast quantities, without reference to observations, can not only aid such an observation-based approach to tuning but can also support tuning the parameters according to criteria not based directly on observations, e.g., a desire to adjust the forecasts according to a forecaster's long-term experience. The main goal of our work has been to develop a framework for representing the complex relationship between model parameters and forecast quantities, without any reference to observations.
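The parameter-to-forecast relationship described above can be illustrated with a toy surrogate (entirely hypothetical: the model, its single parameter, and the quadratic fit are stand-ins, not the authors' framework):

```python
import numpy as np

# Toy stand-in for an expensive NWP run: one tunable parameter maps to a
# forecast quantity (say, domain-mean precipitation).  The tuner never sees
# this function directly, only sampled (parameter, forecast) pairs.
def toy_model(theta):
    return 2.0 + 0.5 * theta - 0.1 * theta**2

thetas = np.linspace(0.0, 4.0, 21)      # sampled parameter settings
forecasts = toy_model(thetas)           # corresponding forecast quantities

# Surrogate: a quadratic fit representing the parameter->forecast
# relationship, built without any reference to observations.
coefs = np.polyfit(thetas, forecasts, deg=2)
surrogate = np.poly1d(coefs)

# Local sensitivity of the forecast quantity to the parameter at theta = 1,
# i.e. d(forecast)/d(theta), read off the fitted surrogate.
sensitivity = surrogate.deriv()(1.0)
```

With such a surrogate in hand, a forecaster could ask how a parameter change would move the forecast quantity before committing to expensive tuning runs, which is the spirit of the observation-free framework the abstract describes.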
- Published
- 2012
- Full Text
- View/download PDF
11. Automated Verification of Mesoscale Forecasts using Image Processing Techniques
- Author
- Scott Sandgathe and David W. Jones
- Subjects
Computer science, Weather forecasting, Mesoscale meteorology, Image processing, Technology development, Forecast verification, Automation, Visualization, Navy
- Abstract
The APL atmospheric sciences group is working to improve forecaster performance at Navy operational weather forecast detachments afloat and ashore. This work encompasses broad research and technology development in visualization, human factors, human-machine interaction, and model and forecast verification, with an emphasis on mesoscale ensembles and the visualization of uncertainty. The verification effort's long-term goal is an automated, objective verification technique for assessing very high-resolution mesoscale predictions that accurately accounts for spatially or temporally misplaced features, false alarms, and misses (Brown 2002).
- Published
- 2003
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library