934 results for "postprocessing"
Search Results
152. The Second Real-Time, Virtual Spring Forecasting Experiment to Advance Severe Weather Prediction.
- Author
- Clark, Adam J., Jirak, Israel L., Gallo, Burkely T., Knopfmeier, Kent H., Roberts, Brett, Krocak, Makenzie, Vancil, Jake, Hoogewind, Kimberly A., Dahl, Nathan A., Loken, Eric D., Jahn, David, Harrison, David, Imy, David, Burke, Patrick, Wicker, Louis J., Skinner, Patrick S., Heinselman, Pamela L., Marsh, Patrick, Wilson, Katie A., and Dean, Andrew R.
- Subjects
- *WEATHER forecasting, *SEVERE storms, *FORECASTING
- Published
- 2022
- Full Text
- View/download PDF
153. Statistical Forecasts for the Occurrence of Precipitation Outperform Global Models over Northern Tropical Africa.
- Author
- Vogel, Peter, Knippertz, Peter, Gneiting, Tilmann, Fink, Andreas H., Klar, Manuel, and Schlueter, Andreas
- Subjects
- *PRECIPITATION forecasting, *MESOSCALE convective complexes, *PRECIPITATION probabilities, *STATISTICAL models, *REGRESSION analysis, *RAIN gauges
- Abstract
Short‐term global ensemble predictions of rainfall currently have no skill over northern tropical Africa when compared to simple climatology‐based forecasts, even after sophisticated statistical postprocessing. Here, we demonstrate that 1‐day statistical forecasts for the probability of precipitation occurrence based on a simple logistic regression model have considerable potential for improvement. The new approach we present here relies on gridded rainfall estimates from the Tropical Rainfall Measuring Mission for July‐September 1998–2017 and uses rainfall amounts from the pixels that show the highest positive and negative correlations on the previous two days as input. Forecasts using this model are reliable and have a higher resolution and better skill than climatology‐based forecasts. The good performance is related to westward propagating African easterly waves and embedded mesoscale convective systems. The statistical model is outmatched by the postprocessed dynamical forecast in the dry outer tropics only, where extratropical influences are important. Plain Language Summary: Forecasts of precipitation for the next few days based on state‐of‐the‐art weather models are currently inaccurate over northern tropical Africa, even after systematic forecast errors are corrected statistically. In this paper, we show that we can use rainfall observations from the previous 2 days to improve 1‐day predictions of precipitation occurrence. Such an approach works well over this region, as rainfall systems tend to travel from the east to the west organized by flow patterns several kilometers above the ground, called African easterly waves. This statistical forecast model requires training over a longer time period (here 19 years) to establish robust relationships on which future predictions can be based. The input data employed are gridded rainfall estimates based on satellite data for the African summer monsoon in July to September. 
The new method outperforms all other methods currently available on a day‐to‐day basis over the region, except for the dry outer tropics, where influences from midlatitudes, which are better captured by weather models, become more important. Key Points: Raw and statistically postprocessed global ensemble forecasts fail to predict West African rainfall occurrence. A logistic regression model using observations from preceding days outperforms all other types of forecasts. The skill of the statistical model is mainly related to propagating African easterly waves and mesoscale convective systems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
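The statistical model described above is, at its core, a logistic regression on lagged upstream rainfall. A minimal sketch of that idea with entirely made-up data (the paper's TRMM-based selection of correlated pixels is not reproduced here):

```python
# Hypothetical sketch: predict the probability of rain at a target pixel
# from rainfall observed on the two preceding days at upstream pixels,
# using logistic regression fitted by plain gradient descent.
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights (intercept first) by gradient descent on the log-loss."""
    n = len(X)
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict_proba(w, xi):
    """Probability of precipitation occurrence for one predictor vector."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training data: [upstream rain day-1, upstream rain day-2].
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if xi[0] > 0.5 else 0 for xi in X]   # occurrence follows day-1 signal
w = fit_logistic(X, y)
p_wet = predict_proba(w, [0.9, 0.3])        # heavy upstream rain yesterday
p_dry = predict_proba(w, [0.1, 0.3])
```

With real data, the two predictors would be rainfall amounts at the pixels showing the strongest correlations with the target on the previous two days, as the abstract describes.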
154. LOCALLY CONSERVATIVE SERENDIPITY FINITE ELEMENT SOLUTIONS FOR ELLIPTIC EQUATIONS.
- Author
- YANHUI ZHOU and QINGSONG ZOU
- Subjects
- *ELLIPTIC equations, *DIFFERENTIAL equations, *CONSERVATION laws (Mathematics), *NUMERICAL analysis, *POROUS materials
- Abstract
In this paper, we post-process an eight-node serendipity finite element solution for elliptic equations. In the post-processing procedure, we first construct a control volume for each node in the serendipity finite element mesh; we then enlarge the serendipity finite element space by adding appropriate element-wise bubbles and require the new solution to satisfy the local conservation law on each control volume. Our post-processing procedure can be implemented in a parallel computing environment, and its computational cost is proportional to the cardinality of the serendipity elements. Moreover, both our theoretical analysis and numerical examples show that the post-processed solution converges to the exact solution with optimal convergence rates in both the H¹ and L² norms. A numerical experiment for a single-phase porous media problem validates the necessity of the post-processing procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2021
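Claims of optimal convergence rates, like the one above, are typically checked by computing the observed order from errors on successively refined meshes. A generic sketch with illustrative error values (not taken from the paper):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order p from errors on two meshes whose sizes
    differ by the given refinement ratio, assuming err ~ C * h**p."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Illustrative L2-norm errors from a hypothetical second-order method:
orders = [observed_order(e1, e2)
          for e1, e2 in [(4.0e-2, 1.0e-2), (1.0e-2, 2.5e-3)]]
```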
155. Probabilistic Forecast of Wind Power Generation With Data Processing and Numerical Weather Predictions.
- Author
- Wu, Yuan-Kang, Wu, Yun-Chih, Hong, Jing-Shan, Phan, Le Ha, and Phan, Quoc Dung
- Subjects
- *NUMERICAL weather forecasting, *WIND power, *WIND forecasting, *ELECTRONIC data processing, *WIND speed, *WIND measurement
- Abstract
With the increasing share of renewable energy, several operational problems have gradually emerged. Renewable power forecasting is indispensable for reducing operating costs and improving system reliability. Compared with a deterministic prediction, a probabilistic forecast accounts for uncertainty, which helps in managing power system operations. This study proposes a novel hour-ahead probabilistic forecasting method for wind power generation. It includes data preprocessing, an adaptive neuro-fuzzy inference system training model with the fuzzy C-means clustering algorithm, and postprocessing of the prediction interval (PI). The input data of the proposed forecasting model include numerical weather prediction (NWP) ensemble wind speeds, NWP spot wind speeds, and historical wind power measurements. The research results demonstrate that the proposed model offers better performance and prediction stability. Furthermore, this work reveals that data preprocessing and PI postprocessing are essential for wind power forecasting; these processes greatly improve the performance of the probabilistic wind power forecasts. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
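The postprocessed prediction intervals mentioned above are usually judged by their coverage and width. The abstract does not list the paper's metrics, so here is a sketch of the two standard ones:

```python
def pi_metrics(lower, upper, observed):
    """Prediction-interval coverage probability (PICP) and mean width:
    coverage should sit near the nominal level, widths as small as possible."""
    hits = sum(lo <= ob <= up for lo, up, ob in zip(lower, upper, observed))
    coverage = hits / len(observed)
    mean_width = sum(up - lo for lo, up in zip(lower, upper)) / len(lower)
    return coverage, mean_width

# Toy hour-ahead intervals (MW) against invented observations:
cov, width = pi_metrics(lower=[0, 10, 20, 30],
                        upper=[8, 18, 28, 38],
                        observed=[5, 19, 22, 31])
```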
156. Understanding changes of the continuous ranked probability score using a homogeneous Gaussian approximation.
- Author
- Leutbecher, Martin and Haiden, Thomas
- Subjects
- *SOFTWARE verification, *ERROR functions, *SPATIAL variation, *PROBABILITY theory, *SWINE
- Abstract
Improving ensemble forecasts is a complex process which involves proper scores such as the continuous ranked probability score (CRPS). A homogeneous Gaussian (hoG) model is introduced in order to better understand the characteristics of the CRPS. An analytical formula is derived for the expected CRPS of an ensemble in the hoG model. The score is a function of the variance of the error of the ensemble mean, the mean error of the ensemble mean and the ensemble variance. The hoG model also provides a score decomposition into reliability and resolution components. We examine whether the hoG model provides a useful approximation of the CRPS when applied to operational ECMWF medium‐range ensemble forecasts. The hoG approximation describes the spatial variations of the CRPS well while moderately overestimating the mean score. Seasonal averages over large domains are within 10% of the actual CRPS. Furthermore, the ability to approximate score changes is evaluated by (a) comparing raw ensemble forecasts with postprocessed ensemble forecasts, and (b) examining score changes associated with a recent upgrade of the IFS. Overall, the hoG approximation predicts the actual CRPS changes well. One of the main anticipated applications of the hoG approximation is new diagnostics in verification software used routinely by NWP developers. The purpose of the diagnostics is to help developers explain impacts of forecast system changes on the CRPS in terms of changes in mean error, changes in error variance and changes in ensemble variance. The diagnostics require little additional computational resources compared to the alternative of verifying postprocessed versions of the ensemble forecasts. Therefore, it will be feasible to apply the diagnostics easily to all variables that are examined as part of the model development process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
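The abstract does not reproduce the hoG model's analytical CRPS formula, but the flavor of such closed forms can be seen in the well-known expression for the CRPS of a Gaussian predictive distribution N(μ, σ²), which is a related but different formula:

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at outcome y
    (standard Gneiting-Raftery formula; the paper's hoG ensemble expression
    additionally involves the ensemble variance and mean error terms)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

As expected of a proper score, the value grows as the outcome moves away from the forecast mean.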
157. Development of Reconfigurable Five-axis Machine Tool Using OPEN Computer Numerical Control System Integration Architecture.
- Author
- Yuan-Ming Cheng and Mu-Sheng Lin
- Subjects
- NUMERICAL control of machine tools, AUTOMATION, MACHINE tools, FINISHES & finishing, PROGRAMMABLE controllers, MACHINING, SYSTEM integration
- Abstract
The purpose of this study is to use an OPEN computer numerical control (CNC) system to develop a reconfigurable five-axis machine tool (RFMT) for system integration research. The RFMT is mainly composed of a three-degree-of-freedom (3DOF) parallel platform and an X-Y table. In the study, the human-machine interface of the OPEN CNC was written, and the programmable logic controller (PLC) architecture of the OPEN CNC is used to control three-axis simultaneous movement or individual drives. The processing path of this study uses a postprocessing program to transfer the numerical control (NC) code with five-axis machining software, and the Z, α, and β values of the NC program are converted into the axis values of the three-axis motion platform by inverse kinematics. Finally, rough and finishing methods are used to cut a concave circle to verify the accuracy of the RFMT. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
158. TWO-DERIVATIVE ERROR INHIBITING SCHEMES AND ENHANCED ERROR INHIBITING SCHEMES.
- Author
- DITKOWSKI, ADI, GOTTLIEB, SIGAL, and GRANT, ZACHARY J.
- Subjects
- *PARTIAL differential equations, *DIFFERENTIAL evolution
- Abstract
High order methods are often desired for the evolution of ordinary differential equations, in particular those arising from the semidiscretization of partial differential equations. In prior work we investigated the interplay between the local truncation error and the global error to construct error inhibiting general linear methods (GLMs) that control the accumulation of the local truncation error over time. We defined sufficient conditions that allow us to postprocess the final solution and obtain a solution that is two orders of accuracy higher than expected from truncation error analysis alone. In this work we extend this theory to the class of two-derivative GLMs. We define sufficient conditions that control the growth of the error so that the solution is one order higher than expected from truncation error analysis, and furthermore, define the construction of a simple postprocessor that will extract an additional order of accuracy. Using these conditions as constraints, we develop an optimization code that enables us to find explicit two-derivative methods up to eighth order that have favorable stability regions, explicit strong stability preserving (SSP) methods up to seventh order, and A-stable implicit methods up to fifth order. We numerically verify the order of convergence of a selection of these methods, and the total variation diminishing performance of some of the SSP methods. We confirm that the methods found perform as predicted by the theory developed herein. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
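The paper's postprocessor is specific to the error-inhibiting GLMs it constructs. A classical analogue of the same idea, extracting extra accuracy by combining two step sizes so that the leading error term cancels, is Richardson extrapolation:

```python
import math

def forward_diff(f, x, h):
    """First-order accurate derivative approximation, error O(h)."""
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    """Combine the h and h/2 evaluations to cancel the O(h) error term,
    yielding a second-order accurate result from first-order pieces."""
    return 2.0 * forward_diff(f, x, h / 2) - forward_diff(f, x, h)

x, h = 0.5, 0.1
err_plain = abs(forward_diff(math.sin, x, h) - math.cos(x))
err_post = abs(richardson(math.sin, x, h) - math.cos(x))
```

This is only an analogue: the paper's postprocessor acts on the final ODE solution of a two-derivative GLM, not on finite differences.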
159. Merged Volume Rendered Flat-panel Computed Tomography for Postoperative Cochlear Implant Assessment.
- Author
- Eisenhut, Felix, Lang, Stefan, Taha, Lava, Doerfler, Arnd, Iro, Heinrich, and Hornung, Joachim
- Abstract
Purpose: Evaluation of a new postprocessing method for postoperative control of cochlear implants (CI) based on a single flat detector computed tomography (FD-CT) run and volume rendering of 3D models of the inner ear. Methods: The FD-CT datasets of CIs were selected and postprocessed to generate both standard multiplanar reconstructions (MPR) and merged volume-rendered 3D datasets (MRD) of the CIs. The MRDs consisted of two different reconstructions (bone/implant) that are automatically layered to avoid manual coregistration inaccuracy. Corresponding datasets were evaluated in consensus reading in terms of qualitative (integrity, position, configuration) and quantitative (insertion depth angle) CI parameters. Results: In total, 20 FD-CTs with 20 CIs were successfully postprocessed. Qualitative evaluation of MPR and MRD demonstrated complete congruency (integrity: n[array integrity] = 20; position: n[scala tympani] = 13, n[scalar translocation] = 7; configuration: n[harmonic spiralization] = 16, n[tip fold-over] = 3, n[looped implant] = 1). Adverse intracochlear implant spiralization was identified in all 10 cases with MRD and MPR. Measurement of the insertion depth angle in MRD was equivalent to that in MPR (r = 0.99; P < 0.0001). Conclusion: The use of MRD is a helpful method for precise postoperative CI assessment and provides easy detection of incorrect intracochlear spiralization. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
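The r = 0.99 agreement between the MPR and MRD angle measurements is a Pearson correlation. A minimal sketch of that computation, with invented insertion depth angles:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two measurement series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical insertion depth angles (degrees), measured with each method:
mpr = [355.0, 402.0, 388.0, 421.0, 370.0]
mrd = [353.0, 404.0, 389.0, 419.0, 372.0]
r = pearson_r(mpr, mrd)
```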
160. LOCALLY CONSERVATIVE FINITE ELEMENT SOLUTIONS FOR PARABOLIC EQUATIONS.
- Author
- WENBO GONG and QINGSONG ZOU
- Subjects
- *CONSERVATION laws (Physics), *CONTINUOUS functions, *EQUATIONS, *LINEAR systems, *CONSERVATION laws (Mathematics)
- Abstract
In this paper, we post-process the finite element solutions for parabolic equations to satisfy discrete conservation laws at the element level. The post-processing procedure is implemented by two different approaches: one computes a globally continuous flux function, and the other computes the so-called finite-volume-element-like solution. Both approaches only require solving a small linear system on each element of the underlying mesh. The post-processed flux converges to the exact flux with optimal convergence rates. Numerical computations verify our theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2020
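For a 1D steady model problem -u'' = f with flux q = -u', the element-level conservation law that such post-processing enforces says the flux difference across an element equals the integral of f over it. A sketch of checking that condition on a hypothetical mesh (the paper's parabolic, 2D setting is more involved):

```python
def is_locally_conservative(flux, source, tol=1e-12):
    """flux[i]: numerical flux q = -u' at interface i; source[k]: integral
    of f over element k. Conservation on element k requires
    flux[k+1] - flux[k] == source[k]."""
    return all(abs(flux[k + 1] - flux[k] - source[k]) <= tol
               for k in range(len(source)))

# Exact solution u = x**2 on a uniform mesh of [0, 1]: q = -2x, f = -2.
interfaces = [0.0, 0.25, 0.5, 0.75, 1.0]
flux = [-2.0 * x for x in interfaces]
source = [-2.0 * 0.25] * 4          # integral of f over each element
```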
161. A postprocessing and path optimization based on nonlinear error for multijoint industrial robot-based 3D printing.
- Author
- Fu, Guoqiang, Gu, Tengda, Gao, Hongli, and Lu, Caijiang
- Subjects
- INDUSTRIAL robots, RAPID prototyping, THREE-dimensional printing, ROTATIONAL motion, KINEMATICS, STRUCTURAL analysis (Engineering)
- Abstract
Multijoint industrial robots can be used for 3D printing to manufacture complex freeform surfaces. Postprocessing is the basis of precise printing. Due to the nonlinear motion of the rotational joints, nonlinear error is inevitable in multijoint industrial robots. In this article, postprocessing and path optimization based on the nonlinear errors are proposed to improve the accuracy of multijoint industrial robot-based 3D printing. Firstly, the kinematics of the multijoint industrial robot for 3D printing is analyzed briefly based on product of exponentials (POE) theory by considering the structure parameters. All possible groups of joint angles for one tool pose within the joint range are obtained in the inverse kinematics. Secondly, the nonlinear error evaluation based on interpolation is derived according to the kinematics. The nonlinear error of one numerical control (NC) code or one tool pose is obtained. The principle of minimum nonlinear error of joint angle is proposed to select the appropriate joint-angle solution for the postprocessing. Thirdly, a path smoothing method that inserts new tool poses adaptively is proposed to reduce the nonlinear error of the whole printing path. A smoothing level is introduced to avoid endless insertion near the singular area. Finally, simulation and experiments are carried out to verify the effectiveness of the proposed postprocessing and path optimization method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
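The nonlinear error discussed above arises because linear interpolation in joint space does not trace a straight line in task space. A toy illustration on a planar 2R arm, standing in for the multijoint robot (link lengths and poses are made up):

```python
import math

def fk_2r(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-revolute-joint arm."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def nonlinear_error(q_start, q_end):
    """Midpoint deviation between joint-space linear interpolation and the
    straight task-space line joining the two tool positions."""
    p0, p1 = fk_2r(*q_start), fk_2r(*q_end)
    q_mid = tuple((a + b) / 2 for a, b in zip(q_start, q_end))
    pm = fk_2r(*q_mid)
    line_mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    return math.hypot(pm[0] - line_mid[0], pm[1] - line_mid[1])

e_full = nonlinear_error((0.0, 0.5), (0.6, 1.1))
e_half = nonlinear_error((0.0, 0.5), (0.3, 0.8))   # first half of the move
```

Halving the segment, i.e. inserting a new tool pose at the joint-space midpoint, shrinks the deviation roughly fourfold, which is the intuition behind the adaptive pose insertion described in the abstract.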
162. Contact-Stress-Based Stress Recovery Methods for Discontinuous Deformation Analysis.
- Author
- Guan, Ruoyu and Bie, Shean
- Abstract
Discontinuous deformation analysis (DDA) has been widely applied for the simulation of block systems that have many discontinuous surfaces. The penalty method is utilized to ensure that there are no penetrations between blocks. A linear polynomial function for displacement leads to a constant stress for a block, which cannot precisely describe the stress field within the block. Therefore, a high-order polynomial displacement function and a fine mesh are always used to improve the precision of the stress field. However, these means are not practical for simulating block systems that have many contacts. In this paper, the contact-stress-based stress recovery methods are proposed for DDA. High-precision solutions for the contact stresses on the boundaries of the blocks are utilized. The first-order Gaussian point of a block is the block's centroid, where the constant stress obtained via DDA is of higher precision. The high-precision solutions for the stresses are utilized in the least squares method to recover a single block's inner stress field. The proposed methods enhance the resolution of the stress field inside a single block without increasing the computational effort in the main iterative process for displacement in DDA. Numerical examples are simulated using both the finite element method (FEM) with a fine mesh and the proposed DDA program. The recovered DDA results can accurately describe the distribution of the stresses in a single block and, in some areas, have the same precision as the FEM results. Moreover, the precision of the proposed methods improves as the gradient of the contact stress on the boundary decreases. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
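A toy analogue of the least-squares recovery step described above: fit a linear stress variation to sample values, standing in for the combination of high-precision boundary contact stresses with the centroid stress (sample points invented):

```python
def fit_line(xs, ys):
    """Least-squares fit of y ~ a + b*x through sampled stress values,
    the 1D analogue of recovering a linear stress field inside a block."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical samples: two boundary contact stresses plus the centroid value.
a, b = fit_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```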
163. Highly Independent MTJ-Based PUF System Using Diode-Connected Transistor and Two-Step Postprocessing for Improved Response Stability.
- Author
- Lim, Sehee, Song, Byungkyu, and Jung, Seong-Ook
- Abstract
In physically unclonable functions (PUFs), generating random cryptographs is required to secure private information. Various memory-based PUFs (MemPUFs), where cryptographs are generated independently from each PUF cell to increase the unpredictability of the cryptographs, have been proposed. Among them, the spin-transfer torque magnetic random-access memory MemPUF generates constant responses under temperature and voltage variations by exploiting a magnetic tunnel junction (MTJ) as the variation source. However, its response stability is diminished by the different characteristics of the two access transistors used in a PUF cell. To solve this problem, a novel PUF array that employs a diode-connected transistor and a shared access transistor, is proposed. In addition, a two-step postprocessing is adopted: 1) a write-back technique that amplifies the initial mismatch of MTJ resistances, and 2) a cell-classification technique that detects unstable PUF cells and discards their responses. The Monte Carlo HSPICE simulation results using industry-compatible 65-nm technology show that the proposed PUF system achieves the highest independence (autocorrelation factor of 0.0306) and the lowest maximum bit error rate (BER) under temperature and supply-voltage variations (<0.01% and 0.04% in the ranges of −25 to 75 °C and 0.8–1.2 V, respectively) compared with conventional PUF systems that exploit independent variation sources. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
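The response-stability and independence figures quoted above come from standard PUF metrics. A sketch of two of them, the bit error rate between repeated reads and the lag autocorrelation of a response:

```python
def bit_error_rate(read1, read2):
    """Fraction of differing bits between two reads of the same PUF."""
    return sum(b1 != b2 for b1, b2 in zip(read1, read2)) / len(read1)

def autocorrelation(bits, lag=1):
    """Normalized autocorrelation of a bit sequence mapped to +/-1; values
    near zero indicate independent (unpredictable) response bits."""
    s = [2 * b - 1 for b in bits]
    n = len(s) - lag
    return sum(s[i] * s[i + lag] for i in range(n)) / n
```

A perfectly stable PUF has zero BER across reads; a maximally correlated (hence weak) response, such as a strictly alternating pattern, has |autocorrelation| near 1.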
164. Improved fuzzy weighted‐iterative association rule based ontology postprocessing in data mining for query recommendation applications.
- Author
- Sumathi, G. and Akilandeswari, J.
- Subjects
- *DATA mining, *FUZZY systems, *ONTOLOGIES (Information retrieval)
- Abstract
The usage of association rules plays a vital role in the field of knowledge discovery in data. Numerous rules have to be processed and plotted based on the ranges of the schema. This step of the process depends on the user's queries. Several earlier projects have been proposed to reduce the workload and improve the filtration process; however, they have limitations in preprocessing time and filtration rate. In this article, an improved fuzzy weighted-iterative concept is introduced to overcome these limitations, based on the user request and the visualization of the discovered rules. The first step combines user knowledge with postprocessing to exploit the semantics. This step is followed by framing rule schemas to satisfy and predict complex rules based on user expectations. The resulting rules are then visualized through an interactive analysis method, and the rule schemas are validated against expert domain knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
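Association rules of the kind being filtered above are ranked by support and confidence. A minimal sketch of those two measures on toy transactions:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent: conditional
    support of the combined itemset given the antecedent."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

# Toy transaction database (invented):
baskets = [{"milk", "bread"}, {"milk"}, {"bread"}, {"milk", "bread"}]
```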
165. ANALYSIS OF MINE-VENTILATION NETWORKS USING THE AEROSET SOFTWARE.
- Author
- Kormshchikov, Denis and Grishin, Evgeniy
- Subjects
- *MINE ventilation, *DATA analysis, *AIR flow, *VENTILATION, *COMPUTER software
- Abstract
This paper describes the tools developed for modelling mine-ventilation networks and postprocessing simulation results using the Aeroset software, with a specific focus on solution visualization tools. These tools are aimed at simplifying the analysis of simulation data of large ventilation networks – the visual representation and graphical analysis of models of such ventilation networks, with many (more than 1000) branches, becomes difficult without the use of specific tools. The following special tools are suggested to effectively solve this problem: gradient-fill, mine-sections, and quality-control tools. The first tool is based on a gradient-fill display mode for results. The temperature of an object is displayed in different colours, depending on the value, in a similar manner to a thermal imager. The second tool allows users to create a hierarchical structure of the ventilation network and automatically calculate the indicators for each mine section, including intake airflow in the section, the required amount of air, air leakages, the amount of air used, and the portion of recycled air coming from other mine sections. The required amount of air can be calculated based on applications that consume air, including personnel, diesel equipment, blasting, and methane emissions. This tool facilitates the automatic calculation of the required amount of air in an individual working area, a section of the mine, or an entire mine. The quality-control tool allows users to highlight recirculation circuits, working areas with a lack of air, and fans with low efficiency. The program also notifies the user regarding the completeness of designed mine-ventilation models, informs them of which source data are insufficient for conducting simulations, and indicates the possible errors. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
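The per-section air-requirement calculation described above can be sketched as a recursive sum over a hierarchical ventilation structure. Section names and demand figures below are invented, not taken from Aeroset:

```python
def required_air(node):
    """Total required airflow (m^3/s) over a hierarchical ventilation
    structure: leaves are per-consumer demands, internal nodes are dicts
    of subsections, so sections roll up automatically."""
    if isinstance(node, dict):
        return sum(required_air(child) for child in node.values())
    return node

mine = {
    "section A": {"working face 1": 12.0, "diesel bay": 25.0},
    "section B": {"working face 2": 10.0, "blasting area": 8.0},
}
total = required_air(mine)
```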
166. Garmin GPSMAP 66sr: Assessment of Its GNSS Observations and Centimeter-Accurate Positioning
- Author
- Lambert Wanninger, Anja Heßelbarth, and Volker Frevert
- Subjects
- Garmin GPSMAP 66sr, dual-frequency GNSS, carrier-phase observations, ambiguity fixing, phase-center calibration, postprocessing, Chemical technology, TP1-1185
- Abstract
In 2020, Garmin released one of the first consumer devices with a dual-frequency GNSS chip and a quadrifilar helix antenna: GPSMAP 66sr. The device is intended to serve as a positioning and navigation device for outdoor recreation purposes with positioning accuracies on the few meter level. However, due to its highly accurate GNSS dual-frequency carrier-phase observations, the equipment can also be used for centimeter-accurate positioning. We performed extensive test measurements and analyzed the quality of its code and carrier-phase observations. We calibrated the Garmin GPSMAP 66sr antenna with respect to its phase-center offset and phase-center variations. We also performed dual-frequency GPS/Galileo precise point positioning (PPP) and precise relative positioning in baselines to virtual reference stations (VRS). We demonstrate and explain how centimeter-accurate positioning can be achieved with this novel kind of equipment.
- Published
- 2022
- Full Text
- View/download PDF
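Precise relative positioning of the kind performed against the VRS rests on double-differenced carrier phases, which cancel receiver and satellite clock offsets. A sketch with hypothetical observations; the paper's full processing chain (ambiguity fixing, phase-center corrections, PPP) involves much more:

```python
def double_difference(phase, rover, base, sat, ref_sat):
    """Double-differenced carrier phase (cycles) between two receivers and
    two satellites: the core observable of precise relative positioning."""
    return ((phase[rover][sat] - phase[base][sat])
            - (phase[rover][ref_sat] - phase[base][ref_sat]))

# Hypothetical single-epoch carrier-phase observations (cycles):
obs = {"garmin": {"G01": 1200.25, "G07": 980.50},
       "vrs":    {"G01": 1190.00, "G07": 975.75}}
dd = double_difference(obs, "garmin", "vrs", "G01", "G07")
```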
167. Mining Actionable Knowledge Using Reordering Based Diversified Actionable Decision Trees
- Author
- Subramani, Sudha, Wang, Hua, Balasubramaniam, Sathiyabhama, Zhou, Rui, Ma, Jiangang, Zhang, Yanchun, Whittaker, Frank, Zhao, Yueai, Rangarajan, Sarathkumar, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Cellary, Wojciech, editor, Mokbel, Mohamed F., editor, Wang, Jianmin, editor, Wang, Hua, editor, Zhou, Rui, editor, and Zhang, Yanchun, editor
- Published
- 2016
- Full Text
- View/download PDF
168. Giant Exoplanets, Sirius, and Starlight Subtraction at Scale
- Author
- Close, Laird M., Douglas, Ewan S., Weinberger, Alycia J., Merchant, Nirav C., and Long, Joseph D.
- Published
- 2023
169. Analysis of the dimensional variation of 3D printed objects (FDM) subjected to annealing postprocessing
- Author
- Monar Naranjo, Martín Benancio, Freire Guevara, Belén, and Sánchez Pomboza, Cristhian
- Abstract
The annealing process in FDM printing with materials such as PLA and HTPLA is a technique that seeks to improve the mechanical characteristics of post-printing products. The aim of the research is to determine the most suitable annealing process that does not generate significant changes in the dimensions of the object. To this end, within the applied methodology, 108 test pieces were manufactured and exposed to combined tests that considered variables such as the type of printing material, the fill level, temperature, and coating. Once the experiment was carried out, it was concluded that the most suitable annealing process uses PLA or HTPLA material at 100% infill, exposed to 200°C with a plaster stone coating.
- Published
- 2023
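The dimensional-variation assessment above comes down to the percent change of a measured dimension after annealing. A trivial sketch (specimen values invented):

```python
def dimensional_change_pct(before_mm, after_mm):
    """Percent dimensional change of a printed part after annealing;
    negative values indicate shrinkage."""
    return (after_mm - before_mm) / before_mm * 100.0

# Hypothetical specimen: a 10 mm feature shrinking to 9.8 mm.
change = dimensional_change_pct(10.0, 9.8)
```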
170. Spatial-Mode-Based Calibration (SMoC) of Forecast Precipitation Fields with Spatially Correlated Structures: An Extended Evaluation and Comparison with Gridcell-by-Gridcell Postprocessing
- Author
- Zhao, Pengcheng, Wang, Quan J., Wu, Wenyan, and Yang, Qichun
- Abstract
Postprocessing forecast precipitation fields from numerical weather prediction models aims to produce ensemble forecasts that are of high quality at each grid cell and, importantly, are spatially structured in an appropriate manner. A conventional approach, the gridcell-by-gridcell postprocessing, typically consists of two steps: 1) perform statistical calibration separately at individual grid cells to generate unbiased, skillful, and reliable ensemble forecasts and 2) employ ensemble reordering to link ensemble members of all grid cells according to certain templates to form spatially structured ensemble forecasts. However, ensemble reordering techniques are generally problematic in practical use. For example, the well-known Schaake shuffle is often criticized for not considering real physical atmospheric conditions. In this context, a fundamentally new approach, namely, spatial-mode-based calibration (SMoC), has recently been developed for postprocessing forecast precipitation fields with inbuilt spatial structures, thereby eliminating the need for ensemble reordering. SMoC was tested on 1-day-ahead forecasts of heavy precipitation events and was found to produce ensemble forecasts with appropriate spatial structures. In this paper, we extend SMoC to calibrate forecasts of light and no precipitation events and forecasts at long lead times. We also compare SMoC with the gridcell-by-gridcell postprocessing. Results based on multiple evaluation metrics show that SMoC performs well in calibrating both forecasts of light and no precipitation events and forecasts at long lead times. Compared with the gridcell-by-gridcell postprocessing, SMoC produces ensemble forecasts with similar forecast skill, improved forecast reliability, and clearly better spatial structures. In addition, SMoC is computationally far more efficient.
- Published
- 2023
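For context on the ensemble-reordering step that SMoC eliminates, here is a minimal sketch of the Schaake shuffle at a single grid cell: the calibrated members are rearranged to follow the rank order of a historical template trajectory:

```python
def schaake_shuffle(members, template):
    """Reorder ensemble members at one grid cell so their rank order
    matches that of a historical template trajectory; applying this with
    consistent templates across grid cells imposes spatial structure."""
    ranked = sorted(members)
    # indices of the template values in ascending order
    order = sorted(range(len(template)), key=lambda i: template[i])
    out = [0.0] * len(members)
    for rank, i in enumerate(order):
        out[i] = ranked[rank]   # rank-th smallest member goes where the
                                # rank-th smallest template value sits
    return out

shuffled = schaake_shuffle([5.0, 1.0, 3.0], [10.0, 30.0, 20.0])
```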
171. Radial Turbine-Diffuser Interaction: Effect of Tip Gap
- Author
- Matabuena Sedano, Luis
- Abstract
This document contains the final Master Thesis Report to obtain the Master of Science in Aerospace Engineering (Flight Performance & Propulsion - Propulsion & Power) at the Delft University of Technology. The research work focuses on characterizing the interaction of a radial inlet turbine with a downstream diffuser through the size of the rotor tip gaps. In the implementation of power turbines it is common to add a diffuser downstream of the turbine to lower the rotor exhaust pressure and thus increase power extraction for the same inlet conditions. However, diffusers are bulky and take a lot of space in the assembly. This lowers the power density of the machine, increases installation weight in transport applications, and increases installation costs in ground-based operations. In the quest to obtain compact diffusers, researchers noticed that the non-dimensional static pressure recovery (Cp) of this device was higher when operating downstream of a turbine than with uniform inlet conditions. The publications in this field are relatively scarce, and thus there is a knowledge gap with immediate practical application. Some researchers have focused on the interaction of turbine vortical structures with the boundary layer of the diffuser. They have found that under these conditions the boundary layer can support steeper pressure gradients without detaching. This is only applicable in very steep diffusers that would stall in isolation. Notably, all the publications in this field deal with axial machines, and as the text will show, the picture changes considerably when applying the theory to radial turbines. This work studies another side of the problem. The text will focus on stable diffusers, and thus there is no boundary layer that needs reinforcement. Turbine rotor tip gaps generate an increase of entropy and reduce turbine power generation. However, these gaps also cast powerful vortices that affect the diffuser flowfield.
This project will study th, Aerospace Engineering | Flight Performance and Propulsion
- Published
- 2023
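The non-dimensional static pressure recovery Cp mentioned above has a simple incompressible form: static pressure rise divided by inlet dynamic pressure. A sketch with illustrative numbers:

```python
def pressure_recovery_cp(p_in, p_out, rho, v_in):
    """Diffuser static pressure recovery coefficient (incompressible form):
    Cp = (p_out - p_in) / (0.5 * rho * v_in^2)."""
    return (p_out - p_in) / (0.5 * rho * v_in ** 2)

# Illustrative values: 1 kPa static rise, air at 1.2 kg/m^3, 50 m/s inlet.
cp = pressure_recovery_cp(100000.0, 101000.0, 1.2, 50.0)
```

A Cp nearer 1 indicates more of the inlet dynamic pressure recovered as static pressure; the thesis's point is that turbine-exit vortical flow can raise this value relative to uniform inflow.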
172. Optimizing Luminous Transmittance of a Three-Dimensional-Printed Fixed Bed Photobioreactor.
- Author
-
Scherer K, Huwer A, Ulber R, and Wahl M
- Abstract
The development of innovative production processes and the optimization of photobioreactors play an important role in generating industrially competitive production technologies for phototrophic biofilms. With emerse photobioreactors, a technology was introduced that allows efficient surface-attached cultivation of terrestrial cyanobacteria. However, the productivity of emerse photobioreactors depends on the available cultivation surface. By adding biocarriers to the bioreactor volume, the cultivation surface can be increased, which potentially improves productivity and thus the production of valuable compounds. To investigate surface-attached cultivation on biocarriers, new photobioreactors need to be developed. Additive manufacturing (AM) offers new opportunities for the design of photobioreactors, but producing the needed transparent parts can be challenging using AM techniques. In this study, an emerse fixed-bed photobioreactor was designed for the use of biocarriers and manufactured using different AM processes. To validate the suitability of the photobioreactor for phototrophic cultivation, the optical properties of three-dimensional (3D)-printed transparent parts and postprocessing techniques to improve the luminous transmittance of the components were investigated. We found that stereolithography 3D printing can produce parts with a high luminous transmittance of over 85% and that optimal postprocessing by sanding and clear coating improved the clarity and transmittance to more than 90%. Using the design freedom of AM resulted in a bioreactor with reduced part count and improved handling. In summary, we found that modern 3D-printing technologies and materials are suitable for the manufacturing of functional photobioreactor prototypes., Competing Interests: The authors have no conflicts of interest to declare that are relevant to the content of this article., (Copyright 2024, Mary Ann Liebert, Inc., publishers.)
- Published
- 2024
- Full Text
- View/download PDF
173. Skewed and Mixture of Gaussian Distributions for Ensemble Postprocessing
- Author
-
Maxime Taillardat
- Subjects
ensemble model output statistics ,weather forecasting ,ensemble forecasting ,postprocessing ,calibration ,Gaussian distribution ,Meteorology. Climatology ,QC851-999 - Abstract
The implementation of statistical postprocessing of ensemble forecasts is increasingly developed among national weather services. The so-called Ensemble Model Output Statistics (EMOS) method, which consists of fitting a given parametric distribution whose parameters depend on the raw ensemble, leads to significant improvements in forecast performance for a low computational cost, and so is particularly appealing for reduced-performance computing architectures. However, the choice of a parametric distribution has to be sufficiently consistent so as not to lose information on predictability such as multimodalities or asymmetries. Different distributions are applied to the postprocessing of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast of surface temperature. More precisely, mixtures of Gaussian and skewed normal distributions are tried from 3- up to 360-h lead time forecasts, with different estimation methods. For this work, analytical formulas of the continuous ranked probability score have been derived and appropriate link functions are used to prevent overfitting. The mixture models outperform single parametric distributions, especially for the longest lead times. This statement is valid judging both overall performance and tolerance to misspecification.
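The abstract mentions that analytical formulas of the continuous ranked probability score (CRPS) were derived for the distributions used. As an illustration only (not the paper's derivation), a minimal sketch of the standard closed-form Gaussian CRPS, plus a brute-force numerical CRPS for a Gaussian mixture, might look like:

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    against a verifying observation y (standard EMOS formula)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def crps_gaussian_mixture(y, weights, mus, sigmas, n=20001, lo=-50.0, hi=50.0):
    """Numerical CRPS for a Gaussian mixture: trapezoidal integration of
    (F(x) - 1{x >= y})^2. Closed-form expressions exist but are lengthier."""
    step = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * step
        # mixture CDF at x
        F = sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
                for w, m, s in zip(weights, mus, sigmas))
        ind = 1.0 if x >= y else 0.0
        weight = step if 0 < i < n - 1 else step / 2.0
        total += weight * (F - ind) ** 2
    return total
```

With a single component, the numerical mixture score reproduces the closed-form Gaussian value, which is a convenient sanity check when implementing the analytical mixture formulas.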
- Published
- 2021
- Full Text
- View/download PDF
174. The pursuit of a dream, Francisco Javier Sayas and the HDG methods
- Author
-
Cockburn, Bernardo
- Published
- 2022
- Full Text
- View/download PDF
175. Referenceless Quality Evaluation of Tone-Mapped HDR and Multiexposure Fused Images.
- Author
-
Yue, Guanghui, Yan, Weiqing, and Zhou, Tianwei
- Abstract
Nowadays, the standard dynamic range (SDR) image acquired at a fixed exposure exposes weakness in portraying fine-grained details of real scenes. The high dynamic range (HDR) image and other types of SDR images generated by multiexposure fusion techniques provide us new choices for scene representation. To display on SDR screens, an HDR image must be tone-mapped to an SDR one. Since different tone-mapping/fusion algorithms produce images with varying visual quality levels, it naturally desires a quality evaluation model for comparison. This article proposes an effective model in the absence of the reference image. By analyzing the characteristics of tone-mapped HDR and multiexposure fused images, we first extract multiple quality-sensitive features from the following aspects: 1) colorfulness; 2) exposure; and 3) naturalness. Then, the model is built by bridging all extracted features and associated subjective ratings via support vector regression. Extensive experiments on publicly available databases prove the superiority of our model over the state-of-the-art referenceless quality evaluation ones. [ABSTRACT FROM AUTHOR]
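The abstract lists colorfulness among the quality-sensitive features but does not define it; one widely used colorfulness statistic (the Hasler-Süsstrunk metric), offered here only as a plausible sketch of such a feature, is:

```python
import math

def colorfulness(pixels):
    """Hasler-Suesstrunk colorfulness over a list of (r, g, b) pixels.
    Higher values indicate a more colorful image; pure grays score 0."""
    rg = [r - g for r, g, b in pixels]              # red-green opponent channel
    yb = [0.5 * (r + g) - b for r, g, b in pixels]  # yellow-blue opponent channel
    def mean(xs):
        return sum(xs) / len(xs)
    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return (math.sqrt(std(rg) ** 2 + std(yb) ** 2)
            + 0.3 * math.sqrt(mean(rg) ** 2 + mean(yb) ** 2))
```

Features like this would then be concatenated with the exposure and naturalness statistics and fed to the support vector regressor.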
- Published
- 2020
- Full Text
- View/download PDF
176. Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)-MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T.
- Author
-
Bell, Laura C., Stokes, Ashley M., and Quarles, C. Chad
- Subjects
BIOMARKERS ,CEREBRAL circulation ,RECEIVER operating characteristic curves ,TIKHONOV regularization ,BLOOD volume ,GLIOMAS ,MAGNETIC resonance imaging ,RETROSPECTIVE studies ,CONTRAST media ,BRAIN tumors ,TUMOR grading - Abstract
Background: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers.Purpose/hypothesis: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading.Study Type: Retrospective study from The Cancer Imaging Archive (TCIA).Population: Forty-nine subjects with low- and high-grade gliomas.Field Strength/sequence: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition.Assessment: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2).Statistical Tests: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests.Results: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74-0.84; rCBV AUROC increases 0.78-0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). 
Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increase AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. Data Conclusion: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. Level Of Evidence: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2020;51:547-553. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
177. Influence of optic media of the human eye on the imaging of Argus® II retinal prosthesis with intraoperative spectral-domain optical coherence tomography.
- Author
-
Lytvynchuk, Lyubomyr M., Falkner-Radler, Christiane I., Grzybowski, Andrzej, Glittenberg, Carl G., Shams-Mafi, Farnusch, Ansari-Shahrezaei, Siamak, and Binder, Susanne
- Abstract
Copyright of Spektrum der Augenheilkunde is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
- Full Text
- View/download PDF
178. A novel S-box-based postprocessing method for true random number generation.
- Author
-
AVAROĞLU, Erdinç and TUNCER, Taner
- Subjects
- *
RANDOM numbers , *RANDOM number generators , *BIT rate , *HOTEL suites - Abstract
The quality of randomness in numbers generated by true random number generators (TRNGs) depends on the source of entropy. However, in TRNGs, sources of entropy are affected by environmental changes, and this creates correlation between the generated bit sequences. Postprocessing is required to remove the problem created by this correlation in TRNGs. In this study, an S-box-based postprocessing structure is proposed as an alternative to the postprocessing structures seen in the published literature. A ring oscillator (RO)-based TRNG is used to demonstrate the use of an S-box for postprocessing and the removal of correlations between number sequences. The statistical properties of the numbers generated through postprocessing are obtained according to the entropy, autocorrelation, statistical complexity measure, and the NIST 800.22 test suite. According to the results, the postprocessing successfully removed the correlation. Moreover, the data rate of the bit sequence generated by the proposed postprocessing is reduced to 2/3 of its original value at the output. [ABSTRACT FROM AUTHOR]
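The paper's concrete S-box is not reproduced in the abstract. The sketch below uses a hypothetical balanced 3-bit-to-2-bit S-box to illustrate the compression mechanism: each 2-bit output appears exactly twice among the eight inputs, so uniform inputs stay uniform while the output rate drops to 2/3 of the input rate, matching the rate reduction stated above.

```python
# Illustrative balanced 3-bit -> 2-bit S-box (NOT the paper's S-box):
# every 2-bit output value occurs for exactly two of the eight inputs.
SBOX = {0b000: 0b00, 0b001: 0b01, 0b010: 0b10, 0b011: 0b11,
        0b100: 0b11, 0b101: 0b10, 0b110: 0b01, 0b111: 0b00}

def postprocess(bits):
    """Compress a raw TRNG bit list in 3-bit blocks through the S-box,
    emitting 2 output bits per block (trailing partial block is dropped)."""
    out = []
    for i in range(0, len(bits) - len(bits) % 3, 3):
        word = (bits[i] << 2) | (bits[i + 1] << 1) | bits[i + 2]
        y = SBOX[word]
        out.extend([(y >> 1) & 1, y & 1])
    return out
```

In a real design, the S-box would additionally be chosen for good cryptographic properties (nonlinearity, avalanche) so that residual bias and correlation in the raw bits are diffused.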
- Published
- 2020
- Full Text
- View/download PDF
179. Superconvergence of the Crouzeix-Raviart element for elliptic equation.
- Author
-
Zhang, Yidan, Huang, Yunqing, and Yi, Nianyu
- Subjects
- *
ELLIPTIC equations , *PARALLELOGRAMS - Abstract
In this paper, a superconvergence result of the Crouzeix-Raviart element method is derived for the second-order elliptic equation on the uniform triangular meshes, in which any two adjacent triangles form a parallelogram. A local weighted averaging post-processing algorithm for the numerical stress is presented. Based on the equivalence between the Crouzeix-Raviart element method and the lowest order Raviart-Thomas element method, we prove that the error between the exact stress and the postprocessed numerical stress is of order h^{3/2}. Two numerical examples are presented to confirm the theoretical result. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
180. Manual on-the-fly physician postprocessing of computed tomographic angiography data guides embolotherapy of atypical bleeding following paracentesis; a case report.
- Author
-
Siddiqi, Nasir, Issa, Mohamed, Raissi, Driss, Qian, Chengao, Gabriel, Gaby, and Winkler, Michael
- Subjects
- *
THERAPEUTIC embolization , *PHYSICIANS , *PULMONARY veins , *HEMORRHAGE , *BLOOD vessels - Abstract
Major bleeding, typically due to laceration of abdominal wall arteries or venous varices, is a rare but serious complication of paracentesis. We report a case of major bleeding after paracentesis to demonstrate that a sequence of 1) customized postprocessing of computed tomographic angiography data for periprocedural guidance, followed by 2) transcatheter cyanoacrylate glue embolotherapy, is the optimal treatment of this complication. • Major bleeding is a rare but serious complication of paracentesis. • Customized postprocessing is an advanced method for rapid visualization of the source and site of bleeding. • Focused field-of-view reconstructions allow for expedient and effective postprocessing of small blood vessels. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
181. Efficient and sample‐specific interpretation of ToF‐SIMS data by additional postprocessing of principal component analysis results.
- Author
-
Heller‐Krippendorf, Danica, Veith, Lothar, Veen, Rik, Breitenstein, Daniel, Tallarek, Elke, Hagenhoff, Birgit, and Engelhard, Carsten
- Subjects
- *
MULTIPLE correspondence analysis (Statistics) , *SECONDARY ion mass spectrometry , *ELECTRON impact ionization , *ION mobility , *MATHEMATICAL transformations , *MASS spectrometry , *SURFACE analysis - Abstract
Time‐of‐flight secondary ion mass spectrometry (ToF‐SIMS) is a powerful tool for surface analysis, but fragmentation of molecular species during the SIMS process may lead to complex mass spectra. While the fragmentation pattern is typically characteristic for each compound, industrial samples are engineered materials, and, thus, may contain a mixture of many compounds, which may result in a variety of overlapping peak patterns in ToF‐SIMS spectra. Consequently, the process of data evaluation is challenging and time‐consuming. Principal component analysis (PCA) can be used to simplify data analysis for complex sample systems. Especially, correlation loadings were observed as an ideal tool to identify relevant signals in PCA results, which induce the separation of different sample groups. This is because correlation loadings show the relevance of signals independent from their intensity in the raw data. In correlation loadings, however, fragmentation patterns are no longer observed and the identification of peaks' sum formulas is challenging. In this study, a new approach is presented, which simplifies peak identification and assignment in ToF‐SIMS spectra after PCA is performed. The approach uses a mathematical transformation that projects PCA results, in particular loadings and correlation loadings, in the direction of specific sample groups. The approach does not change PCA results but rather presents them in a new way. This method allows to visualize characteristic spectra for specific sample groups that contain only relevant signals and, additionally, visualize fragmentation patterns. Data analysis is simplified and helps the user to focus on data interpretation rather than processing. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
182. Statistical postprocessing of dual‐resolution ensemble precipitation forecasts across Europe.
- Author
-
Gascón, Estíbaliz, Lavers, David, Hamill, Thomas M., Richardson, David S., Bouallègue, Zied B., Leutbecher, Martin, and Pappenberger, Florian
- Subjects
- *
PRECIPITATION forecasting , *LONG-range weather forecasting , *HISTOGRAMS , *LEAD time (Supply chain management) - Abstract
This article verifies 1‐ to 10‐day probabilistic precipitation forecasts in June, July, and August 2016 from an experimental dual‐resolution version of the European Centre for Medium‐Range Weather Forecasts (ECMWF) ensemble prediction system. Five different ensemble combinations were tested. These comprised subsets of the 51‐member operational ECMWF configuration (18‐km grid) and an experimental 201‐member lower‐resolution configuration (29‐km grid). The motivation of the dual‐resolution ensemble forecast is to trade some higher‐resolution members against a larger number of lower‐resolution members to increase the overall ensemble size at constant overall computational cost. Forecasts were verified against precipitation analyses over Europe. Given substantial systematic errors of precipitation forecasts, both raw and post‐processed dual‐resolution ensemble predictions were evaluated. Postprocessing consisted of quantile mapping, tested with and without an objective weighting of sorted ensemble members using closest‐member histogram statistics. Reforecasts and retrospective precipitation analyses were used as training data. However, the reforecast ensemble size and the dual‐resolution ensemble sizes differed, which motivated the development of a novel approach for developing closest‐member histogram statistics for the larger real‐time ensemble from the smaller reforecast ensemble. Results show that the most skilful combination was generally 40 ensemble members from the operational configuration and 40 from the lower‐resolution ensemble, evaluated by continuous ranked probability scores, Brier Scores at various thresholds, and reliability diagrams. This conclusion was generally valid with and without postprocessing. Reliability was improved by postprocessing, though the improvement of the resolution component is not so clear. 
The advantage of having more members at higher resolution diminished at longer lead times; predictability of smaller-scale features was lost, and there is more benefit in increasing the ensemble size to reduce sampling uncertainty. This article evaluates only one aspect in deciding on any future ensemble configuration, and other skill-related considerations need to be taken into account. [ABSTRACT FROM AUTHOR]
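Quantile mapping, one of the postprocessing steps named in the abstract, replaces each forecast value by the observed-climatology value at the same empirical quantile of the model climatology. A minimal empirical-CDF sketch (with hypothetical climatology samples; real implementations interpolate and handle the wet-day frequency of precipitation):

```python
def quantile_map(value, model_climo, obs_climo):
    """Map a forecast value through the empirical model CDF and then the
    inverse empirical observed CDF (nearest-rank inverse)."""
    model_sorted = sorted(model_climo)
    obs_sorted = sorted(obs_climo)
    # empirical non-exceedance probability of the forecast in the model climatology
    rank = sum(1 for v in model_sorted if v <= value)
    p = rank / (len(model_sorted) + 1)
    # nearest-rank inverse CDF of the observed climatology
    idx = min(len(obs_sorted) - 1, max(0, int(p * len(obs_sorted))))
    return obs_sorted[idx]
```

Applied member by member to a sorted ensemble, this corrects systematic bias while preserving rank order, which is why it combines naturally with the closest-member histogram weighting described above.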
- Published
- 2019
- Full Text
- View/download PDF
183. Automated image quality evaluation of structural brain MRI using an ensemble of deep learning networks.
- Author
-
Sujit, Sheeba J., Coronado, Ivan, Kamali, Arash, Narayana, Ponnada A., and Gabr, Refaat E.
- Subjects
DEEP learning ,RECEIVER operating characteristic curves ,BRAIN abnormalities ,BRAIN imaging ,AUTISTIC people ,DIAGNOSIS of autism ,BRAIN anatomy ,MULTIPLE sclerosis ,COMPUTERS in medicine ,BRAIN ,RESEARCH ,RESEARCH evaluation ,RESEARCH methodology ,MAGNETIC resonance imaging ,RETROSPECTIVE studies ,EVALUATION research ,MEDICAL cooperation ,DIAGNOSTIC imaging ,COMPARATIVE studies ,RESEARCH funding - Abstract
Background: Deep learning (DL) is a promising methodology for automatic detection of abnormalities in brain MRI.Purpose: To automatically evaluate the quality of multicenter structural brain MRI images using an ensemble DL model based on deep convolutional neural networks (DCNNs).Study Type: Retrospective.Population: The study included 1064 brain images of autism patients and healthy controls from the Autism Brain Imaging Data Exchange (ABIDE) database. MRI data from 110 multiple sclerosis patients from the CombiRx study were included for independent testing.Sequence: T1-weighted MR brain images acquired at 3T.Assessment: The ABIDE data were separated into training (60%), validation (20%), and testing (20%) sets. The ensemble DL model combined the results from three cascaded networks trained separately on the three MRI image planes (axial, coronal, and sagittal). Each cascaded network consists of a DCNN followed by a fully connected network. The quality of image slices from each plane was evaluated by the DCNN and the resultant image scores were combined into a volumewise quality rating using the fully connected network. The DL predicted ratings were compared with manual quality evaluation by two experts.Statistical Tests: Receiver operating characteristic (ROC) curve, area under ROC curve (AUC), sensitivity, specificity, accuracy, and positive (PPV) and negative (NPV) predictive values.Results: The AUC, sensitivity, specificity, accuracy, PPV, and NPV for image quality evaluation of the ABIDE test set using the ensemble model were 0.90, 0.77, 0.85, 0.84, 0.42, and 0.96, respectively. On the CombiRx set the same model achieved performance of 0.71, 0.41, 0.84, 0.73, 0.48, and 0.80.Data Conclusion: This study demonstrated the high accuracy of DL in evaluating image quality of structural brain MRI in multicenter studies.Level Of Evidence: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2019;50:1260-1267. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
184. Pseudorandom orbiting stroke for freeform optics postprocessing.
- Author
-
Xiangyu Guo, Yong Shu, Geon-Hee Kim, Palmer, Michael, Heejoo Choi, and Kim, Dae Wook
- Subjects
- *
METHYL methacrylate , *OPTICS , *RAPID prototyping , *SURFACE roughness measurement , *GRINDING & polishing , *SURFACE finishing , *DIAMOND turning , *SURFACE roughness - Abstract
In addition to achieving a desired freeform profile, ensuring a superb micro-roughness finish is a key factor for successful freeform optics manufacturing. We present a pseudorandom orbiting stroke-based postprocessing technique that maintains freeform optic forms, while improving small-scale surface quality. The full-aperture tool can avoid subaperture effects, and the small stroke pseudorandom tool path guarantees the match of freeform profiles while preventing the directionality of the final surface profiles. Three independent experimental studies are designed, conducted, and presented for a wide range of optics, including magnetorheological finishing-polished BK7 glass, single-point diamond turned (SPDT) poly(methyl methacrylate), and SPDT Al6061 optics. The comparison of direct measured maps on the initial and final smoothed optics verifies the form maintenance capability of the freeform optics postprocessing technology. Surface roughness measurement highlights improvements in local surface roughness and periodic toolmark errors left by the previous polishing method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
185. Robust power system state estimation by appropriate selection of tolerance for the least measurement rejected algorithm.
- Author
-
SHAHRIAR, Mohammad Shoaib and HABIBALLAH, Ibrahim Omar
- Subjects
- *
PHASOR measurement , *ABSOLUTE value , *LOAD forecasting (Electric power systems) , *MEASUREMENT , *ALGORITHMS - Abstract
Modern power systems are highly complicated and nonlinear in nature. Accurate estimation of the power system states (voltage-magnitude and phase-angle) is required for the secure operation of the power system. The presence of bad-data measurements in meters has made this estimation process challenging. An efficient estimator should detect and eliminate the effect of bad data during the estimation process. Least measurement rejected (LMR) is a robust estimator that has been found successful in dealing with various categories of bad data. The performance of LMR depends upon the proper selection of a tolerance for each measurement. This paper presents a novel approach for tolerance value selection to improve the capability of handling different single and multiple bad-data scenarios successfully. The performance of this updated LMR (ULMR) is compared with weighted least squares, weighted least absolute value, and two versions of LMR from the literature. IEEE 30-bus and 118-bus systems are used to demonstrate the robustness of the proposed estimator under different bad-measurement (single and multiple) scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
186. POSTPROCESSING OF CONTINUOUS GALERKIN SOLUTIONS FOR DELAY DIFFERENTIAL EQUATIONS WITH NONLINEAR VANISHING DELAY.
- Author
-
QIUMEI HUANG, KUN JIANG, and XIUXIU XU
- Subjects
- *
DELAY differential equations , *GALERKIN methods - Abstract
In this paper we propose several postprocessing techniques to accelerate the convergence of the continuous Galerkin solutions for delay differential equations with nonlinear vanishing delay. They are interpolation postprocessings (including integration type, Lagrange type, and polynomial preserving recovery type) and iteration postprocessing. The theoretical expectations are confirmed by numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
187. Grid‐ Versus Station‐Based Postprocessing of Ensemble Temperature Forecasts.
- Author
-
Feldmann, Kira, Richardson, David S., and Gneiting, Tilmann
- Subjects
- *
WEATHER forecasting , *SPATIOTEMPORAL processes , *METEOROLOGICAL observations , *SURFACE temperature , *MINIMUM temperature forecasting - Abstract
Statistical postprocessing aims to improve ensemble model output by delivering calibrated predictive distributions. To train and assess these methods, it is crucial to choose appropriate verification data. Reanalyses cover the entire globe on the same spatiotemporal scale as the forecasting model, while observation stations are scattered across planet Earth. Here we compare the benefits of postprocessing with gridded analyses against postprocessing at observation sites. In a case study, we apply local Ensemble Model Output Statistics to 2‐m temperature forecasts by the European Centre for Medium‐Range Weather Forecasts ensemble system. Our evaluation period ranges from November 2016 to December 2017. Postprocessing yields improvements over the raw ensemble at all lead times. The relative improvement achieved by postprocessing is greater when trained and verified against station observations. Plain Language Summary: To this day, weather forecasts are uncertain and subject to error. Statistical postprocessing aims to remove systematic deficiencies from the output of numerical weather prediction models. To apply these statistical methods, training and reference data are required. Weather observation sites are scattered across planet Earth. An alternative source of training and reference data is provided by so‐called analyses, which combine weather observations with past forecasts to provide gridded pseudo‐data with full global coverage. In this study we consider forecasts of surface temperature from the European Centre for Medium‐Range Weather Forecasts. We find that the benefits of postprocessing are greater when it is performed directly on observational data, as opposed to using gridded analyses. In both cases, statistical postprocessing yields improved temperature forecasts at lead times from a single day to more than 2 weeks ahead. 
Key Points: Station‐based postprocessing of ensemble temperature forecasts yields greater improvement than grid‐based approaches. A day ahead, calibrated forecasts yield mean CRPS values of 0.97 and 0.66 °C, respectively. Statistical postprocessing remains effective beyond week two. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
188. Superconvergent DPG Methods for Second-Order Elliptic Problems.
- Author
-
Führer, Thomas
- Subjects
SUPERCONVERGENT methods ,TEST methods ,SCALAR field theory ,POLYNOMIAL approximation - Abstract
We consider DPG methods with optimal test functions and broken test spaces based on ultra-weak formulations of general second-order elliptic problems. Under some assumptions on the regularity of solutions of the model problem and its adjoint, superconvergence for the scalar field variable is achieved by either increasing the polynomial degree in the corresponding approximation space by one or by a local postprocessing. We provide a uniform analysis that allows the treatment of different test norms. Particularly, we show that in the presence of convection only the quasi-optimal test norm leads to higher convergence rates, whereas other norms considered do not. Moreover, we also prove that our DPG method delivers the best L 2 {L^{2}} approximation of the scalar field variable up to higher-order terms, which is the first theoretical explanation of an observation made previously by different authors. Numerical studies that support our theoretical findings are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
189. Postprocessing for Skin Detection
- Author
-
Diego Baldissera, Loris Nanni, Sheryl Brahnam, and Alessandra Lumini
- Subjects
segmentation ,skin detector ,convolutional neural networks ,postprocessing ,Photography ,TR1-1050 ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Skin detectors play a crucial role in many applications: face localization, person tracking, objectionable content screening, etc. Skin detection is a complicated process that involves not only the development of apposite classifiers but also many ancillary methods, including techniques for data preprocessing and postprocessing. In this paper, a new postprocessing method is described that learns to select whether an image needs the application of various morphological sequences or a homogeneity function. The type of postprocessing method selected is learned based on categorizing the image into one of eleven predetermined classes. The novel postprocessing method presented here is evaluated on ten datasets recommended for fair comparisons that represent many skin detection applications. The results show that the new approach enhances the performance of the base classifiers and previous works based only on learning the most appropriate morphological sequences.
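The morphological sequences referred to above combine primitives such as dilation and erosion. As an illustrative sketch only (not the paper's learned sequences), a morphological closing that fills small holes in a binary skin mask can be written with a 4-connected structuring element:

```python
def dilate(mask):
    """Binary dilation of a list-of-lists mask with a 4-connected element."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[j][i]
                      for j, i in [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                      if 0 <= j < h and 0 <= i < w) else 0
             for x in range(w)] for y in range(h)]

def erode(mask):
    """Binary erosion; pixels outside the image are treated as foreground
    so that the image border is not eaten away."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(mask[j][i]
                      for j, i in [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                      if 0 <= j < h and 0 <= i < w) else 0
             for x in range(w)] for y in range(h)]

def close_mask(mask):
    """Morphological closing (dilate then erode): fills small holes
    inside detected skin regions without shrinking them."""
    return erode(dilate(mask))
```

The reverse composition, an opening (`dilate(erode(mask))`), removes small isolated false positives instead; choosing which sequence to apply per image is exactly the selection problem the paper's classifier learns.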
- Published
- 2021
- Full Text
- View/download PDF
190. Post-Capture Synthesis of Images Using Manipulable Integration Functions
- Author
-
Eberhart, Paul
- Subjects
- Computational photography, Exposure, Image capture, Postprocessing, Video Processing, Other Computer Engineering, Photography
- Abstract
Traditional photographic practice, as dictated by the properties of photochemical emulsion film, mechanical apparatus, and human operators, largely treats the sensitivity (gain) and integration interval as coarsely parameterized constants for the entire scene, set no later than the time of exposure. This frame-at-a-time capture and processing model permeates digital cameras and computer image processing. Emerging imaging technologies, such as time domain continuous imaging (TDCI), quanta image sensors (QIS), event cameras, and conventional sensors augmented with computational processing and control, provide opportunities to break out of the frame-oriented paradigm and capture a stream of data describing changes to scene appearance over the capture interval with high temporal precision. Captured scene data can then be computationally post-processed to render images with user control over the time interval being sampled and the gain of integration, not just for each image rendered but for every site in each rendered image, allowing the user to ideally expose each portion of the scene. For example, in a scene that contains a mixture of moving elements some of which are more brightly lit, it becomes possible to render dark and light portions with different gains and potentially overlapping intervals, such that both have good contrast, neither one suffers motion blur, and little to no artifacting occurs at the interfaces. This dissertation represents a preliminary exploration of the properties, application, and tooling required to capture TDCI streams and render images from them in a paradigm that supports functional post-capture manipulation of time and gain.
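As a toy illustration of this post-capture model (with hypothetical names and a simplified sample format, not the dissertation's implementation), per-pixel rendering from time-stamped scene data with a user-chosen interval and gain might look like:

```python
def render_pixel(samples, t0, t1, gain):
    """Integrate time-stamped irradiance samples (start, duration, value)
    over the user-chosen interval [t0, t1) and scale by a per-pixel gain."""
    return gain * sum(v * dt for t, dt, v in samples if t0 <= t < t1)

def render_image(pixel_streams, intervals, gains):
    """Render an image in which every pixel gets its own integration
    interval and gain, so dark and bright regions can be exposed
    independently, with potentially overlapping intervals."""
    return [render_pixel(stream, t0, t1, g)
            for stream, (t0, t1), g in zip(pixel_streams, intervals, gains)]
```

A brightly lit moving region could be rendered with a short interval and low gain while a dark static region uses a long interval and high gain, which is the per-site exposure control the paragraph describes.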
- Published
- 2024
191. Properties and Estimates of an Integral Type Nonconforming Finite Element
- Author
-
Andreev, A. B., Racheva, M. R., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Lirkov, Ivan, editor, Margenov, Svetozar, editor, and Waśniewski, Jerzy, editor
- Published
- 2012
- Full Text
- View/download PDF
192. Post Mining of Diversified Multiple Decision Trees for Actionable Knowledge Discovery
- Author
-
Subramani, Sudha, Balasubramaniam, Sathiyabhama, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Thilagam, P. Santhi, editor, Pais, Alwyn Roshan, editor, Chandrasekaran, K., editor, and Balakrishnan, N., editor
- Published
- 2012
- Full Text
- View/download PDF
193. Practical guidance to identify and troubleshoot suboptimal DSC-MRI results.
- Author
-
Prah MA and Schmainda KM
- Abstract
Relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) perfusion MR imaging (pMRI) has been shown to be a robust marker of neuroradiological tumor burden. Recent consensus recommendations on pMRI acquisition strategies have provided a pathway for pMRI inclusion in diverse patient care centers, regardless of size or experience. However, even with proper implementation and execution of the DSC-MRI protocol, issues will arise that many centers may not easily recognize or be aware of. Furthermore, missed pMRI issues are not always apparent in the resulting rCBV images, potentially leading to inaccurate or missed radiological diagnoses. Therefore, we gathered, from our database of DSC-MRI datasets, true-to-life examples showcasing breakdowns in acquisition, postprocessing, and interpretation, along with appropriate mitigation strategies where possible. The pMRI issues addressed include those related to image acquisition and postprocessing, with a focus on contrast agent administration, timing, and rate, signal-to-noise quality, and susceptibility artifact. The goal of this work is to provide guidance to minimize and recognize pMRI issues to ensure that only quality data are interpreted., Competing Interests: Imaging Biometrics LLC (KMS-financial interest), IQ-AI Ltd (KMS-ownership interest), Prism Clinical Imaging Inc (KMS-ownership interest, board membership). The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest., (© 2024 Prah and Schmainda.)
- Published
- 2024
- Full Text
- View/download PDF
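The rCBV quantity this abstract centers on is conventionally computed as the area under the contrast-induced ΔR2*(t) curve, where ΔR2*(t) = -ln(S(t)/S0)/TE. A minimal leakage-free sketch of that computation follows; the function name and array layout are illustrative, not the authors' pipeline:

```python
import numpy as np

def rcbv_map(signal, te, n_baseline):
    """Relative cerebral blood volume from a DSC-MRI signal-time series.

    signal:     (T, H, W) gradient-echo signal intensities over time.
    te:         echo time in seconds.
    n_baseline: number of pre-bolus timepoints used to estimate S0.
    """
    s0 = signal[:n_baseline].mean(axis=0)   # pre-contrast baseline signal
    delta_r2s = -np.log(signal / s0) / te   # convert signal to ΔR2*(t) per voxel
    delta_r2s[:n_baseline] = 0.0            # baseline noise does not contribute
    return delta_r2s.sum(axis=0)            # discrete area under ΔR2*(t) ∝ CBV
```

Clinical pipelines add leakage correction and normalization to normal-appearing white matter, which is precisely where the postprocessing pitfalls the paper catalogs tend to arise.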
194. Improving magnetic resonance spectroscopy in the brainstem periaqueductal gray using spectral registration.
- Author
-
Sirucek L, Zoelch N, and Schweinhardt P
- Subjects
- Female, Humans, Signal-To-Noise Ratio, Magnetic Resonance Spectroscopy methods, Brain Stem, Water metabolism, Periaqueductal Gray diagnostic imaging, Brain metabolism
- Abstract
Purpose: Functional understanding of the periaqueductal gray (PAG), a clinically relevant brainstem region, can be advanced using ¹H-MRS. However, the PAG's small size and high levels of physiological noise are methodologically challenging. This study aimed to (1) improve ¹H-MRS quality in the PAG using spectral registration for frequency and phase error correction; (2) investigate whether spectral registration is particularly useful in cases of greater head motion; and (3) examine metabolite quantification using literature-based or individual-based water relaxation times., Methods: Spectra were acquired in 33 healthy volunteers (50.1 years, SD = 17.19, 18 females) on a 3 T Philips MR system using a point-resolved spectroscopy (PRESS) sequence optimized with very selective saturation pulses (OVERPRESS) and voxel-based flip angle calibration (effective volume of interest size: 8.8 × 10.2 × 12.2 mm³). Spectra were fitted using LCModel, and SNR, NAA peak linewidths and Cramér-Rao lower bounds (CRLBs) were measured after spectral registration and after minimal frequency alignment., Results: Spectral registration improved SNR by 5% (p = 0.026, median value post-correction: 18.0) and spectral linewidth by 23% (p < 0.001, 4.3 Hz), and reduced the metabolites' CRLBs by 1% to 15% (p < 0.026). Correlational analyses revealed smaller SNR improvements with greater head motion (p = 0.010), recorded using a markerless motion tracking system. Higher metabolite concentrations were detected using individual-based compared to literature-based water relaxation times (p < 0.001)., Conclusion: This study demonstrates high-quality ¹H-MRS acquisition in the PAG using spectral registration. This shows promise for future ¹H-MRS studies in the PAG and possibly other clinically relevant brain regions with similar methodological challenges., (© 2023 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals LLC on behalf of International Society for Magnetic Resonance in Medicine.)
- Published
- 2024
- Full Text
- View/download PDF
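The frequency-and-phase alignment that spectral registration performs on individual transients can be sketched in a few lines. This is a simplified time-domain variant, estimating each transient's offsets from the phase of its product with a conjugated reference, whereas the published method fits them by nonlinear least squares; all names here are illustrative:

```python
import numpy as np

def spectral_register(fids, ref, dt):
    """Frequency-and-phase align FIDs (time-domain transients) to a reference.

    fids: (N, T) complex free-induction decays.
    ref:  (T,) complex reference transient (e.g. the first, or the mean).
    dt:   dwell time in seconds.
    """
    t = np.arange(ref.size) * dt
    out = np.empty_like(fids)
    for i, fid in enumerate(fids):
        # Phase of fid·conj(ref) is ≈ 2π·df·t + phi for a shifted transient.
        phase = np.unwrap(np.angle(fid * np.conj(ref)))
        # Weighted linear fit, weighting early high-SNR samples more heavily.
        slope, phi = np.polyfit(t, phase, 1, w=np.abs(fid))
        df = slope / (2 * np.pi)
        out[i] = fid * np.exp(-1j * (2 * np.pi * df * t + phi))
    return out
```

Averaging the aligned transients instead of the raw ones is what yields the SNR and linewidth gains the study reports.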
195. From bytes to bites: Advancing the food industry with three-dimensional food printing.
- Author
-
Hamilton AN, Mirmahdi RS, Ubeyitogullari A, Romana CK, Baum JI, and Gibson KE
- Subjects
- Food Industry, Nutrients, Food Technology, Printing, Three-Dimensional, Food
- Abstract
The rapid advancement of three-dimensional (3D) printing (i.e., a type of additive manufacturing) technology has brought about significant advances in various industries, including the food industry. Among its many potential benefits, 3D food printing offers a promising solution to deliver products meeting the unique nutritional needs of diverse populations while also promoting sustainability within the food system. However, this is an emerging field, and there are several aspects to consider when planning for use of 3D food printing for large-scale food production. This comprehensive review explores the importance of food safety when using 3D printing to produce food products, including pathogens of concern, machine hygiene, and cleanability, as well as the role of macronutrients and storage conditions in microbial risks. Furthermore, postprocessing factors such as packaging, transportation, and dispensing of 3D-printed foods are discussed. Finally, this review delves into barriers of implementation of 3D food printers and presents both the limitations and opportunities of 3D food printing technology., (© 2024 Institute of Food Technologists®.)
- Published
- 2024
- Full Text
- View/download PDF
196. Generating and Postprocessing of Biclusters from Discrete Value Matrices
- Author
-
Michalak, Marcin, Stawarz, Magdalena, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Jędrzejowicz, Piotr, editor, Nguyen, Ngoc Thanh, editor, and Hoang, Kiem, editor
- Published
- 2011
- Full Text
- View/download PDF
197. A staggered DG method of minimal dimension for the Stokes equations on general meshes.
- Author
-
Zhao, Lina, Park, Eun-Jae, and Shin, Dong-wook
- Subjects
- *
DIMENSIONAL analysis , *STOKES equations , *NUMERICAL grid generation (Numerical analysis) , *GALERKIN methods , *QUADRILATERALS - Abstract
Abstract In this paper, a locally conservative, lowest order staggered discontinuous Galerkin method is developed for the Stokes equations. The proposed method allows rough grids and is based on the partition of the domain into arbitrary shapes of quadrilaterals or polygons, which makes the method highly desirable for practical applications. A priori error analysis covering low regularity is demonstrated. A new postprocessing scheme for the velocity, yielding faster convergence, is constructed. Furthermore, adaptive mesh refinement is particularly attractive on quadrilateral and polygonal meshes since hanging nodes are allowed. Therefore, we propose two guaranteed-type error estimators, in the L² error of the stress and the energy error of the postprocessed velocity, respectively. Numerical experiments confirm our theoretical findings and illustrate the flexibility of the proposed method and the accuracy of the guaranteed upper bounds. Highlights • Piecewise constant approximations of the Stokes problem on general meshes are developed. • The method can be flexibly applied to rough grids and the implementation is simple. • A new postprocessing scheme is developed based on the superconvergence of the velocity. • Two guaranteed-type error estimators are proposed for the stress and the velocity. • Numerical results are given to illustrate the performance of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
198. How to obtain diagnostic planes of the fetal central nervous system using three-dimensional ultrasound and a context-preserving rendering technology.
- Author
-
Dall'Asta, Andrea, Paramasivam, Gowrishankar, Basheer, Sheikh Nigel, Whitby, Elspeth, Tahir, Zubair, and Lees, Christoph
- Subjects
CENTRAL nervous system ,ANATOMICAL planes ,MAGNETIC resonance imaging ,FETAL monitoring ,IMAGE processing - Abstract
The antenatal evaluation of the fetal central nervous system (CNS) is among the most difficult tasks of prenatal ultrasound (US), requiring technical skills in relation to ultrasound and image acquisition as well as knowledge of CNS anatomy and how this changes with gestation. According to the International Guidelines for fetal neurosonology, the basic assessment of fetal CNS is most frequently performed on the axial planes, whereas the coronal and sagittal planes are required for the multiplanar evaluation of the CNS within the context of fetal neurosonology. It can be even more technically challenging to obtain "nonaxial" views with 2-dimensional (2D) US. The modality of 3-dimensional (3D) US has been suggested as a panacea to overcome the technical difficulties of achieving nonaxial views. The lack of familiarity of most sonologists with the use of 3D US and its related processing techniques may preclude its use even where it could play an important role in complementing antenatal 2D US assessment. Furthermore, once a 3D volume has been acquired, proprietary software allows it to be processed in different ways, leading to multiple ways of displaying and analyzing the same anatomical imaging or plane. These are difficult to learn and time consuming in the absence of specific training. In this article, we describe the key steps for volume acquisition of a 3D US volume, manipulation, and processing with reference to images of the fetal CNS, using a newly developed context-preserving rendering technique. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
199. Multi‐modal functional MRI to explore placental function over gestation.
- Author
-
Hutter, Jana, Slator, Paddy J., Jackson, Laurence, Gomes, Ana Dos Santos, Ho, Alison, Story, Lisa, O'Muircheartaigh, Jonathan, Teixeira, Rui P. A. G., Chappell, Lucy C., Alexander, Daniel C., Rutherford, Mary A., and Hajnal, Joseph V.
- Abstract
Purpose: To investigate, visualize and quantify the physiology of the human placenta in several dimensions: functional, temporal over gestation, and spatial over the whole organ. Methods: Bespoke MRI techniques, combining a rich diffusion protocol, anatomical data and T2* mapping, together with a multi-modal pipeline including motion correction and extracted quantitative features, were developed and employed on pregnant women between 22 and 38 weeks gestational age, including two pregnancies diagnosed with pre-eclampsia. Results: A multi-faceted assessment was demonstrated, showing trends of increasing lacunarity and decreasing T2* and diffusivity over gestation. Conclusions: The obtained multi-modal acquisition and quantification show promising opportunities for studying evolution, adaptation and compensation processes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
200. Thermo-mechanical modelling of laminated glass with the use of two-dimensional in-plane mesh.
- Author
-
Pluciński, P. and Jaśkowiec, J.
- Subjects
- *
COUPLED problems (Complex systems) , *LAMINATED glass , *POLYMER films , *HEAT transfer , *THERMAL properties - Abstract
Abstract The three-dimensional (3D) numerical modelling of a laminated glass (LG) plate subjected to coupled thermo-mechanical loading is the scope of this paper. The method called FEM23 is applied, in which a 2D in-plane mesh is used while full 3D results are obtained. In any LG plate, glass panes are bonded by very thin polymer films, so the layered structure consists of alternating thick glass and thin polymer layers. Additionally, the thermal and mechanical properties of the glass and the bonding polymer differ significantly. FEM23 is well suited to analysing this kind of structure. The full 3D results of the coupled problem are obtained by special FEM23 postprocessing. FEM23 is a relatively simple, robust and effective method, and the 3D thermo-mechanical results obtained are correct for both stationary and non-stationary heat transport. The accuracy of the method has been verified against solutions obtained from the ABAQUS system. The examples presented in the article include two-, three- and four-paned LG plates. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF