8 results for "*SCIENTIFIC computing"
Search Results
2. GPU-HADVPPM4HIP V1.0: higher model accuracy on China's domestically GPU-like accelerator using heterogeneous compute interface for portability (HIP) technology to accelerate the piecewise parabolic method (PPM) in an air quality model (CAMx V6.10).
- Author
- Kai Cao, Qizhong Wu, Lingling Wang, Hengliang Guo, Nan Wang, Huaqiong Cheng, Xiao Tang, Lina Liu, Dongqing Li, Hao Wu, and Lanning Wang
- Subjects
- *MESSAGE passing (Computer science), *HETEROGENEOUS computing, *AIR quality, *GRAPHICS processing units, *SCIENTIFIC computing - Abstract
Graphics processing units (GPUs) are becoming a compelling acceleration strategy for geoscience numerical models due to their powerful computing performance. In this study, AMD's heterogeneous compute interface for portability (HIP) was used to port the GPU-accelerated Piecewise Parabolic Method (PPM) solver (GPU-HADVPPM) from NVIDIA GPUs to China's domestically produced GPU-like accelerators as GPU-HADVPPM4HIP, and a multi-level hybrid parallelism scheme was further introduced to improve the overall computational performance of the HIP version of the CAMx model (CAMx-HIP) on a Chinese domestic heterogeneous cluster. The experimental results show that the acceleration effect of GPU-HADVPPM on the different GPU accelerators becomes more pronounced as the computing scale grows, with a maximum speedup of 28.9 times on the domestic GPU-like accelerator. Hybrid parallelism with the message passing interface (MPI) and HIP achieves up to a 17.2 times speedup when 32 CPU cores and GPU-like accelerators are configured on the domestic heterogeneous cluster, and OpenMP is introduced to further reduce the computation time of the CAMx-HIP model by a factor of 1.9. More importantly, a comparison of the simulation results of GPU-HADVPPM on NVIDIA GPUs and on domestic GPU-like accelerators shows that the results on the domestic GPU-like accelerators deviate less, which may be related to the fact that NVIDIA GPUs sacrifice part of their accuracy for improved computing performance. All in all, the domestic GPU-like accelerators are more accurate for scientific computing in the field of geoscience numerical models. 
Furthermore, we show that the data transfer efficiency between the CPU and GPU has an important impact on heterogeneous computing, and we point out that optimizing this transfer efficiency is one of the important directions for improving the computing efficiency of geoscience numerical models on heterogeneous clusters in the future. [ABSTRACT FROM AUTHOR]
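The abstract's closing remark on CPU-GPU transfer explains why kernel speedup alone overstates the gain from offloading. A minimal, purely illustrative Python sketch of that arithmetic (all timings are hypothetical placeholders, not values from the paper):

```python
# Illustrative sketch (not from the paper): how host<->device data
# transfer overhead limits the effective speedup of an offloaded kernel.

def effective_speedup(t_cpu: float, t_kernel: float, t_transfer: float) -> float:
    """Speedup of GPU offloading once CPU<->GPU transfer time is included."""
    return t_cpu / (t_kernel + t_transfer)

# Hypothetical numbers: a kernel that is 28.9x faster in pure compute
# loses half its advantage if transfer takes as long as the kernel itself.
t_cpu = 28.9      # seconds on the CPU (placeholder)
t_kernel = 1.0    # seconds of GPU compute (placeholder)
print(effective_speedup(t_cpu, t_kernel, 0.0))  # ideal, no transfer cost
print(effective_speedup(t_cpu, t_kernel, 1.0))  # transfer halves the gain
```

This is why the authors single out transfer efficiency as a key optimization target for heterogeneous clusters.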
- Published
- 2024
- Full Text
- View/download PDF
3. The EarthCARE mission: science data processing chain overview.
- Author
- Eisinger, Michael, Marnas, Fabien, Wallace, Kotska, Kubota, Takuji, Tomiyama, Nobuhiro, Ohno, Yuichi, Tanaka, Toshiyuki, Tomita, Eichi, Wehr, Tobias, and Bernaerts, Dirk
- Subjects
- *SCIENTIFIC computing, *HEAT flux, *ATMOSPHERIC models, *AEROSOLS - Abstract
The Earth Cloud Aerosol and Radiation Explorer (EarthCARE) is a satellite mission implemented by the European Space Agency (ESA) in cooperation with the Japan Aerospace Exploration Agency (JAXA) to measure vertical profiles of aerosols, clouds, and precipitation properties together with radiative fluxes and derived heating rates. The data will be used in particular to evaluate the representation of clouds, aerosols, precipitation, and associated radiative fluxes in weather forecasting and climate models. The satellite carries four instruments: the ATmospheric LIDar (ATLID), the Cloud Profiling Radar (CPR), the Multi-Spectral Imager (MSI), and the Broadband Radiometer (BBR). The science data acquired by the four satellite instruments are processed on the ground. Calibrated instrument data – level 1 data products – and retrieved geophysical data products – level 2 data products – are produced in the ESA and JAXA ground segments. This paper provides an overview of the data processing chains of ESA and JAXA and explains the instrument level 1 data products and the main aspects of the calibration algorithms. Furthermore, an overview of the level 2 data products, with references to the respective dedicated papers, is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. DEFINING A METHODOLOGY FOR INTEGRATING SEMANTIC, GEOSPATIAL, AND TEMPORAL TECHNIQUES FOR CONFLICT ANALYSIS.
- Author
- Obukhov, T. and Brovelli, M. A.
- Subjects
- SCIENTIFIC computing, TERRORIST organizations, POLITICAL scientists, MACHINE learning, NON-state actors (International relations), CLIMATE change - Abstract
Globally, the absolute number of war deaths has been declining since 1946. And yet, conflict and violence are currently on the rise, with many conflicts today waged between non-state actors such as political militias, criminal groups, and international terrorist groups. Unresolved regional tensions, a breakdown in the rule of law, absent or co-opted state institutions, illicit economic gain, and the scarcity of resources exacerbated by climate change have become dominant drivers of conflict (UN, A new era of conflicts, 2022). In the era of modern technology, data science, machine learning, and AI, the available data should be used to analyze, understand, and possibly predict the outbreak of conflicts in various parts of the world. Moreover, it should provide political scientists with tools for a deeper understanding of political processes and enhance their decision-making. This paper focuses on applying data science techniques to process and analyze data in three data analysis domains: semantic, geospatial, and temporal analysis. It describes the possible sources of the conflict data and other datasets used for the analytics mentioned above. The data is used for research and experimental purposes only. These analytical processes provide the mechanisms to explore the historical data and identify the potential causes of conflicts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. SciKit-GStat 1.0: a SciPy-flavored geostatistical variogram estimation toolbox written in Python.
- Author
- Mälicke, Mirko
- Subjects
- *GEOLOGICAL statistics, *VARIOGRAMS, *PYTHON programming language, *DATA structures, *SCIENTIFIC computing - Abstract
Geostatistical methods are widely used in almost all geoscientific disciplines, e.g., for interpolation, rescaling, data assimilation or modeling. At its core, geostatistics aims to detect, quantify, describe, analyze and model spatial covariance of observations. The variogram, a tool to describe this spatial covariance in a formalized way, is at the heart of every such method. Unfortunately, many applications of geostatistics focus on the interpolation method or the result rather than the quality of the estimated variogram, not least because estimating a variogram is commonly left as a task for computers, and some software implementations do not even show a variogram to the user. This is an oversight, because the quality of the variogram largely determines whether the application of geostatistics makes sense at all. Furthermore, until a few years ago the Python programming language was missing a mature, well-established and tested package for variogram estimation. Here I present SciKit-GStat, an open-source Python package for variogram estimation that fits well into established frameworks for scientific computing and puts the focus on the variogram before more sophisticated methods are applied. SciKit-GStat is written in a mutable, object-oriented way that mimics the typical geostatistical analysis workflow. Its main strength is its ease of use and interactivity, and it is therefore usable with little or even no knowledge of Python. During the last few years, other libraries covering geostatistics for Python have developed alongside SciKit-GStat. Today, the most important ones can be interfaced by SciKit-GStat. Additionally, established data structures for scientific computing are reused internally to keep the user from having to learn complex data models just to use SciKit-GStat. 
Common data structures along with powerful interfaces enable the user to use SciKit-GStat together with other packages in established workflows rather than forcing the user to stick to the author's programming paradigms. SciKit-GStat ships with a large number of predefined procedures, algorithms and models, such as variogram estimators, theoretical spatial models or binning algorithms. Common approaches to estimating variograms are covered and can be used out of the box. At the same time, the base class is very flexible and can be adjusted to less common problems as well. Last but not least, care was taken to aid the user in implementing new procedures or even extending the core functionality, so that SciKit-GStat can be extended to uncovered use cases. With broad documentation, a user guide, tutorials and good unit-test coverage, SciKit-GStat enables the user to focus on variogram estimation rather than implementation details. [ABSTRACT FROM AUTHOR]
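For readers new to variograms, the quantity SciKit-GStat estimates can be sketched in a few lines. The stdlib-only Python snippet below implements the classical Matheron estimator (half the mean squared difference of point pairs per lag bin); it is a conceptual illustration with made-up coordinates and values, not SciKit-GStat's actual implementation:

```python
import math
from itertools import combinations

def empirical_variogram(coords, values, bin_edges):
    """Classical (Matheron) estimator: for each lag bin, half the mean
    squared difference over all point pairs whose separation distance
    falls into that bin."""
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    for i, j in combinations(range(len(coords)), 2):
        h = math.dist(coords[i], coords[j])  # pair separation distance
        for b in range(len(bin_edges) - 1):
            if bin_edges[b] <= h < bin_edges[b + 1]:
                sums[b] += (values[i] - values[j]) ** 2
                counts[b] += 1
                break
    return [s / (2 * c) if c else float("nan") for s, c in zip(sums, counts)]

# Toy data: four points, two lag bins.
coords = [(0, 0), (1, 0), (0, 1), (2, 2)]
values = [1.0, 2.0, 2.0, 5.0]
print(empirical_variogram(coords, values, [0.0, 1.5, 3.5]))
```

In SciKit-GStat itself, the `Variogram` class wraps this kind of estimation together with lag binning, theoretical model fitting and plotting.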
- Published
- 2022
- Full Text
- View/download PDF
6. SciKit-GStat 1.0: A SciPy flavoured geostatistical variogram estimation toolbox written in Python.
- Author
- Mälicke, Mirko
- Subjects
- *VARIOGRAMS, *GEOLOGICAL statistics, *PYTHON programming language, *SCIENTIFIC computing, *COMPUTER software, *DATA structures - Abstract
Geostatistical methods are widely used in almost all geoscientific disciplines, e.g. for interpolation, re-scaling, data assimilation or modelling. At its core, geostatistics aims to detect, quantify, describe, analyze and model spatial covariance of observations. The variogram, a tool to describe this spatial covariance in a formalized way, is at the heart of every such method. Unfortunately, many applications of geostatistics focus on the interpolation method or the result rather than the quality of the estimated variogram, not least because estimating a variogram is commonly left to computers and some software implementations do not even show a variogram to the user. This is an oversight, because the quality of the variogram largely determines whether the application of geostatistics makes sense at all. Furthermore, until a few years ago the Python programming language was missing a mature, well-established and tested package for variogram estimation. Here I present SciKit-GStat, an open-source Python package for variogram estimation that fits well into established frameworks for scientific computing and puts the focus on the variogram before more sophisticated methods are applied. SciKit-GStat is written in a mutable, object-oriented way that mimics the typical geostatistical analysis workflow. Its main strength is its ease of use and interactivity, and it is therefore usable with little or even no knowledge of Python. During the last few years, other libraries covering geostatistics for Python have developed alongside SciKit-GStat. Today, the most important ones can be interfaced by SciKit-GStat. Additionally, established data structures for scientific computing are reused internally to keep the user from having to learn complex data models just to use SciKit-GStat. 
Common data structures along with powerful interfaces enable the user to use SciKit-GStat together with other packages in established workflows, rather than forcing the user to stick to the author's programming paradigms. SciKit-GStat ships with a large number of predefined procedures, algorithms and models, such as variogram estimators, theoretical spatial models or binning algorithms. Common approaches to estimating variograms are covered and can be used out of the box. At the same time, the base class is very flexible and can be adjusted to less common problems as well. Last but not least, care was taken to aid the user in implementing new procedures or even extending the core functionality, so that SciKit-GStat can be extended to uncovered use cases. With broad documentation, a user guide, tutorials and good unit-test coverage, SciKit-GStat enables the user to focus on variogram estimation rather than implementation details. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. AstroGeoVis v1.0: Astronomical Visualizations and Scientific Computing for Earth Science Education.
- Author
- Kostadinov, Tihomir S.
- Subjects
- *EARTH science education, *EARTH system science, *SCIENTIFIC computing, *SUSTAINABLE design, *SCIENTIFIC visualization, *PHYSICAL geography, *COMPUTER science - Abstract
Modern climate science, Earth system science, physical geography, oceanography, meteorology, and related disciplines have increasingly turned into highly quantitative, computational fields dealing with the processing, analysis and visualization of large numerical data sets. Students of these and many other disciplines thus need to acquire robust scientific computing and data analysis skills, which have universal applicability. In addition, the increasing economic importance and environmental significance of solar power and of sustainable practices such as passive building design have recently made understanding the apparent motions of the Sun on the celestial sphere important to a wider array of students and professionals. In this paper, I introduce and describe AstroGeoVis v1.0: open-source software that calculates solar coordinates and related parameters and produces astronomical visualizations relevant to the Earth and climate sciences. The software is written in MATLAB®; while its primary intended purpose is pedagogical, research use is envisioned as well. Both the visualizations and the code are intended to be used in the classroom in a variety of courses at a variety of levels (targeting high school students to undergraduates), including Earth and climate sciences, geography, physics, astronomy, mathematics, statistics and computer science. I provide examples of classroom use and assignment ideas, as well as examples of ways I have used these resources in my own college-level teaching. [ABSTRACT FROM AUTHOR]
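AstroGeoVis itself is written in MATLAB, but the kind of solar-position quantity it computes is easy to illustrate. Below is a Python sketch of one standard textbook approximation (Cooper's formula) for the Sun's declination as a function of day of year; this is a generic approximation, not AstroGeoVis code:

```python
import math

def solar_declination_deg(day_of_year: int) -> float:
    """Cooper's approximation of the Sun's declination, in degrees.
    The declination oscillates between about -23.45 and +23.45 degrees
    over the year, driving the seasons and solar-panel geometry."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

print(round(solar_declination_deg(172), 1))  # near the June solstice, close to +23.4
print(round(solar_declination_deg(355), 1))  # near the December solstice, close to -23.4
print(round(solar_declination_deg(81), 1))   # near the March equinox, close to 0
```

A formula like this is the starting point for the solar altitude/azimuth and sun-path visualizations the abstract describes.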
- Published
- 2021
- Full Text
- View/download PDF
8. AN OPTIMIZED SFC APPROACH FOR ND WINDOW QUERYING ON POINT CLOUDS.
- Author
- Liu, H., Van Oosterom, P., Meijers, M., and Verbree, E.
- Subjects
- POINT cloud, DATA management, DIMENSION reduction (Statistics), SCIENTIFIC computing, DATA transmission systems, STATISTICS - Abstract
The dramatically increasing collection of point clouds raises an essential demand for highly efficient data management and can also facilitate modern applications such as robotics and virtual reality. Extensive studies have been performed on point data management and querying, but most of them concentrate on low-dimensional spaces. High-dimensional data management solutions from computer science have not considered the special features of spatial data, so they may not be optimal. A Space Filling Curve (SFC) based approach, PlainSFC, which is capable of nD point querying, has been proposed and tested in low-dimensional spaces. However, its efficiency in nD space is still unknown. Besides that, PlainSFC performs poorly on skewed data querying. This paper develops HistSFC, which utilizes point distribution information to improve the querying efficiency on skewed data. The paper then presents a statistical analysis of how PlainSFC and HistSFC perform as dimensionality increases. By experimenting on simulated nD data and real data, we confirmed the deduced patterns: for inhomogeneous data querying, the false positive rate (FPR) of PlainSFC increases drastically as dimensionality goes up, and HistSFC alleviates this deterioration to a large extent. Despite performance degradation in ultra-high-dimensional spaces, HistSFC can be applied with high efficiency for most spatial applications. The generic theoretical framework developed also allows us to study related topics such as visualization and data transmission in the future. [ABSTRACT FROM AUTHOR]
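The SFC idea behind PlainSFC and HistSFC can be illustrated with the simplest such curve, the Morton (Z-order) curve, which maps nD points to 1D keys by bit interleaving. The sketch below is a generic 2D illustration, not the paper's implementation:

```python
def morton2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into one Z-order (Morton) key.
    Points close together in 2D tend to get nearby 1D keys, which is
    what makes SFC keys indexable for window (range) queries."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)      # x bits at even positions
        key |= ((y >> b) & 1) << (2 * b + 1)  # y bits at odd positions
    return key

# The four cells of a 2x2 grid trace the curve's "Z" shape: 0, 1, 2, 3.
print([morton2d(x, y, bits=1) for (x, y) in [(0, 0), (1, 0), (0, 1), (1, 1)]])
```

A window query then scans key ranges instead of raw coordinates; the "false positives" discussed in the abstract are points whose keys fall inside the scanned ranges but whose coordinates lie outside the query window.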
- Published
- 2020
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library