4,971 results for "Scientific Computing"
Search Results
2. Extending GPU-accelerated Gaussian integrals in the TeraChem software package to f type orbitals: Implementation and applications.
- Author
-
Wang, Yuanheng, Hait, Diptarka, Johnson, K. Grace, Fajen, O. Jonathan, Zhang, Juncheng Harry, Guerrero, Rubén D., and Martínez, Todd J.
- Subjects
- *
ANGULAR momentum (Mechanics) , *TRANSITION metal complexes , *WATER clusters , *DENSITY functional theory , *SCIENTIFIC computing , *GRAPHICS processing units - Abstract
The increasing availability of graphics processing units (GPUs) for scientific computing has prompted interest in accelerating quantum chemical calculations through their use. However, the complexity of integral kernels for high angular momentum basis functions often limits the utility of GPU implementations with large basis sets or for metal containing systems. In this work, we report the implementation of f function support in the GPU-accelerated TeraChem software package through the development of efficient kernels for the evaluation of Hamiltonian integrals. The high efficiency of the resulting code is demonstrated through density functional theory (DFT) calculations on increasingly large organic molecules and transition metal complexes, as well as coupled cluster singles and doubles calculations on water clusters. Preliminary investigations into Ni(I) catalysis with DFT and the photochemistry of MnH(CH3) with complete active space self-consistent field are also carried out. Overall, our GPU-accelerated software appears to be well-suited for fast simulation of large transition metal containing systems, as well as organic molecules. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. PyGrossone: A Python Library for the Infinity Computer
- Author
-
Falcone, Alberto, Garro, Alfredo, Sergeyev, Yaroslav D., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Sergeyev, Yaroslav D., editor, Kvasov, Dmitri E., editor, and Astorino, Annabella, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Discovering 3D Hidden Elasticity in Isotropic and Transversely Isotropic Materials with Physics-informed UNets
- Author
-
Kamali, Ali and Laksari, Kaveh
- Subjects
Engineering ,Biomedical Engineering ,Bioengineering ,Biomedical Imaging ,Biotechnology ,digital volume correlation ,model-based elastography ,physics-informed deep learning ,scientific computing ,tissue biomechanics - Abstract
Three-dimensional variation in structural components or fiber alignments results in complex mechanical property distribution in tissues and biomaterials. In this paper, we use a physics-informed UNet-based neural network model (El-UNet) to discover the three-dimensional (3D) internal composition and space-dependent material properties of heterogeneous isotropic and transversely isotropic materials without a priori knowledge of the composition. We then show the capabilities of El-UNet by validating against data obtained from finite-element simulations of two soft tissues, namely, brain tissue and articular cartilage, under various loading conditions. We first simulated compressive loading of 3D brain tissue comprising distinct white matter and gray matter mechanical properties undergoing small strains with isotropic linear elastic behavior, where El-UNet reached mean absolute relative errors under 1.5% for elastic modulus and Poisson's ratio estimations across the 3D volume. We showed that the 3D solution achieved by El-UNet was superior to relative stiffness mapping by inverse of axial strain and to two-dimensional plane stress/plane strain approximations. Additionally, we simulated a transversely isotropic articular cartilage with known fiber orientations undergoing compressive loading, and accurately estimated the spatial distribution of all five material parameters, with mean absolute relative errors under 5%. Our work demonstrates the application of the computationally efficient physics-informed El-UNet in 3D elasticity imaging and provides methods for translation to experimental 3D characterization of soft tissues and other materials. The proposed El-UNet offers a powerful tool for both in vitro and ex vivo tissue analysis, with potential extensions to in vivo diagnostics. STATEMENT OF SIGNIFICANCE: Elasticity imaging is a technique that reconstructs mechanical properties of tissue using deformation and force measurements. Given the complexity of this reconstruction, most existing methods have focused on 2D problems. Our work is the first implementation of physics-informed UNets to reconstruct three-dimensional material parameter distributions for isotropic and transversely isotropic linear elastic materials from deformation and force measurements. We comprehensively validate our model using synthetic data generated from finite element models of biological tissues with high bio-fidelity: the brain and articular cartilage. Our method can be implemented in elasticity imaging scenarios for in vitro and ex vivo mechanical characterization of biomaterials and biological tissues, with potential extensions to in vivo diagnostics.
- Published
- 2024
5. Employing artificial intelligence to steer exascale workflows with colmena.
- Author
-
Ward, Logan, Pauloski, J. Gregory, Hayot-Sasson, Valerie, Babuji, Yadu, Brace, Alexander, Chard, Ryan, Chard, Kyle, Thakur, Rajeev, and Foster, Ian
- Subjects
- *
COMPUTATIONAL intelligence , *ARTIFICIAL intelligence , *SCIENTIFIC computing , *BIOPHYSICS , *MATERIALS science - Abstract
Computational workflows are a common class of application on supercomputers, yet the loosely coupled and heterogeneous nature of workflows often fails to take full advantage of their capabilities. We created Colmena to leverage the massive parallelism of a supercomputer by using Artificial Intelligence (AI) to learn from and adapt a workflow as it executes. Colmena allows scientists to define how their application should respond to events (e.g., task completion) as a series of cooperative agents. In this paper, we describe the design of Colmena, the challenges we overcame while deploying applications on exascale systems, and the science workflows we have enhanced through interweaving AI. The scaling challenges we discuss include developing steering strategies that maximize node utilization, introducing data fabrics that reduce communication overhead of data-intensive tasks, and implementing workflow tasks that cache costly operations between invocations. These innovations coupled with a variety of application patterns accessible through our agent-based steering model have enabled science advances in chemistry, biophysics, and materials science using different types of AI. Our vision is that Colmena will spur creative solutions that harness AI across many domains of scientific computing. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
6. Reviews.
- Author
-
Roth, Kimberly A.
- Subjects
- *
SCIENCE education , *COLLEGE curriculum , *SCIENTIFIC computing , *CALIFORNIA wildfires , *GLAZING (Glass installation) - Published
- 2025
- Full Text
- View/download PDF
7. Model design in data science: engineering design to uncover design processes and anomalies.
- Author
-
Bordas, Antoine, Le Masson, Pascal, and Weil, Benoit
- Subjects
- *
SCIENTIFIC computing , *DESIGN science , *SCIENTIFIC literature , *DATA science , *ENGINEERING design - Abstract
In the current data-rich environment, valorizing data has become a common task in data science and requires the design of a statistical model to transform input data into a desirable output. The literature in data science regarding the design of new models is abundant, while in parallel, other streams of literature, such as the epistemology of science, have shown the relevance of anomalies in model design processes. Anomalies are to be understood as unexpected observations in data, a historical example being Mercury's famous anomalous perihelion precession. Therefore, this paper addresses the various design processes in data science and their relationships to anomalies. To do so, we conceptualize what designing a data science model means, and we derive three design processes based on the latest theories in engineering design. This allows us to formulate assumptions regarding the relationships between each design process and anomalies, which we test with several case studies. Notably, three processes for the design of models in data science are identified and, for each of them, the following information is provided: (1) the knowledge leveraged and generated and (2) the specific relations with anomalies. From a theoretical standpoint, this work is one of the first applications of design methods in data science. This work paves the way for more research at the intersection of engineering design and data science, which could enrich both fields. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. Metamorphic Testing on Scientific Programs for Solving Second‐Order Elliptic Differential Equations.
- Author
-
Yan, Shiyu and Zhu, Hong
- Subjects
ELLIPTIC differential equations ,DIFFERENTIAL equations ,SCIENTIFIC computing ,SCIENTIFIC method ,TEST methods ,COMPUTER software testing - Abstract
Practical scientific computing problems that involve solving differential equations rarely have explicit exact solutions. Therefore, verifying the correctness of such programs has long been a challenge due to the difficulty of producing expected outputs on test cases. In this paper, the principles of metamorphic testing are applied to verify programs that solve second-order elliptic differential equations. We present a testing process specifically tailored for the verification testing of scientific computation programs and integrate it into the process of developing scientific software. Unlike existing approaches, we formally derive metamorphic relations from the numerical models of differential equations built during the development of scientific computing programs. The experimental results clearly show that our approach is effective in detecting faults commonly found in scientific computing programs. It outperforms the fault-detecting ability of the trend method, a traditional testing method for scientific software. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
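A minimal sketch of the metamorphic-testing idea summarized in the entry above, applied to a hypothetical 1D Poisson solver with a linearity metamorphic relation (the paper derives its relations formally from the numerical models; this relation is an illustrative assumption):

```python
import numpy as np

def solve_poisson(f, n=100):
    """Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 by central finite differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior grid points
    # Tridiagonal second-difference operator, scaled by 1/h^2.
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Metamorphic relation (linearity): the problem has no exact-solution oracle, but since
# the operator is linear, the solution for f1 + f2 must equal the sum of the solutions.
f1 = lambda x: np.exp(x)
f2 = lambda x: np.sin(3 * np.pi * x)

x, u1 = solve_poisson(f1)
_, u2 = solve_poisson(f2)
_, u12 = solve_poisson(lambda x: f1(x) + f2(x))

violation = np.max(np.abs(u12 - (u1 + u2)))
print(f"max violation of the linearity relation: {violation:.2e}")
assert violation < 1e-8, "metamorphic relation violated: possible fault in the solver"
```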
9. Two improved nonlinear conjugate gradient methods with application in conditional model regression function.
- Author
-
Elhamid, Mehamdia Abd, Yacine, Chaib, and Tahar, Bechouat
- Subjects
CONJUGATE gradient methods ,NONLINEAR equations ,SCIENTIFIC computing ,LINEAR equations ,REGRESSION analysis - Abstract
The conjugate gradient (CG) method is one of the most important ideas in scientific computing; it is applied to solve linear systems of equations and nonlinear optimization problems. In this paper, based on variants of the Dai-Yuan (DY) and Fletcher-Reeves (FR) methods, two modified CG methods (named IDY and IFR) are presented and analyzed. The search direction of the presented methods fulfills the sufficient descent condition at each iteration. We establish the global convergence of the proposed algorithms under standard assumptions and the strong Wolfe line search. Preliminary numerical results are presented, demonstrating the promise and effectiveness of the proposed methods. Finally, the proposed methods are further extended to solve the problem of the conditional model regression function. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
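A compact sketch of nonlinear conjugate gradient iterations with the classical Fletcher-Reeves and Dai-Yuan update formulas referenced in the entry above (not the paper's modified IDY/IFR variants), using a simple backtracking line search in place of the strong Wolfe search assumed in the analysis:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-8, max_iter=2000):
    """Minimize f with nonlinear CG; beta_rule selects the Fletcher-Reeves or Dai-Yuan update."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                            # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                                # safeguard: restart if not a descent direction
            d = -g
        # Backtracking (Armijo) line search -- a simplification of the strong Wolfe search.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if beta_rule == "FR":                         # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        else:                                         # Dai-Yuan
            beta = (g_new @ g_new) / (d @ (g_new - g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, x0=[-1.2, 1.0]))
```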
10. Selective learning for sensing using shift-invariant spectrally stable undersampled networks.
- Author
-
Verma, Ankur, Goyal, Ayush, Sarma, Sanjay, and Kumara, Soundar
- Subjects
- *
REMOTE submersibles , *SAMPLING theorem , *ARTIFICIAL intelligence , *DATA augmentation , *SCIENTIFIC computing - Abstract
The amount of data collected for sensing tasks in scientific computing is based on the Shannon-Nyquist sampling theorem proposed in the 1940s. Sensor data generation will surpass 73 trillion GB by 2025 as we increase the high-fidelity digitization of the physical world. Data infrastructure costs, and the time needed to maintain and compute on all this data, are skyrocketing. To address this, we introduce a selective learning approach, where the amount of data collected is problem dependent. We develop novel shift-invariant and spectrally stable neural networks to solve real-time sensing problems formulated as classification or regression problems. We demonstrate that (i) less data can be collected while preserving information, and (ii) test accuracy improves with data augmentation (size of training data), rather than by collecting more than a certain fraction of raw data, unlike information-theoretic approaches. While sampling at Nyquist rates, not every data point has to be resolved at the Nyquist rate, and the network learns the amount of data to be collected. This has significant implications (orders-of-magnitude reduction) for the amount of data collected, computation, power, time, bandwidth, and latency required for several embedded applications ranging from the low-Earth-orbit economy to unmanned underwater vehicles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A Review of Large Language Models: Fundamental Architectures, Key Technological Evolutions, Interdisciplinary Technologies Integration, Optimization and Compression Techniques, Applications, and Challenges.
- Author
-
Han, Songyue, Wang, Mingyu, Zhang, Jialong, Li, Dongdong, and Duan, Junhong
- Subjects
LANGUAGE models ,NATURAL language processing ,COMPUTER vision ,SCIENTIFIC computing ,HALLUCINATIONS (Artificial intelligence) ,MACHINE translating - Abstract
Large language model-related technologies have shown astonishing potential in tasks such as machine translation, text generation, logical reasoning, task planning, and multimodal alignment. Consequently, their applications have continuously expanded from natural language processing to computer vision, scientific computing, and other vertical industry fields. This rapid surge in research work in a short period poses significant challenges for researchers to comprehensively grasp the research dynamics, understand key technologies, and develop applications in the field. To address this, this paper provides a comprehensive review of research on large language models. First, it organizes and reviews the research background and current status, clarifying the definition of large language models in both Chinese and English communities. Second, it analyzes the mainstream infrastructure of large language models and briefly introduces the key technologies and optimization methods that support them. Then, it conducts a detailed review of the intersections between large language models and interdisciplinary technologies such as contrastive learning, knowledge enhancement, retrieval enhancement, hallucination dissolution, recommendation systems, reinforcement learning, multimodal large models, and agents, pointing out valuable research ideas. Finally, it organizes the deployment and industry applications of large language models, identifies the limitations and challenges they face, and provides an outlook on future research directions. Our review paper aims not only to provide systematic research but also to focus on the integration of large language models with interdisciplinary technologies, hoping to provide ideas and inspiration for researchers to carry out industry applications and the secondary development of large language models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations.
- Author
-
Chen, Yifan, Epperly, Ethan N., Tropp, Joel A., and Webber, Robert J.
- Subjects
- *
SCIENCE education , *SCIENTIFIC computing , *MACHINE learning , *ARITHMETIC , *SEMIDEFINITE programming , *ALGORITHMS - Abstract
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-k approximation of an N × N positive-semidefinite (psd) matrix. RPCholesky requires only (k + 1)N entry evaluations and O(k²N) additional arithmetic operations, and it can be implemented with just a few lines of code. The method is particularly useful for approximating a kernel matrix. This paper offers a thorough new investigation of the empirical and theoretical behavior of this fundamental algorithm. For matrix approximation problems that arise in scientific machine learning, experiments show that RPCholesky matches or beats the performance of alternative algorithms. Moreover, RPCholesky provably returns low-rank approximations that are nearly optimal. The simplicity, effectiveness, and robustness of RPCholesky strongly support its use in scientific computing and machine learning applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
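A sketch of the randomly pivoted Cholesky procedure described in the entry above (the "few lines of code"), written for a kernel matrix accessed only through entry evaluations; the RBF kernel and data below are illustrative assumptions:

```python
import numpy as np

def rp_cholesky(entry, diag, n, k, rng=np.random.default_rng(0)):
    """Rank-k approximation A ~ F F^T of an n x n psd matrix given entrywise access.

    entry(i) must return column i of A; diag() must return the diagonal of A.
    Uses (k+1)n entry evaluations and O(k^2 n) extra arithmetic.
    """
    F = np.zeros((n, k))
    d = diag().astype(float)                    # residual diagonal
    for j in range(k):
        s = rng.choice(n, p=d / d.sum())        # pivot sampled proportionally to the diagonal
        col = entry(s) - F[:, :j] @ F[s, :j]    # residual column at the pivot
        F[:, j] = col / np.sqrt(col[s])
        d = np.maximum(d - F[:, j] ** 2, 0.0)   # update residual diagonal, clip rounding noise
    return F

# Usage on a small RBF kernel matrix (illustrative data).
X = np.random.default_rng(1).normal(size=(500, 3))
kernel = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1))
F = rp_cholesky(entry=lambda i: kernel(X, X[i:i + 1])[:, 0],
                diag=lambda: np.ones(len(X)),   # the RBF kernel has a unit diagonal
                n=len(X), k=50)
err = np.linalg.norm(kernel(X, X) - F @ F.T) / np.linalg.norm(kernel(X, X))
print(f"relative approximation error: {err:.3f}")
```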
13. Towards standardized benchmarks of LLMs in software modeling tasks: a conceptual framework
- Author
-
Cámara, Javier, Burgueño, Lola, and Troya, Javier
- Subjects
- *
LANGUAGE models , *SYSTEMS software , *COMPUTER software quality control , *COMPUTER science , *SCIENTIFIC computing - Abstract
The integration of Large Language Models (LLMs) in software modeling tasks presents both opportunities and challenges. This Expert Voice addresses a significant gap in the evaluation of these models, advocating for the need for standardized benchmarking frameworks. Recognizing the potential variability in prompt strategies, LLM outputs, and solution space, we propose a conceptual framework to assess their quality in software model generation. This framework aims to pave the way for standardization of the benchmarking process, ensuring consistent and objective evaluation of LLMs in software modeling. Our conceptual framework is illustrated using UML class diagrams as a running example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Upper triangulation-based infinity norm bounds for the inverse of Nekrasov matrices with applications.
- Author
-
Gao, Lei, Gu, Xian-Ming, Jia, Xiudan, and Li, Chaoqian
- Subjects
- *
MATRIX inversion , *MATRIX norms , *SCIENTIFIC computing , *LINEAR complementarity problem - Abstract
The infinity norm bounds for the inverse of Nekrasov matrices play an important role in scientific computing. In this paper, we propose a triangulation-based approach that can easily be implemented to seek sharper infinity norm bounds for the inverse of Nekrasov matrices. With the help of such sharper bounds, new error estimates for the linear complementarity problem of Nekrasov matrices are presented, and a new infinity norm estimate of the iteration matrix of parallel-in-time methods for an all-at-once system from Volterra partial integro-differential problems is given. Finally, these new bounds are compared with other state-of-the-art results to verify the effectiveness of the proposed results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
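The Nekrasov-matrix bounds of the entry above are not reproduced here; as a simpler relative, the following sketch shows the classical Varah bound for strictly diagonally dominant matrices, the kind of computable infinity-norm bound for the inverse that such results refine:

```python
import numpy as np

def varah_bound(A):
    """Varah's bound: for a strictly diagonally dominant matrix A,
    ||A^{-1}||_inf <= 1 / min_i (|a_ii| - sum_{j != i} |a_ij|)."""
    A = np.asarray(A, dtype=float)
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))   # row sums without the diagonal
    gap = np.abs(np.diag(A)) - off
    if np.any(gap <= 0):
        raise ValueError("matrix is not strictly diagonally dominant")
    return 1.0 / gap.min()

# Usage: compare the bound with the exact infinity norm of the inverse.
A = np.array([[10.0, -2.0, 1.0],
              [ 1.0,  8.0, -3.0],
              [-2.0,  1.0,  9.0]])
exact = np.linalg.norm(np.linalg.inv(A), ord=np.inf)
print(f"exact ||A^-1||_inf = {exact:.4f}, Varah bound = {varah_bound(A):.4f}")
```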
15. Generation of normal distributions revisited.
- Author
-
Umeda, Takayuki
- Subjects
- *
CUMULATIVE distribution function , *RANDOM numbers , *GAUSSIAN distribution , *RANDOM sets , *SCIENTIFIC computing - Abstract
Normally distributed random numbers are commonly used in scientific computing in various fields. It is important to generate a set of random numbers as close to a normal distribution as possible for reducing initial fluctuations. Two types of samples from a uniform distribution are examined as source samples for inverse transform sampling methods. Three types of inverse transform sampling methods with new approximations of inverse cumulative distribution functions are also discussed for converting uniformly distributed source samples to normally distributed samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
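A small sketch of inverse transform sampling for the standard normal distribution as discussed in the entry above, contrasting two kinds of uniform source samples (pseudo-random draws and equally spaced quantile midpoints, assumed here for illustration) and using SciPy's inverse CDF rather than the paper's new approximations:

```python
import numpy as np
from scipy.stats import norm

n = 10_000
rng = np.random.default_rng(0)

# Source samples from the uniform distribution on (0, 1):
u_random = rng.random(n)                         # pseudo-random draws
u_strata = (np.arange(n) + 0.5) / n              # equally spaced quantile midpoints

# Inverse transform sampling: apply the inverse normal CDF (quantile function).
z_random = norm.ppf(u_random)
z_strata = norm.ppf(u_strata)

# The stratified source yields samples whose moments are much closer to N(0, 1),
# i.e., smaller initial fluctuations.
for name, z in [("random source", z_random), ("equally spaced source", z_strata)]:
    print(f"{name}: mean = {z.mean():+.4f}, std = {z.std():.4f}")
```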
16. Modeling performance of data collection systems for high-energy physics.
- Author
-
Olin-Ammentorp, Wilkie, Wu, Xingfu, and Chien, Andrew A.
- Subjects
COMPACT muon solenoid experiment ,DATA acquisition systems ,FILTERING software ,HADRON colliders ,SCIENTIFIC computing - Abstract
Exponential increases in scientific experimental data are outpacing silicon technology progress, necessitating heterogeneous computing systems—particularly those utilizing machine learning (ML)—to meet future scientific computing demands. The growing importance and complexity of heterogeneous computing systems require systematic modeling to understand and predict the effective roles for ML. We present a model that addresses this need by framing the key aspects and constraints of data collection pipelines, combining them with the important technology vectors that shape alternatives, and computing metrics that allow complex alternatives to be compared. For instance, a data collection pipeline may be characterized by parameters such as sensor sampling rates and the overall relevancy of retrieved samples. Alternatives to this pipeline are enabled by development vectors including ML, parallelization, advancing CMOS, and neuromorphic computing. By calculating metrics for each alternative such as overall F1 score, power, hardware cost, and energy expended per relevant sample, our model allows alternative data collection systems to be rigorously compared. We apply this model to the Compact Muon Solenoid experiment and its planned High-Luminosity Large Hadron Collider upgrade, evaluating novel technologies for the data acquisition system (DAQ), including ML-based filtering and parallelized software. The results demonstrate that improvements to early DAQ stages significantly reduce the resources required later, with a power reduction of 60% and increased relevant data retrieval per unit power (from 0.065 to 0.31 samples/kJ). However, we predict that further advances will be required in order to meet overall power and cost constraints for the DAQ. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. A Centrality-Weighted Bidirectional Encoder Representation from Transformers Model for Enhanced Sequence Labeling in Key Phrase Extraction from Scientific Texts.
- Author
-
Zengeya, Tsitsi, Fonou Dombeu, Jean Vincent, and Gwetu, Mandlenkosi
- Subjects
LANGUAGE models ,SCIENTIFIC computing - Abstract
Deep learning approaches, utilizing Bidirectional Encoder Representation from Transformers (BERT) and advanced fine-tuning techniques, have achieved state-of-the-art accuracies in the domain of term extraction from texts. However, BERT presents some limitations in that it primarily captures the semantic context relative to the surrounding text without considering how relevant or central a token is to the overall document content. There has also been research on the application of sequence labeling to contextualized embeddings; however, the existing methods often rely solely on local context for extracting key phrases from texts. To address these limitations, this study proposes a centrality-weighted BERT model for key phrase extraction from text using sequence labeling (CenBERT-SEQ). The proposed CenBERT-SEQ model utilizes BERT to represent terms with various contextual embedding architectures, and introduces a centrality-weighting layer that integrates document-level context into BERT. This layer leverages document embeddings to influence the importance of each term based on its relevance to the entire document. Finally, a linear classifier layer is employed to model the dependencies between the outputs, thereby enhancing the accuracy of the CenBERT-SEQ model. The proposed CenBERT-SEQ model was evaluated against the standard BERT base-uncased model using three Computer Science article datasets, namely, SemEval-2010, WWW, and KDD. The experimental results show that, although the CenBERT-SEQ and BERT-base models achieved comparably high accuracy, the proposed CenBERT-SEQ model achieved higher precision, recall, and F1-score than the BERT-base model. Furthermore, a comparison of the proposed CenBERT-SEQ model with related studies revealed that it achieved higher accuracy, precision, recall, and F1-score, at 95%, 97%, 91%, and 94%, respectively, showing the superior capabilities of the CenBERT-SEQ model in keyphrase extraction from scientific documents. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Formally understanding Rust's ownership and borrowing system at the memory level.
- Author
-
Kan, Shuanglong, Chen, Zhe, Sanán, David, and Liu, Yang
- Subjects
SEMANTICS ,PROGRAMMING languages ,COMPUTER software ,COMPUTER science ,SCIENTIFIC computing - Abstract
Rust is an emergent systems programming language highlighting memory safety through its Ownership and Borrowing System (OBS). Formalizing OBS in semantics is essential in certifying Rust's memory safety guarantees. Existing formalizations of OBS are at the language level. That is, they explain OBS in terms of Rust's constructs. This paper proposes a different view of OBS at the memory level, independent of Rust's constructs. The basic idea of our formalization is mapping the OBS invariants maintained by Rust's type system to memory layouts and checking the invariants for memory operations. Our memory-level formalization of OBS helps people better understand the relationship between OBS and memory safety by narrowing the gap between OBS and memory operations. Moreover, it enables potential reuse of Rust's OBS in other programming languages since memory operations are standard features and our formalization is not bound to Rust's constructs. Based on the memory model, we have developed an executable operational semantics for Rust, called RustSEM, and implemented the semantics in the K-Framework (K). RustSEM covers a much larger subset of the significant language constructs than existing formal semantics for Rust. More importantly, RustSEM can run and verify real Rust programs by exploiting K's execution and verification engines. We have evaluated the semantic correctness of RustSEM with respect to the Rust compiler using around 700 tests. In particular, we have compared our formalization of OBS in the memory model with Rust's type system and identified their differences due to the conservativeness of the Rust compiler. Moreover, our formalization of OBS is helpful in identifying undefined behavior of Rust programs with mixed safe and unsafe operations. We have also evaluated the potential applications of RustSEM in automated runtime and formal verification for functional and memory properties. Experimental results show that RustSEM can enhance Rust's memory safety mechanism, as it is more powerful than OBS in the Rust compiler for detecting memory errors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Sustainability-integrated value stream mapping with process mining.
- Author
-
Horsthofer-Rauch, Julia, Guesken, Sarah Ranjana, Weich, Jonas, Rauch, Alexander, Bittner, Maik, Schulz, Julia, and Zaeh, Michael F.
- Subjects
VALUE stream mapping ,SCIENTIFIC computing ,CLIMATE change ,PROCESS mining ,MANUFACTURING processes - Abstract
Value stream mapping is a well-established tool for analyzing and optimizing value streams in production. In its conventional form, it requires a high level of manual effort and is often inefficient in volatile and high-variance environments. The idea of digitizing value stream mapping to increase efficiency has thus been put forward. A common means suggested for digitization is Process Mining, a field related to Data Science and Process Management. Furthermore, adding sustainability aspects to value stream mapping has also been subject to research. Regarding the ongoing climate crisis and companies' endeavors to improve overall sustainability, integrating sustainability into value stream mapping must be deemed equally relevant. This research paper provides an overview of the state of the art of Process Mining-based and sustainability-integrated value stream mapping, proposes a framework for a combined approach, and presents technical details for the implementation of such an approach, including a validation from practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Data science and automation in the process of theorizing: Machine learning's power of induction in the co-duction cycle.
- Author
-
Kolkman, Daan, Lee, Gwendolyn K., and van Witteloostuijn, Arjen
- Subjects
- *
SCIENTIFIC computing , *DATA science , *MACHINE performance , *DATA analysis , *ABDUCTION - Abstract
Recent calls to take up data science either revolve around the superior predictive performance associated with machine learning or the potential of data science techniques for exploratory data analysis. Many believe that these strengths come at the cost of explanatory insights, which form the basis for theorization. In this paper, we show that this trade-off is false. When used as a part of a full research process, including inductive, deductive and abductive steps, machine learning can offer explanatory insights and provide a solid basis for theorization. We present a systematic five-step theory-building and theory-testing cycle that consists of: 1. Element identification (reduction); 2. Exploratory analysis (induction); 3. Hypothesis development (retroduction); 4. Hypothesis testing (deduction); and 5. Theorization (abduction). We demonstrate the usefulness of this approach, which we refer to as co-duction, in a vignette where we study firm growth with real-world observational data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Mathematical modeling of electroosmotically driven peristaltic propulsion due to transverse deflections of two periodically deformable curved tubes of unequal wavelengths.
- Author
-
Yadav, Pramod Kumar and Roshan, Muhammad
- Subjects
- *
ELECTRIC double layer , *ELECTRIC potential , *SCIENTIFIC computing , *SHEARING force , *FLUID flow - Abstract
The present study aims to investigate the viscid fluid propulsion due to the electroosmosis and transverse deflections of the sinusoidally deformable tubes of unequal wavelengths in the presence of electro-kinetic forces. This situation is estimated from the physical model of physiological fluid flow through a tubular structure in which an artificial flexible tube is being inserted. In this model, both peristaltically deforming tubes are taken in a curved configuration. The flow-governing momentum equations are simplified by the approximation of the long wavelength as compared to the outer tube's radius, whereas the Debye–Hückel approximation is used to simplify the equations that govern the electric potential distribution. Here, the authors have used the DSolve command in the scientific computing software MATHEMATICA 14 to obtain the expressions for electric potential and axial velocity of viscid fluid. In this work, the authors have analyzed the impact of various controlling parameters, such as the electro-physical parameters, curvature parameter, radius ratio, wavelength ratio, and amplitude ratios, on the various flow quantities graphically during the transport of viscid fluid through a curved endoscope. Here, contour plots are also drawn to visualize the streamlines and to observe the impacts of the control parameters on fluid trapping. During the analysis of the results, a noteworthy outcome extracted from the present model is that an increment in electro-physical parameters, such as Helmholtz–Smoluchowski velocity and the Debye–Hückel parameter, are responsible for enhancement in the shear stress at the inner tube's wall and the axial velocity under the influence of electro-kinetic forces. This is because of the electric double layer (EDL) thickness, which gets reduced on strengthening the Debye–Hückel parameter. This reduced EDL thickness is responsible for the enhancement in the axial velocity of the transporting viscid fluid. The present model also suggests that the axial velocity of viscid fluid can be reduced by enhancing the ratio of wavelengths of waves that travel down the walls of the outer curved tube and the inner curved tube. The above-mentioned results can play a significant role in developing and advancing the endoscopes that will be useful in many biomedical processes, such as gastroscopy, colonoscopy, and laparoscopy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. The design of a scientific data management system based on DOMAS at CSNS-II (preliminary stage).
- Author
-
Hu, Peng, Wang, Li, Tang, Ming, Li, Yakang, Chen, Juan, Hu, Hao, Wang, Haofan, Zhuang, Bo, Qi, Fazhi, and Zhang, Junrong
- Subjects
- *
DATABASE management , *DATA management , *DATABASES , *NEUTRON sources , *SCIENTIFIC computing - Abstract
At the second stage of China Spallation Neutron Source (CSNS-II), it is predicted that 2 PB raw experimental data will be produced annually from twenty instruments. Scientific computing puts forward higher requirements for data sharing, utilization, retrieval, analysis efficiency, and security. However, the existing data management system (DMS) based on ICAT has several limitations including poor scalability of metadata database, imperfect data-management lifecycle and inflexible API. To ensure the accuracy, usability, scalability and efficiency of CSNS-II experimental data, a new scientific data management system is therefore designed based on the DOMAS framework developed by the Computing Center of IHEP. The data acquisition, transmission, storage and service systems are re-designed and tailored specifically for CSNS-II. Upon its completion, the new DMS will overcome the existing challenges and offer functions such as online display, search functionality and rapid download capabilities for metadata, raw data and analyzed data; flexible and user-friendly authorization; and data lifecycle management. Ultimately, the implementation of the new Data Management System (DMS) is expected to enhance the efficiency of experimental data analysis, propelling CSNS-II to achieve international advanced standards. Furthermore, it aims to reinforce self-reliance and technological strength in the field of science and technology at a high level in China. The development and deployment of the new DMS begin at the end of 2023. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. ssc-cdi : A Memory-Efficient, Multi-GPU Package for Ptychography with Extreme Data.
- Author
-
Tonin, Yuri Rossi, Peixinho, Alan Zanoni, Brandao-Junior, Mauro Luiz, Ferraz, Paola, and Miqueles, Eduardo Xavier
- Subjects
X-ray imaging ,SCIENTIFIC computing ,INTEGRATED software ,SYNCHROTRONS ,EXPERTISE ,PYTHON programming language - Abstract
We introduce ssc-cdi, an open-source software package from the Sirius Scientific Computing family, designed for memory-efficient, single-node multi-GPU ptychography reconstruction. ssc-cdi offers a range of reconstruction engines in Python version 3.9.2 and C++/CUDA. It aims at developing local expertise and customized solutions to meet the specific needs of beamlines and user community of the Brazilian Synchrotron Light Laboratory (LNLS). We demonstrate ptychographic reconstruction of beamline data and present benchmarks for the package. Results show that ssc-cdi effectively handles extreme datasets typical of modern X-ray facilities without significantly compromising performance, offering a complementary approach to well-established packages of the community and serving as a robust tool for high-resolution imaging applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. On Verified Automated Reasoning in Propositional Logic: Teaching Sequent Calculus to Computer Science Students.
- Author
-
Lund, Simon Tobias and Villadsen, Jørgen
- Subjects
COMPUTER science students ,PROPOSITION (Logic) ,SCIENTIFIC computing ,SYSTEMS software ,MATHEMATICIANS ,SOFTWARE verification - Abstract
As the complexity of software systems is ever increasing, so is the need for practical tools for formal verification. Among these are automatic theorem provers, capable of solving various reasoning problems automatically, and proof assistants, capable of deriving more complex results when guided by a mathematician/programmer. In this paper we consider using the latter to build the former. In the proof assistant Isabelle/HOL we combine functional programming and logical program verification to build a theorem prover for propositional logic. We also consider how such a prover can be used to solve a reasoning task without much mental labor. The development is extended with a formalized proof system for writing machine-checked sequent calculus proofs. We consider how this can be used to teach computer science students about logic, automated reasoning and proof assistants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
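The development in the entry above is carried out in Isabelle/HOL; as a rough illustration of the underlying idea only, here is a tiny backward-chaining sequent calculus prover for classical propositional logic in Python (formulas as nested tuples; not the paper's verified artifact):

```python
# Formulas: ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
def prove(left, right):
    """Return True iff the sequent  left |- right  is provable (a tautology when left = [])."""
    # Axiom: some atom occurs on both sides.
    if {f for f in left if f[0] == 'atom'} & {f for f in right if f[0] == 'atom'}:
        return True
    # Pick the first non-atomic formula and apply the matching sequent rule.
    for i, f in enumerate(left):
        rest = left[:i] + left[i + 1:]
        if f[0] == 'not':
            return prove(rest, right + [f[1]])
        if f[0] == 'and':
            return prove(rest + [f[1], f[2]], right)
        if f[0] == 'or':
            return prove(rest + [f[1]], right) and prove(rest + [f[2]], right)
        if f[0] == 'imp':
            return prove(rest, right + [f[1]]) and prove(rest + [f[2]], right)
    for i, f in enumerate(right):
        rest = right[:i] + right[i + 1:]
        if f[0] == 'not':
            return prove(left + [f[1]], rest)
        if f[0] == 'and':
            return prove(left, rest + [f[1]]) and prove(left, rest + [f[2]])
        if f[0] == 'or':
            return prove(left, rest + [f[1], f[2]])
        if f[0] == 'imp':
            return prove(left + [f[1]], rest + [f[2]])
    return False  # only atoms remain and no axiom applies

# Usage: Peirce's law ((p -> q) -> p) -> p is a classical tautology.
p, q = ('atom', 'p'), ('atom', 'q')
peirce = ('imp', ('imp', ('imp', p, q), p), p)
print(prove([], [peirce]))   # True
```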
25. A supercomputing-based support system for multi-mode computing fusion.
- Author
-
卢宇彤 and 陈志广
- Subjects
ARTIFICIAL intelligence ,ELECTRONIC data processing ,SCIENTIFIC computing ,BIG data ,TELECOMMUNICATION systems
- Published
- 2024
- Full Text
- View/download PDF
26. Transforming Science Through Software: Improving While Delivering 100×
- Author
-
Gerber, Richard, Gottlieb, Steven, Heroux, Michael A, and McInnes, Lois Curfman
- Subjects
Information and Computing Sciences ,Software Engineering ,Affordable and Clean Energy ,Special issues and sections ,Ecosystems ,Computational modeling ,Scientific computing ,Programming ,Productivity ,Software development management ,US Department of Energy ,Exascale computing ,Numerical and Computational Mathematics ,Computation Theory and Mathematics ,Distributed Computing ,Fluids & Plasmas ,Engineering ,Information and computing sciences - Published
- 2024
27. A Cast of Thousands: How the IDEAS Productivity Project Has Advanced Software Productivity and Sustainability
- Author
-
McInnes, Lois Curfman, Heroux, Michael A, Bernholdt, David E, Dubey, Anshu, Gonsiorowski, Elsa, Gupta, Rinku, Marques, Osni, Moulton, J David, Nam, Hai Ah, Norris, Boyana, Raybourn, Elaine M, Willenbring, Jim, Almgren, Ann, Bartlett, Roscoe A, Cranfill, Kita, Fickas, Stephen, Frederick, Don, Godoy, William F, Grubel, Patricia A, Hartman-Baker, Rebecca, Huebl, Axel, Lynch, Rose, Malviya-Thakur, Addi, Milewicz, Reed, Miller, Mark C, Mundt, Miranda R, Palmer, Erik, Parete-Koon, Suzanne, Phinney, Megan, Riley, Katherine, Rogers, David M, Sims, Benjamin, Stevens, Deborah, and Watson, Gregory R
- Subjects
Information and Computing Sciences ,Software Engineering ,Networking and Information Technology R&D (NITRD) ,Decent Work and Economic Growth ,Affordable and Clean Energy ,Software development management ,Sustainable development ,Productivity ,Ecosystems ,Next generation networking ,Technological innovation ,Scientific computing ,ATAP-2024 ,ATAP-GENERAL ,ATAP-AMP ,Numerical and Computational Mathematics ,Computation Theory and Mathematics ,Distributed Computing ,Fluids & Plasmas ,Engineering ,Information and computing sciences - Abstract
Computational and data-enabled science and engineering are revolutionizing advances throughout science and society, at all scales of computing. For example, teams in the U.S. Department of Energy's Exascale Computing Project have been tackling new frontiers in modeling, simulation, and analysis by exploiting unprecedented exascale computing capabilities-building an advanced software ecosystem that supports next-generation applications and addresses disruptive changes in computer architectures. However, concerns are growing about the productivity of the developers of scientific software. Members of the Interoperable Design of Extreme-scale Application Software project serve as catalysts to address these challenges through fostering software communities, incubating and curating methodologies and resources, and disseminating knowledge to advance developer productivity and software sustainability. This article discusses how these synergistic activities are advancing scientific discovery-mitigating technical risks by building a firmer foundation for reproducible, sustainable science at all scales of computing, from laptops to clusters to exascale and beyond.
- Published
- 2024
28. UPC++ v1.0 Specification, Revision 2023.9.0
- Author
-
Bonachea, Dan and Kamil, Amir
- Subjects
Exascale Computing ,Library specification ,parallel distributed programming ,PGAS ,scientific computing ,UPC++ - Abstract
UPC++ is a C++ library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). All communication operations are syntactically explicit and default to non-blocking; asynchrony is managed through the use of futures, promises and continuation callbacks, enabling the programmer to construct a graph of operations to execute asynchronously as high-latency dependencies are satisfied. A global pointer abstraction provides system-wide addressability of shared memory, including host and accelerator memories. The parallelism model is primarily process-based, but the interface is thread-safe and designed to allow efficient and expressive use in multi-threaded applications. The interface is designed for extreme scalability throughout, and deliberately avoids design features that could inhibit scalability.
- Published
- 2023
29. UPC++ v1.0 Programmer’s Guide, Revision 2023.9.0
- Author
-
Bachan, John, Baden, Scott B, Bonachea, Dan, Corbino, Johnny, Grossman, Jonathan, Hargrove, Paul H, Hofmeyr, Steven, Jacquelin, Mathias, Kamil, Amir, Van Straalen, Brian, and Waters, Daniel
- Subjects
Exascale Computing ,GASNet ,Library Programmer's Guide ,parallel distributed programming ,PGAS ,scientific computing ,UPC++ - Abstract
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
- Published
- 2023
30. Computation of pairs of related Gauss-type quadrature rules.
- Author
-
Alqahtani, H., Borges, C.F., Djukić, D.Lj., Mutavdžić Djukić, R.M., Reichel, L., and Spalević, M.M.
- Subjects
- *
SCIENTIFIC computing - Abstract
The evaluation of Gauss-type quadrature rules is an important topic in scientific computing. To determine estimates or bounds for the quadrature error of a Gauss rule, another related quadrature rule is often evaluated, such as an associated Gauss-Radau or Gauss-Lobatto rule, an anti-Gauss rule, an averaged rule, an optimal averaged rule, or a Gauss-Kronrod rule when the latter exists. We discuss how pairs of a Gauss rule and a related Gauss-type quadrature rule can be computed efficiently by a divide-and-conquer method. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
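A sketch of one such pair mentioned in the entry above: an n-point Gauss-Legendre rule computed from the Jacobi matrix (Golub-Welsch) together with Laurie's (n+1)-point anti-Gauss rule, whose difference with the Gauss rule yields an error estimate and whose mean gives the averaged rule. Plain eigenvalue routines are used here, not the paper's divide-and-conquer method, and the Legendre weight and integrand are illustrative assumptions:

```python
import numpy as np

def gauss_legendre(n, last_beta_scale=1.0):
    """Nodes/weights of the n-point Gauss rule for the Legendre weight on [-1, 1],
    via the symmetric tridiagonal Jacobi matrix (Golub-Welsch). Scaling the last
    off-diagonal entry by sqrt(2) yields Laurie's anti-Gauss rule."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)         # recurrence coefficients for Legendre
    beta[-1] *= last_beta_scale
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :] ** 2              # 2 = integral of the weight function
    return nodes, weights

f = lambda x: np.exp(x) * np.cos(3 * x)          # illustrative integrand
n = 6
xg, wg = gauss_legendre(n)                                      # Gauss rule G_n
xa, wa = gauss_legendre(n + 1, last_beta_scale=np.sqrt(2.0))    # anti-Gauss rule A_{n+1}

G, A = wg @ f(xg), wa @ f(xa)
print(f"Gauss: {G:.10f}   averaged rule: {(G + A) / 2:.10f}   error estimate: {abs(A - G) / 2:.2e}")
```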
31. Accelerating imaging research at large-scale scientific facilities through scientific computing
- Author
-
Chunpeng Wang, Xiaoyun Li, Rongzheng Wan, Jige Chen, Jing Ye, Ke Li, Aiguo Li, Renzhong Tai, and Alessandro Sepe
- Subjects
scientific computing ,synchrotron ,imaging ,automation ,tomography ,Nuclear and particle physics. Atomic energy. Radioactivity ,QC770-798 ,Crystallography ,QD901-999 - Abstract
To date, computed tomography experiments, carried out at synchrotron radiation facilities worldwide, pose a tremendous challenge in terms of the breadth and complexity of the experimental datasets produced. Furthermore, near real-time three-dimensional reconstruction capabilities are becoming a crucial requirement in order to perform high-quality and result-informed synchrotron imaging experiments, where a large amount of data is collected and processed within a short time window. To address these challenges, we have developed and deployed a synchrotron computed tomography framework designed to automatically process the experimental data from the synchrotron imaging beamlines online, while leveraging the high-performance computing cluster capabilities to accelerate the real-time feedback to the users on their experimental results. We have, further, integrated it within a modern unified national authentication and data management framework, which we have developed and deployed, spanning the entire data lifecycle of a large-scale scientific facility. In this study, the overall architecture, functional modules and workflow design of our synchrotron computed tomography framework are presented in detail. Moreover, the successful integration of the imaging beamlines at the Shanghai Synchrotron Radiation Facility into our scientific computing framework is also detailed, which, ultimately, resulted in accelerating and fully automating their entire data processing pipelines. In fact, when compared with the original three-dimensional tomography reconstruction approaches, the implementation of our synchrotron computed tomography framework led to an acceleration in the experimental data processing capabilities, while maintaining a high level of integration with all the beamline processing software and systems.
- Published
- 2024
- Full Text
- View/download PDF
32. MDSA: A Dynamic and Greedy Approach to Solve the Minimum Dominating Set Problem.
- Author
-
Okumuş, Fatih and Karcı, Şeyda
- Subjects
DOMINATING set ,GRAPH theory ,SCIENTIFIC computing ,TIME complexity ,DYNAMIC programming - Abstract
Graph theory is one of the fundamental structures in computer science used to model various scientific and engineering problems. Many problems within graph theory are categorized as NP-hard and NP-complete. One such problem is the minimum dominating set (MDS) problem, which seeks a minimum subset of nodes in a graph such that every node outside the subset is directly connected to at least one node in the subset. Due to its inherent complexity, developing an efficient polynomial-time method to address the MDS problem remains a significant challenge in graph theory. This paper introduces a novel algorithm that utilizes a centrality measure known as the Malatya Centrality to effectively address the MDS problem. The proposed algorithm, called the Malatya Dominating Set Algorithm (MDSA), leverages centrality values to identify dominating sets within a graph. It extends the Malatya centrality by incorporating a second-level centrality measure, which enhances the identification of dominating nodes. Through a systematic and algorithmic approach, these centrality values are employed to pinpoint the elements of the dominating set. The MDSA uniquely integrates greedy and dynamic programming strategies. At each step, the algorithm selects the optimal (or near-optimal) node based on the centrality values (greedy approach) while updating the neighboring nodes' criteria to influence subsequent decisions (dynamic programming). The proposed algorithm demonstrates efficient performance, particularly in large-scale graphs, with time and space requirements scaling proportionally with the size of the graph and its average degree. Experimental results indicate that our algorithm outperforms existing methods, especially in terms of time complexity when applied to large datasets, showcasing its effectiveness in addressing the MDS problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
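The Malatya centrality itself is specific to the paper in the entry above, but the overall "select greedily, then update" pattern it describes can be sketched as follows, using a simple coverage-based score as a stand-in assumption for the authors' measure:

```python
import networkx as nx

def greedy_dominating_set(G):
    """Greedy dominating set: repeatedly pick the node covering the most
    still-undominated vertices, then update the remaining coverage scores."""
    undominated = set(G.nodes)
    dom_set = set()
    while undominated:
        # Score = how many currently undominated vertices this node would cover.
        best = max(G.nodes,
                   key=lambda v: len(({v} | set(G.neighbors(v))) & undominated))
        dom_set.add(best)
        undominated -= {best} | set(G.neighbors(best))   # dynamic update step
    return dom_set

# Usage on a small random graph; verify the result truly dominates the graph.
G = nx.erdos_renyi_graph(30, 0.15, seed=42)
D = greedy_dominating_set(G)
assert nx.is_dominating_set(G, D)
print(f"dominating set of size {len(D)}: {sorted(D)}")
```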
33. A study on I-lacunary statistical convergence of multiset sequences.
- Author
-
DEMİR, Nihal and GÜMÜŞ, Hafize
- Subjects
- *
SET theory , *SCIENTIFIC computing , *COMPUTER science , *EVERYDAY life , *MATHEMATICS - Abstract
In classical set theory, each element of a set is written only once, but sets in which the same item is repeated several times arise in all areas of daily life. Such sets are called multisets and are studied in many fields such as mathematics, physics, chemistry, and computer science. Sequences consisting of elements of these sets are called multiset sequences. In this paper, we study the concept of I-lacunary statistical convergence of multiset sequences and investigate some important results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Notes on Modified Planar Kelvin–Stuart Models: Simulations, Applications, Probabilistic Control on the Perturbations.
- Author
-
Kyurkchiev, Nikolay, Zaevski, Tsvetelin, Iliev, Anton, Kyurkchiev, Vesselin, and Rahnev, Asen
- Subjects
- *
APPROXIMATION theory , *WEB-based user interfaces , *ANTENNAS (Electronics) , *SCIENTIFIC computing , *INTEGRALS - Abstract
In this paper, we propose a new modified planar Kelvin–Stuart model. We demonstrate some modules for investigating the dynamics of the proposed model. This will be included as an integral part of a planned, much more general Web-based application for scientific computing. Investigations in light of Melnikov's approach are considered. Some simulations and applications are also presented. The proposed new modifications of planar Kelvin–Stuart models contain many free parameters (the coefficients g_i, i = 1, 2, ..., N), which makes them attractive for use in engineering applications such as the antenna feeder technique (possible generation and simulation of antenna factors) and the theory of approximations (a possible good approximation of a given electrical stage). The probabilistic control of the perturbations is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Local multiset dimension of corona product on tree graphs.
- Author
-
Alfarisi, Ridho, Susilowati, Liliek, Dafik, and Kristiana, Arika Indah
- Subjects
- *
TREE graphs , *SCIENTIFIC computing , *COMPUTER science , *MULTIPLICITY (Mathematics) , *ROBOTS - Abstract
One of the topics concerning distance in graphs is the resolving set problem. This topic has many applications in science and technology, namely robot navigation, chemical structure, and computer science. Given a set W = {s_1, s_2, ..., s_k} ⊂ V(G), the vertex representation of x ∈ V(G) is r_m(x|W) = {d(x, s_1), d(x, s_2), ..., d(x, s_k)}, where d(x, s_i) is the length of the shortest path between the vertex x and the vertices in W, counted together with their multiplicity. The set W is called a local m-resolving set of a graph G if r_m(v|W) ≠ r_m(u|W) for every edge uv ∈ E(G). A local m-resolving set of minimum cardinality is called the local multiset basis, and its cardinality is called the local multiset dimension of G, denoted by md_l(G). In our paper, we establish bounds on the local multiset dimension of graphs resulting from the corona product of tree graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
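A brute-force sketch of the definition in the entry above: compute the multiset representation r_m(x|W) and search for a smallest W that distinguishes every pair of adjacent vertices. The small path graph used for the demonstration is an illustrative assumption, not one of the paper's corona products:

```python
from itertools import combinations
from collections import Counter
import networkx as nx

def representation(G, x, W, dist):
    """r_m(x|W): the multiset of distances from x to the vertices of W."""
    return Counter(dist[x][w] for w in W)

def local_multiset_dimension(G):
    """Smallest |W| such that adjacent vertices get distinct multiset representations."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for W in combinations(nodes, k):
            if all(representation(G, u, W, dist) != representation(G, v, W, dist)
                   for u, v in G.edges):
                return k, W
    return None  # no local m-resolving set exists for this graph

# Usage: a path on five vertices.
G = nx.path_graph(5)
k, W = local_multiset_dimension(G)
print(f"local multiset dimension = {k}, realized by W = {W}")
```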
36. Anderson acceleration with approximate calculations: Applications to scientific computing.
- Author
-
Lupo Pasini, Massimiliano and Laiu, M. Paul
- Subjects
- *
BOLTZMANN'S equation , *SCIENTIFIC computing , *LINEAR systems , *PROBLEM solving , *HEURISTIC - Abstract
Summary: We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for approximate calculations when applied to solve linear problems. We show that, when the approximate calculations satisfy the provided error bounds, the convergence of AA is maintained while the computational time could be reduced. We also provide computable heuristic quantities, guided by the theoretical error bounds, which can be used to automate the tuning of accuracy while performing approximate calculations. For linear problems, the use of heuristics to monitor the error introduced by approximate calculations, combined with the check on monotonicity of the residual, ensures the convergence of the numerical scheme within a prescribed residual tolerance. Motivated by the theoretical studies, we propose a reduced variant of AA, which consists in projecting the least‐squares used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration as prescribed by the computable heuristic quantities. We numerically show and assess the performance of AA with approximate calculations on: (i) linear deterministic fixed‐point iterations arising from the Richardson's scheme to solve linear systems with open‐source benchmark matrices with various preconditioners and (ii) non‐linear deterministic fixed‐point iterations arising from non‐linear time‐dependent Boltzmann equations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
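A minimal sketch of plain Anderson acceleration for a fixed-point iteration x = g(x), the scheme the entry above builds on (windowed least-squares mixing only; the paper's approximate-calculation bounds, heuristics, and reduced projection variant are not reproduced here):

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x_{k+1} = g(x_k) with window size m."""
    x = np.asarray(x0, dtype=float)
    X, Gx = [x], [g(x)]                      # history of iterates and of g evaluations
    for k in range(max_iter):
        f = Gx[-1] - X[-1]                   # current residual
        if np.linalg.norm(f) < tol:
            break
        if k == 0:
            x = Gx[-1]                       # plain Picard step on the first iteration
        else:
            mk = min(m, k)
            # Differences of residuals and of g values over the window.
            dF = np.column_stack([(Gx[-i] - X[-i]) - (Gx[-i - 1] - X[-i - 1])
                                  for i in range(1, mk + 1)])
            dG = np.column_stack([Gx[-i] - Gx[-i - 1] for i in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)   # least-squares mixing coefficients
            x = Gx[-1] - dG @ gamma
        X.append(x)
        Gx.append(g(x))
    return x

# Usage: solve x = cos(x) componentwise (a contraction).
g = lambda x: np.cos(x)
print(anderson_fixed_point(g, x0=np.zeros(3)))
```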
37. Building and Sustaining a Community Resource for Best Practices in Scientific Software: The Story of BSSw.io.
- Author
-
Gupta, Rinku, Bernholdt, David E., Bartlett, Roscoe A., Grubel, Patricia A., Heroux, Michael A., McInnes, Lois Curfman, Miller, Mark C., Salim, Kasia, Shuler, Jean, Stevens, Deborah, Watson, Gregory R., and Wolfenbarger, Paul R.
- Subjects
COMPUTER software development ,INDUSTRIAL engineering ,SUSTAINABLE engineering ,SCIENTIFIC computing ,ENGINEERING management - Abstract
The development of scientific software—a cornerstone of long-term collaboration and scientific progress—parallels the development of other types of software but still poses distinct challenges, especially in high-performance computing. Although web searches yield numerous resources on software engineering, there is still a scarcity specifically for scientific software development. This article introduces the Better Scientific Software site (https://bssw.io), a platform that hosts a community of researchers, developers, and practitioners who share their experiences and insights on scientific software development. Since 2017, this collaborative hub has gained traction within the scientific computing community, attracting a growing number of readers and contributors eager to share ideas and elevate their software development practices. In sharing the BSSw.io site's story, we hope to encourage further growth of the BSSw.io community through both readership and contributors, with a long-term goal of fostering culture change by increasing emphasis on best practices in scientific software. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Grad Ready: Enhancing graduate readiness through intelligent mobile application.
- Author
-
ALOTAIBI, Hanan and SAIFUDDIN, Shireen S.
- Subjects
COMPUTER literacy ,JOB hunting ,SCIENTIFIC computing ,LABOR market ,MOBILE apps - Abstract
Graduates experience significant challenges when moving from academic environments to the job market. The issues include a shortage of practical experience, a disparity between their academic knowledge and companies’ requirements, struggles with resume building, and challenges in job searching. The application designed to facilitate the graduates’ transition into the job market is called “The Grad Ready”. It is an intelligent mobile app that bridges the gap between academic knowledge and the professional skill requirements for computer science graduates using matching algorithms to enable companies to target and recruit them. The Waterfall software development methodology was used for it. The requirements were determined by conducting a survey with students ready to graduate from the computer science department. Following the design, implementation, and testing phases, the application was successfully completed, resulting in the development of a user-friendly and intelligent solution meeting all the intended goals and performance criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. On the convergence properties of generalized Szász–Kantorovich type operators involving Frobenius–Euler–Simsek-type polynomials.
- Author
-
Agyuz, Erkan
- Subjects
EULER polynomials ,GENERATING functions ,SCIENTIFIC computing ,APPROXIMATION error ,POLYNOMIALS - Abstract
This work studies the approximation of functions by Szász-type operators involving Frobenius–Euler–Simsek-type polynomials, which have recently attracted attention because of their special characteristics and structure. Convergence properties, both uniform and pointwise, are investigated in depth in terms of the modulus of continuity and the Peetre K-functional. The paper also estimates the error of approximation of these operator sequences on a particular class of functions. The estimates are computed with the Maple scientific computing program and presented in tables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
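For orientation, a worked statement of the classical (un-generalized) Szász–Kantorovich operator that constructions of this kind modify; the Frobenius–Euler–Simsek-type generating functions studied in the entry above are not reproduced here.

    K_n(f; x) = n\, e^{-nx} \sum_{k=0}^{\infty} \frac{(nx)^k}{k!} \int_{k/n}^{(k+1)/n} f(t)\, \mathrm{d}t, \qquad x \ge 0, \; n \in \mathbb{N}.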
40. U-DeepONet: U-Net enhanced deep operator network for geologic carbon sequestration.
- Author
-
Diab, Waleed and Al Kobaisi, Mohammed
- Subjects
- *
ARTIFICIAL neural networks , *GEOLOGICAL carbon sequestration , *POROUS materials , *TWO-phase flow , *SCIENCE education , *SCIENTIFIC computing - Abstract
Learning operators with deep neural networks is an emerging paradigm for scientific computing. The Deep Operator Network (DeepONet) is a modular operator learning framework that allows flexibility in choosing the kind of neural network used in the trunk and/or branch of the DeepONet. This is beneficial because it has been shown many times that different types of problems require different network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce the U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet trains significantly faster than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming fewer computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
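A minimal numpy sketch of the branch–trunk contraction at the core of DeepONet-style operator learning, as referenced in the entry above; the layer sizes, random untrained weights, and plain dense trunk (in place of the U-Net enhancement) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Return (weights, biases) for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Branch net encodes the input function u sampled at m sensor points;
# trunk net encodes the query coordinate y; the operator output is their dot product.
m, p = 100, 32
branch = mlp([m, 64, p])
trunk = mlp([1, 64, p])

def deeponet(u_sensors, y):
    b = forward(branch, u_sensors)           # (batch, p)
    t = forward(trunk, y)                    # (n_query, p)
    return b @ t.T                           # (batch, n_query) operator values

u = rng.standard_normal((4, m))              # 4 sample input functions
y = np.linspace(0.0, 1.0, 50)[:, None]       # 50 query locations
print(deeponet(u, y).shape)                  # (4, 50)
```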
41. SCALABLE APPROXIMATION AND SOLVERS FOR IONIC ELECTRODIFFUSION IN CELLULAR GEOMETRIES.
- Author
-
BENEDUSI, PIETRO, ELLINGSRUD, ADA JOHANNE, HERLYNG, HALVOR, and ROGNES, MARIE E.
- Subjects
- *
ELECTRODIFFUSION , *FINITE element method , *SCIENTIFIC computing , *EQUATIONS , *IONS - Abstract
The activity and dynamics of excitable cells are fundamentally regulated and moderated by extracellular and intracellular ion concentrations and their electric potentials. The increasing availability of dense reconstructions of excitable tissue at extreme geometric detail poses a new and clear scientific computing challenge for computational modeling of ion dynamics and transport. In this paper, we design, develop, and evaluate a scalable numerical algorithm for solving the time-dependent and nonlinear KNP-EMI (Kirchhoff–Nernst–Planck extracellular-membrane-intracellular) equations describing ionic electrodiffusion for excitable cells with an explicit geometric representation of intracellular and extracellular compartments and interior interfaces. We also introduce and specify a set of model scenarios of increasing complexity suitable for benchmarking. Our solution strategy is based on an implicit-explicit discretization and linearization in time; a mixed finite element discretization of ion concentrations and electric potentials in intracellular and extracellular domains; and an algebraic multigrid-based, inexact block-diagonal preconditioner for GMRES. Numerical experiments with up to 10⁸ unknowns per time step and up to 256 cores demonstrate that this solution strategy is robust and scalable with respect to the problem size, time discretization, and number of cores. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
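A minimal scipy sketch of a block-diagonal preconditioned GMRES solve of the kind described above; the toy two-block system and the exact LU solves for each block are placeholders for the mixed finite element blocks and algebraic multigrid cycles used by the authors.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Toy 2x2 block system: a diffusion-like block weakly coupled to a mass-like block.
A11 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
A22 = sp.identity(n, format="csc") * 0.5
C = sp.random(n, n, density=0.01, random_state=0, format="csc") * 0.01
A = sp.bmat([[A11, C], [C.T, A22]], format="csc")
b = np.ones(2 * n)

# Block-diagonal preconditioner: apply (approximate) inverses of the diagonal blocks.
lu11 = spla.splu(A11)        # exact factorization here; AMG/ILU would be used at scale
lu22 = spla.splu(A22)

def apply_prec(r):
    z = np.empty_like(r)
    z[:n] = lu11.solve(r[:n])
    z[n:] = lu22.solve(r[n:])
    return z

M = spla.LinearOperator(A.shape, matvec=apply_prec)
x, info = spla.gmres(A, b, M=M)              # default tolerances
print(info, np.linalg.norm(A @ x - b))
```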
42. THE PROMISE AND PITFALLS OF MACHINE LEARNING IN OCEAN REMOTE SENSING.
- Author
-
Gray, Patrick Clifton, Boss, Emmanuel, Prochaska, J. Xavier, Kerner, Hannah, Demeaux, Charlotte Begouen, and Lehahn, Yoav
- Subjects
- *
COMPUTER vision , *REMOTE sensing , *COMPUTER science , *DISTANCE education , *SCIENTIFIC computing - Abstract
The proliferation of easily accessible machine learning algorithms and their apparent successes at inference and classification in computer vision and the sciences have motivated their increased adoption in ocean remote sensing. Our field, however, runs the risk of developing these models on limited training datasets, with sparse geographical and temporal sampling or without regard for the real data dimensionality, thereby constructing over-fitted or non-generalized algorithms. These models may perform poorly in new regimes or on new, anomalous phenomena that emerge in a changing climate. We highlight these issues and strategies for mitigating them, share a few heuristics to help users develop intuition for machine learning methods, and provide a vision for areas we believe are underexplored at the intersection of machine learning and ocean remote sensing. The ocean is a complex physical-biogeochemical system that we cannot mechanistically model well despite our best efforts. Machine learning has the potential to play an important role in improved process understanding, but we must always ask what we are learning after the model has learned. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Physics-informed quantum neural network for solving forward and inverse problems of partial differential equations.
- Author
-
Xiao, Y., Yang, L. M., Shu, C., Chew, S. C., Khoo, B. C., Cui, Y. D., and Liu, Y. Y.
- Subjects
- *
PARTIAL differential equations , *INVERSE problems , *TRIGONOMETRIC functions , *SCIENTIFIC computing , *LOGIC - Abstract
Physics-informed neural networks (PINNs) have recently generated a surge of interest in scientific computing, including the solution of partial differential equations (PDEs): they convert the task of solving PDEs into an optimization problem by adopting the governing equations together with initial/boundary conditions or observation data as loss functions. Essentially, the underlying logic of PINNs rests on the universal approximation and differentiability properties of classical neural networks (NNs). Recent research has revealed that quantum neural networks (QNNs), known as parameterized quantum circuits, also exhibit universal approximation and differentiability properties. This observation naturally suggests applying the PINN approach to QNNs. In this work, we introduce a physics-informed quantum neural network (PI-QNN) that employs a QNN as the function approximator for solving forward and inverse problems of PDEs. The performance of the proposed PI-QNN is evaluated on various forward and inverse PDE problems. Numerical results indicate that PI-QNN demonstrates superior convergence to PINN when solving PDEs whose exact solutions are strongly correlated with trigonometric functions. Moreover, its accuracy surpasses that of PINN by two to three orders of magnitude, while requiring fewer trainable parameters. However, the computational time of PI-QNN exceeds that of PINN due to its operation on classical computers; this limitation may be alleviated by the advent of commercial quantum computers in the future. Furthermore, we briefly investigate the impact of network architecture on PI-QNN performance by examining two different QNN architectures. The results suggest that increasing the number of trainable network layers can enhance the expressiveness of PI-QNN, but an excessive number of data encoding layers significantly increases computational time, so the marginal gains in performance do not compensate for the loss in computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
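A minimal numpy sketch of the physics-informed loss that both PINN and the proposed PI-QNN minimize, here for the 1D Poisson problem -u''(x) = f(x) with zero boundary values; the tiny dense surrogate and finite-difference derivatives are stand-ins for the (quantum) network and automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.5, np.zeros(1)

def u_hat(x):
    """Tiny surrogate network u(x; theta); a QNN would replace this map."""
    return (np.tanh(x[:, None] @ W1 + b1) @ W2 + b2).ravel()

def physics_informed_loss(x_col, x_bc, f, h=1e-3):
    # PDE residual -u'' - f at collocation points, via central differences.
    u_pp = (u_hat(x_col + h) - 2 * u_hat(x_col) + u_hat(x_col - h)) / h**2
    loss_pde = np.mean((-u_pp - f(x_col)) ** 2)
    # Boundary-condition residual u = 0 on the boundary points.
    loss_bc = np.mean(u_hat(x_bc) ** 2)
    return loss_pde + loss_bc

f = lambda x: np.pi**2 * np.sin(np.pi * x)    # exact solution sin(pi x)
x_col = np.linspace(0.0, 1.0, 64)
x_bc = np.array([0.0, 1.0])
print(physics_informed_loss(x_col, x_bc, f))  # loss to be minimized over the weights
```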
44. Residual and Unmodeled Ocean Tide Signal From 20+ Years of GRACE and GRACE‐FO Global Gravity Field Models.
- Author
-
Koch, Igor, Duwe, Mathias, and Flury, Jakob
- Subjects
- *
ICE sheet thawing , *WATER depth , *TERRITORIAL waters , *ORBITS (Astronomy) , *SCIENTIFIC computing , *OCEAN color - Abstract
We analyze remaining ocean tide signal in K/Ka-band range-rate (RR) postfit residuals, obtained after estimation of monthly gravity field solutions from 21.5 years of Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On sensor data. Low-pass filtered and numerically differentiated residuals are assigned to 5°×5° grids and a spectral analysis is performed using Lomb-Scargle periodograms. We identified enhanced amplitudes at over 30 ocean tide periods. Spectral replicas revealed several tides from sub-semidiurnal bands. Increased ocean tide amplitudes are located in expected regions, that is, in high-latitude, coastal and shallow water regions, although some tides also show distinct patterns over the open ocean. While most identified tides are considered during processing, and therefore the amplitudes represent residual signal with respect to the ocean tide model, several unmodeled tides were found, including the astronomical degree-3 tides ³M₁, ³N₂, ³L₂ and ³M₃, and the radiational and/or compound tides S₃, R₃/SK₃, T₃/SP₃, 2SM₂ and 2MK₃/MO₃. The astronomical degree-3 tides were observed on a global level for the first time a few years ago in altimeter data. We are unaware of any global data-constrained solutions for the other tides. The amplitude patterns of these tides exhibit similarities to purely hydrodynamic solutions and to altimeter observations (astronomical degree-3 tides only). The sensitivity of the satellites to these rather small tidal effects demands their inclusion in the gravity field recovery processing to reduce orbit modeling errors and possible aliasing. The conducted study shows the enormous potential of RR postfit residuals analysis for validating ocean tide models and improving gravity field recovery processing strategies. Plain Language Summary: Ocean tide models describe periodic mass movements in the oceans caused by the gravitational attraction of the Moon and Sun, and other more complex effects. These models are very important for describing satellite orbits and computing scientific products from satellite data, as ocean mass variations affect satellite motion in space. Ocean tide models are not error-free, particularly in polar regions where high-quality observations are lacking. This study analyzed 21.5 years of residual distance changes between the two satellites of the Gravity Recovery and Climate Experiment and its follow-on mission to identify ocean tide model errors. We identified more than 30 tidal frequencies at which the applied model for the description of the satellite orbits shows noticeable errors. Several of these tidal frequencies are rather minor phenomena that have only recently been observed globally. Others have never been observed globally in satellite observations. The results can be used to verify and enhance ocean tide models, and to adjust orbit modeling strategies. Both are essential to advance the quality of satellite data products and, for example, to improve our understanding of ice sheet melting, sea-level change, and other processes on Earth.
Key Points:
- Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On K/Ka-band range-rate postfit residuals are analyzed for residual and unmodeled ocean tide signal
- More than 30 prominent tidal frequencies from different bands were detected
- Range-rate postfit residuals analysis has enormous potential for ocean tide model validation and improvement of gravity field recovery [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
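A minimal scipy sketch of the Lomb-Scargle periodogram step described above, applied to a synthetic, unevenly sampled residual series; the injected 12.42 h signal and the candidate period grid are illustrative, and the real analysis additionally involves low-pass filtering, numerical differentiation, and 5°×5° binning of the residuals.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled residual series with a hidden ~12.42 h (M2-like) signal.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 90 * 24.0, 4000))            # hours over ~90 days
residual = 0.3 * np.sin(2 * np.pi * t / 12.4206012) + 0.5 * rng.standard_normal(t.size)

periods = np.linspace(6.0, 30.0, 2000)                     # candidate tidal periods [h]
omega = 2 * np.pi / periods                                # angular frequencies
power = lombscargle(t, residual - residual.mean(), omega, normalize=True)

print(f"peak at {periods[np.argmax(power)]:.3f} h")        # expect close to 12.421 h
```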
45. Accelerating imaging research at large‐scale scientific facilities through scientific computing.
- Author
-
Wang, Chunpeng, Li, Xiaoyun, Wan, Rongzheng, Chen, Jige, Ye, Jing, Li, Ke, Li, Aiguo, Tai, Renzhong, and Sepe, Alessandro
- Subjects
- *
ONLINE data processing , *SYNCHROTRON radiation , *SCIENTIFIC computing , *PROCESS capability , *COMPUTER workstation clusters - Abstract
To date, computed tomography experiments carried out at synchrotron radiation facilities worldwide pose a tremendous challenge in terms of the breadth and complexity of the experimental datasets produced. Furthermore, near real-time three-dimensional reconstruction capabilities are becoming a crucial requirement for performing high-quality, result-informed synchrotron imaging experiments in which a large amount of data is collected and processed within a short time window. To address these challenges, we have developed and deployed a synchrotron computed tomography framework designed to automatically process the experimental data from the synchrotron imaging beamlines online, while leveraging high-performance computing cluster capabilities to accelerate real-time feedback to the users on their experimental results. We have further integrated it within a modern, unified national authentication and data management framework, which we have developed and deployed, spanning the entire data lifecycle of a large-scale scientific facility. In this study, the overall architecture, functional modules, and workflow design of our synchrotron computed tomography framework are presented in detail. Moreover, the successful integration of the imaging beamlines at the Shanghai Synchrotron Radiation Facility into our scientific computing framework is also detailed, which ultimately resulted in accelerating and fully automating their entire data processing pipelines. Compared with the original three-dimensional tomography reconstruction approaches, the implementation of our synchrotron computed tomography framework accelerated the experimental data processing capabilities while maintaining a high level of integration with all the beamline processing software and systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Energy and Scientific Workflows: Smart Scheduling and Execution.
- Author
-
WARADE, MEHUL, LEE, KEVIN, RANAWEERA, CHATHURIKA, and SCHNEIDER, JEAN-GUY
- Subjects
HIGH performance computing ,PARALLEL programming ,COMPUTER workstation clusters ,ENERGY consumption ,SCIENTIFIC computing ,WORKFLOW management systems - Abstract
Energy-efficient computation is an increasingly important target in modern-day computing. Scientific computation is conducted using scientific workflows that are executed on highly scalable compute clusters. The execution of these workflows is generally geared towards optimizing run-time performance, with the energy footprint of the execution being ignored. Evidently, minimizing execution time and minimizing energy consumption do not have to be mutually exclusive. The aim of the research presented in this paper is to highlight the benefits of energy-aware scientific workflow execution. In this paper, a set of requirements for an energy-aware scheduler is outlined and a conceptual architecture for the scheduler is presented. The conceptual architecture was evaluated by developing a proof-of-concept scheduler, which achieved around a 49.97% reduction in the energy consumption of the computation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
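A minimal sketch of the kind of energy-aware placement decision such a scheduler can make, assuming per-task runtime and power models are available; the Amdahl-like runtime model, the linear power model, and the energy = power × time estimate are illustrative assumptions, not the authors' proof-of-concept scheduler.

```python
# Choose, for each workflow task, the core count that minimizes estimated energy
# (power draw x predicted runtime) subject to a runtime budget.
def pick_core_count(runtime_model, power_model, candidates, deadline):
    best = None
    for cores in candidates:
        t = runtime_model(cores)
        e = power_model(cores) * t
        if t <= deadline and (best is None or e < best[1]):
            best = (cores, e, t)
    return best

# Illustrative models: runtime follows Amdahl-like scaling, power grows with cores.
runtime = lambda c: 100.0 * (0.2 + 0.8 / c)          # seconds
power = lambda c: 40.0 + 15.0 * c                    # watts
print(pick_core_count(runtime, power, [1, 2, 4, 8, 16, 32], deadline=60.0))
```

With these toy models, 4 cores minimizes energy within the deadline even though 32 cores would be fastest, which is the time-versus-energy trade-off the entry above targets.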
47. Making Computer Science Accessible through Universal Design for Learning in Inclusive Education.
- Author
-
Salgarayeva, Gulnaz and Makhanova, Aigul
- Subjects
UNIVERSAL design ,INCLUSIVE education ,INFORMATION technology ,SCIENTIFIC computing ,COMPUTER science students - Abstract
The field of technology and computer science (CS) is developing dynamically. Just as anyone can learn to use computers at any age, students with special educational needs (SEN) also aspire to acquire IT (information technology) knowledge on an equal footing with all other students. However, one of the obstacles facing students with SEN is the lack of educational materials and programs for CS in secondary schools. The authors have designed teaching materials and assignments that promote inclusion. This study aims to evaluate the impact of teaching resources developed according to universal design for learning (UDL) on making the school CS course accessible to all students. The experiment involved 16 students and five teachers. For 8 weeks, students studied computer science using UDL-based training materials. Knowledge outcomes, particularly programming skills, were assessed before and after the experiment. After studying computer science through these specific tasks, the participants demonstrated a higher level of mastery of the subject, as indicated by the post-test results (mean = 12.13, standard deviation = 1.20) compared to the pre-experimental test (mean = 8.94, standard deviation = 1.12). The study demonstrated that using special UDL-based tasks to teach CS makes it more accessible and has a positive impact on students with special educational needs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. The accelerated tensor Kaczmarz algorithm with adaptive parameters for solving tensor systems.
- Author
-
Liao, Yimou, Li, Wen, and Yang, Dan
- Subjects
- *
SCIENTIFIC computing , *ARTIFICIAL intelligence , *SEPARATION of variables , *IMAGE processing - Abstract
Solving tensor systems is a common task in scientific computing and artificial intelligence. In this paper, we propose a tensor randomized average Kaczmarz method with adaptive parameters that converges exponentially to the unique least Frobenius norm solution of a given consistent tensor system under the t-product structure. To accelerate convergence, a tensor average Kaczmarz method based on the stochastic heavy ball momentum technique (tAKSHBM) is proposed. The tAKSHBM method utilizes iterative information to update its parameters instead of relying on prior information, addressing the problem of adaptive parameter learning. Additionally, a Fourier-transform-based tAKSHBM method is proposed, which can be implemented effectively in a distributed environment. It is proven that the iteration sequences generated by all the proposed methods converge for consistent tensor systems. Finally, we conduct experiments on both synthetic data and practical applications to support our theoretical results and demonstrate the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
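A minimal numpy sketch of randomized Kaczmarz with heavy-ball momentum, written for an ordinary matrix system rather than the t-product tensor setting and with fixed rather than adaptively learned parameters; it only illustrates the momentum idea behind the tAKSHBM method.

```python
import numpy as np

def randomized_kaczmarz_hb(A, b, beta=0.3, n_iter=20000, seed=0):
    """Row-action Kaczmarz updates plus a heavy-ball momentum term."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()          # sample rows ~ squared row norm
    x, x_prev = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        step = (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x_new = x + step + beta * (x - x_prev)   # Kaczmarz projection + momentum
        x_prev, x = x, x_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((500, 100))
x_true = rng.standard_normal(100)
b = A @ x_true                                   # consistent system
x = randomized_kaczmarz_hb(A, b)
print(np.linalg.norm(x - x_true))
```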
49. PT-ESM: A Parameter-Testing and Integration Framework for Earth System Models Oriented towards High-Performance Computing.
- Author
-
Guo, Jiaxu, Hu, Liang, Xu, Gaochao, Hu, Juncheng, and Che, Xilong
- Subjects
- *
ATMOSPHERIC models , *SCIENTIFIC computing , *RESEARCH personnel , *SENSITIVITY analysis , *SWINDLERS & swindling - Abstract
High-performance computing (HPC) plays a crucial role in scientific computing, and the efficient use of HPC to accomplish computational tasks remains a focal point of research. This study addresses the issue of parameter tuning for Earth system models by proposing a comprehensive solution based on the concept of scientific workflows. The solution encompasses detailed methods from sensitivity analysis to parameter tuning and incorporates various approaches to enhance result accuracy. We validated the reliability of our methods using five cases in the Single Column Atmosphere Model (SCAM). Specifically, we investigated the influence of fluctuations in 11 typical parameters on 10 output variables. The experimental results show that the magnitude of the impact on the outputs varies significantly depending on which parameter is perturbed. These findings will help researchers develop more reasonable parameterization schemes for different regions and seasons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
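A minimal sketch of the one-at-a-time perturbation loop that a parameter-testing workflow of this kind orchestrates; run_model is a hypothetical stand-in for submitting a SCAM case, and the parameter names, values, and output diagnostics are purely illustrative.

```python
def run_model(params):
    """Hypothetical stand-in for one model run; returns a few output diagnostics."""
    return {"precip": 2.0 * params["zmconv_tau"] + 0.1 * params["cldfrc_rhminl"],
            "cloud": 0.5 * params["cldfrc_rhminl"] ** 2}

defaults = {"zmconv_tau": 3600.0, "cldfrc_rhminl": 0.90}   # illustrative parameters

def one_at_a_time_sensitivity(defaults, rel_perturbation=0.1):
    base = run_model(defaults)
    sens = {}
    for name, value in defaults.items():
        perturbed = dict(defaults, **{name: value * (1.0 + rel_perturbation)})
        out = run_model(perturbed)
        # Normalized sensitivity: relative output change per relative input change.
        sens[name] = {k: ((out[k] - base[k]) / base[k]) / rel_perturbation
                      for k in base}
    return sens

for p, s in one_at_a_time_sensitivity(defaults).items():
    print(p, s)
```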
50. Improving resource utilization and fault tolerance in large simulations via actors.
- Author
-
Klenk, Kyle and Spiteri, Raymond J.
- Subjects
- *
SCIENTIFIC computing , *SCIENTIFIC models , *ACTORS , *CONTINENTS - Abstract
Large simulations with many independent sub-simulations are common in scientific computing. There are numerous challenges, however, associated with performing such simulations in shared computing environments. For example, sub-simulations may have wildly varying completion times or not complete at all, leading to unpredictable runtimes as well as unbalanced and inefficient use of human and computational resources. In this study, we use the actor model of concurrent computation to improve both the resource utilization and fault tolerance for large-scale scientific computing simulations. More specifically, we use actors in the SUMMA model to manage a large-scale hydrological simulation over the North American continent with over 500,000 independent sub-simulations. We find that the actors implementation outperforms a standard array job submission as well as the job submission tool GNU Parallel by better balancing the computational load across processors. The actors implementation also improves fault tolerance and can eliminate the user intervention required to detect and re-submit failed jobs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
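A minimal Python sketch of the load balancing and automatic re-submission behavior that an actor-style job manager provides, using a process pool with retries in place of a real actor framework; the simulated failures and the run_subsimulation stand-in (for, e.g., one SUMMA sub-simulation) are illustrative assumptions.

```python
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_subsimulation(gru_id):
    """Stand-in for one independent sub-simulation; occasionally fails on purpose."""
    if random.random() < 0.05:
        raise RuntimeError(f"sub-simulation {gru_id} failed")
    return gru_id, sum(i * i for i in range(10000 + gru_id % 7 * 5000))

def run_all(gru_ids, workers=8, max_retries=2):
    results, attempts = {}, {g: 0 for g in gru_ids}
    pending = set(gru_ids)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while pending:
            futures = {pool.submit(run_subsimulation, g): g for g in pending}
            pending = set()
            for fut in as_completed(futures):
                g = futures[fut]
                try:
                    results[g] = fut.result()
                except Exception:
                    attempts[g] += 1
                    if attempts[g] <= max_retries:
                        pending.add(g)        # automatic re-submission, no user action
    return results

if __name__ == "__main__":
    done = run_all(range(200))
    print(f"{len(done)} of 200 sub-simulations completed")
```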