4,942 results for "scientific computing"
Search Results
2. Extending GPU-accelerated Gaussian integrals in the TeraChem software package to f type orbitals: Implementation and applications.
- Author
-
Wang, Yuanheng, Hait, Diptarka, Johnson, K. Grace, Fajen, O. Jonathan, Zhang, Juncheng Harry, Guerrero, Rubén D., and Martínez, Todd J.
- Subjects
- *
ANGULAR momentum (Mechanics) , *TRANSITION metal complexes , *WATER clusters , *DENSITY functional theory , *SCIENTIFIC computing , *GRAPHICS processing units - Abstract
The increasing availability of graphics processing units (GPUs) for scientific computing has prompted interest in accelerating quantum chemical calculations through their use. However, the complexity of integral kernels for high angular momentum basis functions often limits the utility of GPU implementations with large basis sets or for metal containing systems. In this work, we report the implementation of f function support in the GPU-accelerated TeraChem software package through the development of efficient kernels for the evaluation of Hamiltonian integrals. The high efficiency of the resulting code is demonstrated through density functional theory (DFT) calculations on increasingly large organic molecules and transition metal complexes, as well as coupled cluster singles and doubles calculations on water clusters. Preliminary investigations into Ni(I) catalysis with DFT and the photochemistry of MnH(CH3) with complete active space self-consistent field are also carried out. Overall, our GPU-accelerated software appears to be well-suited for fast simulation of large transition metal containing systems, as well as organic molecules. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Discovering 3D Hidden Elasticity in Isotropic and Transversely Isotropic Materials with Physics-informed UNets
- Author
-
Kamali, Ali and Laksari, Kaveh
- Subjects
Engineering ,Biomedical Engineering ,Bioengineering ,Biomedical Imaging ,Biotechnology ,digital volume correlation ,model-based elastography ,physics-informed deep learning ,scientific computing ,tissue biomechanics - Abstract
Three-dimensional variation in structural components or fiber alignments results in complex mechanical property distribution in tissues and biomaterials. In this paper, we use a physics-informed UNet-based neural network model (El-UNet) to discover the three-dimensional (3D) internal composition and space-dependent material properties of heterogeneous isotropic and transversely isotropic materials without a priori knowledge of the composition. We then show the capabilities of El-UNet by validating against data obtained from finite-element simulations of two soft tissues, namely, brain tissue and articular cartilage, under various loading conditions. We first simulated compressive loading of 3D brain tissue comprising distinct white matter and gray matter mechanical properties undergoing small strains with isotropic linear elastic behavior, where El-UNet reached mean absolute relative errors under 1.5% for elastic modulus and Poisson's ratio estimations across the 3D volume. We showed that the 3D solution achieved by El-UNet was superior to relative stiffness mapping by inverse of axial strain and to two-dimensional plane stress/plane strain approximations. Additionally, we simulated a transversely isotropic articular cartilage with known fiber orientations undergoing compressive loading, and accurately estimated the spatial distribution of all five material parameters, with mean absolute relative errors under 5%. Our work demonstrates the application of the computationally efficient physics-informed El-UNet in 3D elasticity imaging and provides methods for translation to experimental 3D characterization of soft tissues and other materials. The proposed El-UNet offers a powerful tool for both in vitro and ex vivo tissue analysis, with potential extensions to in vivo diagnostics. STATEMENT OF SIGNIFICANCE: Elasticity imaging is a technique that reconstructs mechanical properties of tissue using deformation and force measurements. Given the complexity of this reconstruction, most existing methods have focused on 2D problems. Our work is the first implementation of physics-informed UNets to reconstruct three-dimensional material parameter distributions for isotropic and transversely isotropic linear elastic materials from deformation and force measurements. We comprehensively validate our model using synthetic data generated using finite element models of biological tissues with high bio-fidelity: the brain and articular cartilage. Our method can be implemented in elasticity imaging scenarios for in vitro and ex vivo mechanical characterization of biomaterials and biological tissues, with potential extensions to in vivo diagnostics.
- Published
- 2024
4. Transforming Science Through Software: Improving While Delivering 100×
- Author
-
Gerber, Richard, Gottlieb, Steven, Heroux, Michael A, and McInnes, Lois Curfman
- Subjects
Information and Computing Sciences ,Software Engineering ,Affordable and Clean Energy ,Special issues and sections ,Ecosystems ,Computational modeling ,Scientific computing ,Programming ,Productivity ,Software development management ,US Department of Energy ,Exascale computing ,Numerical and Computational Mathematics ,Computation Theory and Mathematics ,Distributed Computing ,Fluids & Plasmas ,Engineering ,Information and computing sciences - Published
- 2024
5. A Cast of Thousands: How the IDEAS Productivity Project Has Advanced Software Productivity and Sustainability
- Author
-
McInnes, Lois Curfman, Heroux, Michael A, Bernholdt, David E, Dubey, Anshu, Gonsiorowski, Elsa, Gupta, Rinku, Marques, Osni, Moulton, J David, Nam, Hai Ah, Norris, Boyana, Raybourn, Elaine M, Willenbring, Jim, Almgren, Ann, Bartlett, Roscoe A, Cranfill, Kita, Fickas, Stephen, Frederick, Don, Godoy, William F, Grubel, Patricia A, Hartman-Baker, Rebecca, Huebl, Axel, Lynch, Rose, Malviya-Thakur, Addi, Milewicz, Reed, Miller, Mark C, Mundt, Miranda R, Palmer, Erik, Parete-Koon, Suzanne, Phinney, Megan, Riley, Katherine, Rogers, David M, Sims, Benjamin, Stevens, Deborah, and Watson, Gregory R
- Subjects
Information and Computing Sciences ,Software Engineering ,Networking and Information Technology R&D (NITRD) ,Decent Work and Economic Growth ,Affordable and Clean Energy ,Software development management ,Sustainable development ,Productivity ,Ecosystems ,Next generation networking ,Technological innovation ,Scientific computing ,ATAP-2024 ,ATAP-GENERAL ,ATAP-AMP ,Numerical and Computational Mathematics ,Computation Theory and Mathematics ,Distributed Computing ,Fluids & Plasmas ,Engineering ,Information and computing sciences - Abstract
Computational and data-enabled science and engineering are revolutionizing advances throughout science and society, at all scales of computing. For example, teams in the U.S. Department of Energy's Exascale Computing Project have been tackling new frontiers in modeling, simulation, and analysis by exploiting unprecedented exascale computing capabilities, building an advanced software ecosystem that supports next-generation applications and addresses disruptive changes in computer architectures. However, concerns are growing about the productivity of the developers of scientific software. Members of the Interoperable Design of Extreme-scale Application Software project serve as catalysts to address these challenges through fostering software communities, incubating and curating methodologies and resources, and disseminating knowledge to advance developer productivity and software sustainability. This article discusses how these synergistic activities are advancing scientific discovery, mitigating technical risks by building a firmer foundation for reproducible, sustainable science at all scales of computing, from laptops to clusters to exascale and beyond.
- Published
- 2024
6. Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations.
- Author
-
Chen, Yifan, Epperly, Ethan N., Tropp, Joel A., and Webber, Robert J.
- Subjects
- *
SCIENCE education , *SCIENTIFIC computing , *MACHINE learning , *ARITHMETIC , *SEMIDEFINITE programming , *ALGORITHMS - Abstract
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-$k$ approximation of an $N \times N$ positive-semidefinite (psd) matrix. RPCholesky requires only $(k+1)N$ entry evaluations and $\mathcal{O}(k^2 N)$ additional arithmetic operations, and it can be implemented with just a few lines of code. The method is particularly useful for approximating a kernel matrix. This paper offers a thorough new investigation of the empirical and theoretical behavior of this fundamental algorithm. For matrix approximation problems that arise in scientific machine learning, experiments show that RPCholesky matches or beats the performance of alternative algorithms. Moreover, RPCholesky provably returns low-rank approximations that are nearly optimal. The simplicity, effectiveness, and robustness of RPCholesky strongly support its use in scientific computing and machine learning applications. [ABSTRACT FROM AUTHOR]
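The "few lines of code" claim is easy to verify. Below is a minimal NumPy sketch of randomly pivoted Cholesky as described in the abstract; the function name and the dense-matrix interface (rather than an entry-evaluation oracle) are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def rp_cholesky(A, k, rng=np.random.default_rng()):
    """Rank-k approximation F @ F.T of a psd matrix A via randomly pivoted Cholesky."""
    N = A.shape[0]
    F = np.zeros((N, k))
    d = np.diag(A).astype(float).copy()        # residual diagonal
    for i in range(k):
        s = rng.choice(N, p=d / d.sum())       # pivot sampled proportionally to the diagonal
        g = A[:, s] - F[:, :i] @ F[s, :i]      # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])
        d = np.clip(d - F[:, i] ** 2, 0.0, None)  # update residual diagonal
    return F
```

Note that only the diagonal and $k$ columns of the matrix are ever read, matching the $(k+1)N$ entry-evaluation count quoted above.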
- Published
- 2024
- Full Text
- View/download PDF
7. Upper triangulation-based infinity norm bounds for the inverse of Nekrasov matrices with applications.
- Author
-
Gao, Lei, Gu, Xian-Ming, Jia, Xiudan, and Li, Chaoqian
- Subjects
- *
MATRIX inversion , *MATRIX norms , *SCIENTIFIC computing , *LINEAR complementarity problem - Abstract
The infinity norm bounds for the inverse of Nekrasov matrices play an important role in scientific computing. In this paper, we propose a triangulation-based approach that can easily be implemented to seek sharper infinity norm bounds for the inverse of Nekrasov matrices. With the help of such sharper bounds, new error estimates for the linear complementarity problem of Nekrasov matrices are presented, and a new infinity norm estimate of the iterative matrix of parallel-in-time methods for an all-at-once system from Volterra partial integral-differential problems is given. Finally, these new bounds are compared with other state-of-the-art results, verifying the effectiveness of our proposed results. [ABSTRACT FROM AUTHOR]
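For context, a sketch of the standard quantities involved (textbook definitions, not the paper's new bounds): a matrix $A = (a_{ij}) \in \mathbb{C}^{n \times n}$ is a Nekrasov matrix when its diagonal dominates the recursively weighted row sums $h_i(A)$, and for the simpler class of strictly diagonally dominant matrices, Varah's classical bound already controls $\|A^{-1}\|_\infty$.

```latex
% Nekrasov condition: |a_{ii}| > h_i(A) for all i, where
h_1(A) = \sum_{j \ne 1} |a_{1j}|, \qquad
h_i(A) = \sum_{j=1}^{i-1} |a_{ij}| \, \frac{h_j(A)}{|a_{jj}|} + \sum_{j=i+1}^{n} |a_{ij}|,
\quad i = 2, \ldots, n.

% Varah's bound for strictly diagonally dominant A (the idea the Nekrasov bounds refine):
\| A^{-1} \|_\infty \le \frac{1}{\min_i \bigl( |a_{ii}| - \sum_{j \ne i} |a_{ij}| \bigr)} .
```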
- Published
- 2024
- Full Text
- View/download PDF
8. Generation of normal distributions revisited.
- Author
-
Umeda, Takayuki
- Subjects
- *
CUMULATIVE distribution function , *RANDOM numbers , *GAUSSIAN distribution , *RANDOM sets , *SCIENTIFIC computing - Abstract
Normally distributed random numbers are commonly used in scientific computing in various fields. It is important to generate a set of random numbers as close to a normal distribution as possible for reducing initial fluctuations. Two types of samples from a uniform distribution are examined as source samples for inverse transform sampling methods. Three types of inverse transform sampling methods with new approximations of inverse cumulative distribution functions are also discussed for converting uniformly distributed source samples to normally distributed samples. [ABSTRACT FROM AUTHOR]
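As a concrete illustration of the inverse-transform idea (a generic sketch, not the paper's new approximations): the exact inverse CDF of the standard normal can be written via the inverse error function, and the two kinds of uniform source samples the abstract contrasts can be taken as pseudo-random draws versus an equally spaced (stratified) grid.

```python
import numpy as np
from scipy.special import erfinv

def normals_by_inverse_transform(u):
    """Map uniform samples u in (0,1) to standard normal samples via the inverse CDF."""
    return np.sqrt(2.0) * erfinv(2.0 * u - 1.0)

n = 10_000
u_random = np.random.default_rng(0).random(n)   # pseudo-random uniform source
u_grid = (np.arange(n) + 0.5) / n               # equally spaced uniform source
z_random = normals_by_inverse_transform(u_random)
z_grid = normals_by_inverse_transform(u_grid)
# The grid-based source typically shows smaller initial fluctuations:
print(z_random.std(), z_grid.std())
```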
- Published
- 2024
- Full Text
- View/download PDF
9. Sustainability-integrated value stream mapping with process mining.
- Author
-
Horsthofer-Rauch, Julia, Guesken, Sarah Ranjana, Weich, Jonas, Rauch, Alexander, Bittner, Maik, Schulz, Julia, and Zaeh, Michael F.
- Subjects
VALUE stream mapping ,SCIENTIFIC computing ,CLIMATE change ,PROCESS mining ,MANUFACTURING processes - Abstract
Value stream mapping is a well-established tool for analyzing and optimizing value streams in production. In its conventional form, it requires a high level of manual effort and is often inefficient in volatile and high-variance environments. The idea of digitizing value stream mapping to increase efficiency has thus been put forward. A common means suggested for digitization is Process Mining, a field related to Data Science and Process Management. Furthermore, adding sustainability aspects to value stream mapping has also been subject to research. Regarding the ongoing climate crisis and companies' endeavors to improve overall sustainability, integrating sustainability into value stream mapping must be deemed equally relevant. This research paper provides an overview of the state of the art of Process Mining-based and sustainability-integrated value stream mapping, proposes a framework for a combined approach, and presents technical details for the implementation of such an approach, including a validation from practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Data science and automation in the process of theorizing: Machine learning's power of induction in the co-duction cycle.
- Author
-
Kolkman, Daan, Lee, Gwendolyn K., and van Witteloostuijn, Arjen
- Subjects
- *
SCIENTIFIC computing , *DATA science , *MACHINE performance , *DATA analysis , *ABDUCTION - Abstract
Recent calls to take up data science either revolve around the superior predictive performance associated with machine learning or the potential of data science techniques for exploratory data analysis. Many believe that these strengths come at the cost of explanatory insights, which form the basis for theorization. In this paper, we show that this trade-off is false. When used as a part of a full research process, including inductive, deductive and abductive steps, machine learning can offer explanatory insights and provide a solid basis for theorization. We present a systematic five-step theory-building and theory-testing cycle that consists of: 1. Element identification (reduction); 2. Exploratory analysis (induction); 3. Hypothesis development (retroduction); 4. Hypothesis testing (deduction); and 5. Theorization (abduction). We demonstrate the usefulness of this approach, which we refer to as co-duction, in a vignette where we study firm growth with real-world observational data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Mathematical modeling of electroosmotically driven peristaltic propulsion due to transverse deflections of two periodically deformable curved tubes of unequal wavelengths.
- Author
-
Yadav, Pramod Kumar and Roshan, Muhammad
- Subjects
- *
ELECTRIC double layer , *ELECTRIC potential , *SCIENTIFIC computing , *SHEARING force , *FLUID flow - Abstract
The present study aims to investigate viscid fluid propulsion due to electroosmosis and transverse deflections of sinusoidally deformable tubes of unequal wavelengths in the presence of electro-kinetic forces. This situation is estimated from the physical model of physiological fluid flow through a tubular structure into which an artificial flexible tube is being inserted. In this model, both peristaltically deforming tubes are taken in a curved configuration. The flow-governing momentum equations are simplified by the approximation of the long wavelength as compared to the outer tube's radius, whereas the Debye–Hückel approximation is used to simplify the equations that govern the electric potential distribution. Here, the authors have used the DSolve command in the scientific computing software MATHEMATICA 14 to obtain the expressions for the electric potential and axial velocity of the viscid fluid. In this work, the authors have analyzed graphically the impact of various controlling parameters, such as the electro-physical parameters, curvature parameter, radius ratio, wavelength ratio, and amplitude ratios, on the various flow quantities during the transport of viscid fluid through a curved endoscope. Contour plots are also drawn to visualize the streamlines and to observe the impacts of the control parameters on fluid trapping. A noteworthy outcome of the present model is that an increment in the electro-physical parameters, such as the Helmholtz–Smoluchowski velocity and the Debye–Hückel parameter, is responsible for an enhancement in the shear stress at the inner tube's wall and in the axial velocity under the influence of electro-kinetic forces. This is because the electric double layer (EDL) thickness is reduced as the Debye–Hückel parameter is strengthened, and this reduced EDL thickness enhances the axial velocity of the transporting viscid fluid. The present model also suggests that the axial velocity of the viscid fluid can be reduced by enhancing the ratio of the wavelengths of the waves that travel down the walls of the outer and inner curved tubes. The above-mentioned results can play a significant role in developing and advancing endoscopes useful in many biomedical processes, such as gastroscopy, colonoscopy, and laparoscopy. [ABSTRACT FROM AUTHOR]
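For orientation, the Debye–Hückel approximation mentioned above linearizes the Poisson–Boltzmann equation for the EDL potential into a modified Helmholtz equation. A minimal sketch for the textbook special case of a straight circular tube of radius $a$ and wall zeta potential $\zeta$ (not the curved, two-tube geometry analyzed in the paper):

```latex
\nabla^2 \phi = \kappa^2 \phi
\qquad\Longrightarrow\qquad
\phi(r) = \zeta \,\frac{I_0(\kappa r)}{I_0(\kappa a)},
```

where $\kappa$ is the Debye–Hückel parameter (inverse Debye length) and $I_0$ is the modified Bessel function of the first kind. Increasing $\kappa$ thins the EDL and steepens $\phi$ near the wall, consistent with the velocity enhancement reported above.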
- Published
- 2024
- Full Text
- View/download PDF
12. The design of a scientific data management system based on DOMAS at CSNS-II (preliminary stage).
- Author
-
Hu, Peng, Wang, Li, Tang, Ming, Li, Yakang, Chen, Juan, Hu, Hao, Wang, Haofan, Zhuang, Bo, Qi, Fazhi, and Zhang, Junrong
- Subjects
- *
DATABASE management , *DATA management , *DATABASES , *NEUTRON sources , *SCIENTIFIC computing - Abstract
At the second stage of China Spallation Neutron Source (CSNS-II), it is predicted that 2 PB raw experimental data will be produced annually from twenty instruments. Scientific computing puts forward higher requirements for data sharing, utilization, retrieval, analysis efficiency, and security. However, the existing data management system (DMS) based on ICAT has several limitations including poor scalability of metadata database, imperfect data-management lifecycle and inflexible API. To ensure the accuracy, usability, scalability and efficiency of CSNS-II experimental data, a new scientific data management system is therefore designed based on the DOMAS framework developed by the Computing Center of IHEP. The data acquisition, transmission, storage and service systems are re-designed and tailored specifically for CSNS-II. Upon its completion, the new DMS will overcome the existing challenges and offer functions such as online display, search functionality and rapid download capabilities for metadata, raw data and analyzed data; flexible and user-friendly authorization; and data lifecycle management. Ultimately, the implementation of the new Data Management System (DMS) is expected to enhance the efficiency of experimental data analysis, propelling CSNS-II to achieve international advanced standards. Furthermore, it aims to reinforce self-reliance and technological strength in the field of science and technology at a high level in China. The development and deployment of the new DMS begin at the end of 2023. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A supercomputing-based support system for fused multi-mode computing.
- Author
-
Lu Yutong and Chen Zhiguang
- Subjects
ARTIFICIAL intelligence ,ELECTRONIC data processing ,SCIENTIFIC computing ,BIG data ,TELECOMMUNICATION systems
- Published
- 2024
- Full Text
- View/download PDF
14. ssc-cdi : A Memory-Efficient, Multi-GPU Package for Ptychography with Extreme Data.
- Author
-
Tonin, Yuri Rossi, Peixinho, Alan Zanoni, Brandao-Junior, Mauro Luiz, Ferraz, Paola, and Miqueles, Eduardo Xavier
- Subjects
X-ray imaging ,SCIENTIFIC computing ,INTEGRATED software ,SYNCHROTRONS ,EXPERTISE ,PYTHON programming language - Abstract
We introduce ssc-cdi, an open-source software package from the Sirius Scientific Computing family, designed for memory-efficient, single-node multi-GPU ptychography reconstruction. ssc-cdi offers a range of reconstruction engines in Python version 3.9.2 and C++/CUDA. It aims at developing local expertise and customized solutions to meet the specific needs of beamlines and user community of the Brazilian Synchrotron Light Laboratory (LNLS). We demonstrate ptychographic reconstruction of beamline data and present benchmarks for the package. Results show that ssc-cdi effectively handles extreme datasets typical of modern X-ray facilities without significantly compromising performance, offering a complementary approach to well-established packages of the community and serving as a robust tool for high-resolution imaging applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. On Verified Automated Reasoning in Propositional Logic: Teaching Sequent Calculus to Computer Science Students.
- Author
-
Lund, Simon Tobias and Villadsen, Jørgen
- Subjects
COMPUTER science students ,PROPOSITION (Logic) ,SCIENTIFIC computing ,SYSTEMS software ,MATHEMATICIANS ,SOFTWARE verification - Abstract
As the complexity of software systems is ever increasing, so is the need for practical tools for formal verification. Among these are automatic theorem provers, capable of solving various reasoning problems automatically, and proof assistants, capable of deriving more complex results when guided by a mathematician/programmer. In this paper we consider using the latter to build the former. In the proof assistant Isabelle/HOL we combine functional programming and logical program verification to build a theorem prover for propositional logic. We also consider how such a prover can be used to solve a reasoning task without much mental labor. The development is extended with a formalized proof system for writing machine-checked sequent calculus proofs. We consider how this can be used to teach computer science students about logic, automated reasoning and proof assistants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. MDSA: A Dynamic and Greedy Approach to Solve the Minimum Dominating Set Problem.
- Author
-
Okumuş, Fatih and Karcı, Şeyda
- Subjects
DOMINATING set ,GRAPH theory ,SCIENTIFIC computing ,TIME complexity ,DYNAMIC programming - Abstract
Graph theory is one of the fundamental structures in computer science used to model various scientific and engineering problems. Many problems within graph theory are categorized as NP-hard and NP-complete. One such problem is the minimum dominating set (MDS) problem, which seeks a minimum subset of vertices in a graph such that every vertex not in the subset is directly connected to a vertex in it. Due to its inherent complexity, developing an efficient polynomial-time method to address the MDS problem remains a significant challenge in graph theory. This paper introduces a novel algorithm that utilizes a centrality measure known as the Malatya Centrality to effectively address the MDS problem. The proposed algorithm, called the Malatya Dominating Set Algorithm (MDSA), leverages centrality values to identify dominating sets within a graph. It extends the Malatya centrality by incorporating a second-level centrality measure, which enhances the identification of dominating nodes. Through a systematic and algorithmic approach, these centrality values are employed to pinpoint the elements of the dominating set. The MDSA uniquely integrates greedy and dynamic programming strategies. At each step, the algorithm selects the most optimal (or near-optimal) node based on the centrality values (greedy approach) while updating the neighboring nodes' criteria to influence subsequent decisions (dynamic programming). The proposed algorithm demonstrates efficient performance, particularly on large-scale graphs, with time and space requirements scaling proportionally with the size of the graph and its average degree. Experimental results indicate that our algorithm outperforms existing methods, especially in terms of time complexity when applied to large datasets, showcasing its effectiveness in addressing the MDS problem. [ABSTRACT FROM AUTHOR]
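A minimal sketch of the greedy skeleton that such centrality-guided MDS heuristics share (plain greedy coverage on an adjacency-list graph; the Malatya centrality scoring itself is not reproduced here): repeatedly pick the node that dominates the most not-yet-dominated vertices, then update the remaining demand, mirroring the greedy selection plus dynamic criteria-update loop described above.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping each node to a set of neighbors. Returns a dominating set."""
    undominated = set(adj)
    dom_set = set()
    while undominated:
        # Greedy step: choose the node covering the most undominated vertices.
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        dom_set.add(best)
        # Dynamic update: covered vertices no longer count toward future choices.
        undominated -= adj[best] | {best}
    return dom_set

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(graph))  # {2, 3}
```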
- Published
- 2024
- Full Text
- View/download PDF
17. A study on I-lacunary statistical convergence of multiset sequences.
- Author
-
DEMİR, Nihal and GÜMÜŞ, Hafize
- Subjects
- *
SET theory , *SCIENTIFIC computing , *COMPUTER science , *EVERYDAY life , *MATHEMATICS - Abstract
In classical set theory, the elements of a set are written only once, but sets in which the same item is repeated several times occur in all areas of daily life. These sets are called multisets and are studied in many fields such as Mathematics, Physics, Chemistry, and Computer Science. Sequences consisting of elements of these sets are called multiset sequences. In this paper, we study the concept of I-lacunary statistical convergence of multiset sequences and investigate some important results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Notes on Modified Planar Kelvin–Stuart Models: Simulations, Applications, Probabilistic Control on the Perturbations.
- Author
-
Kyurkchiev, Nikolay, Zaevski, Tsvetelin, Iliev, Anton, Kyurkchiev, Vesselin, and Rahnev, Asen
- Subjects
- *
APPROXIMATION theory , *WEB-based user interfaces , *ANTENNAS (Electronics) , *SCIENTIFIC computing , *INTEGRALS - Abstract
In this paper, we propose a new modified planar Kelvin–Stuart model. We demonstrate some modules for investigating the dynamics of the proposed model. This will be included as an integral part of a planned, much more general Web-based application for scientific computing. Investigations in light of Melnikov's approach are considered. Some simulations and applications are also presented. The proposed new modifications of planar Kelvin–Stuart models contain many free parameters (the coefficients $g_i$, $i = 1, 2, \ldots, N$), which makes them attractive for use in engineering applications such as the antenna feeder technique (possible generation and simulation of antenna factors) and the theory of approximations (a possible good approximation of a given electrical stage). The probabilistic control of the perturbations is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Local multiset dimension of corona product on tree graphs.
- Author
-
Alfarisi, Ridho, Susilowati, Liliek, Dafik, and Kristiana, Arika Indah
- Subjects
- *
TREE graphs , *SCIENTIFIC computing , *COMPUTER science , *MULTIPLICITY (Mathematics) , *ROBOTS - Abstract
One of the topics of distance in graphs is the resolving set problem. This topic has many applications in science and technology, namely robot navigation, chemical structure, and computer science. Suppose $W = \{s_1, s_2, \ldots, s_k\} \subset V(G)$; the vertex representation of $x \in V(G)$ is $r_m(x|W) = \{d(x, s_1), d(x, s_2), \ldots, d(x, s_k)\}$, where $d(x, s_i)$ is the length of a shortest path between the vertex $x$ and a vertex in $W$, counted together with its multiplicity. The set $W$ is called a local $m$-resolving set of a graph $G$ if $r_m(v|W) \neq r_m(u|W)$ for every edge $uv \in E(G)$. A local $m$-resolving set of minimum cardinality is called the local multiset basis, and its cardinality is the local multiset dimension of $G$, denoted by $md_l(G)$. In this paper, we establish bounds on the local multiset dimension of graphs resulting from the corona product of tree graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Anderson acceleration with approximate calculations: Applications to scientific computing.
- Author
-
Lupo Pasini, Massimiliano and Laiu, M. Paul
- Subjects
- *
BOLTZMANN'S equation , *SCIENTIFIC computing , *LINEAR systems , *PROBLEM solving , *HEURISTIC - Abstract
Summary: We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for approximate calculations when applied to solve linear problems. We show that, when the approximate calculations satisfy the provided error bounds, the convergence of AA is maintained while the computational time could be reduced. We also provide computable heuristic quantities, guided by the theoretical error bounds, which can be used to automate the tuning of accuracy while performing approximate calculations. For linear problems, the use of heuristics to monitor the error introduced by approximate calculations, combined with the check on monotonicity of the residual, ensures the convergence of the numerical scheme within a prescribed residual tolerance. Motivated by the theoretical studies, we propose a reduced variant of AA, which consists in projecting the least‐squares used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration as prescribed by the computable heuristic quantities. We numerically show and assess the performance of AA with approximate calculations on: (i) linear deterministic fixed‐point iterations arising from the Richardson's scheme to solve linear systems with open‐source benchmark matrices with various preconditioners and (ii) non‐linear deterministic fixed‐point iterations arising from non‐linear time‐dependent Boltzmann equations. [ABSTRACT FROM AUTHOR]
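A compact sketch of plain Anderson acceleration for a fixed-point map $g$ (the exact variant, without the approximate-calculation monitoring or the adaptive least-squares projection proposed in the paper; the window size m and the NumPy least-squares solve are illustrative choices):

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    """Type-II Anderson acceleration for the fixed-point iteration x = g(x)."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                        # histories of iterates and residuals
    for _ in range(iters):
        f = g(x) - x                     # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
            dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)   # least-squares mixing
            x = x + f - (dX + dF) @ gamma
        else:
            x = x + f                    # first step: plain fixed-point update
    return x

# Example: accelerate a Richardson iteration g(x) = x + 0.2*(b - A x) for an SPD system.
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
print(anderson(lambda x: x + 0.2 * (b - A @ x), np.zeros(2)))  # ~ [0.2, 0.4]
```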
- Published
- 2024
- Full Text
- View/download PDF
21. Building and Sustaining a Community Resource for Best Practices in Scientific Software: The Story of BSSw.io.
- Author
-
Gupta, Rinku, Bernholdt, David E., Bartlett, Roscoe A., Grubel, Patricia A., Heroux, Michael A., McInnes, Lois Curfman, Miller, Mark C., Salim, Kasia, Shuler, Jean, Stevens, Deborah, Watson, Gregory R., and Wolfenbarger, Paul R.
- Abstract
The development of scientific software—a cornerstone of long-term collaboration and scientific progress—parallels the development of other types of software but still poses distinct challenges, especially in high-performance computing. Although web searches yield numerous resources on software engineering, there is still a scarcity specifically for scientific software development. This article introduces the Better Scientific Software site (https://bssw.io), a platform that hosts a community of researchers, developers, and practitioners who share their experiences and insights on scientific software development. Since 2017, this collaborative hub has gained traction within the scientific computing community, attracting a growing number of readers and contributors eager to share ideas and elevate their software development practices. In sharing the BSSw.io site's story, we hope to encourage further growth of the BSSw.io community through both readership and contributors, with a long-term goal of fostering culture change by increasing emphasis on best practices in scientific software. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. On the convergence properties of generalized Szász–Kantorovich type operators involving Frobenius–Euler–Simsek-type polynomials.
- Author
-
Agyuz, Erkan
- Subjects
EULER polynomials ,GENERATING functions ,SCIENTIFIC computing ,APPROXIMATION error ,POLYNOMIALS - Abstract
This work focuses on the study of approximation properties of functions by Szász-type operators involving Frobenius–Euler–Simsek-type polynomials, which have become more popular recently because of their special characteristics and functional organization. The convergence properties, such as uniform convergence and pointwise convergence in terms of the modulus of continuity and the Peetre K-functional, are investigated in depth with the help of these sequences of operators. This paper also includes an estimation of the error of the approximation of these sequences of operators for a particular class of functions. The estimates are computed using the Maple scientific computing program and presented in tables. [ABSTRACT FROM AUTHOR]
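For reference, the classical operators that such constructions generalize (standard textbook forms; the paper's Frobenius–Euler–Simsek-type generating functions modify the weights): the Szász–Mirakyan operator and its Kantorovich integral variant are

```latex
S_n(f; x) = e^{-nx} \sum_{k=0}^{\infty} \frac{(nx)^k}{k!}\, f\!\left(\frac{k}{n}\right),
\qquad
K_n(f; x) = e^{-nx} \sum_{k=0}^{\infty} \frac{(nx)^k}{k!}\; n \int_{k/n}^{(k+1)/n} f(t)\, dt .
```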
- Published
- 2024
- Full Text
- View/download PDF
23. UPC++ v1.0 Specification, Revision 2023.9.0
- Author
-
Bonachea, Dan and Kamil, Amir
- Subjects
Exascale Computing ,Library specification ,parallel distributed programming ,PGAS ,scientific computing ,UPC++ - Abstract
UPC++ is a C++ library providing classes and functions that support Partitioned Global Address Space (PGAS) programming. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). All communication operations are syntactically explicit and default to non-blocking; asynchrony is managed through the use of futures, promises and continuation callbacks, enabling the programmer to construct a graph of operations to execute asynchronously as high-latency dependencies are satisfied. A global pointer abstraction provides system-wide addressability of shared memory, including host and accelerator memories. The parallelism model is primarily process-based, but the interface is thread-safe and designed to allow efficient and expressive use in multi-threaded applications. The interface is designed for extreme scalability throughout, and deliberately avoids design features that could inhibit scalability.
- Published
- 2023
24. UPC++ v1.0 Programmer’s Guide, Revision 2023.9.0
- Author
-
Bachan, John, Baden, Scott B, Bonachea, Dan, Corbino, Johnny, Grossman, Jonathan, Hargrove, Paul H, Hofmeyr, Steven, Jacquelin, Mathias, Kamil, Amir, Van Straalen, Brian, and Waters, Daniel
- Subjects
Exascale Computing ,GASNet ,Library Programmer's Guide ,parallel distributed programming ,PGAS ,scientific computing ,UPC++ - Abstract
UPC++ is a C++ library that supports Partitioned Global Address Space (PGAS) programming. It is designed for writing efficient, scalable parallel programs on distributed-memory parallel computers. The key communication facilities in UPC++ are one-sided Remote Memory Access (RMA) and Remote Procedure Call (RPC). The UPC++ control model is single program, multiple-data (SPMD), with each separate constituent process having access to local memory as it would in C++. The PGAS memory model additionally provides one-sided RMA communication to a global address space, which is allocated in shared segments that are distributed over the processes. UPC++ also features Remote Procedure Call (RPC) communication, making it easy to move computation to operate on data that resides on remote processes.UPC++ was designed to support exascale high-performance computing, and the library interfaces and implementation are focused on maximizing scalability. In UPC++, all communication operations are syntactically explicit, which encourages programmers to consider the costs associated with communication and data movement. Moreover, all communication operations are asynchronous by default, encouraging programmers to seek opportunities for overlapping communication latencies with other useful work. UPC++ provides expressive and composable abstractions designed for efficiently managing aggressive use of asynchrony in programs. Together, these design principles are intended to enable programmers to write applications using UPC++ that perform well even on hundreds of thousands of cores.
- Published
- 2023
25. Computation of pairs of related Gauss-type quadrature rules.
- Author
-
Alqahtani, H., Borges, C.F., Djukić, D.Lj., Mutavdžić Djukić, R.M., Reichel, L., and Spalević, M.M.
- Subjects
- *
SCIENTIFIC computing - Abstract
The evaluation of Gauss-type quadrature rules is an important topic in scientific computing. To determine estimates or bounds for the quadrature error of a Gauss rule, another related quadrature rule is often evaluated, such as an associated Gauss-Radau or Gauss-Lobatto rule, an anti-Gauss rule, an averaged rule, an optimal averaged rule, or a Gauss-Kronrod rule when the latter exists. We discuss how pairs of a Gauss rule and a related Gauss-type quadrature rule can be computed efficiently by a divide-and-conquer method. [ABSTRACT FROM AUTHOR]
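A small sketch of how one such pair can be formed in practice (the Golub–Welsch eigenvalue approach for a Gauss–Legendre rule together with Laurie's anti-Gauss companion, not the divide-and-conquer method proposed in the paper): the Gauss rule comes from the symmetric tridiagonal Jacobi matrix, the anti-Gauss rule from the same matrix of one larger size with its last off-diagonal entry scaled by $\sqrt{2}$, and half the difference of the two quadrature values estimates the Gauss error.

```python
import numpy as np

def gauss_legendre(n):
    """Nodes/weights of the n-point Gauss-Legendre rule via the Jacobi matrix."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)        # off-diagonal recurrence coefficients
    nodes, vecs = np.linalg.eigh(np.diag(beta, 1) + np.diag(beta, -1))
    return nodes, 2.0 * vecs[0] ** 2            # weight = mu_0 * (first eigvec comp)^2

def anti_gauss_legendre(n):
    """(n+1)-point anti-Gauss companion: last off-diagonal scaled by sqrt(2) (Laurie)."""
    k = np.arange(1, n + 1)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)
    beta[-1] *= np.sqrt(2.0)
    nodes, vecs = np.linalg.eigh(np.diag(beta, 1) + np.diag(beta, -1))
    return nodes, 2.0 * vecs[0] ** 2

f = lambda x: np.exp(x)
xg, wg = gauss_legendre(5); xa, wa = anti_gauss_legendre(5)
G, A = wg @ f(xg), wa @ f(xa)
print(G, 0.5 * abs(A - G))   # Gauss value and an error estimate from the pair
```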
- Published
- 2025
- Full Text
- View/download PDF
26. Model design in data science: engineering design to uncover design processes and anomalies.
- Author
-
Bordas, Antoine, Le Masson, Pascal, and Weil, Benoit
- Subjects
- *
SCIENTIFIC computing , *DESIGN science , *SCIENTIFIC literature , *DATA science , *ENGINEERING design - Abstract
In the current data-rich environment, valorizing data has become a common task in data science and requires the design of a statistical model to transform input data into a desirable output. The literature in data science regarding the design of new models is abundant, while in parallel, other streams of literature, such as the epistemology of science, have shown the relevance of anomalies in model design processes. Anomalies are to be understood as unexpected observations in data, a historical example being the discovery of Mercury based on its famous anomalous perihelion precession. Therefore, this paper addresses the various design processes in data science and their relationships to anomalies. To do so, we conceptualize what designing a data science model means, and we derive three design processes based on the latest theories in engineering design. This allows us to formulate assumptions regarding the relationships between each design process and anomalies, which we test with several case studies. Notably, three processes for the design of models in data science are identified and, for each of them, the following information is provided: (1) the various knowledge leveraged and generated and (2) the specific relations with anomalies. From a theoretical standpoint, this work is one of the first applications of design methods in data science. This work paves the way for more research at the intersection of engineering design and data science, which could enrich both fields. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
27. Two improved nonlinear conjugate gradient methods with application in conditional model regression function.
- Author
-
Elhamid, Mehamdia Abd, Yacine, Chaib, and Tahar, Bechouat
- Subjects
CONJUGATE gradient methods ,NONLINEAR equations ,SCIENTIFIC computing ,LINEAR equations ,REGRESSION analysis - Abstract
The conjugate gradient (CG) method is one of the most important ideas in scientific computing; it is applied to solve linear systems of equations and nonlinear optimization problems. In this paper, based on a variant of the Dai-Yuan (DY) method and the Fletcher-Reeves (FR) method, two modified CG methods (named IDY and IFR) are presented and analyzed. The search direction of the presented methods fulfills the sufficient descent condition at each iteration. We establish the global convergence of the proposed algorithms under normal assumptions and the strong Wolfe line search. Preliminary numerical experiment results are presented, demonstrating the promise and effectiveness of the proposed methods. Finally, the proposed methods are further extended to solve the problem of the conditional model regression function. [ABSTRACT FROM AUTHOR]
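For orientation, a minimal sketch of a classical nonlinear CG iteration with the Fletcher-Reeves coefficient (the baseline the IDY/IFR variants modify; the strong Wolfe search used in the paper is replaced here by simple Armijo backtracking for brevity):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, iters=5000, tol=1e-8):
    """Nonlinear conjugate gradient with the Fletcher-Reeves beta."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                   # safeguard: restart when d is not a descent direction
            d = -g
        t = 1.0                          # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        d = -g_new + (g_new @ g_new) / (g @ g) * d   # Fletcher-Reeves update
        g = g_new
    return x

f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2    # Rosenbrock test function
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(fletcher_reeves(f, grad, [-1.2, 1.0]))   # approaches the minimizer (1, 1)
```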
- Published
- 2025
- Full Text
- View/download PDF
28. Accelerating imaging research at large-scale scientific facilities through scientific computing
- Author
-
Chunpeng Wang, Xiaoyun Li, Rongzheng Wan, Jige Chen, Jing Ye, Ke Li, Aiguo Li, Renzhong Tai, and Alessandro Sepe
- Subjects
scientific computing ,synchrotron ,imaging ,automation ,tomography ,Nuclear and particle physics. Atomic energy. Radioactivity ,QC770-798 ,Crystallography ,QD901-999 - Abstract
To date, computed tomography experiments, carried out at synchrotron radiation facilities worldwide, pose a tremendous challenge in terms of the breadth and complexity of the experimental datasets produced. Furthermore, near real-time three-dimensional reconstruction capabilities are becoming a crucial requirement in order to perform high-quality and result-informed synchrotron imaging experiments, where a large amount of data is collected and processed within a short time window. To address these challenges, we have developed and deployed a synchrotron computed tomography framework designed to automatically process online the experimental data from the synchrotron imaging beamlines, while leveraging the high-performance computing cluster capabilities to accelerate the real-time feedback to the users on their experimental results. We have, further, integrated it within a modern unified national authentication and data management framework, which we have developed and deployed, spanning the entire data lifecycle of a large-scale scientific facility. In this study, the overall architecture, functional modules and workflow design of our synchrotron computed tomography framework are presented in detail. Moreover, the successful integration of the imaging beamlines at the Shanghai Synchrotron Radiation Facility into our scientific computing framework is also detailed, which, ultimately, resulted in accelerating and fully automating their entire data processing pipelines. In fact, when compared with the original three-dimensional tomography reconstruction approaches, the implementation of our synchrotron computed tomography framework led to an acceleration in the experimental data processing capabilities, while maintaining a high level of integration with all the beamline processing software and systems.
- Published
- 2024
- Full Text
- View/download PDF
29. U-DeepONet: U-Net enhanced deep operator network for geologic carbon sequestration.
- Author
-
Diab, Waleed and Al Kobaisi, Mohammed
- Subjects
- *
ARTIFICIAL neural networks , *GEOLOGICAL carbon sequestration , *POROUS materials , *TWO-phase flow , *SCIENCE education , *SCIENTIFIC computing - Abstract
Learning operators with deep neural networks is an emerging paradigm for scientific computing. Deep Operator Network (DeepONet) is a modular operator learning framework that allows for flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet. [ABSTRACT FROM AUTHOR]
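For readers new to the DeepONet modularity mentioned above, a minimal PyTorch sketch of the vanilla architecture (plain MLP branch and trunk; U-DeepONet replaces one of these with a U-Net, which is not reproduced here): the branch encodes the input function sampled at m sensors, the trunk encodes a query coordinate, and the output is their dot product.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m_sensors, coord_dim, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.ReLU(),
                                    nn.Linear(128, p))       # encodes input function u
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.ReLU(),
                                   nn.Linear(128, p))        # encodes query point y
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u, y):
        # u: (batch, m_sensors) sensor values; y: (batch, coord_dim) query coordinates
        return (self.branch(u) * self.trunk(y)).sum(-1) + self.bias

model = DeepONet(m_sensors=100, coord_dim=2)
u = torch.randn(8, 100); y = torch.rand(8, 2)
print(model(u, y).shape)  # torch.Size([8])
```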
- Published
- 2024
- Full Text
- View/download PDF
30. SCALABLE APPROXIMATION AND SOLVERS FOR IONIC ELECTRODIFFUSION IN CELLULAR GEOMETRIES.
- Author
-
BENEDUSI, PIETRO, ELLINGSRUD, ADA JOHANNE, HERLYNG, HALVOR, and ROGNES, MARIE E.
- Subjects
- *
ELECTRODIFFUSION , *FINITE element method , *SCIENTIFIC computing , *EQUATIONS , *IONS - Abstract
The activity and dynamics of excitable cells are fundamentally regulated and moderated by extracellular and intracellular ion concentrations and their electric potentials. The increasing availability of dense reconstructions of excitable tissue at extreme geometric detail poses a new and clear scientific computing challenge for computational modeling of ion dynamics and transport. In this paper, we design, develop and evaluate a scalable numerical algorithm for solving the time-dependent and nonlinear KNP-EMI (Kirchhoff–Nernst–Planck extracellular-membrane-intracellular) equations describing ionic electrodiffusion for excitable cells with an explicit geometric representation of intracellular and extracellular compartments and interior interfaces. We also introduce and specify a set of model scenarios of increasing complexity suitable for benchmarking. Our solution strategy is based on an implicit-explicit discretization and linearization in time; a mixed finite element discretization of ion concentrations and electric potentials in intracellular and extracellular domains; and an algebraic multigrid-based, inexact block-diagonal preconditioner for GMRES. Numerical experiments with up to $10^8$ unknowns per time step and up to 256 cores demonstrate that this solution strategy is robust and scalable with respect to the problem size, time discretization, and number of cores. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. THE PROMISE AND PITFALLS OF MACHINE LEARNING IN OCEAN REMOTE SENSING.
- Author
-
Gray, Patrick Clifton, Boss, Emmanuel, Prochaska, J. Xavier, Kerner, Hannah, Demeaux, Charlotte Begouen, and Lehahn, Yoav
- Subjects
- *
COMPUTER vision , *REMOTE sensing , *COMPUTER science , *DISTANCE education , *SCIENTIFIC computing - Abstract
The proliferation of easily accessible machine learning algorithms and their apparent successes at inference and classification in computer vision and the sciences has motivated their increased adoption in ocean remote sensing. Our field, however, runs the risk of developing these models on limited training datasets, with sparse geographical and temporal sampling or ignoring the real data dimensionality, thereby constructing over-fitted or non-generalized algorithms. These models may perform poorly in new regimes or on new, anomalous phenomena that emerge in a changing climate. We highlight these issues and strategies for mitigating them, share a few heuristics to help users develop intuition for machine learning methods, and provide a vision for areas we believe are underexplored at the intersection of machine learning and ocean remote sensing. The ocean is a complex physical-biogeochemical system that we cannot mechanistically model well despite our best efforts. Machine learning has the potential to play an important role in improved process understanding, but we must always ask what we are learning after the model has learned. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Physics-informed quantum neural network for solving forward and inverse problems of partial differential equations.
- Author
-
Xiao, Y., Yang, L. M., Shu, C., Chew, S. C., Khoo, B. C., Cui, Y. D., and Liu, Y. Y.
- Subjects
- *
PARTIAL differential equations , *INVERSE problems , *TRIGONOMETRIC functions , *SCIENTIFIC computing , *LOGIC - Abstract
Recently, physics-informed neural networks (PINNs) have aroused an upsurge in the field of scientific computing including solving partial differential equations (PDEs), which convert the task of solving PDEs into an optimization challenge by adopting governing equations and definite conditions or observation data as loss functions. Essentially, the underlying logic of PINNs is based on the universal approximation and differentiability properties of classical neural networks (NNs). Recent research has revealed that quantum neural networks (QNNs), known as parameterized quantum circuits, also exhibit universal approximation and differentiability properties. This observation naturally suggests the application of PINNs to QNNs. In this work, we introduce a physics-informed quantum neural network (PI-QNN) by employing the QNN as the function approximator for solving forward and inverse problems of PDEs. The performance of the proposed PI-QNN is evaluated by various forward and inverse PDE problems. Numerical results indicate that PI-QNN demonstrates superior convergence over PINN when solving PDEs with exact solutions that are strongly correlated with trigonometric functions. Moreover, its accuracy surpasses that of PINN by two to three orders of magnitude, while requiring fewer trainable parameters. However, the computational time of PI-QNN exceeds that of PINN due to its operation on classical computers. This limitation may improve with the advent of commercial quantum computers in the future. Furthermore, we briefly investigate the impact of network architecture on PI-QNN performance by examining two different QNN architectures. The results suggest that increasing the number of trainable network layers can enhance the expressiveness of PI-QNN. However, an excessive number of data encoding layers significantly increases computational time, rendering the marginal gains in performance insufficient to compensate for the shortcomings in computational efficiency. [ABSTRACT FROM AUTHOR]
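The PINN loss construction referred to above is easy to make concrete. A minimal classical-PINN sketch for the 1D Poisson problem $u''(x) = -\pi^2 \sin(\pi x)$ with $u(0)=u(1)=0$ (an ordinary neural network standing in for the parameterized quantum circuit of PI-QNN):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, 1, 64).reshape(-1, 1).requires_grad_(True)  # collocation points
xb = torch.tensor([[0.0], [1.0]])                                 # boundary points

for step in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)  # u'' = -pi^2 sin(pi x)
    loss = (pde_residual**2).mean() + (net(xb)**2).mean()       # PDE + boundary terms
    opt.zero_grad(); loss.backward(); opt.step()

# The trained network should approximate u(x) = sin(pi x).
```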
- Published
- 2024
- Full Text
- View/download PDF
33. Residual and Unmodeled Ocean Tide Signal From 20+ Years of GRACE and GRACE‐FO Global Gravity Field Models.
- Author
-
Koch, Igor, Duwe, Mathias, and Flury, Jakob
- Subjects
- *
ICE sheet thawing , *WATER depth , *TERRITORIAL waters , *ORBITS (Astronomy) , *SCIENTIFIC computing , *OCEAN color - Abstract
We analyze remaining ocean tide signal in K/Ka-band range-rate (RR) postfit residuals, obtained after estimation of monthly gravity field solutions from 21.5 years of Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On sensor data. Low-pass filtered and numerically differentiated residuals are assigned to $5^{\circ} \times 5^{\circ}$ grids and a spectral analysis is performed using Lomb-Scargle periodograms. We identified enhanced amplitudes at over 30 ocean tide periods. Spectral replicas revealed several tides from sub-semidiurnal bands. Increased ocean tide amplitudes are located in expected regions, that is, in high-latitude, coastal and shallow water regions, although some tides also show distinct patterns over the open ocean. While most identified tides are considered during processing, and therefore the amplitudes represent residual signal w.r.t. the ocean tide model, several unmodeled tides were found, including astronomical degree-3 tides ${}^{3}M_1$, ${}^{3}N_2$, ${}^{3}L_2$, ${}^{3}M_3$, and radiational and/or compound tides $S_3$, $R_3/SK_3$, $T_3/SP_3$, $2SM_2$ and $2MK_3/MO_3$. The astronomical degree-3 tides were observed on a global level for the first time a few years ago in altimeter data. We are unaware of any global data-constrained solutions for the other tides. The amplitude patterns of these tides exhibit similarities to purely hydrodynamic solutions, and altimeter observations (astronomical degree-3 only). The sensitivity of the satellites to these rather small tidal effects demands their inclusion into the gravity field recovery processing to reduce orbit modeling errors and possible aliasing. The conducted study shows enormous potential of RR postfit residuals analysis for validating ocean tide models and improving gravity field recovery processing strategies. Plain Language Summary: Ocean tide models describe periodic mass movements in the oceans caused by the gravitational attraction of the Moon and Sun, and other more complex effects. These models are very important for describing satellite orbits and computing scientific products from satellite data, as ocean mass variations affect satellite motion in space. Ocean tide models are not error-free, particularly in polar regions where high-quality observations are lacking. This study analyzed 21.5 years of residual distance changes between the two satellites of the Gravity Recovery and Climate Experiment and its follow-on mission to identify ocean tide model errors. We identified more than 30 tidal frequencies at which the applied model for the description of the satellite orbits shows noticeable errors. Several of these tidal frequencies are rather minor phenomena that have only recently been observed globally. Others have never been observed globally in satellite observations. The results can be used to verify and enhance ocean tide models, and to adjust orbit modeling strategies. Both are essential to advance the quality of satellite data products, and, for example, to improve our understanding about ice sheet melting, sea-level change and other processes on Earth.
Key Points:
- Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On K/Ka-band range-rate postfit residuals are analyzed for residual and unmodeled ocean tide signal
- More than 30 prominent tidal frequencies from different bands were detected
- Range-rate postfit residuals analysis has enormous potential for ocean tide model validation and improvement of gravity field recovery
[ABSTRACT FROM AUTHOR]
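The spectral step described above is straightforward to reproduce on any unevenly sampled residual series. A hedged sketch using SciPy's Lomb-Scargle periodogram (synthetic data; the injected period is merely illustrative, and the routine expects angular frequencies):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 30, 500))          # irregular sampling times (days)
period = 0.5175                               # illustrative tidal period (~M2, in days)
y = 1e-9 * np.sin(2 * np.pi * t / period) + 1e-10 * rng.standard_normal(t.size)

periods = np.linspace(0.3, 1.2, 4000)
omega = 2 * np.pi / periods                   # angular frequencies for lombscargle
power = lombscargle(t, y - y.mean(), omega)
print(periods[np.argmax(power)])              # peak near the injected period
```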
- Published
- 2024
- Full Text
- View/download PDF
34. Accelerating imaging research at large‐scale scientific facilities through scientific computing.
- Author
-
Wang, Chunpeng, Li, Xiaoyun, Wan, Rongzheng, Chen, Jige, Ye, Jing, Li, Ke, Li, Aiguo, Tai, Renzhong, and Sepe, Alessandro
- Subjects
- *
ONLINE data processing , *SYNCHROTRON radiation , *SCIENTIFIC computing , *PROCESS capability , *COMPUTER workstation clusters - Abstract
To date, computed tomography experiments, carried out at synchrotron radiation facilities worldwide, pose a tremendous challenge in terms of the breadth and complexity of the experimental datasets produced. Furthermore, near real-time three-dimensional reconstruction capabilities are becoming a crucial requirement in order to perform high-quality and result-informed synchrotron imaging experiments, where a large amount of data is collected and processed within a short time window. To address these challenges, we have developed and deployed a synchrotron computed tomography framework designed to automatically process online the experimental data from the synchrotron imaging beamlines, while leveraging the high-performance computing cluster capabilities to accelerate the real-time feedback to the users on their experimental results. We have, further, integrated it within a modern unified national authentication and data management framework, which we have developed and deployed, spanning the entire data lifecycle of a large-scale scientific facility. In this study, the overall architecture, functional modules and workflow design of our synchrotron computed tomography framework are presented in detail. Moreover, the successful integration of the imaging beamlines at the Shanghai Synchrotron Radiation Facility into our scientific computing framework is also detailed, which, ultimately, resulted in accelerating and fully automating their entire data processing pipelines. In fact, when compared with the original three-dimensional tomography reconstruction approaches, the implementation of our synchrotron computed tomography framework led to an acceleration in the experimental data processing capabilities, while maintaining a high level of integration with all the beamline processing software and systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Energy and Scientific Workflows: Smart Scheduling and Execution.
- Author
-
WARADE, MEHUL, LEE, KEVIN, RANAWEERA, CHATHURIKA, and SCHNEIDER, JEAN-GUY
- Subjects
HIGH performance computing ,PARALLEL programming ,COMPUTER workstation clusters ,ENERGY consumption ,SCIENTIFIC computing ,WORKFLOW management systems - Abstract
Energy-efficient computation is an increasingly important target in modern-day computing. Scientific computation is conducted using scientific workflows that are executed on highly scalable compute clusters. The execution of these workflows is generally geared towards optimizing run-time performance, with the energy footprint of the execution being ignored. Evidently, minimizing execution time and minimizing energy consumption do not have to be mutually exclusive. The aim of the research presented in this paper is to highlight the benefits of energy-aware scientific workflow execution. In this paper, a set of requirements for an energy-aware scheduler are outlined and a conceptual architecture for the scheduler is presented. The evaluation of the conceptual architecture was performed by developing a proof-of-concept scheduler, which achieved around a 49.97% reduction in the energy consumption of the computation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Making Computer Science Accessible through Universal Design for Learning in Inclusive Education.
- Author
-
Salgarayeva, Gulnaz and Makhanova, Aigul
- Subjects
UNIVERSAL design ,INCLUSIVE education ,INFORMATION technology ,SCIENTIFIC computing ,COMPUTER science students - Abstract
The field of technology and computer science (CS) is developing dynamically. Just as anyone can learn to use computers at any age, students with special educational needs (SEN) aspire to acquire IT (information technology) knowledge on an equal footing with all other students. However, one of the obstacles facing students with SEN is the lack of educational materials and programs for CS in secondary schools. The authors have designed teaching materials and assignments that promote inclusion. This study aims to evaluate the impact of teaching resources developed based on universal design for learning (UDL) to make the school's CS course accessible to all students. The experiment involved 16 students and five teachers. For 8 weeks, students studied computer science using training materials based on UDL. Assessment of knowledge outcome indicators, particularly programming skills, was conducted before and after the experiment. After studying computer science through these tasks, the participants demonstrated a higher level of assimilation of the subject, as indicated by the subsequent test results (mean = 12.13, standard deviation = 1.20), compared to the pre-experiment test (mean = 8.94, standard deviation = 1.12). The study demonstrated that using special UDL-based tasks to teach CS makes it more accessible and has a positive impact on students with special educational needs. [ABSTRACT FROM AUTHOR] (An effect-size computation on these figures follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
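For scale, a standard Cohen's d computed from the reported means and standard deviations (a figure the abstract itself does not report, and one that treats the pre- and post-tests as independent samples, which ignores the pairing of the same 16 students) works out to

    s_{\mathrm{pooled}} = \sqrt{(1.20^2 + 1.12^2)/2} \approx 1.16, \qquad d = (12.13 - 8.94)/1.16 \approx 2.75,

a very large effect by conventional benchmarks.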
37. The accelerated tensor Kaczmarz algorithm with adaptive parameters for solving tensor systems.
- Author
-
Liao, Yimou, Li, Wen, and Yang, Dan
- Subjects
- *
SCIENTIFIC computing , *ARTIFICIAL intelligence , *SEPARATION of variables , *IMAGE processing - Abstract
Solving tensor systems is a common task in scientific computing and artificial intelligence. In this paper, we propose a tensor randomized average Kaczmarz method with adaptive parameters that converges exponentially to the unique least Frobenius norm solution of a given consistent tensor system under the t-product structure. To accelerate convergence, a tensor average Kaczmarz method based on the stochastic heavy-ball momentum technique (tAKSHBM) is proposed. The tAKSHBM method utilizes information from the iterates to update its parameters instead of relying on prior information, thereby addressing the problem of adaptive parameter learning. Additionally, a Fourier-transform-based variant of the tAKSHBM method is proposed, which can be effectively implemented in a distributed environment. It is proven that the iteration sequences generated by all the proposed methods converge for consistent tensor systems. Finally, we conduct experiments on both synthetic data and practical applications to support our theoretical results and demonstrate the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR] (A matrix-case sketch of the averaged Kaczmarz iteration with momentum follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
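The paper's methods operate on tensors under the t-product; purely as an illustration of the underlying iteration, here is the matrix-case analogue: averaged randomized Kaczmarz steps with a heavy-ball momentum term. The fixed block size and fixed momentum beta below are simplifying assumptions; the paper's contribution is precisely the adaptive, iterate-driven choice of such parameters.

    import numpy as np

    def averaged_kaczmarz_hbm(A, b, iters=2000, beta=0.3, block=5, seed=0):
        """Averaged randomized Kaczmarz with (fixed) heavy-ball momentum."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms = np.einsum("ij,ij->i", A, A)
        x_prev = np.zeros(n)
        x = np.zeros(n)
        for _ in range(iters):
            rows = rng.choice(m, size=block, replace=False)
            # Average the single-row Kaczmarz corrections over the block.
            g = np.zeros(n)
            for i in rows:
                g += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            g /= block
            # Heavy-ball step: gradient-like move plus momentum.
            x, x_prev = x + g + beta * (x - x_prev), x
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    print(np.linalg.norm(averaged_kaczmarz_hbm(A, A @ x_true) - x_true))

For a consistent system the printed error shrinks toward zero as the iteration count grows, mirroring the exponential convergence the paper proves in the tensor setting.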
38. PT-ESM: A Parameter-Testing and Integration Framework for Earth System Models Oriented towards High-Performance Computing.
- Author
-
Guo, Jiaxu, Hu, Liang, Xu, Gaochao, Hu, Juncheng, and Che, Xilong
- Subjects
- *
ATMOSPHERIC models , *SCIENTIFIC computing , *RESEARCH personnel , *SENSITIVITY analysis , *SWINDLERS & swindling - Abstract
High-performance computing (HPC) plays a crucial role in scientific computing, and the efficient use of HPC to accomplish computational tasks remains a focal point of research. This study addresses the issue of parameter tuning for Earth system models by proposing a comprehensive solution based on the concept of scientific workflows. This solution encompasses detailed methods from sensitivity analysis to parameter tuning and incorporates various approaches to enhance result accuracy. We validated the reliability of our methods using five cases in the Single Column Atmosphere Model (SCAM). Specifically, we investigated the influence of fluctuations of 11 typical parameters on 10 output variables. The experimental results show that the magnitude of the impact on the results varies significantly depending on which parameter is perturbed. These findings will help researchers develop more reasonable parameterization schemes for different regions and seasons. [ABSTRACT FROM AUTHOR] (A toy one-at-a-time screening loop follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
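The sensitivity-analysis stage can be pictured with the simplest screening loop: perturb one parameter at a time and record how each output moves. This is a much cruder stand-in for the paper's workflow (which perturbs 11 SCAM parameters and tracks 10 output variables); the toy model, the parameter names, and the 10% perturbation size are all illustrative assumptions.

    import numpy as np

    def oat_sensitivity(model, defaults, rel_perturb=0.1):
        """One-at-a-time screening: change in each output per parameter."""
        base = np.asarray(model(defaults))
        effects = {}
        for name, value in defaults.items():
            for sign in (+1, -1):
                trial = dict(defaults)
                trial[name] = value * (1 + sign * rel_perturb)
                effects[(name, sign)] = np.asarray(model(trial)) - base
        return effects

    # Toy stand-in for an expensive climate-model run with two outputs.
    def toy_model(p):
        return [p["tau"] ** 2 + p["c0"], p["tau"] * p["c0"]]

    print(oat_sensitivity(toy_model, {"tau": 2.0, "c0": 0.5}))

In a real workflow each call to the model is an HPC job, which is why the paper wraps this loop in a scientific-workflow engine rather than a plain for-loop.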
39. Improving resource utilization and fault tolerance in large simulations via actors.
- Author
-
Klenk, Kyle and Spiteri, Raymond J.
- Subjects
- *
SCIENTIFIC computing , *SCIENTIFIC models , *ACTORS , *CONTINENTS - Abstract
Large simulations with many independent sub-simulations are common in scientific computing. There are numerous challenges, however, associated with performing such simulations in shared computing environments. For example, sub-simulations may have wildly varying completion times or not complete at all, leading to unpredictable runtimes as well as unbalanced and inefficient use of human and computational resources. In this study, we use the actor model of concurrent computation to improve both the resource utilization and the fault tolerance of large-scale scientific computing simulations. More specifically, we use actors in the SUMMA model to manage a large-scale hydrological simulation over the North American continent with over 500,000 independent sub-simulations. We find that the actors implementation outperforms a standard array job submission as well as the job submission tool GNU Parallel by better balancing the computational load across processors. The actors implementation also improves fault tolerance and can eliminate the user intervention required to detect and re-submit failed jobs. [ABSTRACT FROM AUTHOR] (A minimal supervise-and-retry sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
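The core of the actor approach can be sketched in a few lines: a supervisor keeps all workers busy regardless of uneven completion times and resubmits failures automatically. The sketch below uses a Python process pool rather than an actor framework, and a randomly failing stub in place of a real SUMMA sub-simulation; both are stand-ins for illustration only.

    import random
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def run_subsimulation(sub_id: int) -> int:
        """Stand-in for one independent sub-simulation; fails randomly."""
        if random.random() < 0.05:
            raise RuntimeError(f"sub-simulation {sub_id} crashed")
        return sub_id

    def supervise(ids, max_retries=3, workers=8):
        """Supervise-and-retry loop in the spirit of the actor model."""
        done = []
        attempts = {i: 0 for i in ids}
        with ProcessPoolExecutor(max_workers=workers) as pool:
            pending = {pool.submit(run_subsimulation, i): i for i in ids}
            while pending:
                for fut in as_completed(list(pending)):
                    i = pending.pop(fut)
                    try:
                        done.append(fut.result())
                    except RuntimeError:
                        attempts[i] += 1
                        if attempts[i] <= max_retries:
                            # Resubmit instead of asking the user to babysit jobs.
                            pending[pool.submit(run_subsimulation, i)] = i
        return done

    if __name__ == "__main__":
        print(len(supervise(range(100))))

Compared with a static array-job split, this keeps processors busy when sub-simulations finish at wildly different times, which is the load-balancing effect the paper measures.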
40. The Analysis Of Master Thesis Studies By Using The Computer Assisted Teaching Method In The Field Of Science Education.
- Author
-
ŞAHİN, Ramazan and YAZICI, Mustafa
- Subjects
COMPUTERS in education ,COMPUTER assisted instruction ,ACADEMIC achievement testing ,COMPUTER science education ,SCIENTIFIC computing - Abstract
Copyright of Inonu University Journal of the Faculty of Education (INUJFE) is the property of Inonu University Journal of the Faculty of Education and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
41. Fostering Greater Persistence Among Underserved Computer Science Undergraduates: A Descriptive Study of the I-PASS Project.
- Author
-
Mickelson, Roslyn Arlin, Mikkelsen, Ian, Dorodchi, Mohsen, Cukic, Bojan, and Horn, Tytianna
- Subjects
COLLEGE environment ,IDENTITY (Psychology) ,INSTITUTIONAL environment ,SCIENTIFIC computing ,COMPUTER science ,MENTORING - Abstract
Female, Black, Latinx, Native American, low-income, and rural students remain underrepresented among computer science undergraduate degree recipients. Along with student, family, and secondary school characteristics, college organizational climate, curricula, and instructional practices shape the undergraduate experiences that foster persistence until graduation. Our quasi-experimental project, Improving the Persistence and Success of Students from Underrepresented Populations in Computer Science (I-PASS), is designed to augment students' persistence until they earn their computer science degree. Drawing on prior research, including Tinto's model of effective institutional actions for retention, I-PASS Scholars (all low-income, female, and/or members of underserved demographic groups) receive a four-year scholarship; mentoring, tutoring, and advising; and opportunities to integrate into the academic and social life of the campus. Students' written reflections and attitude surveys suggest I-PASS's components foster their retention by, among other mechanisms, enhancing their computer science identity development and sense of belonging in the major. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Validated integration of semilinear parabolic PDEs.
- Author
-
van den Berg, Jan Bouwe, Breden, Maxime, and Sheombarsing, Ray
- Subjects
PARTIAL differential equations ,BOUNDARY value problems ,ORBITS (Astronomy) ,SCIENTIFIC computing ,PARABOLIC differential equations - Abstract
Integrating evolutionary partial differential equations (PDEs) is an essential ingredient for studying the dynamics of the solutions. Indeed, simulations are at the core of scientific computing, but their mathematical reliability is often difficult to quantify, especially when one is interested in the output of a given simulation rather than in the asymptotic regime where the discretization parameter tends to zero. In this paper we present a computer-assisted proof methodology to perform rigorous time integration for scalar semilinear parabolic PDEs with periodic boundary conditions. We formulate an equivalent zero-finding problem based on a variation of constants formula in Fourier space. Using Chebyshev interpolation and domain decomposition, we then finish the proof with a Newton–Kantorovich type argument. The final output of this procedure is a proof of existence of an orbit, together with guaranteed error bounds between this orbit and a numerically computed approximation. We illustrate the versatility of the approach with results for the Fisher equation, the Swift–Hohenberg equation, the Ohta–Kawasaki equation and the Kuramoto–Sivashinsky equation. We expect that this rigorous integrator can form the basis for studying boundary value problems for connecting orbits in partial differential equations. [ABSTRACT FROM AUTHOR] (The variation-of-constants formulation at the heart of the method is recalled after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
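For reference, the variation-of-constants (Duhamel) formula that the zero-finding problem is built on, written for a semilinear parabolic equation u_t = Lu + N(u) with linear part L (this splitting is the standard one; the abstract does not spell it out):

    u(t) = e^{tL} u(0) + \int_0^t e^{(t-s)L} \, N(u(s)) \, ds .

In Fourier space, with L acting diagonally through eigenvalues \lambda_k, each mode satisfies

    \hat{u}_k(t) = e^{\lambda_k t} \hat{u}_k(0) + \int_0^t e^{\lambda_k (t-s)} \, \widehat{N(u)}_k(s) \, ds ,

and the rigorous integrator bounds the defect of a Chebyshev-interpolated candidate orbit in this fixed-point equation.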
43. The development of new efficient iterative methods for the solution of absolute value equations.
- Author
-
Ali, Rashid, Awwad, Fuad A., and Ismail, Emad A. A.
- Subjects
ABSOLUTE value ,SCIENTIFIC computing ,MANAGEMENT science ,EQUATIONS ,ENGINEERING - Abstract
The use of absolute value equations (AVEs) is widespread across a wide range of fields, including scientific computing, management science, and engineering. Our aim in this study is to introduce two new methods for solving AVEs and to explore their convergence characteristics. Furthermore, numerical experiments are carried out to demonstrate their feasibility, robustness, and efficacy. [ABSTRACT FROM AUTHOR] (The general form of an AVE and the classical iteration such methods are usually measured against are recalled after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
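For context, an absolute value equation has the form

    Ax - |x| = b, \qquad A \in \mathbb{R}^{n \times n},

with |x| taken componentwise. The abstract does not state the two new iterations, so as a reference point here is the classical Picard scheme such methods are typically compared against:

    x^{k+1} = A^{-1} \left( |x^{k}| + b \right),

which converges when all singular values of A exceed 1, since then \|A^{-1}\|_2 < 1 and the map is a contraction (the map x \mapsto |x| is 1-Lipschitz).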
44. Employing technology-enhanced feedback and scaffolding to support the development of deep science understanding using computer simulations.
- Author
-
Kaldaras, Leonora, Wang, Karen D., Nardo, Jocelyn E., Price, Argenta, Perkins, Katherine, Wieman, Carl, and Salehi, Shima
- Subjects
PSYCHOLOGICAL feedback ,SCIENTIFIC computing ,COMPUTER simulation ,CONSTRUCTIVISM (Education) ,COGNITIVE development ,EDUCATIONAL outcomes - Abstract
Constructivist learning theories consider deep understanding of the content to be the result of engagement in relevant learning activities with appropriate scaffolding that provides the learner with timely and substantive feedback. However, any group of students has a variety of levels of knowledge and cognitive development, which makes providing appropriate individual-level scaffolding and feedback challenging in the classroom. Computer simulations can help meet this challenge by providing technology-enhanced embedded scaffolding and feedback via specific simulation design. The use of computer simulations does not, however, guarantee development of deep science understanding. Careful research-driven design of the simulation and the accompanying teaching structure both play critical roles in achieving the desired learning outcomes. In this paper, we discuss the capabilities of computer simulations and the issues that can impact the learning outcomes when combining technology-enhanced scaffolding and feedback with external teaching structures. We conclude with suggestions of promising research avenues on simulation design and their use in the classroom to help students achieve deep science understanding. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. DYNAMICS OF A NEW CLASS OF OSCILLATORS: MELNIKOV’S APPROACH, POSSIBLE APPLICATION TO ANTENNA ARRAY THEORY.
- Author
-
Kyurkchiev, Nikolay, Zaevski, Tsvetelin, Iliev, Anton, Kyurkchiev, Vesselin, and Rahnev, Asen
- Subjects
- *
ANTENNA arrays , *ANTENNA radiation patterns , *WEB-based user interfaces , *SCIENTIFIC computing , *POSSIBILITY - Abstract
In this paper, we propose a new class of extended oscillators. Investigations based on Melnikov's approach are applied to identify possible chaotic behavior. We also demonstrate some specialized modules for investigating the dynamics of these oscillators. One possible application of Melnikov functions to the modeling and synthesis of radiating antenna patterns is also discussed. This will be included as an integral part of a planned, much more general Web-based application for scientific computing. [ABSTRACT FROM AUTHOR] (The standard planar Melnikov function is recalled after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
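For reference, the classical planar Melnikov function that this type of analysis builds on: for a perturbed system \dot{x} = f(x) + \varepsilon g(x, t), with g time-periodic and q_0(t) a homoclinic orbit of the unperturbed system,

    M(t_0) = \int_{-\infty}^{\infty} f(q_0(t)) \wedge g(q_0(t), t + t_0) \, dt, \qquad a \wedge b = a_1 b_2 - a_2 b_1 .

Simple zeros of M(t_0) signal transverse homoclinic intersections and hence chaotic dynamics; the paper's specific extended oscillator class is not reproduced here.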
46. Operator Learning Using Random Features: A Tool for Scientific Computing.
- Author
-
Nelsen, Nicholas H. and Stuart, Andrew M.
- Subjects
- *
SCIENTIFIC computing , *RANDOM operators , *PARTIAL differential equations , *KRIGING , *SUPERVISED learning - Abstract
Supervised operator learning centers on the use of training data, in the form of input-output pairs, to estimate maps between infinite-dimensional spaces. It is emerging as a powerful tool to complement traditional scientific computing, which may often be framed in terms of operators mapping between spaces of functions. Building on the classical random features methodology for scalar regression, this paper introduces the function-valued random features method. This leads to a supervised operator learning architecture that is practical for nonlinear problems yet structured enough to facilitate efficient training through the optimization of a convex, quadratic cost. Due to the quadratic structure, the trained model is equipped with convergence guarantees and error and complexity bounds, properties that are not readily available for most other operator learning architectures. At its core, the proposed approach builds a linear combination of random operators. This turns out to be a low-rank approximation of an operator-valued kernel ridge regression algorithm, and hence the method also has strong connections to Gaussian process regression. The paper designs function-valued random features that are tailored to the structure of two nonlinear operator learning benchmark problems arising from parametric partial differential equations. Numerical results demonstrate the scalability, discretization invariance, and transferability of the function-valued random features method. [ABSTRACT FROM AUTHOR] (The classical scalar random features building block is sketched after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
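The scalar building block the abstract refers to fits in a few lines and makes the key point visible: once the random features are drawn and frozen, training reduces to a convex quadratic (ridge) problem. Gaussian frequencies and the penalty value below are arbitrary illustrative choices; the paper's function-valued extension is not attempted here.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_features(X, W, b):
        """Random Fourier features phi(x) = cos(W x + b)."""
        return np.cos(X @ W.T + b)

    # Toy 1D regression problem.
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(200)

    m, lam = 300, 1e-6                     # feature count, ridge penalty
    W = rng.standard_normal((m, 1))        # frozen random frequencies
    b = rng.uniform(0, 2 * np.pi, size=m)  # frozen random phases

    Phi = random_features(X, W, b)
    # Convex quadratic training: solve the ridge normal equations.
    alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

    X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
    print(random_features(X_test, W, b) @ alpha)

The operator-learning version replaces the scalar features with function-valued ones while keeping exactly this convex training structure, which is what yields the convergence and complexity guarantees.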
47. SINGLE-PASS NYSTRÖM APPROXIMATION IN MIXED PRECISION.
- Author
-
CARSON, ERIN and DAUŽICKAITĖ, IEVA
- Subjects
- *
LOW-rank matrices , *MATRIX multiplications , *SCIENTIFIC computing , *MATRICES (Mathematics) , *INTUITION - Abstract
Low-rank matrix approximations appear in a number of scientific computing applications. We consider the Nyström method for approximating a positive semidefinite matrix A. In the case that A is very large or its entries can only be accessed once, a single-pass version may be necessary. In this work, we perform a complete rounding error analysis of the single-pass Nyström method in two precisions, where the computation of the expensive matrix product with A is assumed to be performed in the lower of the two precisions. Our analysis gives insight into how the sketching matrix and shift should be chosen to ensure stability, implementation aspects which have been commented on in the literature but not yet rigorously justified. We further develop a heuristic to determine how to pick the lower precision, which confirms the general intuition that the lower the desired rank of the approximation, the lower the precision we can use without detriment. We also demonstrate that our mixed precision Nyström method can be used to inexpensively construct limited memory preconditioners for the conjugate gradient method and derive a bound on the condition number of the resulting preconditioned coefficient matrix. We present numerical experiments on a set of matrices with various spectral decays and demonstrate the utility of our mixed precision approach. [ABSTRACT FROM AUTHOR] (A uniform-precision sketch of the single-pass Nyström method follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
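A sketch of the single-pass Nyström method under analysis, in its standard shifted, Cholesky-based formulation and in uniform precision; the shift choice below is a common heuristic standing in for the paper's precision-aware prescriptions, and in the mixed-precision variant the single product Y = A @ Omega is the step performed in low precision.

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def single_pass_nystrom(A, r, seed=0):
        """Rank-r Nystrom approximation of a PSD matrix A, touching A once."""
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((n, r))   # Gaussian sketching matrix
        Y = A @ Omega                         # the single pass over A
        # Small shift to keep the Cholesky factorization safe (heuristic).
        nu = np.sqrt(n) * np.finfo(Y.dtype).eps * np.linalg.norm(Y)
        Y_nu = Y + nu * Omega
        C = cholesky(Omega.T @ Y_nu, lower=False)   # small r x r factor
        B = solve_triangular(C, Y_nu.T, trans="T", lower=False).T
        return B, nu                          # A is approximated by B @ B.T - nu*I

    # Quick check on a synthetic PSD matrix with geometric spectral decay.
    rng = np.random.default_rng(1)
    n = 300
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = (Q * 2.0 ** -np.arange(n)) @ Q.T
    B, nu = single_pass_nystrom(A, r=30)
    print(np.linalg.norm(A - (B @ B.T - nu * np.eye(n))))

Because only Y = A @ Omega touches A, lowering the precision of that one product saves most of the cost, and the paper's analysis says how low that precision can go for a given target rank.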
48. QUANTUM ALGORITHMS FOR MULTISCALE PARTIAL DIFFERENTIAL EQUATIONS.
- Author
-
JUNPENG HU, SHI JIN, and LEI ZHANG
- Subjects
- *
PARTIAL differential equations , *TIME complexity , *SCIENTIFIC computing , *ALGORITHMS , *EQUATIONS - Abstract
Partial differential equation (PDE) models with multiple temporal/spatial scales are prevalent in several disciplines such as physics, engineering, and many others. These models are of great practical importance but notoriously difficult to solve due to prohibitively small mesh and time step sizes limited by the scaling parameter and the CFL condition. Another challenge in scientific computing comes from the curse of dimensionality. In this paper, we aim to provide a quantum algorithm, based on either direct approximations of the original PDEs or their homogenized models, for prototypical multiscale problems in PDEs, including elliptic, parabolic, and hyperbolic PDEs. To achieve this, we lift these problems to higher dimensions and leverage the recently developed Schrödingerization-based quantum simulation algorithms to efficiently reduce the computational cost of the resulting high-dimensional and multiscale problems. We examine the error contributions arising from discretization, homogenization, and relaxation, and analyze and compare the complexities of these algorithms in order to identify the best algorithms in terms of complexity for different equations in different regimes. [ABSTRACT FROM AUTHOR] (The Schrödingerization transform is recalled after this entry.)
- Published
- 2024
- Full Text
- View/download PDF
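As a reminder of the Schrödingerization idea the paper builds on (recalled from the general literature; the multiscale treatment adds homogenization and relaxation on top of it): a linear system \dot{u} = Au is split as A = H_1 + iH_2 with H_1 = (A + A^\dagger)/2 and H_2 = (A - A^\dagger)/(2i) both Hermitian. The warped phase transformation w(t, p) = e^{-p} u(t), p > 0, then gives

    \partial_t w = -H_1 \, \partial_p w + i H_2 w ,

and a Fourier transform in the auxiliary variable p turns this into a genuine Schrödinger equation,

    \partial_t \hat{w}(t, \xi) = -i \left( \xi H_1 - H_2 \right) \hat{w}(t, \xi) ,

with a Hermitian Hamiltonian, which is the form amenable to quantum Hamiltonian simulation.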
49. Automatic generation of ARM NEON micro-kernels for matrix multiplication.
- Author
-
Alaejos, Guillermo, Martínez, Héctor, Castelló, Adrián, Dolz, Manuel F., Igual, Francisco D., Alonso-Jordá, Pedro, and Quintana-Ortí, Enrique S.
- Subjects
- *
MATRIX multiplications , *LINEAR algebra , *NEON , *SCIENTIFIC computing , *C++ , *DEEP learning , *KERNEL operating systems - Abstract
General matrix multiplication (gemm) is a fundamental kernel in scientific computing and current frameworks for deep learning. Modern realisations of gemm are mostly written in C, on top of a small, highly tuned micro-kernel that is usually encoded in assembly. High-performance realisations of gemm in linear algebra libraries generally include a single micro-kernel per architecture, usually implemented by an expert. In this paper, we explore a couple of paths to automatically generate gemm micro-kernels, either using C++ templates with vector intrinsics or high-level Python scripts that directly produce assembly code. Both solutions can integrate high-performance software techniques, such as loop unrolling and software pipelining, accommodate any data type, and easily generate micro-kernels of any requested dimension. The performance of this solution is tested on three ARM-based cores and compared with state-of-the-art libraries for these processors: BLIS, OpenBLAS and ArmPL. The experimental results show that the auto-generation approach is highly competitive, mainly due to the possibility of adapting the micro-kernel to the problem dimensions. [ABSTRACT FROM AUTHOR] (A toy micro-kernel generator in the same spirit follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
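The generation idea is easy to picture: given the micro-kernel dimensions, a script unrolls the register loads and fused multiply-adds. The toy Python generator below emits C with NEON intrinsics rather than raw assembly (the paper's Python path emits assembly directly), assumes the conventional packed micro-panel layout for A and B with mr and nr multiples of the vector length, and ignores register pressure, software pipelining, and edge cases.

    def gen_ukernel(mr=8, nr=4, vl=4):
        """Emit a C micro-kernel computing C += A * B with NEON intrinsics."""
        va = mr // vl                  # float32x4 vectors covering one A column
        out = [f"void ukernel_{mr}x{nr}(int kc, const float *A, "
               f"const float *B, float *C, int ldc) {{"]
        for j in range(nr):            # keep the mr x nr block of C in registers
            for i in range(va):
                out.append(f"  float32x4_t c{i}{j} = "
                           f"vld1q_f32(&C[{j}*ldc + {i*vl}]);")
        out.append("  for (int k = 0; k < kc; ++k) {")
        for i in range(va):
            out.append(f"    float32x4_t a{i} = vld1q_f32(&A[k*{mr} + {i*vl}]);")
        for jv in range(nr // vl):
            out.append(f"    float32x4_t b{jv} = vld1q_f32(&B[k*{nr} + {jv*vl}]);")
        for j in range(nr):            # fully unrolled rank-1 update
            for i in range(va):
                out.append(f"    c{i}{j} = vfmaq_laneq_f32("
                           f"c{i}{j}, a{i}, b{j//vl}, {j % vl});")
        out.append("  }")
        for j in range(nr):
            for i in range(va):
                out.append(f"  vst1q_f32(&C[{j}*ldc + {i*vl}], c{i}{j});")
        out.append("}")
        return "\n".join(out)

    print(gen_ukernel())

Changing mr and nr regenerates a differently shaped micro-kernel in milliseconds, which is the adaptability to problem dimensions that the paper credits for its competitive performance.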
50. Modified Sombor Spectral Radii and Modified Sombor Energies of Splitting and Shadow Graphs.
- Author
-
BILAL, Ahmad and MOBEEN MUNIR, Muhammad
- Subjects
INTERMOLECULAR forces ,EIGENVALUES ,COMPUTER science ,APPLICATION software ,SCIENTIFIC computing - Abstract
Gutman et al. introduced the Sombor and modified Sombor indices because of their rapidly growing applications in chemistry and network analysis. The graph energy ε(G) and the spectral radius ℘(G) of a graph G are essential quantities associated with the eigenvalues of a matrix of G and, chemically, with intermolecular forces. These graph invariants have many useful applications in computer science, networking, and molecular computing. There are numerous variants of ε(G) and ℘(G), obtained by substituting another matrix for the adjacency matrix. The modified Sombor energy MSε(G) is defined as the sum of the absolute eigenvalues of the modified Sombor matrix; in other words, MSε(G) is the trace norm of the modified Sombor matrix. The modified Sombor spectral radius ℘MS is defined as the largest absolute eigenvalue of the modified Sombor matrix. The major focus of this article is on MSε(G) and ℘MS for generalized shadow and splitting graphs. The main problem in which we are particularly interested is how MSε(Splt(G)) and MSε(Sht(G)) compare to MSε(G); along similar lines, we are also interested in how ℘MS(Splt(G)) and ℘MS(Sht(G)) compare to ℘MS(G). We address these questions by focusing on splitting and shadow graphs. [ABSTRACT FROM AUTHOR] (A small computational sketch of the modified Sombor energy follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
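Since the definitions are elementary, both invariants can be computed directly from an adjacency matrix. A small numpy sketch using the usual definition of the modified Sombor matrix (entry 1/sqrt(d_u^2 + d_v^2) on each edge, zero elsewhere); the splitting and shadow constructions studied in the paper are not reproduced here.

    import numpy as np

    def modified_sombor_invariants(adj):
        """Return (energy, spectral radius) of the modified Sombor matrix."""
        A = np.asarray(adj, dtype=float)
        deg = A.sum(axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            M = A / np.sqrt(np.add.outer(deg ** 2, deg ** 2))
        M[A == 0] = 0.0
        eig = np.linalg.eigvalsh(M)        # M is symmetric
        return np.abs(eig).sum(), np.abs(eig).max()

    # 4-cycle C4: all degrees are 2, so each nonzero entry is 1/sqrt(8);
    # the adjacency spectrum {2, 0, 0, -2} scales to {0.707, 0, 0, -0.707}.
    C4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
    print(modified_sombor_invariants(C4))   # approx (1.414, 0.707)

The same function applied to a graph and to its splitting or shadow graph gives exactly the comparisons MSε(Splt(G)) versus MSε(G) and ℘MS(Splt(G)) versus ℘MS(G) that the paper studies analytically.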