299 results for "Scientific software"
Search Results
2. Characterising reproducibility debt in scientific software: A systematic literature review
- Author
- Hassan, Zara, Treude, Christoph, Norrish, Michael, Williams, Graham, and Potanin, Alex
- Published
- 2025
- Full Text
- View/download PDF
3. Chapter One: Predictive microbiology through the last century. From paper to Excel and towards AI.
- Author
- Garre, Alberto, Fernández, Pablo, Grau-Noguer, Eduard, Guillén, Silvia, Portaña, Samuel, Possas, Arícia, and Vila, Montserrat
- Abstract
This chapter provides a historical perspective on predictive microbiology: from its inception to its current state, including potential future developments. A look back to its origins in the 1920s underlines that scientists at the time had great ideas that could not be developed due to the lack of proper technologies. Indeed, advancements in predictive microbiology mostly halted until the 1980s, when computing machines became broadly available, evidencing how these technologies were an enabler of the field. Nowadays, predictive microbiology is a mature scientific field. There is a general consensus on experimental and computational methodologies, with software tools implementing these principles in a user-friendly manner. As a result, predictive microbiology is currently a useful tool for researchers, food industries and food safety legislators. On the other hand, this methodology has some important limitations that would be hard to solve without a reconsideration of some of its basic principles. In this sense, Artificial Intelligence and Data Science present great promise to advance predictive microbiology even further. Nevertheless, this would require the development of a novel conceptual framework that accommodates these novel technologies into predictive microbiology. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. High-performance finite elements with MFEM.
- Author
- Andrej, Julian, Atallah, Nabil, Bäcker, Jan-Phillip, Camier, Jean-Sylvain, Copeland, Dylan, Dobrev, Veselin, Dudouit, Yohann, Duswald, Tobias, Keith, Brendan, Kim, Dohyun, Kolev, Tzanio, Lazarov, Boyan, Mittal, Ketan, Pazner, Will, Petrides, Socratis, Shiraiwa, Syun'ichi, Stowell, Mark, and Tomov, Vladimir
- Subjects
- FINITE element method, COMPUTATIONAL physics, DISCRETIZATION methods, C++, SUPERCOMPUTERS
- Abstract
The MFEM (Modular Finite Element Methods) library is a high-performance C++ library for finite element discretizations. MFEM supports numerous types of finite element methods and is the discretization engine powering many computational physics and engineering applications across a number of domains. This paper describes some of the recent research and development in MFEM, focusing on performance portability across leadership-class supercomputing facilities, including exascale supercomputers, as well as new capabilities and functionality, enabling a wider range of applications. Much of this work was undertaken as part of the Department of Energy's Exascale Computing Project (ECP) in collaboration with the Center for Efficient Exascale Discretizations (CEED). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Introduction of the Capsules environment to support further growth of the SBGrid structural biology software collection.
- Author
- Herre, Carol, Ho, Alex, Eisenbraun, Ben, Vincent, James, Nicholson, Thomas, Boutsioukis, Giorgos, Meyer, Peter A., Ottaviano, Michelle, Krause, Kurt L., Key, Jason, and Sliz, Piotr
- Subjects
- APPLICATION software, COMPUTER software, CONSORTIA, BIOLOGY, RESEARCH personnel, SYNTHETIC biology, COMPUTER software reusability
- Abstract
The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on‐premises and cloud‐based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross‐platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict‐free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. General resource manager for computationally demanding scientific software (MARE)
- Author
- Guo, Xinchen, Charles, James, Narendra, Namita, Klimeck, Gerhard, and Kubis, Tillmann
- Published
- 2024
- Full Text
- View/download PDF
7. AWARE-Light: a smartphone tool for experience sampling and digital phenotyping.
- Author
- van Berkel, Niels, D'Alfonso, Simon, Kurnia Susanto, Rio, Ferreira, Denzil, and Kostakos, Vassilis
- Subjects
- HUMAN behavior, MOBILE apps, ECOLOGICAL momentary assessments (Clinical psychology), INTEGRATED software, MENTAL health
- Abstract
Due to their widespread adoption, frequent use, and diverse sensor capabilities, smartphones have become a powerful tool for academic studies focused on sampling human behaviour. Despite these technological advances, the need for researchers to develop their own software packages in order to run smartphone-based studies has created a clear barrier to entry for researchers without the financial means, time, or technical knowledge to overcome it. We present AWARE-Light, a new smartphone application for data collection from study participants, accompanied by a website that allows any researcher to easily configure their own study. To highlight the possibilities of our tool, we present a research scenario on digital phenotyping for mental health. Furthermore, we describe the methodological configuration possibilities offered by our tool, and complement the technical configuration possibilities with recommendations from the existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. EESSI: A cross‐platform ready‐to‐use optimised scientific software stack.
- Author
- Dröge, Bob, Holanda Rusu, Victor, Hoste, Kenneth, van Leeuwen, Caspar, O'Cais, Alan, and Röblitz, Thomas
- Subjects
- COMPUTER software installation, ARTIFICIAL intelligence, COMPUTER software, NUCLEOTIDE sequencing, SUPERCOMPUTERS, LANDSCAPE changes, MICROPROCESSORS
- Abstract
Getting scientific software installed correctly and ensuring it performs well has been a ubiquitous problem for several decades, and is currently compounded by the changing landscape of computational science with the (re-)emergence of different microprocessor families, and the expansion to additional scientific domains like artificial intelligence and next-generation sequencing. The European Environment for Scientific Software Installations (EESSI) project aims to provide a ready-to-use stack of scientific software installations that can be leveraged easily on a variety of platforms, ranging from personal workstations to cloud environments and supercomputer infrastructure, without making compromises with respect to performance. In this article, we provide a detailed overview of the project, highlight potential use cases, and demonstrate that the performance of the provided scientific software installations can be competitive with system-specific installations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Leveraging big data of immune checkpoint blockade response identifies novel potential targets.
- Author
- Bareche, Y., Kelly, D., Abbas-Aghababazadeh, F., Nakano, M., Esfahani, P.N., Tkachuk, D., Mohammad, H., Samstein, R., Lee, C.-H., Morris, L.G.T., Bedard, P.L., Haibe-Kains, B., and Stagg, J.
- Subjects
- IMMUNE checkpoint proteins, RNA-binding proteins, BIG data, WEB-based user interfaces, CELL death
- Abstract
The development of immune checkpoint blockade (ICB) has changed the way we treat various cancers. While ICB produces durable survival benefits in a number of malignancies, a large proportion of treated patients do not derive clinical benefit. Recent clinical profiling studies have shed light on molecular features and mechanisms that modulate response to ICB. Nevertheless, none of these identified molecular features were investigated in large enough cohorts to be of clinical value. Literature review was carried out to identify relevant studies including clinical dataset of patients treated with ICB [anti-programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1), anti-cytotoxic T-lymphocyte antigen 4 (CTLA-4) or the combination] and available sequencing data. Tumor mutational burden (TMB) and 37 previously reported gene expression (GE) signatures were computed with respect to the original publication. Biomarker association with ICB response (IR) and survival (progression-free survival/overall survival) was investigated separately within each study and combined together for meta-analysis. We carried out a comparative meta-analysis of genomic and transcriptomic biomarkers of IRs in over 3600 patients across 12 tumor types and implemented an open-source web application (predictIO.ca) for exploration. TMB and 21/37 gene signatures were predictive of IRs across tumor types. We next developed a de novo GE signature (PredictIO) from our pan-cancer analysis and demonstrated its superior predictive value over other biomarkers. To identify novel targets, we computed the T-cell dysfunction score for each gene within PredictIO and their ability to predict dual PD-1/CTLA-4 blockade in mice. Two genes, F2RL1 (encoding protease-activated receptor-2) and RBFOX2 (encoding RNA-binding motif protein 9), were concurrently associated with worse ICB clinical outcomes, T-cell dysfunction in ICB-naive patients and resistance to dual PD-1/CTLA-4 blockade in preclinical models. 
Our study highlights the potential of large-scale meta-analyses in identifying novel biomarkers and potential therapeutic targets for cancer immunotherapy. • A large pan-cancer GE meta-analysis identified a de novo GE signature, called PredictIO. • PredictIO demonstrated a better and more consistent ability to predict IR compared to TMB and other signatures. • F2RL1 and RBFOX2 exhibited strong associations with ICB resistance, T-cell dysfunction and a pro-tumoral microenvironment. • F2RL1 and RBFOX2 appear as interesting therapeutic targets to improve ICB efficacy. • An open-source web application (PredictIO.ca) was developed, allowing researchers to explore their biomarker of interest. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. CACPPAF, a COMSOL application to characterize polyelectrolyte properties of actin filaments
- Author
- Santiago Manrique-Bedoya and Marcelo Marucho
- Subjects
- Comsol application, Actin filament, Polyelectrolyte properties, Electrochemical interactions, Scientific software, Electrochemical phenomena, Computer software, QA76.75-76.765
- Abstract
We present an interactive COMSOL web application that allows both expert and non-expert users to numerically evaluate the electric potential, ionic concentration distribution, velocity profile, and ionic current along a molecular structure surface characterizing actin filaments. This online computational and visualization tool runs on a high performance server (http://marucholab.physics.utsa.edu:2036), that enables users to perform multiple analyses and comparisons without compromising computational resources. As a unique feature, the multiphysics formulation accounts for the filament surface roughness, the finite filament size, and the ionic condensation, providing a deeper understanding of the electrochemical phenomena taking place at the interface between the irregular charged shape of the filament and its biological environment. Overall, the interactive component allows investigators to characterize polyelectrolyte properties of healthy and abnormal actin filaments in physiological and pathological conditions.
- Published
- 2022
- Full Text
- View/download PDF
11. Evolving requirements for materials modelling software and underlying method developments: an inventory and future outlook [version 2; peer review: 2 approved]
- Author
- Ilian Todorov
- Subjects
- materials modelling, molecular simulations, method development, scientific software, computer simulations, HPC, eng, Science, Social Sciences
- Abstract
This European Materials Modelling Council (EMMC) study provides an outline of the survey intent and ambitions, followed by an analysis of the results and a follow-up discussion focused on the future perspectives of the EMMC. The survey covers materials modelling and characterisation communities in both academia and industry. It provides a profile of the surveyed players in these communities and a scaled measure of their usage of computational methodologies. The survey outcomes include: (i) summary views of the recent as well as perceived future trends of materials modelling and its associated fields, with respect to the two focus areas surveyed, Model Development and Software, (ii) the main adoption factors and associated bottlenecks for computational methods and software, (iii) the most targeted materials properties and digital twins approaches, and (iv) the wider community's expectations of how the EMMC can help facilitate, fulfil and drive further the European Materials Modelling Roadmap to the benefit of the European Commission's (EC's) research and innovation.
- Published
- 2022
- Full Text
- View/download PDF
12. Evolving requirements for materials modelling software and underlying method developments: an inventory and future outlook [version 1; peer review: 2 approved]
- Author
- Ilian Todorov
- Subjects
- materials modelling, molecular simulations, method development, scientific software, computer simulations, HPC, eng, Science, Social Sciences
- Abstract
This European Materials Modelling Council (EMMC) study provides an outline of the survey intent and ambitions, followed by an analysis of the results and a follow-up discussion focused on the future perspectives of the EMMC. The survey covers materials modelling and characterisation communities in both academia and industry. It provides a profile of the surveyed players in these communities and a scaled measure of their usage of computational methodologies. The survey outcomes include: (i) summary views of the recent as well as perceived future trends of materials modelling and its associated fields, with respect to the two focus areas surveyed, Model Development and Software, (ii) the main adoption factors and associated bottlenecks for computational methods and software, (iii) the most targeted materials properties and digital twins approaches, and (iv) the wider community's expectations of how the EMMC can help facilitate, fulfil and drive further the European Materials Modelling Roadmap to the benefit of the European Commission's (EC's) research and innovation.
- Published
- 2022
- Full Text
- View/download PDF
13. A refinement strategy for identification of scientific software from bioinformatics publications.
- Author
- Jiang, Lu, Kang, Xinyu, Huang, Shan, and Yang, Bo
- Abstract
In the field of bioinformatics, a large body of classical software has become a necessary research tool. To measure the influence of scientific software as one kind of important intellectual product, a few strategies have been proposed to identify software names from the full texts of papers and to collect usage data of packages in bioinformatics research. However, the performance of these strategies is limited because of the high imbalance of the data in the full texts. This study proposes EnsembleSVMs-CRF, a two-step refinement strategy based on ensemble learning that gradually increases the sentences that contain software mentions to improve the performance of named entity recognition. The experiment on the bioinformatics corpus shows that the performance of EnsembleSVMs-CRF, in terms of the local F1 (78.81%) and the global F1-A (73.49%), is superior to the rule-based bootstrapping method and direct CRF. Application of this strategy to the articles published between 2013 and 2017 in 27 bioinformatics journals extracted 8,239 unique packages. The most popular 50 packages thus identified demonstrate that most of them are professional software, which generally requires interdisciplinary knowledge rather than programming skill. Meanwhile, we found that researchers in bioinformatics tend to use free scientific software, and the application of general software is increasing compared with professional software. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Food modelling strategies and approaches for knowledge transfer.
- Author
- Kansou, Kamal, Laurier, Wim, Charalambides, Maria N., Della-Valle, Guy, Djekic, Ilija, Feyissa, Aberham Hailu, Marra, Francesco, Thomopoulos, Rallou, and Bredeweg, Bert
- Subjects
- KNOWLEDGE transfer, EDUCATION software, EMBEDDINGS (Mathematics), ONLINE education, TEACHING models
- Abstract
Scientific software incorporates models that capture fundamental domain knowledge. This software is becoming increasingly relevant as an instrument for food research. However, scientific software is currently hardly shared among and (re-)used by stakeholders in the food domain, which hampers effective dissemination of knowledge, i.e. knowledge transfer. This paper reviews selected approaches, best practices, hurdles and limitations regarding knowledge transfer via software and the mathematical models embedded in it, to provide points of reference for the food community. The paper focusses on three aspects. Firstly, the publication of digital objects on the web, which offers valorisation of software as a scientific asset. Secondly, building transferable software as a way to share knowledge through collaboration with experts and stakeholders. Thirdly, developing food engineers' modelling skills through the use of food models and software in education and training. • Re-use of models and software should favour knowledge transfer in the food domain. • The most suitable diffusion channel depends on the type of model. • Diffusion channels relying on web-based technologies are the most promising solutions. • Building scientific software from shared knowledge facilitates knowledge transfer. • Initiatives for teaching modelling online are being developed by food scientists. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Visionary: a framework for analysis and visualization of provenance data.
- Author
- de Oliveira, Weiner, Braga, Regina, David, José Maria N., Stroele, Victor, Campos, Fernanda, and Castro, Gabriellla
- Subjects
- DATA visualization, SOFTWARE visualization, SECURITY systems, ONTOLOGY, DATA analysis
- Abstract
Provenance is recognized as central to establishing reliability and providing security in computational systems. In scientific workflows, provenance is considered essential to support experiments' reproducibility, interpretation of results, and problem diagnosis. We consider that these requirements can also apply in new application domains, such as software processes and IoT. However, for a better understanding and use of provenance data, efficient and user-friendly mechanisms are needed. Ontologies, complex networks, and software visualization can help in this process by generating new data insights and strategic information for decision-making. This paper presents the Visionary framework, designed to assist in the understanding and use of provenance data through ontologies, complex network analysis, and software visualization techniques. The framework captures provenance data and generates new information using ontologies and structural analysis of the provenance graph. The visualization presents and highlights inferences and results obtained with the data analysis. Visionary is an application-domain-independent framework that can be adapted to any system that uses the PROV provenance model. Evaluations were carried out, and some evidence was found that the framework assists in the understanding and analysis of provenance data when decision-making is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Guidelines for collaborative development of sustainable data treatment software.
- Author
- Wuttke, Joachim, Cottrell, Stephen, Gonzalez, Miguel A., Kaestner, Anders, Markvardsen, Anders, Rod, Thomas H., Rozyczko, Piotr, and Vardanyan, Gagik
- Subjects
- SUSTAINABLE development, COMPUTER software quality control, SOFTWARE engineering, COMPUTER software development, SOFTWARE engineers
- Abstract
Software development for data reduction and analysis at large research facilities is increasingly professionalized and internationally coordinated. To foster software quality and sustainability, and to facilitate collaboration, representatives from software groups of European neutron and muon facilities have agreed on a set of guidelines for development practices, infrastructure, and functional and non-functional product properties. These guidelines have been derived from actual practices in software projects from the EU funded consortium 'Science and Innovation with Neutrons in Europe in 2020' (SINE2020), and have been enriched through extensive literature review. Besides guiding the work of the professional software engineers in our computing groups, we hope to influence scientists who are willing to contribute their own data treatment software to our community. Moreover, this work may also provide inspiration to scientific software development beyond the neutron and muon field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Simulating Events in Requirements Engineering by Using Pre-conceptual-Schema-based Components from Scientific Software Domain Representation.
- Author
- C., Paola A. Noreña and J., Carlos M. Zapata
- Subjects
- SCIENTIFIC software, SYSTEM analysis software, COMPUTER software, STAKEHOLDERS, BUSINESS analysts
- Published
- 2021
18. Link-based approach to study scientific software usage: the case of VOSviewer.
- Author
- Orduña-Malea, Enrique and Costas, Rodrigo
- Abstract
Scientific software is a fundamental player in modern science, participating in all stages of scientific knowledge production. Software occasionally supports trivial tasks, while in other instances it determines procedures, methods, protocols, results, or conclusions related to the scientific work. The growing relevance of scientific software as a research product with value of its own has triggered the development of quantitative science studies of scientific software. The main objective of this study is to illustrate a link-based webometric approach to characterize online mentions of scientific software across different analytical frameworks. To do this, the bibliometric software VOSviewer is used as a case study. Considering VOSviewer's official website as a baseline, online mentions of this website were counted in three different analytical frameworks: academic literature via Google Scholar (988 mentioning publications), webpages via Majestic (1,330 mentioning websites), and tweets via Twitter (267 mentioning tweets). Google Scholar mentions show how VOSviewer is used as a research resource, whilst mentions in webpages and tweets show the interest in VOSviewer's website from an informational and a conversational point of view. Results evidence that URL mentions can be used to gather all sorts of online impacts related to non-traditional research objects, like software, thus expanding the analytical scientometric toolset by incorporating a novel digital dimension. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Towards a taxonomy of Roxygen documentation in R packages
- Author
- Vidoni, Melina and Codabux, Zadia
- Published
- 2023
- Full Text
- View/download PDF
20. The SoftWipe tool and benchmark for assessing coding standards adherence of scientific software.
- Author
- Zapletal, Adrian, Höhler, Dimitri, Sinz, Carsten, and Stamatakis, Alexandros
- Subjects
- SCIENTIFIC software, OPEN source software, RESEARCH, BENCHMARKING (Management), COMPUTER software
- Abstract
Scientific software from all areas of scientific research is pivotal to obtaining novel insights. Yet the coding standards adherence of scientific software is rarely assessed, even though, in the worst case, it might lead to incorrect scientific results. Therefore, we have developed an open source tool and benchmark called SoftWipe that provides a relative software coding standards adherence ranking of 48 computational tools from diverse research areas. SoftWipe can be used in the review process of software papers and to inform the scientific software selection process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. Chemoinformatics and structural bioinformatics in OCaml
- Author
- Francois Berenger, Kam Y. J. Zhang, and Yoshihiro Yamanishi
- Subjects
- Chemoinformatics, Structural bioinformatics, Bisector tree, Scientific software, Software prototyping, Open source, Information technology, T58.5-58.64, Chemistry, QD1-999
- Abstract
Background: OCaml is a functional programming language with strong static types, Hindley–Milner type inference and garbage collection. In this article, we share our experience in prototyping chemoinformatics and structural bioinformatics software in OCaml. Results: First, we introduce the language, list entry points for chemoinformaticians who would be interested in OCaml and give code examples. Then, we list some scientific open source software written in OCaml. We also present recent open source libraries useful in chemoinformatics. The parallelization of OCaml programs and their performance is also shown. Finally, tools and methods useful when prototyping scientific software in OCaml are given. Conclusions: In our experience, OCaml is a programming language of choice for method development in chemoinformatics and structural bioinformatics.
- Published
- 2019
- Full Text
- View/download PDF
22. Assessment of NRCAN PPP online service in determination of crustal velocity: case study Northern Egypt GNSS Network.
- Author
- AbouAly, Nadia, Elhussien, Mahmoud, Rabah, Mostafa, and Zidan, Zaki
- Abstract
Precise point positioning (PPP) can deliver precise positioning with high accuracy. It may be an alternative to precise Differential GNSS (DGNSS), in addition to being a low-cost option among all processing strategies of the Global Navigation Satellite System (GNSS), especially when online services are used. To assess PPP in the determination of crustal velocity, 66 days of GNSS data distributed over the period 2013–2015 were used. A network of eight stations, the Northern Egyptian Permanent GNSS Network (N-EPGN), together with nine IGS stations, was processed. The scientific software packages GAMIT/GLOBK and Bernese were used to calculate the final precise coordinates and the associated velocity of each station in the International Terrestrial Reference Frame (ITRF 2008). Each campaign consists of 3 days. These data were processed by the PPP approach, and the final precise coordinates and the associated velocity of each EPGN station were estimated. Comparison of the final results across the different analysis methods and programs shows a high level of agreement between the coordinates and velocities, confirming that the PPP approach can be applied to the investigation of crustal deformation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Towards adopting open and data‐driven science practices in bed form dynamics research, and some steps to this end.
- Author
- Gutierrez, Ronald R., Lefebvre, Alice, Núñez-González, Francisco, and Avila, Humberto
- Subjects
- OPEN source software, SCIENTIFIC community, EARTH sciences, SCIENTIFIC models, DATA analysis, DEEP learning
- Abstract
In recent years, open and data-driven science has fostered very important scientific breakthroughs. This study describes the challenges and opportunities for the scientific community devoted to bed form dynamics research in adopting such scientific paradigms through, for example, engineered data sharing, formal recognition of scientists who collect the data, and collaborative development of freely accessible software. It highlights that once these actions are completed, the potential application of deep learning techniques could substantially improve bed form models and the scientific understanding of bed form dynamics. Likewise, it discusses the potential of Bedforms-ATM, a freely available software package, to standardize some bed form data analysis techniques. We propose that the technical challenges be tackled by following scholarly accepted/proposed standards (e.g. FAIR Guiding Principles, Geoscience Papers of the Future), using the body of knowledge being built on the matter by some institutions (e.g. Federation of Earth Science Information Partners), and through technical discussions at scientific meetings such as MARID. © 2020 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. Sustainable Research Software Hand-Over.
- Author
- FEHR, J., HIMPE, C., RAVE, S., and SAAK, J.
- Subjects
- SCIENTIFIC software, REPRODUCIBLE research, COMPUTER software reusability, SOFTWARE engineering, RESEARCH
- Abstract
Scientific software projects evolve rapidly in their initial development phase, yet at the end of a funding period, the completion of a research project, thesis, or publication, further engagement in the project may slow down or cease completely. To retain the invested effort for the sciences, this software needs to be preserved or handed over to a succeeding developer or team, such as the next generation of (PhD) students. Comparable guides provide top-down recommendations for project leads. This paper intends to be a bottom-up approach for sustainable hand-over processes from a developer's perspective. An important characteristic in this regard is the project's size, by which this guideline is structured. Furthermore, checklists are provided, which can serve as a practical guide for implementing the proposed measures. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. A GENERIC FINITE ELEMENT FRAMEWORK ON PARALLEL TREE-BASED ADAPTIVE MESHES.
- Author
- BADIA, SANTIAGO, MARTÍN, ALBERTO F., NEIVA, ERIC, and VERDUGO, FRANCESC
- Subjects
- DATA structures, NUMERICAL integration, LIBRARY software, DISCRETE systems, LINEAR systems
- Abstract
In this work we formally derive and prove the correctness of the algorithms and data structures in a parallel, distributed-memory, generic finite element framework that supports h-adaptivity on computational domains represented as forest-of-trees. The framework is grounded on a rich representation of the adaptive mesh suitable for generic finite elements that is built on top of a low-level, light-weight forest-of-trees data structure handled by a specialized, highly parallel adaptive meshing engine, for which we have identified the requirements it must fulfill to be coupled into our framework. Atop this two-layered mesh representation, we build the rest of the data structures required for the numerical integration and assembly of the discrete system of linear equations. We consider algorithms that are suitable for both subassembled and fully assembled distributed data layouts of linear system matrices. The proposed framework has been implemented within the FEMPAR scientific software library, using p4est as a practical forest-of-octrees demonstrator. A strong scaling study of this implementation when applied to Poisson and Maxwell problems reveals remarkable scalability up to 32.2K CPU cores and 482.2M degrees of freedom. Besides, a comparative performance study of FEMPAR and the state-of-the-art deal.II finite element software shows at least comparable performance, and at most a factor of 2–3 improvement in the h-adaptive approximation of a Poisson problem with first- and second-order Lagrangian finite elements, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Understanding the Application of Science Mapping Tools in LIS and Non-LIS Domains.
- Author
-
Lou, Wen, Zhang, Jie, Li, Kai, and He, Jiangen
- Subjects
- *
APPLIED sciences , *SCHOLARLY communication , *CONCEPT mapping , *LIBRARY science , *ATHLETIC fields , *INFORMATION science - Abstract
As a scientific field, science mapping offers a set of standardized methods and tools that can be consistently adopted by researchers in different knowledge domains to answer their own research questions. This study examined the scientific articles that applied science mapping tools (SMT) to analyze scientific domains, and the citations of these application articles. To understand the roles of these application articles in scholarly communication, we analyzed 496 application articles and their citations from 14 SMT, classifying them into library and information science (LIS) and other fields (non-LIS) in terms of both publication venues and analyzed domains. We found that science mapping, a topic deeply situated in the LIS field, has gained increasing attention from various non-LIS scientific fields over the last few years, especially since 2012. Science mapping application studies grew up in the LIS domain and spread to other fields. The application articles within and outside of the LIS field played different roles in advancing the application of science mapping and knowledge discovery. In particular, we discovered the important role of articles that studied non-LIS domains but were published in LIS journals in advancing the application of SMTs. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Electronic lab notebooks: can they replace paper?
- Author
-
Samantha Kanza, Cerys Willoughby, Nicholas Gibbins, Richard Whitby, Jeremy Graham Frey, Jana Erjavec, Klemen Zupančič, Matjaž Hren, and Katarina Kovač
- Subjects
Electronic lab notebooks (ELNs) ,Notebooking software ,Cloud ,Semantic web ,Scientific software ,Information technology ,T58.5-58.64 ,Chemistry ,QD1-999 - Abstract
Abstract Despite the increasingly digital nature of society there are some areas of research that remain firmly rooted in the past; in this case the laboratory notebook, the last remaining paper component of an experiment. Countless electronic laboratory notebooks (ELNs) have been created in an attempt to digitise record keeping processes in the lab, but none of them have become a ‘key player’ in the ELN market, due to the many adoption barriers that have been identified in previous research and further explored in the user studies presented here. The main issues identified are the cost of the current available ELNs, their ease of use (or lack of it) and their accessibility issues across different devices and operating systems. Evidence suggests that whilst scientists willingly make use of generic notebooking software, spreadsheets and other general office and scientific tools to aid their work, current ELNs are lacking in the required functionality to meet the needs of the researchers. In this paper we present our extensive research and user study results to propose an ELN built upon a pre-existing cloud notebook platform that makes use of accessible popular scientific software and semantic web technologies to help overcome the identified barriers to adoption.
- Published
- 2017
- Full Text
- View/download PDF
28. Development of the software tool Sample Size for Arbitrary Distributions and exemplarily applying it for calculating minimum numbers of moss samples used as accumulation indicators for atmospheric deposition.
- Author
-
Wosniok, Werner, Nickel, Stefan, and Schröder, Winfried
- Subjects
SOFTWARE development tools ,ATMOSPHERIC deposition ,MONTE Carlo method ,ARITHMETIC mean ,DATA distribution - Abstract
Background: Do we measure enough to calculate statistically valid characteristic values from random sample measurements, or do we measure too much, without any further increase in knowledge? This question is actually one of the key issues of every empirical measurement design, but is rarely investigated in environmental monitoring. Results: In this study, the methodology used for the design of the German Moss Survey 2015 network to determine statistically valid minimum sample numbers (MSN) for the calculation of the arithmetic mean in compliance with certain accuracy requirements was further developed for data that are neither normally nor lognormally distributed. The core element of the procedure for estimating MSN without prerequisites on the distribution of the data is an iterative Monte Carlo simulation. The methodological principle consists of using reference data (values measured in Moss Surveys preceding that of 2015) for a series of MSN candidate values to determine what accuracy would be achieved with each, and then calculating the MSN with which the specified accuracy requirement is met from a quadratic function fitted between the MSN candidates and their accuracy. The program Sample Size for Arbitrary Distributions (SSAD) was developed in the open-source programming language R for the calculation of the MSN. Conclusions: The SSAD procedure closes a gap in the existing methodology for calculating statistically valid minimum sample numbers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
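The iterative Monte Carlo idea in the abstract above can be illustrated in a few lines of Python. This is a hypothetical simplification: the actual SSAD program is written in R, and it derives the MSN from a quadratic fit between candidate sizes and achieved accuracy rather than the simple scan shown here.

```python
import random
import statistics

def achieved_accuracy(reference, n, trials=2000):
    """Estimate the accuracy of the arithmetic mean at sample size n:
    the 95th percentile of the relative error of the mean, obtained by
    resampling (with replacement) from distribution-free reference data."""
    rng = random.Random(42)  # fixed seed for repeatability
    ref_mean = statistics.fmean(reference)
    rel_errors = sorted(
        abs(statistics.fmean(rng.choices(reference, k=n)) - ref_mean) / ref_mean
        for _ in range(trials))
    return rel_errors[int(0.95 * trials)]

def minimum_sample_number(reference, required_accuracy, candidates):
    """Smallest candidate n whose simulated accuracy meets the requirement."""
    for n in sorted(candidates):
        if achieved_accuracy(reference, n) <= required_accuracy:
            return n
    return None  # no candidate is large enough

# Skewed reference data: no normality or lognormality assumption is needed
reference = [1, 1, 2, 2, 3, 3, 4, 5, 8, 20] * 10
msn = minimum_sample_number(reference, required_accuracy=0.3,
                            candidates=[5, 10, 20, 40, 80])
```

Replacing the scan with a quadratic fit of accuracy against candidate size, as the paper describes, yields an MSN between the candidate values rather than one of them.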
29. Experiences with a Flexible User Research Process to Build Data Change Tools.
- Author
-
Paine, Drew, Ghoshal, Devarshi, and Ramakrishnan, Lavanya
- Subjects
SCIENTIFIC software ,SCIENTIFIC knowledge ,ETHNOLOGY ,COMPUTER systems ,COMPUTER files - Abstract
Scientific software development processes are understood to be distinct from commercial software development practices due to the uncertain and evolving state of scientific knowledge. Sustaining these software products is a recognized challenge, but the usability and usefulness of such tools to their scientific end users is under-examined. User research is a well-established set of techniques (e.g., interviews, mockups, usability tests) applied in commercial software projects to develop foundational, generative, and evaluative insights about products and the people who use them. Currently these approaches are not commonly applied and discussed in scientific software development work. The use of user research techniques in scientific environments can be challenging due to the nascent, fluid problem spaces of scientific work, the varying scope of projects and their user communities, and funding/economic constraints on projects. In this paper, we reflect on our experiences undertaking a multi-method user research process in the Deduce project. The Deduce project is investigating data change to develop metrics, methods, and tools that will help scientists make decisions around data change. There is a lack of common terminology, since the concept of systematically measuring and managing data change is under-explored in scientific environments. To bridge this gap, we conducted user research that focuses on user practices, needs, and motivations to help us design and develop metrics and tools for data change. This paper contributes reflections and the lessons we have learned from our experiences. We offer key takeaways for scientific software project teams to effectively and flexibly incorporate similar processes into their projects. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. A modern approach to QENS data analysis in Mantid.
- Author
-
Mukhopadhyay, Sanghamitra, Hewer, Brandon, Howells, Spencer, and Markvardsen, Anders
- Subjects
- *
DATA analysis , *QUASI-elastic scattering , *NEUTRON scattering , *MOMENTUM transfer , *SOFTWARE architecture - Abstract
Data analysis of Quasi-elastic Neutron Scattering (QENS) experiments often requires multiple steps involving fitting the elastic and quasi-elastic parts of spectra with several empirical functions and analytical models. Parameters of those models can be interdependent and also dependent on the momentum transfer vector Q. Here we present a modern data analysis interface dedicated to QENS data analysis, implemented within the open source software Mantid. The interface has been implemented using the state-of-the-art design pattern Model-View-Presenter (MVP). The MVP, an architectural software design pattern, facilitates automated unit tests as well as decoupling of the business logic, presentation logic and the graphical interface. Several models are implemented for analysing both elastic and quasi-elastic parts of the dynamical scattering function S(Q, ω) and intermediate scattering function I(Q, t). To understand the nature of dynamics in a QENS experiment, several models are also implemented for the elastic incoherent structure factor (EISF) and jump diffusion. The interface has been validated by analysing a sample of liquid water at room temperature. The nature of hydrogen bond dynamics of the hydrogen-bonded organic ferroelectric 2,4,5-Br3-imidazole has been analysed for the first time using this newly implemented software. It is found that protons exhibit localised random motion in this material. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
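The Model-View-Presenter split mentioned in the abstract above separates the GUI from the presentation logic so that the latter can be driven by automated unit tests against a mock view. A minimal, hypothetical sketch (not the actual Mantid code; all class and method names are illustrative):

```python
class FitModel:
    """Business logic: fit one part of a QENS spectrum.
    Here a stand-in that just records the chosen function and Q."""
    def fit(self, function_name, q_value):
        return {"function": function_name, "Q": q_value, "status": "success"}

class ViewInterface:
    """What the presenter needs from any view, GUI or mock."""
    def selected_function(self): ...
    def selected_q(self): ...
    def show_result(self, result): ...

class Presenter:
    """Mediates between view and model; contains no GUI code,
    so it can be exercised directly by unit tests."""
    def __init__(self, view, model):
        self.view, self.model = view, model

    def on_fit_clicked(self):
        result = self.model.fit(self.view.selected_function(),
                                self.view.selected_q())
        self.view.show_result(result)

class MockView(ViewInterface):
    """Test double standing in for the graphical interface."""
    def __init__(self):
        self.shown = None
    def selected_function(self):
        return "Lorentzian"
    def selected_q(self):
        return 1.2
    def show_result(self, result):
        self.shown = result

view = MockView()
Presenter(view, FitModel()).on_fit_clicked()
```

Because the presenter talks only to the view interface, swapping `MockView` for a real Qt widget changes nothing in the tested logic, which is the decoupling the abstract credits MVP with.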
31. Blended Training on Scientific Software: A Study on How Scientific Data are Generated
- Author
-
Efrosyni-Maria Skordaki and Susan Bainbridge
- Subjects
blended learning ,grounded theory ,scientific software ,training ,distance learning ,snowball sampling ,Special aspects of education ,LC8-6691 - Abstract
This paper presents the results of a research study on scientific software training in blended learning environments. The investigation focused on training approaches followed by scientific software users whose goal is the reliable application of such software. A key issue in current literature is the requirement for a theory-substantiated training framework that will support knowledge sharing among scientific software users. This study followed a grounded theory research design in a qualitative methodology. Snowball sampling as well as purposive sampling methods were employed. Input from respondents with diverse education and experience was collected and analyzed with constant comparative analysis. The scientific software training cycle that results from this research encapsulates specific aptitudes and strategies that affect the users’ in-depth understanding and professional growth regarding scientific software applications. The findings of this study indicate the importance of three key themes in designing training methods for successful application of scientific software: (a) responsibility in comprehension; (b) discipline; and (c) ability to adapt.
- Published
- 2018
- Full Text
- View/download PDF
32. Community Organizations: Changing the Culture in Which Research Software Is Developed and Sustained.
- Author
-
Katz, Daniel S., McInnes, Lois Curfman, Bernholdt, David E., Mayes, Abigail Cabunoc, Hong, Neil P. Chue, Duckles, Jonah, Gesing, Sandra, Heroux, Michael A., Hettrick, Simon, Jimenez, Rafael C., Pierce, Marlon, Weaver, Belinda, and Wilkins-Diehr, Nancy
- Subjects
COMPUTER engineering ,COMMUNITY organization ,COMPUTER software research - Abstract
Software is the key crosscutting technology that enables advances in mathematics, computer science, and domain-specific science and engineering to achieve robust simulations and analysis for science, engineering, and other research fields. However, software itself has not traditionally received focused attention from research communities; rather, software has evolved organically and inconsistently, with its development occurring largely as a by-product of other initiatives. Moreover, challenges in scientific software are expanding due to disruptive changes in computer hardware, the increasing scale and complexity of data, and demands for more complex simulations involving multiphysics, multiscale modeling and outer-loop analysis. In recent years, community members have established a range of grass-roots organizations and projects to address these growing technical and social challenges in software productivity, quality, reproducibility, and sustainability. This article provides an overview of such groups and discusses opportunities to leverage their synergistic activities while nurturing work toward emerging software ecosystems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. Automated Morphological Feature Assessment for Zebrafish Embryo Developmental Toxicity Screens.
- Author
-
Teixidó, Elisabet, Kießling, Tobias R, Krupp, Eckart, Quevedo, Celia, Muriana, Arantza, and Scholz, Stefan
- Subjects
- *
ZEBRA danio embryos , *ALTERNATIVE toxicity testing , *MORPHOMETRICS , *IMAGE analysis , *DEVELOPMENTAL toxicology , *SCIENTIFIC software - Abstract
Detection of developmental phenotypes in zebrafish embryos typically involves a visual assessment and scoring of morphological features by an individual researcher. Subjective scoring could impact results and be of particular concern when phenotypic effect patterns are also used as a diagnostic tool to classify compounds. Here we introduce a quantitative morphometric approach based on image analysis of zebrafish embryos. A software called FishInspector was developed to detect morphological features from images collected using an automated system to position zebrafish embryos. The analysis was verified and compared with visual assessments of 3 participating laboratories using 3 known developmental toxicants (methotrexate, dexamethasone, and topiramate) and 2 negative compounds (loratadine and glibenclamide). The quantitative approach exhibited higher sensitivity and made it possible to compare patterns of effects with the potential to establish a grouping and classification of developmental toxicants. Our approach improves the robustness of phenotype scoring and reliability of assay performance and, hence, is anticipated to improve the predictivity of developmental toxicity screening using the zebrafish embryo. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Challenges of measuring software impact through citations: An examination of the lme4 R package.
- Author
-
Li, Kai, Chen, Pei-Ying, and Yan, Erjia
- Subjects
COMPUTER software ,CITATION analysis ,SCIENTIFIC software ,DATA analysis ,RESEARCH - Abstract
Highlights
• This study examines the distribution of citations to different citable objects related to lme4.
• The multiplicity of citable objects connected to software is a challenge to measuring its impact.
• The reassignment of the citation format of lme4 catalyzed its changing citation behavior.
Abstract
The rise of software as a research object is mirrored by increasing interest in quantitative studies of scientific software. However, inconsistent citation practices have led most existing studies of this type to base their analysis of software impact on software name mentions, as identified in full-text publications. Despite its limitations, citation data exists in much greater quantities and covers a broader array of scientific fields than full-text data, and thus can support investigations with much wider scope. This paper aims to analyze the extent to which citation data can be used to reconstruct the impact of software. Specifically, we identify the variety of citable objects related to the lme4 R package and examine how the package's impact is dispersed across these objects. Our results shed light on a little-discussed challenge of using citation data to measure software impact: even within the category of formal citation, the same software object might be cited in different forms. We consider the implications of this challenge and propose a method to reconstruct the impact of lme4 through its citations nonetheless. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
35. Metamorphic Testing: A Simple Yet Effective Approach for Testing Scientific Software.
- Author
-
Kanewala, Upulee and Yueh Chen, Tsong
- Subjects
COMPUTER software testing ,SCIENTIFIC software ,END-user computing software ,COMPUTER software developers ,COMPUTER software development ,COMPUTER software quality control - Abstract
Testing scientific software is a difficult task due to its inherent complexity and the lack of test oracles. In addition, these software systems are usually developed by end-user developers who are not normally trained as professional software developers or testers. These factors often lead to inadequate testing. Metamorphic testing (MT) is a simple yet effective technique for testing such applications. Even though MT is a well-known technique in the software testing community, it is not well utilized by scientific software developers. The objective of this paper is to present MT as an effective technique for testing scientific software. To this end, we discuss why MT is an appropriate testing technique for scientists and engineers who are not primarily trained as software developers. Specifically, we show how it can be used to conduct systematic and effective testing on programs that do not have test oracles, without requiring additional testing tools. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
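To make the technique in the abstract above concrete: when a numerical routine has no practical oracle, one checks a metamorphic relation between two runs instead of exact outputs. The sketch below (illustrative names and tolerances) tests a trapezoidal integrator via the additivity relation: the integral over [a, b] must equal the sum of the integrals over [a, mid] and [mid, b].

```python
import math
import random

def integrate(f, a, b, n=1000):
    """Program under test: trapezoidal-rule integration.
    For arbitrary f, a, b there is no exact oracle to compare against."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def check_additivity_relation(f, a, b, mid, tol=1e-4):
    """Metamorphic relation: integrate(f, a, b) should equal
    integrate(f, a, mid) + integrate(f, mid, b), up to discretization error."""
    whole = integrate(f, a, b)
    parts = integrate(f, a, mid) + integrate(f, mid, b)
    return abs(whole - parts) < tol

# Apply the relation to randomly generated source test cases: no oracle,
# no extra testing tools, yet a failed relation reveals a defect.
rng = random.Random(0)
for _ in range(20):
    a = rng.uniform(0.0, 1.0)
    b = a + rng.uniform(1.0, 2.0)
    assert check_additivity_relation(math.sin, a, b, (a + b) / 2)
```

A deliberately introduced bug, such as dropping the final `0.5 * f(b)` term, would violate the relation for almost every generated case, which is how MT exposes defects without knowing the true integral.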
36. Ten simple rules for documenting scientific software.
- Author
-
Lee, Benjamin D.
- Subjects
- *
SOFTWARE engineering , *COMPUTER software development , *RESEARCH teams , *SCIENTIFIC software , *DOCUMENTATION software - Abstract
The article presents rules for making software impactful and usable by applying software engineering best practices. It notes that modern integrated development environments frequently generate documentation strings automatically as researchers write code, removing the burden of remembering to write comments. It also states that people are more likely to use a tool as part of their research if they can quickly start playing with it.
- Published
- 2018
- Full Text
- View/download PDF
37. AbINS: The modern software for INS interpretation.
- Author
-
Dymkowski, Krzysztof, Parker, Stewart F., Fernandez-Alonso, Felix, and Mukhopadhyay, Sanghamitra
- Subjects
- *
INELASTIC neutron scattering , *VIBRATIONAL spectra , *COMPUTER software , *RAMAN spectroscopy , *DENSITY functional theory - Abstract
Abstract Inelastic neutron scattering (INS) spectroscopy, contrary to other vibrational spectroscopic techniques such as infrared or Raman spectroscopy, provides much richer microscopic insight into a material due to the absence of selection rules induced by the system's symmetry and via its dependence on both energy (E) and momentum (Q) transfer. First-principles density functional theory (DFT) based calculations are now routinely used to interpret infrared and Raman spectra. These calculations can also be used to interpret INS spectra; however, the need to include the neutron scattering cross sections, overtones and combination modes, together with instrument-specific E–Q windows, makes the data analysis challenging. Here we present AbINS: a new generation of software to interpret INS spectra using ab initio phonon data. AbINS is an open-source package implemented as a plugin to the neutron data analysis software Mantid and offers the facility to plot the full (Q, E) map for powder samples, with the option to extract individual atomic contributions. This option is then applied to analyse the vibrational spectrum of non-hydrogenous K2SiF6 to extract atom-type contributions, identifying the T1g librational mode of the [SiF6]2− ion together with the T2u F–Si–F bending mode. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. COMPARATIVE STUDY OF FINITE ELEMENT METHODS USING THE TIME-ACCURACY-SIZE (TAS) SPECTRUM ANALYSIS.
- Author
-
CHANG, JUSTIN, FABIEN, MAURICE S., KNEPLEY, MATTHEW G., and MILLS, RICHARD T.
- Subjects
- *
FINITE element method , *COMPARATIVE studies , *SCIENTIFIC software - Abstract
We present a performance analysis appropriate for comparing algorithms using different numerical discretizations. By taking into account the total time-to-solution, numerical accuracy with respect to an error norm, and the computation rate, a cost-benefit analysis can be performed to determine which algorithm and discretization are particularly suited for an application. This work extends the performance spectrum model in [J. Chang et al., Concurrency and Computation: Practice and Experience, 30 (2017), e4401] for interpretation of hardware and algorithmic trade-offs in numerical PDE simulation. As a proof-of-concept, popular finite element software packages are used to illustrate this analysis for Poisson's equation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
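One way to operationalize the cost-benefit comparison described in the abstract above (a hypothetical simplification of the TAS spectrum idea, with made-up measurements) is to treat each method as a (time-to-solution, error) point and keep only the non-dominated ones:

```python
def pareto_front(runs):
    """Given (name, time_to_solution, error_norm) triples, keep the runs
    not dominated by any other, i.e. those for which no other run is both
    at least as fast and at least as accurate."""
    front = []
    for name, t, e in runs:
        dominated = any(t2 <= t and e2 <= e and (t2, e2) != (t, e)
                        for _, t2, e2 in runs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical Poisson-solve measurements: (discretization, seconds, L2 error)
runs = [
    ("P1 tets",  2.0, 1e-2),
    ("P2 tets",  5.0, 1e-4),
    ("Q1 hexes", 3.0, 2e-2),   # slower and less accurate than P1: dominated
    ("DG p=3",  20.0, 1e-4),   # as accurate as P2 but slower: dominated
]
best = pareto_front(runs)
```

The full TAS analysis additionally folds in the computation rate (degrees of freedom per second) to separate hardware effects from algorithmic ones, which this two-axis sketch omits.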
39. Geppetto: a reusable modular open platform for exploring neuroscience data and models.
- Author
-
Cantarelli, Matteo, Marin, Boris, Quintana, Adrian, Earnshaw, Matt, Court, Robert, Gleeson, Padraig, Dura-Bernal, Salvador, Silver, R. Angus, and Idili, Giovanni
- Subjects
- *
OPEN source software , *NEUROSCIENCES , *MIDDLEWARE , *COMPUTER software , *GENE expression - Abstract
Geppetto is an open-source platform that provides generic middleware infrastructure for building both online and desktop tools for visualizing neuroscience models and data and managing simulations. Geppetto underpins a number of neuroscience applications, including Open Source Brain (OSB), Virtual Fly Brain (VFB), NEURON-UI and NetPyNE-UI. OSB is used by researchers to create and visualize computational neuroscience models described in NeuroML and simulate them through the browser. VFB is the reference hub for Drosophila melanogaster neural anatomy and imaging data including neuropil, segmented neurons, microscopy stacks and gene expression pattern data. Geppetto is also being used to build a new user interface for NEURON, a widely used neuronal simulation environment, and for NetPyNE, a Python package for network modelling using NEURON. Geppetto defines domain-agnostic abstractions used by all these applications to represent their models and data and offers a set of modules and components to integrate, visualize and control simulations in a highly accessible way.
The platform comprises a backend which can connect to external data sources, model repositories and simulators together with a highly customizable frontend. This article is part of a discussion meeting issue 'Connectome to behaviour: modelling C. elegans at cellular resolution'. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. Test-Driven Development in HPC Science: A Case Study.
- Author
-
Nanthaamornphong, Aziz and Carver, Jeffrey C.
- Subjects
HIGH performance computing ,SCIENTIFIC software ,SOFTWARE engineering ,SCIENTIFIC computing ,EULER'S numbers - Abstract
Many scientific software developers have applied software engineering practices in their work in recent years. Agile methods are gaining increased interest from both industry and academia, including scientific application domains. Test-driven development (TDD) and refactoring practices are critical to the success of agile methods. Although many scientific projects employ agile practices, the effect of TDD on scientific software development remains unknown and should thus be investigated. The authors investigated the effects of using TDD to develop scientific software in a high-performance computing environment, finding both advantages and disadvantages. In particular, they observed that developers face problems with writing unit tests and with a lack of experience with software engineering practices. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. AceTree: a major update and case study in the long term maintenance of open-source scientific software.
- Author
-
Katzman, Braden, Tang, Doris, Santella, Anthony, and Bao, Zhirong
- Subjects
- *
SCIENTIFIC software , *FLUORESCENCE microscopy , *COMPUTER software development , *EMBRYOLOGY , *RESEARCH - Abstract
Background: AceTree, a software application first released in 2006, facilitates exploration, curation and editing of tracked C. elegans nuclei in 4-dimensional (4D) fluorescence microscopy datasets. Since its initial release, AceTree has been continuously used to interact with, edit and interpret C. elegans lineage data. In its 11-year lifetime, AceTree has been periodically updated to meet the technical and research demands of its community of users. This paper presents the newest iteration of AceTree, which contains extensive updates, demonstrates the new applicability of AceTree in other developmental contexts, and presents its evolutionary software development paradigm as a viable model for maintaining scientific software. Results: Large-scale updates have been made to the user interface for an improved user experience. Tools have been grouped according to functionality and obsolete methods have been removed. Internal requirements have been changed to enable greater flexibility of use both in C. elegans contexts and in other model organisms. Additionally, the original 3-dimensional (3D) viewing window has been completely reimplemented. The new window provides a new suite of tools for data exploration. Conclusion: By responding to technical advancements and research demands, AceTree has remained a useful tool for scientific research for over a decade. The updates made to the codebase have extended AceTree's applicability beyond its initial use in C. elegans and enabled its usage with other model organisms. The evolution of AceTree demonstrates a viable model for maintaining scientific software over long periods of time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. Geodynamic diagnostics, scientific visualisation and StagLab 3.0.
- Author
-
Crameri, Fabio
- Subjects
- *
GEODYNAMICS , *SCIENTIFIC visualization , *SCIENTIFIC software , *MATHEMATICAL models - Abstract
Today's geodynamic models can, often do, and sometimes have to become very complex. Their underlying, increasingly elaborate numerical codes produce a growing amount of raw data. Post-processing such data therefore becomes more and more challenging and time consuming. In addition, visualising processed data and results has, in times of coloured figures and a wealth of half-scientific software, become one of the weakest pillars of science, widely mistreated and ignored. Efficient and automated geodynamic diagnostics and sensible, scientific visualisation, preventing common pitfalls, are thus more important than ever. Here, a collection of numerous diagnostics for plate tectonics and mantle dynamics is provided and a case for truly scientific visualisation is made. Amongst the diagnostics are accurate and robust plate-boundary identification, slab-polarity recognition, plate-bending derivation, surface-topography component splitting and mantle-plume detection. Thanks to powerful image processing tools and other elaborate algorithms, these and many other insightful diagnostics are conveniently derived from only a subset of the most basic parameter fields. A brand-new set of scientifically tested, perceptually uniform colour maps including "devon", "davos", "oslo" and "broc" is introduced and made freely available. These novel colour maps bring a significant advantage over misleading, non-scientific colour maps like "rainbow", which is shown to introduce a visual error to the underlying data of up to 7.5%. Finally, StagLab (http://www.fabiocrameri.ch/software) is introduced, a software package that incorporates the whole suite of automated geodynamic diagnostics and, on top of that, applies state-of-the-art scientific visualisation to produce publication-ready figures and movies, all in the blink of an eye, all fully reproducible.
StagLab, a simple, flexible, efficient and reliable tool, made freely available to everyone, is written in MatLab and adjustable for use with Geodynamic mantle-convection codes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
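The defining property of the colour maps described in the abstract above is that perceived lightness varies monotonically along the map, which "rainbow"-style maps violate. A rough sketch using a crude luma proxy (the paper's maps are designed in a proper perceptual colour space, not with this formula; the sample maps below are toy data):

```python
def relative_luminance(rgb):
    """Crude perceptual-lightness proxy (Rec. 601 luma) for an
    (r, g, b) triple with components in [0, 1]."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_lightness_monotonic(colormap_samples):
    """A perceptually ordered colour map should have monotonically
    increasing or decreasing lightness along its samples."""
    lum = [relative_luminance(c) for c in colormap_samples]
    increasing = all(a <= b for a, b in zip(lum, lum[1:]))
    decreasing = all(a >= b for a, b in zip(lum, lum[1:]))
    return increasing or decreasing

# Grey ramp: lightness-monotonic by construction
grey = [(x / 10, x / 10, x / 10) for x in range(11)]
# Toy "rainbow": blue -> cyan -> green -> yellow -> red; its lightness
# oscillates, creating spurious visual banding in the data
rainbow = [(0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]
```

Non-monotonic lightness is what makes "rainbow" imprint artificial boundaries onto smooth fields, the visual error the paper quantifies at up to 7.5%.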
43. Blended Training on Scientific Software: A Study on How Scientific Data are Generated.
- Author
-
Skordaki, Efrosyni-Maria and Bainbridge, Susan
- Subjects
SCIENTIFIC software ,TRAINING ,CLASSROOM environment ,APPLICATION software ,LITERATURE - Abstract
This paper presents the results of a research study on scientific software training in blended learning environments. The investigation focused on training approaches followed by scientific software users whose goal is the reliable application of such software. A key issue in current literature is the requirement for a theory-substantiated training framework that will support knowledge sharing among scientific software users. This study followed a grounded theory research design in a qualitative methodology. Snowball sampling as well as purposive sampling methods were employed. Input from respondents with diverse education and experience was collected and analyzed with constant comparative analysis. The scientific software training cycle that results from this research encapsulates specific aptitudes and strategies that affect the users' in-depth understanding and professional growth regarding scientific software applications. The findings of this study indicate the importance of three key themes in designing training methods for successful application of scientific software: (a) responsibility in comprehension; (b) discipline; and (c) ability to adapt. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. Radiative interaction of atmosphere and surface: Write-up with elements of code.
- Author
-
Korkin, Sergey and Lyapustin, Alexei
- Subjects
- *
GREEN'S functions , *RADIANCE , *SURFACE interactions , *RADIATIVE transfer , *ENVIRONMENTAL auditing , *REMOTE sensing , *ATMOSPHERE , *GEOGRAPHIC names - Abstract
• The matrix-operator method (MOM) is convenient for polarized atmospheric correction (AC).
• A new modification of our vector radiative transfer (RT) code provides elements of MOM as standard output, while the surface model is now excluded from the RT solver.
• Using this output, we account for polarization in the Green's function formalism for the AC.
• The "equations + code" bundle helps document the algorithm and reproduce our results.
In passive satellite remote sensing of the Earth, separation of the path radiance (atmosphere-only contribution) from the surface reflection remains a "significant challenge". Recent literature names it among the gaps in radiative transfer (RT) topics that "require continued research in the near future". The challenge comes from multiple reflections (bouncing) between the atmosphere and surface – radiative interaction. In this paper we use a known RT technique, the matrix-operator method (MOM), and a new modification of the monochromatic vector RT (vRT) code IPOL (Intensity and POLarization) to simulate the interaction of a plane-parallel atmosphere and a few widely used surface reflection models. Following the idea of the Green's function method, IPOL no longer takes the surface model parameters on input. Instead, it provides the path radiance and the atmospheric reflection and transmission matrices as output. Although many RT codes use the MOM formalism, this output does not seem to be common. The surface reflection matrix is computed externally. This paper therefore extends the Green's function atmospheric correction technique to the case of polarized light. Aiming at clarity rather than performance, we explain in Python the structure of the surface matrices for the isotropic (Lambertian), directional unpolarized, and polarized ocean reflection models. We then combine these surface matrices and the precomputed IPOL output to get a numerically accurate signal at the top of atmosphere (TOA) and test it against published benchmarks.
Then, for each benchmark scenario we show how to get the surface from the TOA signal, i.e. perform the RT-based atmospheric correction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
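The atmosphere-surface coupling described in the abstract above can be sketched in a few lines. This is a minimal illustration of the general matrix-operator idea, not IPOL's actual output format: all variable names, matrix layouts, and the toy numbers are assumptions, and the multiple-bounce series is summed as a Neumann series in closed form.

```python
import numpy as np

def toa_signal(L_path, T_up, T_down, R_atm, R_surf):
    """Combine precomputed atmospheric operators with an externally
    computed surface reflection matrix to get the top-of-atmosphere
    (TOA) signal. Operators are square matrices over discretized
    angles (and, in the polarized case, Stokes components)."""
    n = R_surf.shape[0]
    # Multiple atmosphere-surface bounces sum to a Neumann series:
    # I + R_surf@R_atm + (R_surf@R_atm)^2 + ... = inv(I - R_surf@R_atm)
    interaction = np.linalg.inv(np.eye(n) - R_surf @ R_atm)
    # TOA = path radiance + upward-transmitted, surface-reflected light
    return L_path + T_up @ interaction @ R_surf @ T_down

# Toy 2x2 scalar example (no polarization), values purely illustrative
L_path = np.array([0.1, 0.05])   # atmosphere-only path radiance
T_down = np.array([0.9, 0.7])    # downward-transmitted illumination
T_up = np.eye(2) * 0.8           # upward transmission operator
R_atm = np.eye(2) * 0.1          # atmospheric reflection operator
R_surf = np.eye(2) * 0.3         # surface reflection matrix
print(toa_signal(L_path, T_up, T_down, R_atm, R_surf))
```

Because the surface enters only through `R_surf`, the same precomputed atmospheric operators can be reused for Lambertian, directional, or polarized ocean surface models, which is the point of excluding the surface from the RT solver.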
45. Mailing list archives as useful primary sources for historians: looking for flame wars.
- Author
-
Hocquet, Alexandre and Wieber, Frédéric
- Subjects
- *
SCIENTIFIC software, *COMPUTATIONAL chemistry, *MAILING lists (Lists of addresses), *TELEMATICS, *TELECOMMUNICATION - Abstract
This paper aims to show the potential of mailing list archives as primary sources for studying the recent history of science. To focus on the debates about software within the computational chemistry community in the 1990s, we rely on a corpus consisting of a scholarly mailing list: a typical corpus from its time, conceived, constructed, and maintained by a community. The threaded conversations of the list also constitute a unique rhetorical form whose organisation is technically bound to the Internet-based media of that time. We first present the issues at stake within our research topic and show how relevant such a corpus is to addressing them. We then discuss the "ethnographic" characteristics and the structure of the corpus. Its most interesting parts are the "flame wars", that is, outbursts of heated, short, and dense debates in an ocean of evenly distributed polite messages. We show how the relevant flame wars are located and extracted by producing a graphical representation of the number of messages per day over time. Once the flame wars are isolated, the messages exchanged by practitioners are studied closely to understand the argumentative structure of the debates and the different viewpoints of the actors. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
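The burst-detection step the abstract describes (finding days whose message volume spikes above the ambient level) can be sketched as follows. The threshold rule here (three times the mean daily count) is a hypothetical choice for illustration, not the criterion the authors used.

```python
from collections import Counter
from datetime import date

def find_flame_wars(timestamps, threshold=None):
    """Locate bursts of activity in a mailing-list archive.

    timestamps: iterable of datetime.date objects, one per message.
    Returns, in order, the days whose message count exceeds the
    threshold (default: 3x the mean daily volume over active days).
    """
    per_day = Counter(timestamps)
    if threshold is None:
        threshold = 3 * sum(per_day.values()) / len(per_day)
    return sorted(day for day, n in per_day.items() if n > threshold)

# Illustrative archive: two messages a day, plus one heated day
quiet = [date(1995, 1, d) for d in range(1, 11) for _ in range(2)]
burst = [date(1995, 1, 5)] * 20
print(find_flame_wars(quiet + burst))
```

Plotting `per_day` over time gives the graphical representation the paper relies on; the isolated bursts are then read closely as qualitative material.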
46. Quantitative Measurement of Scientific Software Quality: Definition of a Novel Quality Model.
- Author
-
Koteska, Bojana, Mishev, Anastas, and Pejov, Ljupco
- Subjects
SCIENTIFIC software, PROBLEM solving, APPLICATION software, DIMENSIONAL analysis - Abstract
This paper presents a novel quality model that provides a quantitative assessment of the attributes evaluated at each stage of development of scientific applications. The model is defined by selecting a set of attributes and metrics that affect application quality, and it is based on established quality standards. Its practical application and verification are confirmed by two case studies. The first is an application for solving one-dimensional and two-dimensional Schrödinger equations using the discrete variable representation method. The second is an application for calculating an ECG-derived heart rate and respiratory rate. The first application follows a development model for scientific applications that includes some software engineering practices. The second application does not use a specific development model; rather, it was developed ad hoc. The quality of the applications is evaluated through comparative analyses using the proposed model. Based on software quality metrics, the results of this study indicate that the application for solving one-dimensional and two-dimensional Schrödinger equations produces more desirable results. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
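A quantitative quality model of the kind described above typically aggregates per-attribute metric scores into a single weighted value. The attributes and weights below are illustrative assumptions, not the paper's actual set, which is derived from established quality standards.

```python
def quality_score(measurements, weights):
    """Aggregate per-attribute metric scores into one quality value.

    measurements: dict mapping attribute name -> score in [0, 1]
    weights: dict mapping attribute name -> relative importance
    Returns the weighted mean of the measured attributes.
    """
    total_w = sum(weights.values())
    return sum(weights[a] * measurements[a] for a in weights) / total_w

# Hypothetical attribute scores for one application under review
scores = {"testability": 0.8, "maintainability": 0.6, "correctness": 0.9}
weights = {"testability": 1.0, "maintainability": 2.0, "correctness": 3.0}
print(round(quality_score(scores, weights), 3))  # → 0.783
```

Comparing two applications then reduces to computing this score for each from the same metric set, which is how the two case studies can be ranked.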
47. An Original Smart-Grids Test Bed to Teach Feeder Automation Functions in a Distribution Grid.
- Author
-
Alvarez-Herault, Marie Cecile, Labonne, Antoine, Toure, Selle, Braconnier, Thierry, Debusschere, Vincent, Caire, Raphael, and Hadjsaid, Nouredine
- Subjects
- *
ELECTRIC power distribution grids, *SCIENTIFIC software, *LOAD forecasting (Electric power systems), *ELECTRIC power system stability, *LOAD flow analysis (Electric power systems) - Abstract
This paper describes an original smart-grids test bed aimed at teaching novel feeder automation functions to students from both university and industry. With this test bed, a lab class asks students first to develop feeder automation functions using scientific software and then to test them in practice on an emulated distribution grid platform called PREDIS. This platform includes real medium-voltage, reduced-scale loads, generators, and a supervisory control and data acquisition system. The presented lab class is part of a dedicated complete pedagogic module with lectures and experiments. By developing, testing, and deploying their own solutions in an actual distribution grid, the students learn by doing, from theory to practice, the complete chain of smart-grid solutions: from the electrical to the communication layers. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
48. Deliberate Individual Change Framework for Understanding Programming Practices in four Oceanography Groups.
- Author
-
Kuksenok, Kateryna, Aragon, Cecilia, Fogarty, James, Lee, Charlotte, and Neff, Gina
- Abstract
Computing affects how scientific knowledge is constructed, verified, and validated. Rapid changes in hardware capability and software flexibility are coupled with a volatile tool and skill set, particularly in the interdisciplinary scientific contexts of oceanography. Existing research considers the role of scientists as both users and producers of code. We focus on how an intentional, individually initiated but socially situated process of uptake influences code written by scientists. We present an 18-month interview and observation study of four oceanography teams, with a focus on ethnographic shadowing of individuals undertaking code work. Through qualitative analysis, we developed a framework of deliberate individual change, which builds upon prior work on programming practices in science through the lens of sociotechnical infrastructures. We use qualitative vignettes to illustrate how our theoretical framework helps in understanding changing programming practices. Our findings suggest that scientists use and produce software in ways that deliberately mitigate the potential pitfalls of their programming practice. In particular, the object and method of visualization are subject to restraint intended to prevent accidental misuse. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
49. ImageJ2: ImageJ for the next generation of scientific image data.
- Author
-
Rueden, Curtis T., Schindelin, Johannes, Hiner, Mark C., DeZonia, Barry E., Walter, Alison E., Arena, Ellen T., and Eliceiri, Kevin W.
- Subjects
- *
IMAGE analysis, *SCIENTIFIC software, *PHYSICAL sciences, *LIFE sciences, *ALGORITHMS - Abstract
Background: ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms, to ensure the software's ability to handle the requirements of modern science. Results: We rewrote the entire ImageJ codebase, engineering a redesigned plugin mechanism intended to facilitate extensibility at every level, with the goal of creating a more powerful tool that continues to serve the existing community while addressing a wider range of scientific requirements. This next-generation ImageJ, called "ImageJ2" in places where the distinction matters, provides a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats, to scripting languages, to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. Conclusions: Scientific imaging benefits from open-source programs that advance new method development and deployment to a diverse audience. 
ImageJ has continuously evolved with this idea in mind; however, new and emerging scientific requirements have posed corresponding challenges for ImageJ's development. The described improvements provide a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs. Future efforts will focus on implementing new algorithms in this framework and expanding collaborations with other popular scientific software suites. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
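The extensibility pattern the ImageJ2 abstract describes (image formats, scripting languages, and visualization all pluggable by the community, decoupled from the core) follows a general plugin-registry design. The sketch below illustrates that pattern in miniature; the class and category names are hypothetical and do not reflect ImageJ2's actual SciJava API.

```python
# Registry mapping a plugin category to the classes filed under it
PLUGINS = {}

def register(kind):
    """Class decorator that files a plugin under a category."""
    def wrap(cls):
        PLUGINS.setdefault(kind, []).append(cls)
        return cls
    return wrap

@register("format")
class TiffReader:
    def can_read(self, path):
        return path.endswith((".tif", ".tiff"))

@register("format")
class PngReader:
    def can_read(self, path):
        return path.endswith(".png")

def open_image(path):
    """Dispatch to the first registered reader that accepts the file.
    The core never hard-codes a format list, so third parties can
    add formats by registering new classes."""
    for reader_cls in PLUGINS.get("format", []):
        reader = reader_cls()
        if reader.can_read(path):
            return reader
    raise ValueError(f"no reader for {path}")
```

Because dispatch goes through the registry rather than through the core, the data model and the user interface stay decoupled, which is the property the redesign emphasizes.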
50. How is R cited in research outputs? Structure, impacts, and citation standard.
- Author
-
Li, Kai, Yan, Erjia, and Feng, Yuanyuan
- Subjects
SCIENTIFIC software, COMPUTER software development, COMPUTER programming, INTEGRATED software - Abstract
This paper addresses software citation by analyzing how R and its packages are cited in a sample of PLoS papers. A codebook is developed to support a content analysis of the full-text papers. Our results indicate that the software R and its packages are inconsistently cited, as is the case with other scientific software. The inconsistency derives partly from the variety of citation standards currently used for software, and partly from the fact that these standards are not well followed by authors on multiple levels. This work sheds light on the future development of software citation standards, especially given the present landscape of conflicting citation practices. Moreover, our approach furnishes a possible blueprint for dealing with the granularity of software entities in scientific citation: we consider citations of the core R software environment, of specific R packages, and of individual functions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
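The three levels of granularity the abstract distinguishes (environment, package, function) can be tallied mechanically once a codebook is fixed. The regexes below are rough illustrative heuristics of that idea, not the paper's actual codebook.

```python
import re

def classify_r_mentions(text):
    """Tally mentions of the R environment, of packages, and of
    functions in a paper's full text, at three granularity levels."""
    return {
        # core environment, e.g. "R version 3.4.0" or "R Core Team"
        "environment": len(re.findall(r"\bR (?:version|Core Team)", text)),
        # package mentions of the form "<name> package"
        "package": len(re.findall(r"\b(\w+) package", text)),
        # function calls written with parentheses, e.g. "lm()"
        "function": len(re.findall(r"\b\w+\(\)", text)),
    }

# Hypothetical methods-section excerpt
sample = ("Analyses used R version 3.4.0 (R Core Team). "
          "Models were fit with lm() from the stats package; "
          "figures used the ggplot2 package.")
print(classify_r_mentions(sample))
```

Separating the counts this way makes the inconsistency visible: a paper may cite a package formally while naming the environment or individual functions only in passing.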