1,106 results
Search Results
2. Whole genome sequence of Vibrio cholerae directly from dried spotted filter paper.
- Author
Bénard, Angèle H. M., Guenou, Etienne, Fookes, Maria, Ateudjieu, Jerome, Kasambara, Watipaso, Siever, Matthew, Rebaudet, Stanislas, Boncy, Jacques, Adrien, Paul, Piarroux, Renaud, Sack, David A., Thomson, Nicholas, and Debes, Amanda K.
- Subjects
*CHOLERA, *TREPONEMA pallidum, *FILTER paper, *VIBRIO cholerae, *NUCLEOTIDE sequencing, *HEALTH facilities, *CHOLERA toxin
- Abstract
Background: Global estimates for cholera approximate 4 million cases and 95,000 deaths annually worldwide. Recent outbreaks, including Haiti and Yemen, are reminders that cholera is still a global health concern. Cholera outbreaks can rapidly induce high death tolls by overwhelming the capacity of health facilities, especially in remote areas or areas of civil unrest. Recent studies demonstrated that stool specimens preserved on filter paper facilitate molecular analysis of Vibrio cholerae in resource-limited settings. Specimens preserved for sequencing in a rapid, low-cost, safe, and sustainable manner provide previously unavailable data about circulating cholera strains. This may ultimately provide new information to shape public policy response on cholera control and elimination. Methodology/Principal findings: Whole genome sequencing (WGS) recovered close to a complete sequence of the V. cholerae O1 genome with satisfactory genome coverage from stool specimens enriched in alkaline peptone water (APW) and V. cholerae culture isolates, both spotted on filter paper. The minimum concentration of V. cholerae DNA sufficient to produce quality genomic information was 0.02 ng/μL. The genomic data confirmed the presence or absence of genes of epidemiological interest, including cholera toxin and pilus loci. WGS identified a variety of diarrheal pathogens from APW-enriched specimen spotted filter paper, highlighting the potential for this technique to explore the gut microbiome, potentially identifying co-infections, which may impact the severity of disease. WGS demonstrated that these specimens fit within the current global cholera phylogenetic tree, identifying the strains as the 7th pandemic El Tor. Conclusions: WGS results allowed for mapping of short reads from APW-enriched specimen and culture isolate spotted filter papers, which provided valuable molecular epidemiological sequence information on V. cholerae strains from remote, low-resource settings.
These results identified the presence of co-infecting pathogens while providing rare insight into the specific V. cholerae strains causing outbreaks in cholera-endemic areas. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Multi-Omics Analysis on Neurodevelopment in Preterm Neonates: A Protocol Paper.
- Author
Casavant, Sharon G., Chen, Jie, Xu, Wanli, Lainwala, Shabnam, Matson, Adam, Chen, Ming-Hui, Starkweather, Angela, Maas, Kendra, and Cong, Xiaomei S.
- Subjects
*INTESTINAL physiology, *ANTIBIOTICS, *FECAL analysis, *EVALUATION of medical care, *HUMAN growth, *NEONATAL necrotizing enterocolitis, *STATISTICAL power analysis, *DATABASES, *INFANT development, *NEONATAL intensive care, *PAIN measurement, *DNA, *SEQUENCE analysis, *GUT microbiome, *PHENOMENOLOGICAL biology, *MULTIPLE regression analysis, *HUMAN genome, *NEONATAL intensive care units, *GESTATIONAL age, *GENETIC variation, *NEURAL development, *PAIN threshold, *INFANT nutrition, *BIOINFORMATICS, *BIRTH weight, *CHILD psychopathology, *DESCRIPTIVE statistics, *MESSENGER RNA, *FACTOR analysis, *INFANT psychology, *DATA analysis software, *ORAL mucosa, *LONGITUDINAL method, *PSYCHOLOGICAL stress, *DISEASE risk factors
- Abstract
Background: The gut microbiome is an important determinant of health and disease in preterm infants. Objectives: The objective of this article was to share our current protocol for other neonatal intensive care units to potentially expand their existing protocols, aiming to characterize the relationship between the intestinal microbiome and health outcomes in preterm infants. Methods: This prospective, longitudinal study planned to recruit 160 preterm infants born <32 weeks gestational age or weighing <1,500 g and admitted to one of two Level III/IV neonatal intensive care units. During the neonatal intensive care unit period, the primary measures included events of early life pain/stress, gut microbiome, host genetic variations, and neurobehavioral assessment. During follow-up visits, gut microbiome; pain sensitivity; and medical, growth, and developmental outcomes at 4, 8-12, and 18-24 months corrected age were measured. Discussion: As of February 14, 2020, 214 preterm infants have been recruited. We hypothesize that infants who experience greater levels of pain/stress will have altered gut microbiome, including potential adverse outcomes such as necrotizing enterocolitis and host genetic variations, feeding intolerance, and/or neurodevelopmental impairments. These will differ from the intestinal microbiome of preterm infants who do not develop these adverse outcomes. To test this hypothesis, we will determine how alterations in the intestinal microbiome affect the risk of developing necrotizing enterocolitis, feeding intolerance, and neurodevelopmental impairments in preterm infants. In addition, we will examine the interaction between the intestinal microbiome and host genetics in the regulation of intestinal health and neurodevelopmental outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. An Approach to Multi-Attribute Decision-Making Based on Single-Valued Neutrosophic Hesitant Fuzzy Aczel-Alsina Aggregation Operator.
- Author
Imran, Raiha, Ullah, Kifayat, Ali, Zeeshan, and Akram, Maria
- Subjects
*DECISION making, *NEUTROSOPHIC logic, *BIOINFORMATICS, *TRIANGULAR norms, *DNA
- Abstract
A single-valued neutrosophic hesitant fuzzy set (SVNHFS) is a combination of a single-valued neutrosophic set (SVNS) and a hesitant fuzzy set (HFS), developed to address insufficient, unreliable, and vague environments in which each element has several possible options determined by its truthiness (𝓉ᴎ), indeterminacy (𝒾ᴎ), and falsity (ẝᴎ) values. Considering this, in this paper we propose the Aczel-Alsina aggregation operator (AAAO) for SVNHFSs, whose t-norm (Ŧ) and t-conorm (𝛻) are more flexible than other Ŧ and 𝛻 due to the flexible nature of their parameters, to solve multi-attribute decision-making (MADM) problems. Further, the score function (ȿ), accuracy function (ɑ), and certainty function (ϲ) of SVNHFSs are defined. We propose the single-valued neutrosophic hesitant fuzzy Aczel-Alsina weighted averaging operator (SVNHFAAWA), the single-valued neutrosophic hesitant fuzzy Aczel-Alsina weighted ordered averaging operator (SVNHFAAWOA), and the single-valued neutrosophic hesitant fuzzy Aczel-Alsina hybrid averaging operator (SVNHFAAHA). To verify the reliability and stability of the newly created aggregation operators (AOs), an application to MADM is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
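The Aczel-Alsina pair underlying the operators in the record above has a compact closed form. The sketch below illustrates only the t-norm/t-conorm, not the paper's full SVNHF aggregation operators; the function names and the default λ = 2 are assumptions for illustration.

```python
import math

def aa_tnorm(a, b, lam=2.0):
    """Aczel-Alsina t-norm: T(a, b) = exp(-(((-ln a)^lam + (-ln b)^lam))^(1/lam))."""
    if a == 0.0 or b == 0.0:
        return 0.0
    return math.exp(-(((-math.log(a)) ** lam + (-math.log(b)) ** lam) ** (1.0 / lam)))

def aa_tconorm(a, b, lam=2.0):
    """Dual t-conorm: S(a, b) = 1 - T(1 - a, 1 - b)."""
    return 1.0 - aa_tnorm(1.0 - a, 1.0 - b, lam)
```

Setting lam=1 recovers the product t-norm (T(a, b) = ab), while larger λ moves the pair toward min/max; this tunable parameter is the flexibility the abstract refers to.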
5. Unveiling Similarities in the Code of Life: A Detailed Exploration of DNA Sequence Matching Algorithm.
- Author
Shams, Mahmoud Y., Farag, Romany M., Aldawody, Dalia A., Khalid, Huda E., Essa, Ahmed K., El-Bakry, Hazem M., and Salama, A. A.
- Subjects
*NUCLEOTIDE sequence, *ALGORITHMS, *BIOINFORMATICS, *DNA, *COSINE function
- Abstract
Identifying similar DNA sequences is crucial in various biological research endeavors. This paper delves into the intricate workings of a specific algorithm designed for this purpose. We provide a systematic explanation, exploring how the algorithm handles user input, reads stored DNA sequences, utilizes the Word2Vec model for vector representation, and calculates sequence similarity using diverse metrics like Cosine Similarity and Neutrosophic Distance. Additionally, the paper explores the incorporation of neutrosophic values to account for uncertainty in the comparisons. Finally, we discuss the extraction of results, including matched sequences, similarity scores, and accuracy measures. This in-depth exploration provides a clear understanding of the algorithm's capabilities and fosters its effective application in DNA sequence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
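The pipeline the abstract above describes (vectorise sequences, then score them with a similarity metric) can be sketched with simple k-mer count vectors standing in for the learned Word2Vec embeddings. The function names and k = 3 are assumptions, and the paper's neutrosophic distance is not reproduced here; only the cosine-similarity step is shown.

```python
from collections import Counter
import math

def kmer_vector(seq, k=3):
    """Counts of overlapping k-mers: a simple stand-in for learned sequence embeddings."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse count vectors (Counters)."""
    keys = set(u) | set(v)
    dot = sum(u[x] * v[x] for x in keys)  # Counter returns 0 for missing keys
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A sequence compared with itself scores 1.0, and sequences sharing no k-mers score 0.0, which is the ranking behaviour the matching algorithm relies on.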
6. Sequence alignment software migration and performance evaluation based on DPCT.
- Author
LI Pei-zhen, ZHANG Yang, and CHEN Wen-bo
- Abstract
This paper explores the process of migrating CUDA programs to DPC++ using the GASAL2 sequence alignment software. The DPCT tool is utilized during the migration process to automatically convert CUDA APIs to DPC++ APIs. However, the migrated code still requires adaptation and modification to compile and run correctly. This paper evaluates the effectiveness of the DPCT tool in migrating CUDA programs to DPC++ and demonstrates the high-efficiency performance of DPC++ across different architectures. Experiments show that the migrated program maintains the accuracy of the original program and can run on heterogeneous devices with the Intel GPU architecture without code modification. At the same time, the migrated DPC++-based GASAL2 heterogeneous computing performance can reach approximately 90%-95% of the original CUDA-based GASAL2 computing performance, fully demonstrating the feasibility of DPC++ heterogeneous programming. The results provide a promising solution for cross-platform heterogeneous programming that fully utilizes a wider range of hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. A prediction model of nonclassical secreted protein based on deep learning.
- Author
Zhang, Fan, Liu, Chaoyang, Wang, Binjie, He, Yiru, and Zhang, Xinhong
- Subjects
*CONVOLUTIONAL neural networks, *AMINO acid sequence, *FEATURE selection, *HYPERLINKS, *PROTEIN models, *DEEP learning
- Abstract
Most of the current nonclassical protein prediction methods involve manual feature selection, such as constructing features of samples based on the physicochemical properties of proteins and position-specific scoring matrices (PSSM). However, these tasks require researchers to perform some tedious search work to obtain the physicochemical properties of proteins. This paper proposes an end-to-end nonclassical secreted protein prediction model based on deep learning, named DeepNCSPP, which employs protein sequence information and sequence statistics information as input to predict whether a protein is a nonclassical secreted protein. The protein sequence information and sequence statistics information are extracted using bidirectional long short-term memory (BiLSTM) and convolutional neural networks, respectively. Among the experiments conducted on the independent test dataset, DeepNCSPP achieved excellent results with an accuracy of 88.24%, Matthews correlation coefficient (MCC) of 77.01%, and F1-score of 87.50%. Independent test dataset testing and 10-fold cross-validation show that DeepNCSPP achieves competitive performance with state-of-the-art methods and can be used as a reliable nonclassical secreted protein prediction model. A web server has been constructed for the convenience of researchers. The web link is https://www.deepncspp.top/. The source code of DeepNCSPP has been hosted on GitHub and is available online (https://github.com/xiaoliu166370/DEEPNCSPP). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. AI-Driven Deep Learning Techniques in Protein Structure Prediction.
- Author
Chen, Lingtao, Li, Qiaomu, Nasif, Kazi Fahim Ahmad, Xie, Ying, Deng, Bobin, Niu, Shuteng, Pouriyeh, Seyedamin, Dai, Zhiyu, Chen, Jiawei, and Xie, Chloe Yixin
- Subjects
*MACHINE learning, *PROTEIN structure prediction, *COMPUTATIONAL intelligence, *PROTEIN structure, *PROTEIN models, *DEEP learning
- Abstract
Protein structure prediction is important for understanding protein function and behavior. This study presents a comprehensive review of the computational models used in predicting protein structure. It covers the progression from established protein modeling to state-of-the-art artificial intelligence (AI) frameworks. The paper starts with a brief introduction to protein structures, protein modeling, and AI. The section on established protein modeling discusses homology modeling, ab initio modeling, and threading. The next section covers deep learning-based models. It introduces some state-of-the-art AI models, such as AlphaFold (AlphaFold, AlphaFold2, AlphaFold3), RoseTTAFold, ProteinBERT, etc. This section also discusses how AI techniques have been integrated into established frameworks like Swiss-Model, Rosetta, and I-TASSER. Model performance is compared using the rankings of CASP14 (Critical Assessment of Structure Prediction) and CASP15. CASP16 is ongoing, and its results are not included in this review. Continuous Automated Model EvaluatiOn (CAMEO) complements the biennial CASP experiment. The template modeling score (TM-score), global distance test total score (GDT_TS), and Local Distance Difference Test (lDDT) score are discussed as well. The paper then acknowledges the ongoing difficulties in predicting protein structure and emphasizes the necessity of additional research into dynamic protein behavior, conformational changes, and protein–protein interactions. In the application section, the paper introduces applications in various fields like drug design, industry, education, and novel protein development. In summary, this paper provides a comprehensive overview of the latest advancements in established protein modeling and deep learning-based models for protein structure prediction. It emphasizes the significant advancements achieved by AI and identifies potential areas for further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
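Of the evaluation metrics named in the record above, the TM-score has a compact closed form. A minimal sketch, assuming a fixed residue alignment is already given (the real score also involves a structural superposition search, which is omitted here):

```python
def tm_score(pair_distances, target_length):
    """TM-score for a fixed residue alignment.

    pair_distances: distances (in angstroms) between aligned residue pairs.
    target_length: length of the target protein, used for normalization
                   (d0 formula is valid for target_length > 15).
    """
    d0 = 1.24 * (target_length - 15) ** (1.0 / 3.0) - 1.8  # length-dependent scale
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in pair_distances) / target_length
```

A perfect alignment (all distances zero over the full length) scores 1.0, and the score decreases smoothly as aligned residues drift apart, which is why TM-score is less sensitive to local errors than RMSD.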
9. A short note on the paper of Liu et al. (2012). A relative Lempel-Ziv complexity: Application to comparing biological sequences. Chemical Physics Letters, volume 530, 19 March 2012, pages 107–112.
- Author
Arit, Turkan, Keskin, Burak, Firuzan, Esin, Cavas, Cagin Kandemir, Liu, Liwei, and Cavas, Levent
- Subjects
*BIOINFORMATICS, *SEQUENCE analysis, *CAULERPA taxifolia, *LEMPEL-Ziv algorithm, *QUANTUM mechanics
- Abstract
The report entitled “L. Liu, D. Li, F. Bai, A relative Lempel-Ziv complexity: Application to comparing biological sequences, Chem. Phys. Lett. 530 (2012) 107–112” describes the powerful construction of phylogenetic trees based on the Lempel-Ziv algorithm. However, the method explained in that paper does not give promising results on a dataset of invasive Caulerpa taxifolia from the Mediterranean Sea. In this short note, phylogenetic trees are obtained by the method proposed in the aforementioned paper. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
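For readers unfamiliar with the measure discussed in the record above, the sketch below shows LZ76 phrase counting together with an LZ-based distance in the style of Otu & Sayood (2003). This is a stand-in for illustration; the exact relative complexity of Liu et al. is not reproduced.

```python
def lz_complexity(s):
    """Number of phrases in the LZ76 exhaustive-history parsing of s."""
    n, i, phrases = len(s), 0, 0
    while i < n:
        length = 1
        # grow the current phrase while it already appears earlier in the sequence
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def lz_distance(s, t):
    """Normalized LZ-based sequence distance (Otu & Sayood style)."""
    cs, ct = lz_complexity(s), lz_complexity(t)
    cst, cts = lz_complexity(s + t), lz_complexity(t + s)
    return max(cst - cs, cts - ct) / max(cs, ct)
```

Concatenating a sequence with a similar one adds few new phrases, so similar sequences yield small distances; the resulting distance matrix is what a tree-building method such as neighbor joining would then consume.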
10. Is PLOS ONE Attracting Highly Cited Papers in the Food Sciences? Comparing Authors' Most Cited Work to Their PLOS ONE Articles Published 2006-2016.
- Author
Stankus, Tony
- Subjects
*INFORMATION sharing, *FOOD safety, *BIOINFORMATICS, *IMMUNOINFORMATICS
- Abstract
Since 2007, the Open Access journal PLOS ONE has published 194,622 expert-reviewed scientific papers, including 85 related to the human food sciences such as food safety, general food science, and nutrition. Seventy-five of their corresponding authors had previously published. Their most cited articles in other journals were identified and citation counts compared against those earned by their PLOS ONE papers. No PLOS ONE food sciences paper has yet been cited more than an author's best effort in the field in other journals. More than half (38) of the 75 PLOS ONE papers studied remain uncited even by their own authors. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. Phospholipid Acyltransferases: Characterization and Involvement of the Enzymes in Metabolic and Cancer Diseases.
- Author
Korbecki, Jan, Bosiacki, Mateusz, Pilarczyk, Maciej, Gąssowska-Dobrowolska, Magdalena, Jarmużek, Paweł, Szućko-Kociuba, Izabela, Kulik-Sajewicz, Justyna, Chlubek, Dariusz, and Baranowska-Bosiacka, Irena
- Subjects
*ENZYME metabolism, *ENZYME inhibitors, *METABOLIC disorders, *PHOSPHOLIPIDS, *ANTINEOPLASTIC agents, *ACYLTRANSFERASES, *ENZYMES, *BIOINFORMATICS, *METABOLISM, *TUMORS, *MOLECULAR biology, *PHARMACODYNAMICS, *CHEMICAL inhibitors
- Abstract
Simple Summary: This review discusses the enzymatic processes governing the initial stages of the synthesis of glycerophospholipids (phosphatidylcholine, phosphatidylethanolamine, and phosphatidylserine) and triacylglycerol. The key enzymes analyzed include glycerol-3-phosphate acyltransferases (GPAT) and 1-acylglycerol-3-phosphate acyltransferases (AGPAT). Additionally, because most AGPATs have lysophospholipid acyltransferase (LPLAT) activity, enzymes involved in the Lands cycle with similar functions were also included. The review further explores the potential therapeutic implications of inhibiting these enzymes in the treatment of metabolic diseases and cancer. By elucidating the enzymatic pathways involved in lipid synthesis and their impact on various pathological conditions, the article contributes to the understanding of these processes and their potential as therapeutic targets. This review delves into the enzymatic processes governing the initial stages of glycerophospholipid (phosphatidylcholine, phosphatidylethanolamine, and phosphatidylserine) and triacylglycerol synthesis. The key enzymes under scrutiny include GPAT and AGPAT. Additionally, as most AGPATs exhibit LPLAT activity, enzymes participating in the Lands cycle with similar functions are also covered. The review begins by discussing the properties of these enzymes, emphasizing their specificity in enzymatic reactions, notably the incorporation of polyunsaturated fatty acids (PUFAs) such as arachidonic acid and docosahexaenoic acid (DHA) into phospholipids. The paper sheds light on the intricate involvement of these enzymes in various diseases, including obesity, insulin resistance, and cancer. To underscore the relevance of these enzymes in cancer processes, a bioinformatics analysis was conducted. The expression levels of the described enzymes were correlated with the overall survival of patients across 33 different types of cancer using the GEPIA portal. 
This review further explores the potential therapeutic implications of inhibiting these enzymes in the treatment of metabolic diseases and cancer. By elucidating the intricate enzymatic pathways involved in lipid synthesis and their impact on various pathological conditions, this paper contributes to a comprehensive understanding of these processes and their potential as therapeutic targets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Discussion on the paper 'Statistical contributions to bioinformatics: Design, modelling, structure learning and integration' by Jeffrey S. Morris and Veerabhadran Baladandayuthapani.
- Author
Houwing-Duistermaat, Jeanine J., Hae Won Uh, and Gusnanto, Arief
- Subjects
*BIOINFORMATICS, *STATISTICS, *GLYCOMICS, *PROTEIN structure, *COMPUTERS in biology
- Abstract
Bioinformatics is an important research area for statisticians. This discussion adds some topics to the paper, namely statistical contributions to detecting differentially expressed genes, to protein structure prediction, and to the analysis of highly correlated features in glycomics datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. Epigenomic and transcriptomic approaches in the post-genomic era: path to novel targets for diagnosis and therapy of the ischaemic heart? Position Paper of the European Society of Cardiology Working Group on Cellular Biology of the Heart.
- Author
Perrino, Cinzia, Barabási, Albert-Laszló, Condorelli, Gianluigi, Davidson, Sean Michael, De Windt, Leon, Dimmeler, Stefanie, Engel, Felix Benedikt, Hausenloy, Derek John, Addison Hill, Joseph, Van Laake, Linda Wilhelmina, Lecour, Sandrine, Leor, Jonathan, Madonna, Rosalinda, Mayr, Manuel, Prunier, Fabrice, Geradus Sluijter, Joost Petrus, Schulz, Rainer, Thum, Thomas, Ytrehus, Kirsti, and Ferdinandy, Péter
- Subjects
*HEART failure, *HEART failure treatment, *MYOCARDIAL reperfusion, *DIAGNOSIS, *CORONARY disease, *CORONARY heart disease treatment, *REPERFUSION injury, *TREATMENT of reperfusion injuries, *THERAPEUTICS
- Abstract
Despite advances in myocardial reperfusion therapies, acute myocardial ischaemia/reperfusion injury and consequent ischaemic heart failure represent the number one cause of morbidity and mortality in industrialized societies. Although different therapeutic interventions have been shown beneficial in preclinical settings, an effective cardio-protective or regenerative therapy has yet to be successfully introduced in the clinical arena. Given the complex pathophysiology of the ischaemic heart, large scale, unbiased, global approaches capable of identifying multiple branches of the signalling networks activated in the ischaemic/reperfused heart might be more successful in the search for novel diagnostic or therapeutic targets. High-throughput techniques allow high-resolution, genome-wide investigation of genetic variants, epigenetic modifications, and associated gene expression profiles. Platforms such as proteomics and metabolomics (not described here in detail) also offer simultaneous readouts of hundreds of proteins and metabolites. Isolated omics analyses usually provide Big Data requiring large data storage, advanced computational resources and complex bioinformatics tools. The possibility of integrating different omics approaches gives new hope to better understand the molecular circuitry activated by myocardial ischaemia, putting it in the context of the human 'diseasome'. Since modifications of cardiac gene expression have been consistently linked to pathophysiology of the ischaemic heart, the integration of epigenomic and transcriptomic data seems a promising approach to identify crucial disease networks. Thus, the scope of this Position Paper will be to highlight potentials and limitations of these approaches, and to provide recommendations to optimize the search for novel diagnostic or therapeutic targets for acute ischaemia/reperfusion injury and ischaemic heart failure in the post-genomic era. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Optimized Python library for reconstruction of ensemble-based gene co-expression networks using multi-GPU.
- Author
López-Fernández, Aurelio, Gómez-Vela, Francisco A., del Saz-Navarro, María, Delgado-Chaves, Fernando M., and Rodríguez-Baena, Domingo S.
- Subjects
*GENE regulatory networks, *STEM cell transplantation, *PARALLEL programming, *GENE expression, *PYTHON programming language, *GRAPHICS processing units
- Abstract
Gene co-expression networks are valuable tools for discovering biologically relevant information within gene expression data. However, analysing large datasets presents challenges due to the identification of nonlinear gene–gene associations and the need to process an ever-growing number of gene pairs and their potential network connections. These challenges mean that some experiments are discarded because the techniques do not support these intense workloads. This paper presents pyEnGNet, a Python library that can generate gene co-expression networks in High-performance computing environments. To do this, pyEnGNet harnesses CPU and multi-GPU parallel computing resources, efficiently handling large datasets. These implementations have optimised memory management and processing, delivering timely results. We have used synthetic datasets to prove the runtime and intensive workload improvements. In addition, pyEnGNet was used in a real-life study of patients after allogeneic stem cell transplantation with invasive aspergillosis and was able to detect biological perspectives in the study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
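The basic step such libraries parallelise, turning pairwise gene-gene association scores into network edges via a threshold, can be sketched in plain Python. This is a hypothetical single-threaded illustration using Pearson correlation only; it is not pyEnGNet's actual API, which combines several association measures and multi-GPU kernels.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_edges(expr, threshold=0.8):
    """Edges (gene1, gene2, r) for gene pairs whose |correlation| passes the threshold.

    expr: dict mapping gene name -> list of expression values across samples.
    """
    genes = sorted(expr)
    edges = []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            r = pearson(expr[g1], expr[g2])
            if abs(r) >= threshold:
                edges.append((g1, g2, round(r, 3)))
    return edges
```

The all-pairs loop is the quadratic workload the abstract describes: for tens of thousands of genes it becomes hundreds of millions of correlation tests, which is why ensemble GPU implementations exist.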
15. Investigating Enzyme Biochemistry by Deep Learning: A Computational Tool for a New Era.
- Author
Rayka, Milad, Mirzaei, Morteza, Farnoosh, Gholamreza, and Latifi, Ali Mohammad
- Subjects
*DEEP learning, *MACHINE learning, *SUPERVISED learning, *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *BIOCHEMISTRY
- Abstract
Enzymes are protein molecules that play a crucial role in various biological processes in living organisms. They function as catalysts in biological reactions such as digestion, metabolism, DNA replication and other physiological processes. Furthermore, enzymes are widely used in food production, pharmaceuticals and biofuel production. In these industries, they accelerate desired chemical reactions as biocatalysts. Therefore, applying computational methods and data-driven algorithms to predict enzyme properties is essential. Over the past decade, deep learning has made remarkable advancements in science and technology. Deep learning is a subset of machine learning algorithms that rely on artificial neural networks. These algorithms can be employed for supervised, semi-supervised and unsupervised learning. Here, to update the current literature, we provide an overview of various deep learning algorithms and recent advancements in their application to enzyme science. These applications can generally be categorized into diverse subjects: function prediction, enzyme kinetic parameter prediction, enzyme-substrate identification, condition optimization, thermophilic property prediction, enzyme catalytic site prediction and enzyme design. In conclusion, we discuss the convergence of enzyme science and deep learning, highlighting the potential opportunities and challenges. Artificial intelligence algorithms have diverse applications in enzyme science. In this review paper, we focus on deep learning algorithms and their application to various tasks, such as enzyme design, kinetic parameter prediction, etc. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Mathematical Modeling in Bioinformatics: Application of an Alignment-Free Method Combined with Principal Component Analysis.
- Author
Bielińska-Wąż, Dorota, Wąż, Piotr, Błaczkowska, Agata, Mandrysz, Jan, Lass, Anna, Gładysz, Paweł, and Karamon, Jacek
- Subjects
*AMINO acid sequence, *ECHINOCOCCUS multilocularis, *MOMENTS of inertia, *PRINCIPAL components analysis, *CENTER of mass
- Abstract
In this paper, an alignment-free bioinformatics technique, termed the 20D-Dynamic Representation of Protein Sequences, is utilized to investigate the similarity/dissimilarity between Baculovirus and Echinococcus multilocularis genome sequences. In this method, amino acid sequences are depicted as 20D-dynamic graphs, comprising sets of "material points" in a 20-dimensional space. The spatial distribution of these material points is indicative of the sequence characteristics and is quantitatively described by sequence descriptors akin to those employed in dynamics, such as coordinates of the center of mass of the 20D-dynamic graph and the tensor of the moment of inertia of the graph (defined as a symmetric matrix). Each descriptor unveils distinct features of similarity and is employed to establish similarity relations among the examined sequences, manifested either as a symmetric distance matrix ("similarity matrix"), a classification map, or a phylogenetic tree. The classification maps are introduced as a new way of visualizing the similarity relations obtained using the 20D-Dynamic Representation of Protein Sequences. Some classification maps are obtained using the Principal Component Analysis (PCA) for the center of mass coordinates and normalized moments of inertia of 20D-dynamic graphs as input data. Although the method operates in a multidimensional space, we also apply some visualization techniques, including the projection of 20D-dynamic graphs onto a 2D plane. Studies on model sequences indicate that the method is of high quality, both graphically and numerically. Despite the high similarity observed among the sequences of E. multilocularis, subtle discrepancies can be discerned on the 2D graphs. 
Employing this approach has led to the discovery of numerous new similarity relations compared to our prior study conducted at the DNA level, using the 4D-Dynamic Representation of DNA/RNA Sequences, another alignment-free bioinformatics method also introduced by us. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
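To make the idea of descriptors borrowed from dynamics concrete, the sketch below builds a toy 20D cumulative walk (one axis per amino acid, unit masses) and computes its centre of mass. This simplified construction is an assumption for illustration only; it is not the authors' exact definition of the 20D-dynamic graph or its material points.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one axis of the 20D space per residue type

def dynamic_graph_points(seq):
    """Toy 20D walk: residue i adds a unit step along its amino acid's axis."""
    pos = [0.0] * 20
    points = []
    for residue in seq:
        pos = list(pos)  # copy so previously stored points stay fixed
        pos[AMINO_ACIDS.index(residue)] += 1.0
        points.append(pos)
    return points

def center_of_mass(points):
    """Coordinate-wise mean of the graph's points (unit masses assumed)."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(20)]
```

Descriptors such as this centre of mass (and, analogously, a moment-of-inertia tensor) summarise each sequence as a fixed-length numeric vector, which is what makes downstream distance matrices and PCA classification maps possible without any alignment.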
17. Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.
- Author
Wren, Jonathan D.
- Subjects
*BIOINFORMATICS, *GENE ontology, *DNA analysis, *GENETIC engineering, *BIOTECHNOLOGY
- Abstract
Motivation: To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades. Results: When defining bioinformatics papers as encompassing both those that provide software for data analysis and those describing methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year journal impact factor (JIF) of top 20 bioinformatics papers was 7.7, whereas the average JIF for top 20 non-bioinformatics papers was 25.8, significantly higher (P < 4.5 × 10^-29). The 20-year trend in the average JIF between the two groups suggests the gap does not appear to be significantly narrowing. For a sampling of the journals producing top papers, bioinformatics journals tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet fewer of intermediate success. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
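The Gini coefficient the abstract above uses to argue that bioinformatics tools are 'hit or miss' can be computed directly; any citation counts shown below are hypothetical.

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly equal, ->1 = concentrated)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # rank-weighted form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    ranked = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * ranked / (n * total) - (n + 1) / n
```

For example, gini([1, 1, 1, 100]) is far higher than gini([20, 25, 30, 28]), mirroring a journal where a few tools absorb most citations versus one where citations spread evenly.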
18. Discussant paper on 'Statistical contributions to bioinformatics: Design, modelling, structure learning and integration'.
- Author
Tianzhou Ma, Chi Song, and Tseng, George C.
- Subjects
*BIOINFORMATICS, *COMPUTATIONAL biology, *STATISTICS, *GENOMICS, *PHYLOGENY
- Published
- 2017
- Full Text
- View/download PDF
19. Current data processing methods and reporting standards for untargeted analysis of volatile organic compounds using direct mass spectrometry: a systematic review.
- Author
Rosenthal, K, Lindley, MR, Turner, MA, Ratcliffe, E, and Hunsicker, E
- Subjects
*VOLATILE organic compounds, *MASS spectrometry, *SCIENCE databases, *FOOD safety
- Abstract
Introduction: Untargeted direct mass spectrometric analysis of volatile organic compounds has many potential applications across fields such as healthcare and food safety. However, robust data processing protocols must be employed to ensure that research is replicable and practical applications can be realised. User-friendly data processing and statistical tools are becoming increasingly available; however, the use of these tools has neither been analysed, nor are the tools necessarily suited to every data type. Objectives: This review aims to analyse data processing and analytic workflows currently in use and examine whether methodological reporting is sufficient to enable replication. Methods: Studies identified from the Web of Science and Scopus databases were systematically examined against the inclusion criteria. The experimental, data processing, and data analysis workflows were reviewed for the relevant studies. Results: Of 459 studies identified from the databases, a total of 110 met the inclusion criteria. Very few papers provided enough detail to allow all aspects of the methodology to be replicated accurately, with only three meeting previous guidelines for reporting experimental methods. A wide range of data processing methods were used, with only eight papers (7.3%) employing a largely similar workflow where direct comparability was achievable. Conclusions: Standardised workflows and reporting systems need to be developed to ensure research in this area is replicable, comparable, and held to a high standard, thus allowing the wide-ranging potential applications to be realised. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Applications of artificial intelligence and bioinformatics methodologies in the analysis of ocular biofluid markers: a scoping review.
- Author
-
Pucchio, Aidan, Krance, Saffire H., Pur, Daiana R., Bhatti, Jasmine, Bassi, Arshpreet, Manichavagan, Karthik, Brahmbhatt, Shaily, Aggarwal, Ishita, Singh, Priyanka, Virani, Aleena, Stanley, Meagan, Miranda, Rafael N., and Felfeli, Tina
- Subjects
- *
EYE diseases , *ARTIFICIAL intelligence , *MACULAR degeneration , *DRY eye syndromes , *SUPERVISED learning , *BIOINFORMATICS - Abstract
Purpose: This scoping review summarizes the applications of artificial intelligence (AI) and bioinformatics methodologies in the analysis of ocular biofluid markers. The secondary objective was to explore supervised and unsupervised AI techniques and their predictive accuracies. We also evaluated the integration of bioinformatics with AI tools. Methods: This scoping review was conducted across five electronic databases, including EMBASE, Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science, from inception to July 14, 2021. Studies pertaining to biofluid marker analysis using AI or bioinformatics were included. Results: A total of 10,262 articles were retrieved from all databases, and 177 studies met the inclusion criteria. The most commonly studied ocular diseases were diabetic eye diseases, with 50 papers (28%), while glaucoma was explored in 25 studies (14%), age-related macular degeneration in 20 (11%), dry eye disease in 10 (6%), and uveitis in 9 (5%). Supervised learning was used in 91 papers (51%), unsupervised AI in 83 (46%), and bioinformatics in 85 (48%). Ninety-eight papers (55%) used more than one class of AI (e.g., >1 of supervised, unsupervised, bioinformatics, or statistical techniques), while 79 (45%) used only one. Supervised learning techniques were often used to predict disease status or prognosis, and demonstrated strong accuracy. Unsupervised AI algorithms were used to bolster the accuracy of other algorithms, identify molecularly distinct subgroups, or cluster cases into distinct subgroups useful for predicting the disease course. Finally, bioinformatic tools were used to translate complex biomarker profiles or findings into interpretable data. Conclusion: AI analysis of biofluid markers displayed diagnostic accuracy, provided insight into the mechanisms of molecular etiologies, and had the ability to provide individualized targeted therapeutic treatment for patients. 
Given the progression of AI towards use in both research and the clinic, ophthalmologists should be broadly aware of the commonly used algorithms and their applications. Future research may be aimed at validating algorithms and integrating them in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Designing electronic graphic symbol-based AAC systems: a scoping review. Part 1: system description.
- Author
-
Tönsing, Kerstin M., Bartram, Jessica, Morwane, Refilwe E., and Waller, Annalu
- Subjects
- *
MIDDLE-income countries , *MOBILE apps , *FACILITATED communication , *REHABILITATION , *COMPUTER graphics , *ASSISTIVE technology , *SYSTEMATIC reviews , *ALLIED health personnel , *MULTILINGUALISM , *BIOINFORMATICS , *LITERATURE reviews , *LOW-income countries , *LANGUAGE acquisition - Abstract
This is the first of two papers summarizing studies reporting on the design of electronic graphic symbol-based augmentative and alternative communication (AAC) systems, to determine the state of the field. The aim of this paper was to provide an overview of the general characteristics of the studies and to describe the features of the systems designed. A scoping review was conducted. A multifaceted search resulted in the identification of 28 studies meeting the selection criteria. Data were extracted relating to four areas of interest, namely (1) the general characteristics of the studies, (2) features of the systems designed, (3) availability of the systems to the public, and (4) the design processes followed. In this paper, findings relating to the first three areas are presented. Most study authors were affiliated to fields of engineering and/or computer science and came from high-income countries. Most studies reported the design of AAC applications loaded onto mobile technology devices. Common system features included customizable vocabulary items, the inclusion of graphic symbols from both established AAC libraries and other sources, a dynamic grid display, and the inclusion of digital and/or synthetic speech output. Few systems were available to the public. Limited justifications for many of the complex design decisions were provided in the studies, possibly due to limited involvement of rehabilitation professionals during the design process. Furthermore, few studies reported on the design of graphic symbol-based AAC systems specifically for middle- and low-income contexts and also for multilingual populations. Complex design decisions about electronic graphic symbol-based augmentative and alternative communication (AAC) systems should be made purposefully and with sufficient justification. Increased collaboration between designers and rehabilitation professionals during the design of electronic graphic symbol-based systems could improve the products. 
The design of AAC systems for populations residing in low- and middle-income contexts and for multilingual populations is urgently needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Mitochondrial genome plasticity of mammalian species.
- Author
-
Biró, Bálint, Gál, Zoltán, Fekete, Zsófia, Klecska, Eszter, and Hoffmann, Orsolya Ivett
- Subjects
- *
MITOCHONDRIAL DNA , *MACHINE learning , *GENOMES - Abstract
There is an ongoing process in which mitochondrial sequences are being integrated into the nuclear genome. The importance of these sequences has already been demonstrated in cancer biology, forensics, phylogenetic studies, and the evolution of eukaryotic genetic information. The genomes of humans and numerous model organisms have been characterized with respect to these sequences. Furthermore, recent studies have been published on the patterns of these nuclear-localised mitochondrial sequences in different taxa. However, the results of previously released studies are difficult to compare due to the lack of standardised methods and/or the small numbers of genomes used. Therefore, the primary goal of this paper is to establish a uniform mining pipeline to explore these nuclear-localised mitochondrial sequences. Our results show that the frequency of several repetitive elements is higher in the flanking regions of these sequences than expected. A machine learning model reveals that the flanking regions' repetitive elements and different structural characteristics are highly influential during the integration process. In this paper, we introduce a general mining pipeline for all mammalian genomes. The workflow is publicly available and is believed to serve as a validated baseline for future research in this field. We confirm the widespread opinion, on what is, to our current knowledge, the largest dataset, that structural circumstances and events corresponding to repetitive elements are highly significant. An accurate model has also been trained to predict these sequences and their corresponding flanking regions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Optimization and Performance Analysis of CAT Method for DNA Sequence Similarity Searching and Alignment.
- Author
-
Gancheva, Veska and Stoev, Hristo
- Subjects
- *
NUCLEOTIDE sequence , *DNA sequencing , *SHIFT registers , *PROCESS capability , *SEQUENCE alignment , *UPLOADING of data - Abstract
Bioinformatics is a rapidly developing field enabling scientific experiments via computer models and simulations. In recent years, there has been extraordinary growth in biological databases. Therefore, it is extremely important to propose effective methods and algorithms for the fast and accurate processing of biological data. Sequence comparisons are the best way to investigate and understand the biological functions and evolutionary relationships between genes, on the basis of the alignment of two or more DNA sequences so as to maximize the identity level and degree of similarity. This paper presents a new version of the pairwise DNA sequence alignment algorithm based on a new method called CAT, in which dependency on a previous match and the closest neighbor is taken into consideration to increase the uniqueness of the CAT profile and to reduce possible collisions, i.e., two or more sequences with the same CAT profile. This makes the proposed algorithm suitable for finding the exact match of a concrete DNA sequence in a large set of DNA data faster. In order to enable the use of the profiles as sequence metadata, CAT profiles are generated once, prior to data uploading to the database. The proposed algorithm consists of two main stages: CAT profile calculation, depending on the chosen benchmark sequences, and sequence comparison using the calculated CAT profiles. Improvements in the generation of the CAT profiles are detailed and described in this paper. Block schemes, pseudocode tables, and figures were updated according to the proposed new version and experimental results. Experiments were carried out using the new version of the CAT method for DNA sequence alignment and different datasets. New experimental results regarding collisions, speed, and efficiency of the suggested new implementation are presented. 
Experiments related to the performance comparison with Needleman–Wunsch were re-executed with the new version of the algorithm to confirm that we have the same performance. A performance analysis of the proposed algorithm based on the CAT method against the Knuth–Morris–Pratt algorithm, which has a complexity of O(n) and is widely used for biological data searching, was performed. The impact of prior matching dependencies on uniqueness for generated CAT profiles is investigated. The experimental results from sequence alignment demonstrate that the proposed CAT method-based algorithm exhibits minimal deviation, which can be deemed negligible if such deviation is considered permissible in favor of enhanced performance. It should be noted that the performance of the CAT algorithm in terms of execution time remains stable, unaffected by the length of the analyzed sequences. Hence, the primary benefit of the suggested approach lies in its rapid processing capabilities in large-scale sequence alignment, a task that traditional exact algorithms would require significantly more time to perform. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
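The CAT entry above stores per-sequence profiles as metadata so that exact-match search can filter cheaply before verifying character by character. The sketch below illustrates that generic profile-and-verify pattern; the profile used here (single-base counts plus adjacent-pair counts, echoing the abstract's "previous match / closest neighbour" idea) is an illustrative assumption, not the published CAT profile definition.

```python
# Illustrative profile-and-verify search in the spirit of the CAT approach.
# The profile function below is a hypothetical stand-in, not the authors' CAT.
from collections import Counter

def profile(seq):
    """Hashable fingerprint: single-base counts and adjacent-pair counts."""
    singles = Counter(seq)
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    return (frozenset(singles.items()), frozenset(pairs.items()))

def find_exact(query, database):
    """Return indices of database sequences identical to the query.

    Profiles rarely collide, so the expensive character-level verification
    runs only on a small candidate set.
    """
    q = profile(query)
    hits = []
    for i, (seq, prof) in enumerate(database):
        if prof == q and seq == query:  # cheap profile filter, then verify
            hits.append(i)
    return hits

seqs = ["ACGTAC", "ACGTTT", "TTTGCA"]
db = [(s, profile(s)) for s in seqs]  # profiles computed once, up front
print(find_exact("ACGTTT", db))  # [1]
```

As in the paper's design, the profile is computed once at upload time, so repeated searches amortize its cost; only the verification step touches raw sequence data.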
24. Knowledge Graphs in Information Retrieval.
- Author
-
Dutkiewicz, Jakub and Jędrzejek, Czesław
- Subjects
- *
INFORMATION retrieval , *KNOWLEDGE graphs , *NATURAL language processing , *DATA integration , *BIOINFORMATICS - Abstract
This paper introduces an information retrieval model that leverages knowledge graphs, specifically tailored for Clinical Trials. In these scenarios, the document in question takes the form of a semi-structured clinical trial, containing details about enrolled patients, descriptions of experiments and procedures conducted during the trial, relevant diseases, and specific enrollment criteria. While the document retains a semi-structured format, the majority of the information is expressed in natural language. Queries in this context consist of specific patient characteristics, such as disease type, genetic information, and demographic data. The primary aim of this paper is to develop and utilize a knowledge graph capable of storing this information, including links to external resources like the Disease Ontology. We propose an Object-Relational model, which is then transformed into a knowledge graph. This graph is subsequently employed to identify semantic connections between concepts present in the clinical trials and those in the queries. These connections are then utilized to formulate a retrieval model for each aspect of the query. To achieve this, we design a relevance formula that incorporates weights to account for ontological relationships between concepts. We evaluate the effectiveness of our model by comparing the results with manual annotations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
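The knowledge-graph entry above describes a relevance formula that weights ontological relationships between query concepts and clinical-trial concepts. A minimal sketch of that idea follows; the toy graph, relation names, and weight values are illustrative assumptions, not the authors' actual model or formula.

```python
# Hypothetical sketch of an ontology-weighted relevance score: exact concept
# matches count most, matches reached through graph relations count less.

# Toy knowledge graph: concept -> list of (related_concept, relation_type).
GRAPH = {
    "melanoma": [("skin cancer", "is_a")],
    "NSCLC": [("lung cancer", "is_a")],
    "EGFR mutation": [("EGFR", "variant_of")],
}

# Weight per ontological relation type (illustrative values).
RELATION_WEIGHTS = {"exact": 1.0, "is_a": 0.6, "variant_of": 0.4}

def concept_match_weight(query_concept, doc_concepts):
    """Best weight linking one query concept to any document concept."""
    if query_concept in doc_concepts:
        return RELATION_WEIGHTS["exact"]
    best = 0.0
    for related, relation in GRAPH.get(query_concept, []):
        if related in doc_concepts:
            best = max(best, RELATION_WEIGHTS[relation])
    return best

def relevance(query_concepts, doc_concepts):
    """Average per-concept weight over the query."""
    if not query_concepts:
        return 0.0
    total = sum(concept_match_weight(q, doc_concepts) for q in query_concepts)
    return total / len(query_concepts)

trial = {"lung cancer", "EGFR"}
print(relevance(["NSCLC", "EGFR mutation"], trial))  # (0.6 + 0.4) / 2 = 0.5
```

The design point the abstract makes is that such weights let a retrieval model reward semantically related but non-identical concepts, instead of relying on exact term overlap alone.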
25. Big data bioinformatics discoveries: Machine learning approaches, tools, and perspectives.
- Author
-
Nenchovski, Boris Atanasov and Ivanova, Desislava
- Subjects
- *
MACHINE learning , *BIG data , *BIOINFORMATICS , *BIOINFORMATICS software , *DATA analysis - Abstract
This paper examines the analysis of the different data types within the field of bioinformatics. The aim is to study and handpick machine learning algorithms that are best suited for handling large datasets and data compendia with their specific features. The paper proposes a specific, informed approach for analyzing each bioinformatics data type, combining and comparing existing knowledge on the use of machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Selected papers from the 15th and 16th international conference on Computational Intelligence Methods for Bioinformatics and Biostatistics.
- Author
-
Cazzaniga, Paolo, Raposo, Maria, Besozzi, Daniela, Merelli, Ivan, Staiano, Antonino, Ciaramella, Angelo, Rizzo, Riccardo, and Manzoni, Luca
- Subjects
- *
COMPUTATIONAL intelligence , *CONFERENCES & conventions , *MEDICAL informatics , *BIOINFORMATICS , *BIOMETRY , *BIOENGINEERING - Abstract
Computational intelligence methods for bioinformatics and biostatistics: 15th International Meeting, CIBB 2018, Caparica, Portugal, September 6-8, 2018, Revised Selected Papers. CIBB 2012 was organized in Houston (TX), then in Nice (France) in 2013, Cambridge (UK) in 2014, Naples (Italy) in 2015, Stirling (UK) in 2016, Cagliari (Italy) in 2017, Lisbon (Portugal) in 2018, and Bergamo (Italy) in 2019. This supplement contains seven revised and extended papers selected from CIBB 2018 and CIBB 2019, the 15th and 16th editions of the international conference on Computational Intelligence Methods for Bioinformatics and Biostatistics. [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
27. The top 100 papers.
- Author
-
Van Noorden, Richard, Maher, Brendan, and Nuzzo, Regina
- Subjects
- *
BIBLIOGRAPHICAL citations , *BIBLIOGRAPHICAL citation research , *BIOINFORMATICS , *PHYLOGENY , *RESEARCH , *CHARTS, diagrams, etc. - Abstract
The article discusses the most-cited research papers in history. Topics include the subject areas and citation counts that have allowed papers to reach the top of the most-cited list, such as bioinformatics, phylogenetics, and statistics; a chart showing the top 10 most-cited papers; and the first systematic effort to track citations, the Science Citation Index (SCI).
- Published
- 2014
- Full Text
- View/download PDF
28. TFTF: An R-Based Integrative Tool for Decoding Human Transcription Factor–Target Interactions.
- Author
-
Wang, Jin
- Subjects
- *
GENE regulatory networks , *GENE expression , *GENETIC regulation , *REGULATOR genes , *PHENOTYPES - Abstract
Transcription factors (TFs) are crucial in modulating gene expression and sculpting cellular and organismal phenotypes. The identification of TF–target gene interactions is pivotal for comprehending molecular pathways and disease etiologies but has been hindered by the demanding nature of traditional experimental approaches. This paper introduces a novel web application and package utilizing the R program, which predicts TF–target gene relationships and vice versa. Our application integrates the predictive power of various bioinformatic tools, leveraging their combined strengths to provide robust predictions. It merges databases for enhanced precision, incorporates gene expression correlation for accuracy, and employs pan-tissue correlation analysis for context-specific insights. The application also enables the integration of user data with established resources to analyze TF–target gene networks. Despite its current limitation to human data, it provides a platform to explore gene regulatory mechanisms comprehensively. This integrated, systematic approach offers researchers an invaluable tool for dissecting the complexities of gene regulation, with the potential for future expansions to include a broader range of species. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Integration between Bioinformatics Algorithms and Neutrosophic Theory.
- Author
-
Farag, Romany M., Shams, Mahmoud Y., Aldawody, Dalia A., Khalid, Huda E., El-Bakry, Hazem M., and Salama, Ahmed A.
- Subjects
- *
COMPUTATIONAL biology , *ARTIFICIAL intelligence , *BIOINFORMATICS , *DATA mining , *DATABASES , *NUCLEIC acids , *BIOINFORMATICS software , *SYNTHETIC biology - Abstract
This paper presents a neutrosophic inference model for bioinformatics. The model is used to develop a system for accurate comparisons of human nucleic acids, where a new nucleic acid is compared to a database of previously characterized nucleic acids. The comparisons are analyzed in terms of accuracy, certainty, uncertainty, neutrality, and bias. The proposed system achieves good results and provides a reliable standard for future comparisons. It highlights the potential of neutrosophic inference models in bioinformatics applications. Data mining and bioinformatics play a crucial role in computational biology, with applications in scientific research and industrial development. Biological analysts rely on specialized tools and algorithms to collect, store, categorize, and analyze large volumes of unstructured data. Data mining techniques are used to extract valuable information from these data, aiding in the development of new therapies and the understanding of genetic relationships between organisms. Recent advancements in bioinformatics include gene expression tools, biosequencing, and biological databases, which facilitate the extraction and analysis of vital biological information. These technologies contribute to the analysis of big data, the identification of key bioinformatics insights, and the generation of new biological knowledge. Data collection, analysis, and interpretation in this field involve the use of modern technologies such as cloud computing, machine learning, and artificial intelligence, enabling more efficient and accurate results. Ultimately, data mining and bioinformatics enhance our understanding of genetic relationships, aid in developing new therapies, and improve healthcare outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
30. A guide to single‐cell RNA sequencing analysis using web‐based tools for non‐bioinformatician.
- Author
-
Yarlagadda, Sagnik and Giorgio, Todd D.
- Subjects
- *
TRAINING of scientists , *USER interfaces , *NON-coding RNA , *NUCLEOTIDE sequencing , *RESEARCH personnel - Abstract
Single-cell RNA sequencing (scRNA-seq) is a technique that has proven to be a powerful tool for a wide range of fields and research studies. However, scRNA-seq data analysis has been dominated by scientists highly trained in bioinformatics or those with extensive computational experience and understanding. Recently, this trend has begun to shift as more user-friendly web-based scRNA-seq analysis tools have been developed that require little computational experience to use. However, barriers persist for non-bioinformaticians in using this technique, among them complex, unfamiliar language and the scarcity of comprehensive literature providing a framework for understanding scRNA-seq analysis outputs. This work introduces many popular web-based tools for scRNA-seq and provides a general overview of their user interfaces and features. Then, a comprehensive start-to-finish introductory scRNA-seq analysis pipeline is described in detail, which aims to enable researchers to carry out scRNA-seq analysis regardless of computational experience. Companion video tutorials can be found at "EasyScRNAseqTutorials" on YouTube (https://www.youtube.com/@scrnaseqtutorials). However, as scRNA-seq continues to penetrate new fields and expand in importance, there remains a need for more literature to help overcome barriers to its use by explaining further the highly complex and advanced analyses that are introduced within this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Malignant Melanoma: An Overview, New Perspectives, and Vitamin D Signaling.
- Author
-
Slominski, Radomir M., Kim, Tae-Kang, Janjetovic, Zorica, Brożyna, Anna A., Podgorska, Ewa, Dixon, Katie M., Mason, Rebecca S., Tuckey, Robert C., Sharma, Rahul, Crossman, David K., Elmets, Craig, Raman, Chander, Jetten, Anton M., Indra, Arup K., and Slominski, Andrzej T.
- Subjects
- *
MELANOMA treatment , *VITAMIN D metabolism , *RISK assessment , *MELANOMA , *ATTITUDES toward illness , *CANCER invasiveness , *MELANOGENESIS , *ARTIFICIAL intelligence , *CELLULAR signal transduction , *BIOINFORMATICS , *METASTASIS , *METABOLITES , *MOLECULAR biology , *CELL receptors , *DISEASE risk factors - Abstract
Simple Summary: Despite recent advances in diagnosis and therapy, malignant melanoma poses a significant problem both to clinicians and cancer researchers due to its resistance to therapy and unpredictable behavior. In this review, we discuss etiology, risk factors, diagnosis, prognosis, and therapy of melanoma, with a focus on new developments in these areas including bioinformatics. These are analyzed in the context of its unique metabolic characteristics and of recent advances in vitamin D biology with implications for melanoma. Active forms of vitamin D can prevent or inhibit melanoma development and progression, and can be used in therapy of this disease. Knowledge of patient vitamin D status and vitamin D signaling in the tumoral tissue can help in predicting the progression of the disease and in primary or adjuvant therapy. Therefore, vitamin D signaling represents a realistic target for the prevention or therapy of malignant melanoma. Melanoma, originating through malignant transformation of melanin-producing melanocytes, is a formidable malignancy, characterized by local invasiveness, recurrence, early metastasis, resistance to therapy, and a high mortality rate. This review discusses etiologic and risk factors for melanoma, diagnostic and prognostic tools, including recent advances in molecular biology, omics, and bioinformatics, and provides an overview of its therapy. Since the incidence of melanoma is rising and mortality remains unacceptably high, we discuss its inherent properties, including melanogenesis, that make this disease resilient to treatment and propose to use AI to solve the above complex and multidimensional problems. We provide an overview on vitamin D and its anticancerogenic properties, and report recent advances in this field that can provide solutions for the prevention and/or therapy of melanoma. 
Experimental papers and clinicopathological studies on the role of vitamin D status and signaling pathways initiated by its active metabolites in melanoma prognosis and therapy are reviewed. We conclude that vitamin D signaling, defined by specific nuclear receptors and selective activation by specific vitamin D hydroxyderivatives, can provide a benefit for new or existing therapeutic approaches. We propose to target vitamin D signaling with the use of computational biology and AI tools to provide a solution to the melanoma problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Assessing opportunities of SYCL for biological sequence alignment on GPU-based systems.
- Author
-
Costanzo, Manuel, Rucci, Enzo, García-Sanchez, Carlos, Naiouf, Marcelo, and Prieto-Matías, Manuel
- Subjects
- *
SEQUENCE alignment , *COMPUTATIONAL biology , *PROGRAMMING languages , *BIOINFORMATICS , *C++ , *BIOINFORMATICS software - Abstract
Bioinformatics and computational biology are two fields that have been exploiting GPUs for more than two decades, with CUDA being the most widely used programming language for them. However, as CUDA is an NVIDIA proprietary language, it imposes a strong portability restriction with respect to a wide range of heterogeneous architectures, such as AMD or Intel GPUs. To address this issue, the Khronos Group has recently proposed the SYCL standard, an open, royalty-free, cross-platform abstraction layer that enables a heterogeneous system to be programmed using standard, single-source C++ code. Over the past few years, several implementations of the SYCL standard have emerged, oneAPI being the one from Intel. This paper presents the migration process of the SW# suite, a biological sequence alignment tool developed in CUDA, to SYCL using Intel's oneAPI ecosystem. The experimental results show that SW# was completely migrated with only minor programmer intervention in terms of hand-coding. In addition, it was possible to port the migrated code between different architectures (considering multiple vendor GPUs and also CPUs), with no noticeable performance degradation on five different NVIDIA GPUs. Moreover, performance remained stable when switching to another SYCL implementation. As a consequence, SYCL and its implementations can offer attractive opportunities for the bioinformatics community, especially considering the vast number of CUDA-based legacy codes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Computational metadata generation methods for biological specimen image collections.
- Author
-
Karnani, Kevin, Pepper, Joel, Bakiş, Yasin, Wang, Xiaojun, Bart Jr., Henry, Breen, David E., and Greenberg, Jane
- Subjects
- *
BIOLOGICAL specimens , *METADATA , *CONVEX surfaces , *LABOR costs , *ERROR rates ,ILLINOIS state history - Abstract
Metadata is a key data source for researchers seeking to apply machine learning (ML) to the vast collections of digitized biological specimens that can be found online. Unfortunately, the associated metadata is often sparse and, at times, erroneous. This paper extends previous research conducted with the Illinois Natural History Survey (INHS) collection (7244 specimen images) that uses computational approaches to analyze image quality, and then automatically generates 22 metadata properties representing the image quality and morphological features of the specimens. In the research reported here, we demonstrate the extension of our initial work to University of the Wisconsin Zoological Museum (UWZM) collection (4155 specimen images). Further, we enhance our computational methods in four ways: (1) augmenting the training set, (2) applying contrast enhancement, (3) upscaling small objects, and (4) refining our processing logic. Together these new methods improved our overall error rates from 4.6 to 1.1%. These enhancements also allowed us to compute an additional set of 17 image-based metadata properties. The new metadata properties provide supplemental features and information that may also be used to analyze and classify the fish specimens. Examples of these new features include convex area, eccentricity, perimeter, skew, etc. The newly refined process further outperforms humans in terms of time and labor cost, as well as accuracy, providing a novel solution for leveraging digitized specimens with ML. This research demonstrates the ability of computational methods to enhance the digital library services associated with the tens of thousands of digitized specimens stored in open-access repositories world-wide by generating accurate and valuable metadata for those repositories. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
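The metadata-generation entry above lists eccentricity among the image-based properties computed for each specimen. As a pure-Python illustration of how one such property can be derived from a specimen's pixel coordinates, the sketch below computes eccentricity from second-order central moments; it is a hedged stand-in for the authors' pipeline, not their code.

```python
# Hedged sketch: eccentricity of the best-fit ellipse for a pixel cloud,
# one of the morphological metadata properties named in the abstract.
import math

def eccentricity(points):
    """0 for a circularly symmetric blob, approaching 1 for elongated shapes."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Second-order central moments of the point cloud.
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    # Eigenvalues of the covariance matrix give the ellipse axis lengths.
    common = math.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam_max = (mu20 + mu02) / 2 + common
    lam_min = (mu20 + mu02) / 2 - common
    return math.sqrt(1 - lam_min / lam_max)

square = [(x, y) for x in range(3) for y in range(3)]  # symmetric blob
line = [(x, 0) for x in range(5)]                      # maximally elongated
print(round(eccentricity(square), 3), round(eccentricity(line), 3))  # 0.0 1.0
```

Other properties mentioned in the abstract (convex area, perimeter, skew) follow the same pattern: each is a deterministic function of the segmented pixel set, which is what makes fully automated metadata generation feasible.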
34. BERT-TFBS: a novel BERT-based model for predicting transcription factor binding sites by transfer learning.
- Author
-
Wang, Kai, Zeng, Xuan, Zhou, Jingwen, Liu, Fei, Luan, Xiaoli, and Wang, Xinglong
- Subjects
- *
TRANSCRIPTION factors , *LANGUAGE models , *CONVOLUTIONAL neural networks , *BINDING sites , *DEEP learning , *GENETIC transcription - Abstract
Transcription factors (TFs) are proteins essential for regulating genetic transcriptions by binding to transcription factor binding sites (TFBSs) in DNA sequences. Accurate predictions of TFBSs can contribute to the design and construction of metabolic regulatory systems based on TFs. Although various deep-learning algorithms have been developed for predicting TFBSs, the prediction performance needs to be improved. This paper proposes a bidirectional encoder representations from transformers (BERT)-based model, called BERT-TFBS, to predict TFBSs solely based on DNA sequences. The model consists of a pre-trained BERT module (DNABERT-2), a convolutional neural network (CNN) module, a convolutional block attention module (CBAM) and an output module. The BERT-TFBS model utilizes the pre-trained DNABERT-2 module to acquire the complex long-term dependencies in DNA sequences through a transfer learning approach, and applies the CNN module and the CBAM to extract high-order local features. The proposed model is trained and tested based on 165 ENCODE ChIP-seq datasets. We conducted experiments with model variants, cross-cell-line validations and comparisons with other models. The experimental results demonstrate the effectiveness and generalization capability of BERT-TFBS in predicting TFBSs, and they show that the proposed model outperforms other deep-learning models. The source code for BERT-TFBS is available at https://github.com/ZX1998-12/BERT-TFBS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
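The BERT-TFBS entry above predicts binding sites solely from DNA sequences, with a CNN module extracting local features. A common first step for any such sequence model is encoding a DNA window as a 4-channel matrix; the minimal sketch below shows that standard one-hot encoding. The published model's own input handling (DNABERT-2 tokenisation, CBAM, training on ENCODE ChIP-seq data) is out of scope here.

```python
# Standard one-hot encoding of DNA for convolutional sequence models.
CHANNELS = "ACGT"

def one_hot(seq):
    """Map a DNA string to a list of 4-element rows (A, C, G, T channels)."""
    return [[1 if base == ch else 0 for ch in CHANNELS] for base in seq.upper()]

encoded = one_hot("ACGT")
# Each row marks exactly one channel; a length-L window becomes an L x 4 matrix
# that a CNN can scan for binding-site motifs.
```

A base outside A/C/G/T (e.g. the ambiguity code N) yields an all-zero row under this scheme; real pipelines choose explicitly how to handle such positions.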
35. Overdominance in livestock breeding: examples and current status.
- Author
-
Nam Bui, Anh Phu, Hoang Tam, Trinh Lam, Phuong, Pham Thi, and Linh, Nguyen Thuy
- Subjects
- *
LIVESTOCK breeding , *GENETIC variation , *GENETIC polymorphisms , *BIOINFORMATICS , *GENETIC carriers - Abstract
Recent data have revealed that genetic variation can be attributed to overdominance, or heterozygote advantage. However, genomic surveys have shown that only a small number of genes have polymorphisms maintained by overdominance, which is consistent with many published papers. Google Web, Google Scholar, NCBI databases and OMIC Tools were used to obtain data for this review paper. Different key words, such as "overdominance" and "overdominance in animals", were used to retrieve the required research articles and bioinformatics-based information. Research papers used for this review were published over the last 10 to 15 years, and information regarding overdominance in livestock was considered for the current review. It is hoped that in the future, more loci with overdominance will be discovered. In this review, we illustrate eight examples of overdominance in livestock. We also want to emphasize that the low number of reported cases of overdominance does not reflect the unimportance of heterozygote advantage in adaptive functions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
36. The papers presented at 7th Young Scientists School "Systems Biology and Bioinformatics" (SBB'15): Introductory Note.
- Author
-
Baranova, Ancha V. and Orlov, Yuriy L.
- Subjects
- *
ANNUAL meetings , *SCIENTISTS , *SYSTEMS biology , *BIOINFORMATICS , *CONFERENCES & conventions - Abstract
Information about several papers discussed in the 7th International Young Scientists School "Systems Biology and Bioinformatics" held in Novosibirsk, Russia on June 22-25, 2015, is presented. Topics include next-generation sequencing (NGS), evolutionary bioinformatics, and systems biology and gene network modeling. The conference featured several research papers, including those from A. I. Klimenko, A. V. Bryanskaya, and T. M. Khlebodarova.
- Published
- 2016
- Full Text
- View/download PDF
37. Introduction to the selected papers from the 7th International Conference on Bioinformatics and Computational Biology (BICoB 2015).
- Author
-
Saeed, Fahad, Haspel, Nurit, and Al-Mubaid, Hisham
- Subjects
- *
BIOINFORMATICS , *COMPUTATIONAL biology - Published
- 2016
- Full Text
- View/download PDF
38. The deep learning applications in IoT-based bio- and medical informatics: a systematic literature review.
- Author
-
Amiri, Zahra, Heidari, Arash, Navimipour, Nima Jafari, Esmaeilpour, Mansour, and Yazdani, Yalda
- Subjects
- *
MEDICAL informatics , *DEEP learning , *CONVOLUTIONAL neural networks , *CLINICAL decision support systems , *GENERATIVE adversarial networks , *RECURRENT neural networks , *DRUG discovery - Abstract
Nowadays, machine learning (ML) has attained a high level of achievement in many contexts. Considering the significance of ML in medical and bioinformatics applications owing to its accuracy, many investigators have discussed solutions for addressing medical and bioinformatics challenges using deep learning (DL) techniques. The importance of DL in Internet of Things (IoT)-based bio- and medical informatics lies in its ability to analyze and interpret large amounts of complex and diverse data in real time, providing insights that can improve healthcare outcomes and increase efficiency in the healthcare industry. Applications of DL in IoT-based bio- and medical informatics include diagnosis, treatment recommendation, clinical decision support, image analysis, wearable monitoring, and drug discovery. This review aims to comprehensively evaluate and synthesize the existing body of literature on applying deep learning at the intersection of the IoT with bio- and medical informatics. In this paper, we categorized the most cutting-edge DL solutions for medical and bioinformatics issues into five categories based on the DL technique utilized: convolutional neural network, recurrent neural network, generative adversarial network, multilayer perceptron, and hybrid methods. A systematic literature review was applied to study each one in terms of effective properties, like the main idea, benefits, drawbacks, methods, simulation environment, and datasets. After that, cutting-edge research on DL approaches and applications for bioinformatics concerns was emphasized. In addition, several challenges affecting DL implementation for medical and bioinformatics applications have been addressed, which are expected to motivate further studies that progressively develop medical and bioinformatics research. According to the findings, most articles are evaluated using features like accuracy, sensitivity, specificity, F-score, latency, adaptability, and scalability. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Cell copper death: a new field of tumor prevention and treatment.
- Author
-
ZHANG Xiao-jing, HU Lin-xia, BU Qian, and SUN Dong-lei
- Subjects
- *
TUMOR treatment , *CELL death , *CELL death inhibition , *CANCER cell proliferation , *REGULATOR genes , *BIOINFORMATICS - Abstract
Objective Tumor is a major public health problem harmful to people's health. Cell copper death provides a new idea for the basic research of tumors. This paper reviews the latest progress of cell copper death in the field of tumor prevention and treatment. Methods The literature related to copper death in tumor prevention and treatment was summarized, and the research status of copper death-related genes in different types of tumors was reviewed. Results Ferredoxin 1 (FDX1), lipoic acid synthase (LIAS), dihydrolipoamide S-acetyltransferase (DLAT), and dihydrolipoamide S-succinyltransferase (DLST) were the key regulatory genes of copper death, which were abnormally expressed in lung cancer, hepatocellular carcinoma, and breast cancer. Among them, low expression of FDX1 in tumor patients was related to poor prognosis of the disease, while the overall survival time of cancer patients with high expression of the DLAT and DLST genes was decreased. In addition, targeting copper death proteins and promoting cell copper death inhibited the proliferation of cancer cells in vivo and in vitro. Conclusion Several genes related to copper death, such as FDX1, LIAS, DLAT, and DLST, are abnormally expressed in cancer patients and are related to the occurrence, development, metastasis, and prognosis of tumors. Targeted promotion of cell copper death and inhibition of cancer cell proliferation provide a new direction for future basic research in the field of tumors, which has public health significance for tumor prevention and treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. PB-LKS: a python package for predicting phage–bacteria interaction through local K-mer strategy.
- Author
-
Qiu, Jingxuan, Nie, Wanchun, Ding, Hao, Dai, Jia, Wei, Yiwen, Li, Dezhi, Zhang, Yuxi, Xie, Junting, Tian, Xinxin, Wu, Nannan, and Qiu, Tianyi
- Subjects
- *
BACTERIOPHAGES , *DRUG resistance in bacteria , *GENETIC variation , *BACTERIAL diseases , *FORECASTING - Abstract
Bacteriophages can help treat bacterial infections yet require in-silico models to deal with the great genetic diversity between phages and bacteria. Despite tolerable prediction performance, the application scope of current approaches is limited to prediction at the species level; they cannot accurately predict the relationships of phages across strain mutants. This has hindered the development of phage therapeutics based on the prediction of phage–bacteria relationships. In this paper, we present PB-LKS, which predicts phage–bacteria interactions based on a local K-mer strategy with higher performance and wider applicability. The utility of PB-LKS is rigorously validated through (i) large-scale historical screening, (ii) a case study at the class level and (iii) in vitro simulation of bacterial antiphage resistance at the strain-mutant level. The PB-LKS approach outperforms the current state-of-the-art methods and illustrates potential clinical utility in pre-optimized phage therapy design. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
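The PB-LKS abstract above rests on comparing K-mer composition between phage and bacterial sequences. As a rough illustration only (the package's actual local K-mer features and scoring are not described in the abstract, so the profile and distance functions below are hypothetical stand-ins), a K-mer frequency comparison can be sketched as:

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector over a fixed ACGT k-mer order."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]

def profile_distance(p1, p2):
    """Euclidean distance between two k-mer profiles; a smaller value
    suggests higher compositional similarity (a crude proxy, not the
    PB-LKS score itself)."""
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5

# Toy sequences standing in for a phage and a candidate host genome.
phage = "ATGCGTACGTTAGCATGCGT" * 5
host = "ATGCGTACGATAGCATGCGA" * 5
d = profile_distance(kmer_profile(phage), kmer_profile(host))
```

A real predictor would compute such profiles over local windows and feed them to a trained classifier rather than thresholding a single distance.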
41. Computational Tools to Assist in Analyzing Effects of the SERPINA1 Gene Variation on Alpha-1 Antitrypsin (AAT).
- Author
-
Mróz, Jakub, Pelc, Magdalena, Mitusińska, Karolina, Chorostowska-Wynimko, Joanna, and Jezela-Stanek, Aleksandra
- Subjects
- *
TRYPSIN inhibitors , *SINGLE nucleotide polymorphisms , *COMPUTATIONAL neuroscience , *GENETIC variation , *PROTEIN structure , *INDIVIDUALIZED medicine - Abstract
In the rapidly advancing field of bioinformatics, the development and application of computational tools to predict the effects of single nucleotide variants (SNVs) are shedding light on the molecular mechanisms underlying disorders. Also, they hold promise for guiding therapeutic interventions and personalized medicine strategies in the future. A comprehensive understanding of the impact of SNVs in the SERPINA1 gene on alpha-1 antitrypsin (AAT) protein structure and function requires integrating bioinformatic approaches. Here, we provide a guide for clinicians to navigate through the field of computational analyses which can be applied to describe a novel genetic variant. Predicting the clinical significance of SERPINA1 variation allows clinicians to tailor treatment options for individuals with alpha-1 antitrypsin deficiency (AATD) and related conditions, ultimately improving the patient's outcome and quality of life. This paper explores the various bioinformatic methodologies and cutting-edge approaches dedicated to the assessment of molecular variants of genes and their product proteins using SERPINA1 and AAT as an example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Exploring the role of ubiquitin regulatory X domain family proteins in cancers: bioinformatics insights, mechanisms, and implications for therapy.
- Author
-
Yang, Enyu, Fan, Xiaowei, Ye, Haihan, Sun, Xiaoyang, Ji, Qing, Ding, Qianyun, Zhong, Shulian, Zhao, Shuo, Xuan, Cheng, Fang, Meiyu, Ding, Xianfeng, and Cao, Jun
- Subjects
- *
UBIQUITIN , *PROTEIN domains , *BIOINFORMATICS , *MULTIOMICS , *TUMOR microenvironment - Abstract
The UBXD family (UBXDF), a group of proteins containing ubiquitin regulatory X (UBX) domains, plays a crucial role in the imbalance between proliferation and apoptosis in cancer. In this study, we summarised bioinformatics evidence from multi-omics databases and the literature on UBXDF's effects on cancer. Bioinformatics analysis revealed that Fas-associated factor 1 (FAF1) has the largest number of gene alterations in the UBXD family and has been linked to survival and cancer progression in many cancers. UBXDF may affect the tumour microenvironment (TME) and drug therapy and should be investigated in the future. We also summarised the experimental evidence on the mechanisms of UBXDF in cancer, both in vitro and in vivo, as well as its application in clinical and targeted drugs. We compared bioinformatics and literature evidence to provide a multi-omics insight into UBXDF in cancers, review the proof and mechanisms of UBXDF's effects on cancers, and outline future research directions in depth. We hope that this paper will be helpful for directing cancer-related UBXDF studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. A Review: Multi-Omics Approach to Studying the Association between Ionizing Radiation Effects on Biological Aging.
- Author
-
Ruprecht, Nathan A., Singhal, Sonalika, Schaefer, Kalli, Panda, Om, Sens, Donald, and Singhal, Sandeep K.
- Subjects
- *
PHYSIOLOGICAL effects of radiation , *IONIZING radiation , *MULTIOMICS , *DOUBLE-strand DNA breaks , *LITERATURE reviews , *COMPUTATIONAL biology , *OLD age - Abstract
Simple Summary: The effects of radiation exposure seem closely related to effects of old age—so much so that the idea of a radiation–age association came about in the 1960s. While not a new idea, modern technology is allowing us to revisit these ideas and explore them with a fresh perspective. Separately, there are gaps in the community's understanding of the effects of radiation and aging, such as with respect to low-level, long-term effects of radiation and estimating someone's biological age. To study their association, a number of tools exist that need to be efficiently integrated to study this complex and interdisciplinary field. This article includes an extensive literature review on the theory of these two topics, providing a detailed foundation for a current understanding. We then present a resource-agnostic approach for researchers in these areas, focusing on studying the association between the two. Primary points of interest are focused on indirect damage of radiation exposure via oxidative stress within a cell, a comprehensive table of functional estimators for biological age, and using modern computational tools and biology to overlap fields of study to develop and exploit a rad–age association. Multi-omics studies have emerged as powerful tools for tailoring individualized responses to various conditions, capitalizing on genome sequencing technologies' increasing affordability and efficiency. This paper delves into the potential of multi-omics in deepening our understanding of biological age, examining the techniques available in light of evolving technology and computational models. The primary objective is to review the relationship between ionizing radiation and biological age, exploring a wide array of functional, physiological, and psychological parameters. This comprehensive review draws upon an extensive range of sources, including peer-reviewed journal articles, government documents, and reputable websites. 
The literature review spans from fundamental insights into radiation effects to the latest developments in aging research. Ionizing radiation exerts its influence through direct mechanisms, notably single- and double-strand DNA breaks and cross links, along with other critical cellular events. The cumulative impact of DNA damage forms the foundation for the intricate process of natural aging, intersecting with numerous diseases and pivotal biomarkers. Furthermore, there is a resurgence of interest in ionizing radiation research from various organizations and countries, reinvigorating its importance as a key contributor to the study of biological age. Biological age serves as a vital reference point for the monitoring and mitigation of the effects of various stressors, including ionizing radiation. Ionizing radiation emerges as a potent candidate for modeling the separation of biological age from chronological age, offering a promising avenue for tailoring protocols across diverse fields, including the rigorous demands of space exploration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Selected Papers from the Asia Pacific Bioinformatics Conference 2021 (APBC2021) Edited by H. Sunny Sun, Ping-Chiang Lyu, Chao A. Hsiung, Shaw-Jenq Tsai and Yi-Ping Phoebe Chen.
- Subjects
- *
BIOINFORMATICS , *CONFERENCES & conventions , *EDITING - Published
- 2021
- Full Text
- View/download PDF
46. Computational Optimization of Irradiance and Fluence for Interstitial Photodynamic Therapy Treatment of Patients with Malignant Central Airway Obstruction.
- Author
-
Oakley, Emily, Parilov, Evgueni, Beeson, Karl, Potasek, Mary, Ivanick, Nathaniel, Tworek, Lawrence, Hutson, Alan, and Shafirstein, Gal
- Subjects
- *
TREATMENT of respiratory obstructions , *FINITE element method , *CONFIDENCE intervals , *PHOTODYNAMIC therapy , *PATIENT-centered care , *SIMULATION methods in education , *BIOINFORMATICS , *LIGHT , *DESCRIPTIVE statistics , *RESEARCH funding - Abstract
Simple Summary: There are no effective treatments for patients with cancers that induce airway narrowing via extrinsic pressure to the bronchus (i.e., extrinsic malignant central airway obstruction—MCAO). The effects of these cancerous tumors must be quickly alleviated to allow normal breathing and delay disease progression. Currently, stents are used to keep the airway open, but stents do not halt the progression of the cancerous tumor that can crush the stent. We have shown that interstitial photodynamic therapy (I-PDT) can be a safe and beneficial treatment option for patients with extrinsic MCAO. Image-based pre-treatment planning is critical for patient safety and tumor response in I-PDT. Herein, we present and validate novel image-based computer optimization methods for guiding light administration in I-PDT of extrinsic MCAO, based on a rate-based light dose metric. We demonstrate the benefit of our approach in data from representative patients with extrinsic MCAO who were treated with I-PDT. There are no effective treatments for patients with extrinsic malignant central airway obstruction (MCAO). In a recent clinical study, we demonstrated that interstitial photodynamic therapy (I-PDT) is a safe and potentially effective treatment for patients with extrinsic MCAO. In previous preclinical studies, we reported that a minimum light irradiance and fluence should be maintained within a significant volume of the target tumor to obtain an effective PDT response. In this paper, we present a computational approach to personalized treatment planning of light delivery in I-PDT that simultaneously optimizes the delivered irradiance and fluence using finite element method (FEM) solvers of either Comsol Multiphysics® or Dosie™ for light propagation. The FEM simulations were validated with light dosimetry measurements in a solid phantom with tissue-like optical properties. 
The agreement between the treatment plans generated by the two FEMs was tested using typical imaging data from four patients with extrinsic MCAO treated with I-PDT. The concordance correlation coefficient (CCC) and its 95% confidence interval (95% CI) were used to test the agreement between the simulation results and measurements, and between the two FEMs' treatment plans. Dosie with CCC = 0.994 (95% CI, 0.953–0.996) and Comsol with CCC = 0.999 (95% CI, 0.985–0.999) showed excellent agreement with light measurements in the phantom. The CCC analysis showed very good agreement between the Comsol and Dosie treatment plans for irradiance (95% CI, CCC: 0.996–0.999) and fluence (95% CI, CCC: 0.916–0.987) when using the patients' data. In previous preclinical work, we demonstrated that effective I-PDT is associated with a computed light dose of ≥45 J/cm2 when the irradiance is ≥8.6 mW/cm2 (i.e., the effective rate-based light dose). In this paper, we show how to use the Comsol and Dosie packages to optimize the rate-based light dose, and we present Dosie's newly developed domination sub-maps method to improve the planning of the delivery of the effective rate-based light dose. We conclude that image-based treatment planning using the Comsol or Dosie FEM solvers is a valid approach to guide the light dosimetry in I-PDT of patients with MCAO. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
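The abstract above reports agreement between simulations and measurements as Lin's concordance correlation coefficient (CCC), which combines correlation with a penalty for mean and variance shifts. A minimal sketch of the standard formula, CCC = 2·s_xy / (s_x² + s_y² + (μ_x − μ_y)²), is:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two equal-length
    sequences of measurements; 1.0 means perfect concordance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n      # biased variance of x
    sy = sum((v - my) ** 2 for v in y) / n      # biased variance of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# A constant offset lowers CCC even when correlation is perfect.
r = ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The confidence intervals quoted in the abstract would come from a separate variance estimate (e.g. Fisher's z-transformation), which is omitted here.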
47. Editorial from the new Editor-in-Chief: Artificial Intelligence in Medicine and the forthcoming challenges.
- Author
-
Combi, Carlo
- Subjects
- *
ARTIFICIAL intelligence , *RESEARCH papers (Students) , *EDITORIAL boards , *MEDICINE , *MEDICAL research , *NEWSLETTERS , *PUBLISHING , *BIOINFORMATICS - Published
- 2017
- Full Text
- View/download PDF
48. Integrated Diagnostics of Thyroid Nodules.
- Author
-
Giovanella, Luca, Campennì, Alfredo, Tuncel, Murat, and Petranović Ovčariček, Petra
- Subjects
- *
THYROID gland radiography , *THYROTROPIN , *MOLECULAR diagnosis , *PREDICTIVE tests , *THYROID gland tumors , *INTEGRATIVE medicine , *UNNECESSARY surgery , *ARTIFICIAL intelligence , *DIAGNOSTIC imaging , *BIOINFORMATICS , *RADIONUCLIDE imaging , *CYTOLOGY , *RISK management in business , *NUCLEAR medicine , *THYROID gland , *NEEDLE biopsy , *IODINE - Abstract
Simple Summary: Thyroid nodules are commonly detected in daily clinical practice, and their diagnosis and therapy usually involve different specialists and various diagnostic and therapeutic methods. Thyroid nodule management requires the integration of laboratory, imaging, and pathology examinations to achieve a proper diagnosis. It enables the elimination of unnecessary therapeutic procedures in many individuals and the timely identification of patients who require specific therapies. Furthermore, bioinformatics may change the current management of clinical data, enabling more personalized diagnostic approaches for patients with thyroid nodules. The clinical impact of artificial intelligence needs to be determined in further large-sample studies, especially in indeterminate cytology findings that require "diagnostic surgery" to provide a definitive diagnosis. Thyroid nodules are common findings, particularly in iodine-deficient regions. Our paper aims to review the different diagnostic tools available in clinical thyroidology and propose their rational integration. We will elaborate on the pros and cons of thyroid ultrasound (US) and its scoring systems, thyroid scintigraphy, fine-needle aspiration cytology (FNAC), molecular imaging, and artificial intelligence (AI). Ultrasonographic scoring systems can help differentiate between benign and malignant nodules. Depending on the constellation or number of suspicious ultrasound features, a FNAC is recommended. However, hyperfunctioning thyroid nodules are presumed to exclude malignancy with a very high negative predictive value (NPV). Particularly in regions where iodine supply is low, most hyperfunctioning thyroid nodules are seen in patients with normal thyroid-stimulating hormone (TSH) levels. Thyroid scintigraphy is essential for the detection of these nodules. 
Among non-toxic thyroid nodules, a careful application of US risk stratification systems is pivotal to exclude inappropriate FNAC and guide the procedure on suspicious ones. However, almost one-third of cytology examinations are rendered as indeterminate, requiring "diagnostic surgery" to provide a definitive diagnosis. 99mTc-methoxy-isobutyl-isonitrile ([99mTc]Tc-MIBI) and [18F]fluoro-deoxy-glucose ([18F]FDG) molecular imaging can spare those patients from unnecessary surgeries. The clinical value of AI in the evaluation of thyroid nodules needs to be determined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Fractal feature selection model for enhancing high-dimensional biological problems.
- Author
-
Alsaeedi, Ali Hakem, Al-Mahmood, Haider Hameed R., Alnaseri, Zainab Fahad, Aziz, Mohammad R., Al-Shammary, Dhiah, Ibaida, Ayman, and Ahmed, Khandakar
- Subjects
- *
FEATURE selection , *ARTIFICIAL intelligence , *MACHINE learning , *STANDARD deviations , *COMPUTER science - Abstract
The integration of biology, computer science, and statistics has given rise to the interdisciplinary field of bioinformatics, which aims to decode biological intricacies. It produces extensive and diverse features, presenting an enormous challenge in classifying bioinformatics problems. Therefore, an intelligent bioinformatics classification system must select the most relevant features to enhance machine learning performance. This paper proposes a feature selection model based on the fractal concept to improve the performance of intelligent systems in classifying high-dimensional biological problems. The proposed fractal feature selection (FFS) model divides features into blocks, measures the similarity between blocks using root mean square error (RMSE), and determines the importance of features based on low RMSE. The proposed FFS is tested and evaluated on ten high-dimensional bioinformatics datasets. The experimental results showed that the model significantly improved machine learning accuracy. The average accuracy rate was 79% with full features in machine learning algorithms, while FFS delivered promising results with an accuracy rate of 94%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
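The FFS abstract above describes a concrete pipeline: split features into blocks, compute RMSE between blocks, and rank features by low RMSE. The exact block comparison and selection rule are not given in the abstract, so the following toy sketch is one plausible reading, not the published algorithm:

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length feature blocks."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def block_rmse_ranking(sample, block_size):
    """Split a feature vector into equal blocks, score each block by its
    mean RMSE against all other blocks, and return block indices sorted
    from lowest (most self-similar, here treated as most relevant) to
    highest score."""
    blocks = [sample[i:i + block_size]
              for i in range(0, len(sample) - block_size + 1, block_size)]
    scores = []
    for i, bi in enumerate(blocks):
        others = [rmse(bi, bj) for j, bj in enumerate(blocks) if j != i]
        scores.append(sum(others) / len(others))
    return sorted(range(len(blocks)), key=lambda i: scores[i])

# Toy vector: two similar blocks and one outlier block.
ranking = block_rmse_ranking([1, 1, 1, 1, 1, 1, 1, 1, 9, 9, 9, 9], 4)
```

In a real setting the ranking would feed a downstream classifier, with `block_size` tuned per dataset.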
50. A novel immune-associated prognostic signature based on the immune cell infiltration analysis for hepatocellular carcinoma.
- Author
-
Lin, Xinrong, Tian, Chuan, Pan, Fan, and Wang, Rui
- Abstract
Immune-related genes (IRGs) in hepatocellular carcinoma (HCC) are significantly associated with both tumor-infiltrating immune cells (TICs) and disease prognosis. Therefore, exploring the correlation between IRGs and HCC and its related mechanisms will provide new evidence for the diagnosis and treatment of HCC. The current paper analyzed the TICs in 374 HCC samples retrieved from the TCGA-LIHC dataset using ssGSEA and divided them according to the level of immune cell infiltration. A total of 177 differentially expressed genes (DEGs) were analyzed by protein-protein interaction (PPI) networks and univariate and multivariate Cox regression analyses. Four IRGs (C7, CTSV, MMP1, and VCAN) were found to be indicators of immune prognosis for HCC according to the PPI network and Cox regression analyses of the 177 DEGs, which was independently validated using an external dataset. A prognostic risk model was constructed based on the four IRGs. Prognostic risk was associated with the subtype of infiltrating immune cells. The four effective IRGs were identified as novel independent prognostic factors correlated with tumor immune infiltration in HCC. This signature may guide the choice of immunotherapy for HCC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
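A Cox-regression-based gene signature like the one in the abstract above is typically applied as a linear risk score: the sum of each gene's expression weighted by its fitted coefficient, with patients split at a score cutoff. The coefficients below are hypothetical placeholders, since the abstract does not report the fitted values:

```python
def risk_score(expression, coefficients):
    """Linear predictor of a Cox-style risk model: the weighted sum of
    gene expression values using fitted regression coefficients."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

# Hypothetical coefficients for the four signature genes (C7, CTSV,
# MMP1, VCAN); a negative weight marks a protective gene.
coefs = {"C7": -0.3, "CTSV": 0.4, "MMP1": 0.5, "VCAN": 0.2}
patient = {"C7": 1.2, "CTSV": 0.8, "MMP1": 2.1, "VCAN": 0.5}
score = risk_score(patient, coefs)
high_risk = score > 1.0  # cutoff would be chosen from the training cohort
```

In practice the coefficients come from fitting a Cox proportional hazards model (e.g. with the `lifelines` or `survival` packages) and the cutoff is usually the training cohort's median score.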