217 results for "generalized linear model (GLM)"
Search Results
2. A Geostatistical Approach to Estimate South Atlantic Swordfish Abundance From Commercial Catch Data.
- Author
-
Rodrigues, Silvaneide Luzinete and Andrade, Humber Agrelli
- Subjects
- *
SWORDFISH , *FISHERY resources , *MIGRATORY animals , *FISHERIES , *POPULATION dynamics - Abstract
We estimated the relative abundance of South Atlantic Swordfish (Xiphias gladius), a highly migratory marine species, from commercial fishing data using a three-step analysis protocol. First, we modeled catch rate on variables affecting it to rule out the effects of catchability-related factors. Next, we analyzed the residuals to identify autocorrelation. Last, we used an area-weighted recursive algorithm that accounted for spatial autocorrelation in each year of the analysis period. Commercial South Atlantic swordfish catch data from the Brazilian pelagic longline fleet were analyzed. Swordfish showed the strongest spatial dependence in distance in 2005 (456 km), 2008 (111 km), 2012 (80 km), and 2014 (443 km), and the weakest in 2011 and 2013 (average = 15.45 km). A downward trend in swordfish abundance between 2010 and 2017 was detected several years earlier than by conventional standardized indices, so our proposed index is an alternative, and potentially more accurate, index of swordfish population dynamics in the South Atlantic. Our findings highlight the need to integrate multiple approaches into assessments of the abundance of marine species. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Integrating Statistical Models and Principal Component Analysis to Assess Grasshopper Distribution and Diversity in Matiari District, Sindh, Pakistan.
- Author
-
Pitafi, Muhammad Rafique
- Subjects
MIGRATORY locust ,PRINCIPAL components analysis ,BIODIVERSITY conservation ,ECOLOGICAL niche ,GRASSHOPPERS - Abstract
This study offers a comprehensive analysis of grasshopper diversity and distribution across different sites of Matiari district, Sindh, Pakistan during 2021-22 by using advanced statistical and ecological methodologies. This research documented ten grasshopper species, revealing significant spatial and temporal patterns. Acrotylus humertianus and Aiolopus thalassinus were widely distributed, with Acrotylus humertianus showing peak abundance in Mooro Lakho, while Oedaleus rosesense demonstrated the highest overall density, reaching up to 20 individuals per site. Notably, species richness peaked at ten species in Hala during April 2022, indicating substantial temporal variability. Principal Component Analysis (PCA) highlighted distinct ecological niches, with Acrotylus humertianus scoring highly on PC1, suggesting it thrives under specific conditions. The Shannon-Wiener Index revealed Bhit Shah and Mooro Lakho as the most diverse sites, while Simpson's Diversity Index indicated lower diversity in Saedabad. The correlation analysis showed a moderate positive relationship between temperature and grasshopper density (r = 0.452), while humidity and soil type had minimal effects. Generalized Linear Model (GLM) analysis identified Locusta migratoria as the most effective model, with the lowest Deviance, AIC, and BIC values, reflecting its optimal fit. The study emphasizes the complex interplay between environmental factors and grasshopper populations, providing crucial insights for biodiversity conservation and ecological research. These results highlight the dynamic nature of grasshopper ecology and underscore the need for targeted conservation strategies based on environmental and temporal variations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Generalized models for estimating cerebral lateralisation of young children using functional transcranial Doppler ultrasound.
- Author
-
Quin‐Conroy, Josephine E., Thompson, Paul A., Bayliss, Donna M., and Badcock, Nicholas A.
- Subjects
- *
TRANSCRANIAL Doppler ultrasonography , *COMMON misconceptions , *BRAIN mapping , *LATERAL dominance , *ADULTS - Abstract
Thompson et al., 2023 (Generalized models for quantifying laterality using functional transcranial Doppler ultrasound. Human Brain Mapping, 44(1), 35–48) introduced generalised model‐based analysis methods for determining cerebral lateralisation from functional transcranial Doppler ultrasound (fTCD) data which substantially decreased the uncertainty of individual lateralisation estimates across several large adult samples. We aimed to assess the suitability of these methods for increasing precision in lateralisation estimates for child fTCD data. We applied these methods to adult fTCD data to establish the validity of two child‐friendly language and visuospatial tasks. We also applied the methods to fTCD data from 4‐ to 7‐year‐old children. For both samples, the laterality estimates from the complex generalised additive model (GAM) approach correlated strongly with the traditional methods while also decreasing individual standard errors compared to the popular period‐of‐interest averaging method. We recommend future research using fTCD with young children consider using GAMs to reduce the noise in their LI estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Multi-hazard susceptibility mapping in the Salt Lake watershed
- Author
-
Sima Pourhashemi, Mohammad Ali Zangane Asadi, and Mahdi Boroughani
- Subjects
Dust source areas ,Floods ,Landslides ,Generalized linear model (GLM) ,Random Forest (RF) ,Environmental sciences ,GE1-350 - Abstract
Multi-hazard risks are closely tied to the sustainable management of society, and effective multi-hazard risk reduction requires analysis of the individual hazards and their interplay. The current study provides a multi-hazard probability assessment for the Salt Lake watershed, Iran. First, we constructed maps of the most influential factors for floods (11 factors), dust sources (9 factors), and landslides (12 factors), and prioritized the impact of each factor on the occurrence of each hazard. Subsequently, flood, landslide, and dust-source susceptibility maps were prepared using Generalized Linear Model (GLM) and Random Forest (RF) models in the R statistical software. The accuracy of the applied models was examined using the receiver operating characteristic (ROC) curve and the relative density index (R-index). The results indicate that the identified hazard areas cover about 73% of the Salt Lake watershed, meaning most of the research area is affected by at least one of the three hazards (dust source areas, floods, and landslides), making it a very hazardous region. The landslide hazard accounts for the largest share (32.1%) of the research area, followed by floods and dust source areas (24.8% and 8.3%, respectively). The RF model, with AUC values of 93.5%, 91.4%, and 95.6% for the dust, flood, and landslide hazards, respectively, is more accurate than the GLM. The resulting multi-hazard map serves as a valuable tool for land-use planning and sustainable infrastructure development in the Salt Lake watershed.
- Published
- 2025
- Full Text
- View/download PDF
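A minimal sketch related to the GLM-versus-RF susceptibility comparison described in the entry above: a binomial GLM (logistic regression) and a Random Forest are fit to hazard presence/absence points and compared by ROC AUC. The input file and column names are hypothetical placeholders, not the study's data.
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("hazard_points.csv")          # hypothetical input table
X = df[["slope", "elevation", "dist_to_river", "rainfall", "ndvi"]]  # assumed factors
y = df["hazard"]                               # 1 = hazard location, 0 = non-hazard point

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)            # binomial GLM, logit link
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

for name, model in [("GLM", glm), ("RF", rf)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```
The predicted probabilities from either model can then be mapped back onto the study-area grid to form the susceptibility map.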
6. Can institutions reduce the vulnerability to climate change? A study on the char lands of Assam, India.
- Author
-
Saikia, Mrinal and Mahanta, Ratul
- Abstract
Studies that take into account the many aspects of climate change, disaster, and risk are needed to highlight the diverse issues, such as threats to human lives, asset bases, and livelihoods, that people confront in different regions. This study explores how institutions may help char dwellers, who reside in Assam, India's flood-prone and erosion-affected areas, become less vulnerable to climate change. The study measures the char dwellers' vulnerability to climate change using the adjusted livelihood vulnerability index (ALVI). It also evaluates the quality and efficiency of the char institutions in raising the adaptive capacity of the char inhabitants using the adaptive capacity wheel (ACW) and the generalized linear model (GLM). The study finds that physical circumstances, such as the geographical location and structure of the char, and social circumstances, such as the different socio-cultural and ethnic backgrounds of char residents, place them at high risk and make the char institutions ineffective and uneven in performance across locations. The GLM results show that institutions play a substantial role in reducing vulnerability; land ownership, hazard prevention, and adaptation measures are all important variables in lowering risk. The study suggests that boosting the char dwellers' resilience requires cooperation and diversity across different types of institutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. THE DEVELOPMENT OF THE TRAIN ACCIDENT MODEL TO THE INFRASTRUCTURE FACTORS IN INDONESIA.
- Author
-
Arisikam, Dicky, Lubis, Harun Al Rasyid, Kusumawati, Aine, and Indrayana, Desiderius Viby
- Subjects
INFRASTRUCTURE (Economics) ,POISSON regression ,RAILROAD stations ,GOODNESS-of-fit tests ,RAILROAD accidents ,AUTOMATIC train control ,INDEPENDENT variables - Abstract
The majority of Train Accidents (TA) in Indonesia from 2015 to 2020 were caused by infrastructure factors, namely railway tracks, bridges, and signals. To mitigate these TAs, infrastructure maintenance is required, prioritizing locations with a high risk of TA. This prioritization can be accomplished through a model that estimates TA based on infrastructure risk factors: locations with the highest TA estimates are prioritized for maintenance. The model describes the associative relationship between TA as the dependent variable and exposure (train frequency and track length) and infrastructure risk factors (railway tracks, bridges, and signals) as independent variables. The model is constructed using the Generalized Linear Model (GLM), considering Poisson Regression (PR), Negative Binomial (NB), Zero-Inflated Poisson (ZIP), and Zero-Inflated Negative Binomial (ZINB) models. TA data from the Operational Areas (OA) of Jakarta, Bandung, and Cirebon during 2015-2020 were used to build the model, with the model entity being the segment between two train stations. The best model is selected based on the dispersion value, a goodness-of-fit test, and the Vuong test. Modeling results indicate that the NB model is the most suitable for describing the associative relationship between TA and infrastructure factors on the Indonesian railway. The significant variables are train frequency (trains/day), track length (km), train speed (km/h), length of curves with a radius of 500 m to ≤ 1000 m (km), number of vulnerable areas (points), length of the electricity network (km), and track type (single or double). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
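A hedged sketch of the count-model comparison described in the entry above: Poisson and negative binomial GLMs are fit with a log-exposure offset and compared by dispersion and AIC. The input file, column names, and formula are illustrative assumptions; statsmodels also offers ZeroInflatedPoisson and ZeroInflatedNegativeBinomialP for the zero-inflated variants.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

seg = pd.read_csv("segments.csv")   # hypothetical: one row per inter-station segment
formula = "accidents ~ train_freq + track_length + curves_500_1000 + vulnerable_points"
offset = np.log(seg["exposure_years"])   # exposure enters as a log offset

pois = smf.glm(formula, data=seg, family=sm.families.Poisson(), offset=offset).fit()
negb = smf.glm(formula, data=seg, family=sm.families.NegativeBinomial(), offset=offset).fit()

# Rough overdispersion check: Pearson chi-square / residual df much greater than 1 favours NB.
print("Poisson dispersion:", pois.pearson_chi2 / pois.df_resid)
print("AIC  Poisson:", pois.aic, " NB:", negb.aic)
```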
8. A framework of regularized low-rank matrix models for regression and classification.
- Author
-
Huang, Hsin-Hsiung, Yu, Feng, Fan, Xing, and Zhang, Teng
- Abstract
While matrix-covariate regression models have been studied in many existing works, classical statistical and computational methods for regression coefficient estimation are strongly affected by high-dimensional matrix-valued covariates. To address these issues, this paper proposes a framework of matrix-covariate regression models based on a low-rank constraint and an additional regularization term for structured signals, considering models for both continuous and binary responses. We propose an efficient Riemannian-steepest-descent algorithm for regression coefficient estimation. We prove that the consistency of the proposed estimator is of the order O((r(q + m) + p)/n), where r is the rank, q × m is the dimension of the coefficient matrix, and p is the dimension of the coefficient vector. When the rank r is small, this rate improves over O((qm + p)/n), the consistency of the existing work (Li et al. in Electron J Stat 15:1909-1950, 2021) that does not apply a rank constraint. In addition, we prove that all accumulation points of the iterates have similar estimation errors asymptotically, substantially attaining the minimax rate. We validate the proposed method through a simulated dataset of two-dimensional shape images and two real datasets of brain signals and microscopic leucorrhea images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Close association of PFASs exposure with hepatic fibrosis than steatosis: evidences from NHANES 2017–2018.
- Author
-
Cheng, Wenli, Li, Min, Zhang, Luyun, Zhou, Cheng, Zhang, Xinyu, Zhu, Chenyu, Tan, Luyi, Lin, Hui, Zhang, Wenjuan, and Zhang, Wenji
- Subjects
HEPATIC fibrosis ,NON-alcoholic fatty liver disease ,FATTY degeneration ,FATTY liver ,FLUOROALKYL compounds - Abstract
Multiple animal and in vitro studies have demonstrated that perfluoroalkyl and polyfluoroalkyl substances (PFASs) exposure causes liver damage associated with fat metabolism. However, population evidence for the correlation between PFAS exposure and nonalcoholic fatty liver disease (NAFLD) is lacking. A cross-sectional analysis was performed on 1150 US participants aged over 20. Liver ultrasound transient elastography was used to identify participants with NAFLD, and multiple biomarkers served as indicators of hepatic steatosis and hepatic fibrosis. Logistic regression and restricted cubic spline models were used to estimate the association between PFASs and NAFLD. PFASs had no significant association with NAFLD after adjustment. The hepatic steatosis indicators, including the fatty liver index, NAFLD liver fat score, and Framingham steatosis index, were mostly not significantly correlated with PFAS exposure. In contrast, fibrosis indicators, including the fibrosis-4 index (FIB-4), NAFLD fibrosis score, and Hepamet fibrosis score, were positively correlated with each type of PFAS exposure. After adjustment for gender, age, race, education, and poverty income ratio, there was also a significant correlation between PFOS and FIB-4, at 0.07 (0.01, 0.13). The mixed PFASs were associated with FIB-4, with PFOS contributing the most (PIP = 1.000) in the Bayesian kernel machine regression model. The results suggest that PFAS exposure appears to be more closely associated with hepatic fibrosis than with steatosis, and PFOS might be the main driver of the association between PFASs and hepatic fibrosis. Key messages: Current exposure doses of PFAS did not significantly change the risk of developing NAFLD. PFAS exposure appeared to be more closely associated with hepatic fibrosis than steatosis. PFOS might be the main cause of the PFAS association with hepatic fibrosis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Linear Mixed-Effects Models for Longitudinal Microbiome Data
- Author
-
Xia, Yinglin and Sun, Jun
- Published
- 2023
- Full Text
- View/download PDF
11. Close association of PFASs exposure with hepatic fibrosis than steatosis: evidences from NHANES 2017–2018
- Author
-
Wenli Cheng, Min Li, Luyun Zhang, Cheng Zhou, Xinyu Zhang, Chenyu Zhu, Luyi Tan, Hui Lin, Wenjuan Zhang, and Wenji Zhang
- Subjects
Nonalcoholic fatty liver disease (NAFLD) ,Perfluoroalkyl and polyfluoroalkyl substances (PFASs) ,Generalized linear model (GLM) ,Restricted cubic spline (RCS) ,Bayesian kernel machine regression (BKMR) ,Medicine - Abstract
Multiple animal and in vitro studies have demonstrated that perfluoroalkyl and polyfluoroalkyl substances (PFASs) exposure causes liver damage associated with fat metabolism. However, population evidence for the correlation between PFAS exposure and nonalcoholic fatty liver disease (NAFLD) is lacking. A cross-sectional analysis was performed on 1150 US participants aged over 20. Liver ultrasound transient elastography was used to identify participants with NAFLD, and multiple biomarkers served as indicators of hepatic steatosis and hepatic fibrosis. Logistic regression and restricted cubic spline models were used to estimate the association between PFASs and NAFLD. PFASs had no significant association with NAFLD after adjustment. The hepatic steatosis indicators, including the fatty liver index, NAFLD liver fat score, and Framingham steatosis index, were mostly not significantly correlated with PFAS exposure. In contrast, fibrosis indicators, including the fibrosis-4 index (FIB-4), NAFLD fibrosis score, and Hepamet fibrosis score, were positively correlated with each type of PFAS exposure. After adjustment for gender, age, race, education, and poverty income ratio, there was also a significant correlation between PFOS and FIB-4, at 0.07 (0.01, 0.13). The mixed PFASs were associated with FIB-4, with PFOS contributing the most (PIP = 1.000) in the Bayesian kernel machine regression model. The results suggest that PFAS exposure appears to be more closely associated with hepatic fibrosis than with steatosis, and PFOS might be the main driver of the association between PFASs and hepatic fibrosis. Key messages: Current exposure doses of PFAS did not significantly change the risk of developing NAFLD. PFAS exposure appeared to be more closely associated with hepatic fibrosis than steatosis. PFOS might be the main cause of the PFAS association with hepatic fibrosis.
- Published
- 2023
- Full Text
- View/download PDF
12. Modeling dragonfly population data with a Bayesian bivariate geometric mixed-effects model.
- Author
-
van Oppen, Yulan B., Milder-Mulderij, Gabi, Brochard, Christophe, Wiggers, Rink, de Vries, Saskia, Krijnen, Wim P., and Grzegorczyk, Marco A.
- Subjects
- *
MARKOV chain Monte Carlo , *GEOMETRIC modeling , *FIXED effects model , *DRAGONFLIES - Abstract
We develop a generalized linear mixed model (GLMM) for bivariate count responses to statistically analyze dragonfly population data from the Northern Netherlands. Populations of the threatened dragonfly species Aeshna viridis were counted in the years 2015–2018 at 17 different locations (ponds and ditches). Two widely applied population size measures were used, namely the number of exoskeletons ('exuviae') found and the number of egg-laying females spotted. Since both measures (responses) led to many zero counts but also featured very large counts, our GLMM builds on a zero-inflated bivariate geometric (ZIBGe) distribution, which we show can be easily parameterized in terms of a correlation parameter and its two marginal medians. We model the medians with linear combinations of fixed (environmental covariates) and random (location-specific intercepts) effects. Modeling the medians yields a decreased sensitivity to overly large counts, particularly in light of growing marginal zero-inflation rates. Because of the relatively small sample size (n = 114), we follow a Bayesian modeling approach and use Metropolis-Hastings Markov chain Monte Carlo (MCMC) simulations to generate posterior samples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Modelling of Leishmaniasis Infection Dynamics: A Comparative Time Series Analysis with VAR, VECM, Generalized Linear and Markov Switching Models †.
- Author
-
Badaoui, Fadoua, Bouhout, Souad, Amar, Amine, and Khomsi, Kenza
- Subjects
CUTANEOUS leishmaniasis ,AUTOREGRESSIVE models ,ERROR correction (Information theory) ,METEOROLOGICAL databases ,HUMIDITY - Abstract
In this paper, we are interested in modeling the dynamics of cutaneous leishmaniasis (CL) in Errachidia province (Morocco), using epidemiologic data and the most notable climatic factors associated with leishmaniasis, namely humidity, wind speed, rainfall, and temperature. To achieve our objective, we compare the performance of three statistical models, namely the Vector Auto-Regressive (VAR) model, the Vector Error Correction model (VECM), and the Generalized Linear model (GLM), using different metrics. The modeling framework will be compared with the Markov Switching (MSM) approach. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. No Wind is Favorable Unless the Sailor is Participative: Customer Participation in Marina Services
- Author
-
Elif Koç, Durmuş Ali Deveci, and Cansu Yıldırım
- Subjects
marina services ,customer participation ,service-dominant (s-d) logic ,value cocreation ,generalized linear model (glm) ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 - Abstract
Marinas are essential for tourism as a customized service, which, in turn, necessitates active customer cooperation. This study investigates the participation behavior of customers in marina service delivery and aims to determine the facilitating factors and consequences of customer participation (CP). A questionnaire survey was performed to evaluate the perception of marina users (i.e., boat owners or captains) who received service from full-service private marinas. The collected data were analyzed using the generalized linear model. The empirical results showed that customer self-efficacy and customer affective trust are significant facilitating factors, and actionable participation is the most essential dimension of CP substantially impacting customer cocreated value. Moreover, “experience at sea” and “marina region” are the factors with high control effects on the relationships between CP, self-efficacy, trust, and cocreated value.
- Published
- 2023
- Full Text
- View/download PDF
15. Robust estimation and diagnostic of generalized linear model for insurance losses: a weighted likelihood approach
- Author
-
Fung, Tsz Chai
- Published
- 2024
- Full Text
- View/download PDF
16. Climate Change and Plant Invasions
- Author
-
Panda, Rajendra Mohan
- Published
- 2022
- Full Text
- View/download PDF
17. Generalized models for quantifying laterality using functional transcranial Doppler ultrasound.
- Author
-
Thompson, Paul A., Watkins, Kate E., Woodhead, Zoe V. J., and Bishop, Dorothy V. M.
- Subjects
- *
TRANSCRANIAL Doppler ultrasonography , *FUNCTIONAL magnetic resonance imaging , *LATERAL dominance , *CEREBRAL dominance - Abstract
We consider how analysis of brain lateralization using functional transcranial Doppler ultrasound (fTCD) data can be brought in line with modern statistical methods typically used in functional magnetic resonance imaging (fMRI). Conventionally, a laterality index is computed in fTCD from the difference between the averages of each hemisphere's signal within a period of interest (POI) over a series of trials. We demonstrate use of generalized linear models (GLMs) and generalized additive models (GAM) to analyze data from individual participants in three published studies (N = 154, 73 and 31), and compare this with results from the conventional POI averaging approach, and with laterality assessed using fMRI (N = 31). The GLM approach was based on classic fMRI analysis that includes a hemodynamic response function as a predictor; the GAM approach estimated the response function from the data, including a term for time relative to epoch start (simple GAM), plus a categorical index corresponding to individual epochs (complex GAM). Individual estimates of the fTCD laterality index are similar across all methods, but error of measurement is lowest using complex GAM. Reliable identification of cases of bilateral language appears to be more accurate with complex GAM. We also show that the GAM‐based approach can be used to efficiently analyze more complex designs that incorporate interactions between tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
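A minimal sketch in the spirit of the GAM approach described in the entry above: instead of assuming a response function, the signal within an epoch is modeled with a spline smoother over time plus a linear term. The data layout, file, and column names are hypothetical assumptions, and the smoothing parameters are illustrative only.
```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

d = pd.read_csv("ftcd_epochs.csv")      # hypothetical: one row per sample within epochs
# Spline smooth term for time-within-epoch; 'hemisphere' enters as a parametric term.
bs = BSplines(d[["time_in_epoch"]], df=[12], degree=[3])
X = pd.get_dummies(d[["hemisphere"]], drop_first=True).astype(float)

gam = GLMGam(d["cbfv"], exog=sm.add_constant(X), smoother=bs, alpha=1.0)
res = gam.fit()
print(res.summary())
```
In practice the smoothing penalty (alpha) would be chosen by cross-validation or a generalized criterion rather than fixed by hand.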
18. No Wind is Favorable Unless the Sailor is Participative: Customer Participation in Marina Services.
- Author
-
Koç, Elif, Deveci, Durmuş Ali, and Yıldırım, Cansu
- Subjects
MARITIME shipping ,SAILORS ,TOURISM ,CUSTOMER services ,CUSTOMER cocreation - Abstract
Marinas are essential for tourism as a customized service, which, in turn, necessitates active customer cooperation. This study investigates the participation behavior of customers in marina service delivery and aims to determine the facilitating factors and consequences of customer participation (CP). A questionnaire survey was performed to evaluate the perception of marina users (i.e., boat owners or captains) who received service from full-service private marinas. The collected data were analyzed using the generalized linear model. The empirical results showed that customer self-efficacy and customer affective trust are significant facilitating factors, and actionable participation is the most essential dimension of CP substantially impacting customer cocreated value. Moreover, "experience at sea" and "marina region" are the factors with high control effects on the relationships between CP, self-efficacy, trust, and cocreated value. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Three-dimensional wavelet decomposition-based radiomics analysis for tumor characterization in patients with oropharyngeal squamous cell carcinoma [version 1; peer review: awaiting peer review]
- Author
-
Hassan Bagher-Ebadian, Farzan Siddiqui, Ahmed I. Ghanem, Benjamin Movsas, and Indrin J. Chetty
- Subjects
Research Article ,Articles ,Three-Dimensional Discrete Wavelet Decomposition Technique ,Radiomic Information ,Frequency-based Radiomics Analysis ,HPV Characterization ,OPSCC patients ,Least Absolute Shrinkage and Selection Operator (Lasso) ,Feature Classification Analysis ,Generalized Linear Model (GLM) - Abstract
Background: We investigated the potential predictive value, along with the interpretability, of three-dimensional wavelet decomposition (3D-WD)-based radiomics analysis for characterization of gross tumor volumes (GTVs) in patients with Human Papilloma Virus (HPV) oropharyngeal squamous cell carcinoma (OPSCC). The goal was to characterize and identify the spatial frequencies and regions of the primary tumor responsible for classifying HPV status. Methods: One hundred twenty-eight OPSCC patients (60 HPV+ and 68 HPV-, confirmed by immunohistochemistry for P16 protein) were retrospectively studied. 3D-WD analysis was performed on the contrast-enhanced CT images of patients' primary tumor GTVs to decompose information into three decomposition levels described by a series of high-pass and low-pass wavelet coefficients (WCs). The log-energy entropy of the WCs was calculated as radiomics features. A Least Absolute Shrinkage and Selection Operator (Lasso) technique combined with a Generalized Linear Model (Lasso-GLM) was applied to the feature space to identify and rank the frequency sub-bands associated with HPV status. The classifier was validated using a nested cross-validation technique. The average area under the ROC curve (AUC) and the positive and negative predictive values (PPV and NPV) were computed to estimate the generalization error and performance of the classifier. The significant features were used to weight tumor sub-band frequencies to reconstruct the tumor zones carrying the most information for characterization of HPV. Results: Among 22 frequency-based features, two low-frequency and two high-frequency features were statistically discriminant between the two cohorts. The results (AUC/PPV/NPV = 0.798/0.745/0.823) imply that the tumor's high-frequency and low-frequency components are associated with its HPV positivity and negativity, respectively. Conclusions: This study suggests that, compared to the central zones of the tumor, peritumoral regions contain more information for characterization of HPV status. Albeit subject to confirmation in a larger cohort, this pilot study presents encouraging results in support of the role of frequency-based radiomics analysis in characterizing the tumor microenvironment in patients with OPSCC. By associating this information with tumor pathology, one can potentially link radiomics to underlying biological mechanisms.
- Published
- 2022
- Full Text
- View/download PDF
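A hedged sketch of an L1-regularized (Lasso-style) binomial GLM used for feature selection and classification, analogous to the Lasso-GLM step described in the entry above. The feature matrix and labels are random placeholders standing in for the 22 wavelet features and HPV labels, not the study's data.
```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 22))      # placeholder for 22 log-energy-entropy features
y = rng.integers(0, 2, size=128)    # placeholder binary labels (e.g., HPV status)

# The L1 penalty drives uninformative coefficients to exactly zero,
# so the surviving features can be read off as "selected".
clf = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5, scoring="roc_auc"),
)
clf.fit(X, y)
coefs = clf.named_steps["logisticregressioncv"].coef_.ravel()
print("non-zero (selected) features:", np.flatnonzero(coefs != 0))
```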
20. Lung Radiomics Features Selection for COPD Stage Classification Based on Auto-Metric Graph Neural Network.
- Author
-
Yang, Yingjian, Wang, Shicong, Zeng, Nanrong, Duan, Wenxin, Chen, Ziran, Liu, Yang, Li, Wei, Guo, Yingwei, Chen, Huai, Li, Xian, Chen, Rongchang, and Kang, Yan
- Subjects
- *
RADIOMICS , *FEATURE selection , *FEATURE extraction , *CHRONIC obstructive pulmonary disease , *CONVOLUTIONAL neural networks - Abstract
Chronic obstructive pulmonary disease (COPD) is a preventable, treatable, progressive chronic disease characterized by persistent airflow limitation. Patients with COPD deserve special consideration regarding treatment in this fragile population for preclinical health management. Therefore, this paper proposes a novel lung radiomics combination vector generated by a generalized linear model (GLM) and the Lasso algorithm for COPD stage classification based on an auto-metric graph neural network (AMGNN) with a meta-learning strategy. First, the parenchyma images were segmented from chest high-resolution computed tomography (HRCT) images by ResU-Net. Second, lung radiomics features were extracted from the parenchyma images by PyRadiomics. Third, a novel lung radiomics combination vector (3 + 106) was constructed by the GLM and Lasso algorithm, determining the radiomics risk factors (K = 3) and radiomics node features (d = 106). Last, the COPD stage was classified by the AMGNN. The results show that, compared with convolutional neural networks and machine learning models, the AMGNN based on the constructed lung radiomics combination vector performs best, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984. Furthermore, the method is found to be effective for COPD stage classification. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Effect of non-normality on the monitoring of simple linear profiles in two-stage processes: a remedial measure for gamma-distributed responses.
- Author
-
Soleimani, Paria and Asadzadeh, Shervin
- Subjects
- *
SKEWNESS (Probability theory) , *GAMMA distributions , *INDEPENDENT variables , *MOVING average process , *QUALITY control - Abstract
The relationship between the response variable and one or more independent variables refers to the quality characteristic in some statistical quality control applications, which is called profile. Most research dealt with the monitoring of profiles in single-stage processes considering a basic assumption of normality. However, some processes are made up of several sub-processes; thus, the effect of cascade property in multistage processes should be considered. Moreover, sometimes in practice, the assumption of normally distributed data does not hold. This paper first examines the effect of non-normal data to monitor simple linear profiles in two-stage processes in Phase II. We study non-normal distributions such as the skewed gamma distribution and the heavy-tailed symmetric t-distribution to measure the non-normality effect using the average run length criterion. Next, generalized linear models have been used and a monitoring approach based on generalized likelihood ratio (GLR) has been developed for gamma-distributed responses as a remedial measure to reduce the detrimental effects of non-normality. The results of simulation studies reveal that the performance of the GLR procedure is satisfactory for the multistage non-normal linear profiles. Finally, the simulated and real case studies with gamma-distributed data have been provided to show the application of the competing monitoring approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
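A minimal sketch of a gamma GLM with a log link for a positive, right-skewed response, the kind of remedial model the entry above applies to gamma-distributed profile responses. The data are synthetic and the column names are illustrative assumptions.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
# Gamma response with a log-linear mean: E[y | x] = exp(0.5 + 1.2 * x)
y = rng.gamma(shape=2.0, scale=np.exp(0.5 + 1.2 * x) / 2.0)
df = pd.DataFrame({"x": x, "y": y})

fit = smf.glm("y ~ x", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)
print("AIC:", fit.aic)
```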
22. Exploring a mobile phone user's attitude toward watching TV content on a mobile phone – uses and gratifications perspective
- Author
-
Shin, Soo Il, Kim, J.B., Han, Sumin, and Lee, Sangmi
- Published
- 2021
- Full Text
- View/download PDF
23. Modelling of Leishmaniasis Infection Dynamics: A Comparative Time Series Analysis with VAR, VECM, Generalized Linear and Markov Switching Models
- Author
-
Fadoua Badaoui, Souad Bouhout, Amine Amar, and Kenza Khomsi
- Subjects
leishmaniasis dynamics ,generalized linear model (GLM) ,Markov switching model (MSM) ,meteorological data ,vector auto-regressive model (VAR) ,vector error correction model (VECM) ,Engineering machinery, tools, and implements ,TA213-215 - Abstract
In this paper, we are interested in modeling the dynamics of cutaneous leishmaniasis (CL) in Errachidia province (Morocco), using epidemiologic data and the most notable climatic factors associated with leishmaniasis, namely humidity, wind speed, rainfall, and temperature. To achieve our objective, we compare the performance of three statistical models, namely the Vector Auto-Regressive (VAR) model, the Vector Error Correction model (VECM), and the Generalized Linear model (GLM), using different metrics. The modeling framework will be compared with the Markov Switching (MSM) approach.
- Published
- 2023
- Full Text
- View/download PDF
24. Calculating and Comparing the Annualized Relapse Rate and Estimating the Confidence Interval in Relapsing Neurological Diseases.
- Author
-
Akaishi, Tetsuya, Ishii, Tadashi, Aoki, Masashi, and Nakashima, Ichiro
- Subjects
NEUROMYELITIS optica ,NEGATIVE binomial distribution ,CONFIDENCE intervals ,DISEASE relapse ,NEUROLOGICAL disorders ,POISSON distribution - Abstract
Calculating the crude or adjusted annualized relapse rate (ARR) and its confidence interval (CI) is often required in clinical studies to evaluate chronic relapsing diseases, such as multiple sclerosis and neuromyelitis optica spectrum disorders. However, accurately calculating ARR and estimating the 95% CI requires careful application of statistical approaches and basic familiarity with the exponential family of distributions. When the relapse rate can be regarded as constant over time or by individuals, the crude ARR can be calculated using the person-years method, which divides the number of all observed relapses among all participants by the total follow-up period of the study cohort. If the number of relapses can be modeled by the Poisson distribution, the 95% CI of ARR can be obtained by finding the 2.5% upper and lower critical values of the parameter λ as the mean. Basic familiarity with F-statistics is also required when comparing the ARR between two disease groups. It is necessary to distinguish the observed relapse rate ratio (RR) between two sample groups (sample RR) from the unobserved RR between their originating populations (population RR). The ratio of population RR to sample RR roughly follows the F distribution, with degrees of freedom obtained by doubling the number of observed relapses in the two sample groups. Based on this, a 95% CI of the population RR can be estimated. When the count data of the response variable is overdispersed, the negative binomial distribution would be a better fit than the Poisson. Adjusted ARR and the 95% CI can be obtained by using the generalized linear regression models after selecting appropriate error structures (e.g., Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial) according to the overdispersion and zero-inflation in the response variable. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
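A sketch of the calculations described in the entry above: the crude ARR with its exact Poisson 95% CI (chi-square quantile form), followed by an adjusted Poisson GLM with a log person-years offset. All numbers and column names are illustrative, not from the article.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

relapses, person_years = 23, 57.5           # illustrative totals
arr = relapses / person_years
# Exact Poisson CI for the total count, scaled by person-years.
lo = chi2.ppf(0.025, 2 * relapses) / 2 / person_years
hi = chi2.ppf(0.975, 2 * (relapses + 1)) / 2 / person_years
print(f"crude ARR = {arr:.3f} (95% CI {lo:.3f}-{hi:.3f})")

# Adjusted ARR via a Poisson GLM; swap the family (e.g., NegativeBinomial)
# if the counts are overdispersed, as the abstract recommends.
d = pd.DataFrame({"relapses": [3, 0, 5, 1], "years": [2.1, 1.5, 3.0, 2.4],
                  "treated": [1, 1, 0, 0]})   # toy data
fit = smf.glm("relapses ~ treated", data=d, family=sm.families.Poisson(),
              offset=np.log(d["years"])).fit()
print(np.exp(fit.params))   # baseline rate and rate ratio on the ARR scale
```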
25. Good Statistical Practices in Agronomy Using Categorical Data Analysis, with Alfalfa Examples Having Poisson and Binomial Underlying Distributions.
- Author
-
Mowers, Ronald P., Bucciarelli, Bruna, Cao, Yuanyuan, Samac, Deborah A., and Xu, Zhanyou
- Subjects
- *
AGRONOMY , *ALFALFA , *PLANT breeding , *ANALYSIS of variance , *DATA distribution - Abstract
Categorical data derived from qualitative classifications or countable quantitative data are common in biological scientific work and crop breeding. Categorical data analyses are important for drawing correct inferences from experiments. However, categorical data can introduce unique issues in data analysis. This paper discusses common problems arising from categorical variable analysis and modeling, demonstrates the issues or risks of misapplying analysis, and suggests approaches to address data analysis challenges using two data sets from alfalfa breeding programs. For each data set, we present several analysis methods, e.g., simple t-test, analysis of variance (ANOVA), split plot analysis, generalized linear model (glm), generalized linear mixed model (glmm) using R with R markdown, and with the standard statistical analysis software SAS/JMP. The goal is to demonstrate good analysis practices for categorical data by comparing the potential 'bad' analyses with better ones, avoiding too much reliance on reaching a significant p-value of 0.05, and navigating the morass of ever-increasing numbers of potential R functions. The three main aspects of this research focus on choosing the right data distribution to use, using the correct error terms for hypothesis test p-values including the right type of sum of the squares (Type I, II, and III), and proper statistical models for categorical data analysis. Our results show the importance of good statistical analysis practice to help agronomists, breeders, and other researchers apply appropriate statistical approaches to draw more accurate conclusions from their data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Estimating the Relative Abundance Index of Oman Cuttlefish and Purpleback Flying Squid in Fishing Effort with Rope Trawl in the Western Gulf of Oman
- Author
-
سید یوسف پیغمبری, رضا بدلی, پرویز زارع, and رضا عباسپور نادری
- Abstract
The present study estimated a relative abundance index for purpleback flying squid and Oman cuttlefish and evaluated the effect of depth, distance, and time of capture (before or after noon) on it. Sampling took place in the western Gulf of Oman in the spring of 2019, at depths of 208 to 285 meters. The vessel used was an industrial myctophid trawler, and the sampling gear was a rope trawl operated in the water column (the bottom panel was at most 6 meters above the sea floor). Based on field observations, the average catch per unit effort for purpleback flying squid and Oman cuttlefish was estimated at 7584.22 and 1345.066 grams/hour by weight and 19.062 and 18.250 individuals/hour by number, respectively. Based on the selected models, depth and distance had a significant effect on catch per unit effort for purpleback flying squid (P < 0.05): with increasing depth and distance, the catch per unit effort for squid increased. However, no significant effect of the studied variables on the relative abundance index of Oman cuttlefish was observed. Given the lack of sufficient research on this topic and the discarding of these species in Iranian waters, more extensive research is needed to manage their stocks in the Gulf of Oman. [ABSTRACT FROM AUTHOR]
- Published
- 2022
27. Effect of managerial ownership on bank value: insights of an emerging economy
- Author
-
Moudud-Ul-Huq, Syed, Biswas, Tanmay, and Proshad Dola, Shukla
- Published
- 2020
- Full Text
- View/download PDF
28. Calculating and Comparing the Annualized Relapse Rate and Estimating the Confidence Interval in Relapsing Neurological Diseases
- Author
-
Tetsuya Akaishi, Tadashi Ishii, Masashi Aoki, and Ichiro Nakashima
- Subjects
annualized relapse rate (ARR) ,confidence interval ,F-distribution ,Poisson distribution ,generalized linear model (GLM) ,person-years method ,Neurology. Diseases of the nervous system ,RC346-429 - Abstract
Calculating the crude or adjusted annualized relapse rate (ARR) and its confidence interval (CI) is often required in clinical studies to evaluate chronic relapsing diseases, such as multiple sclerosis and neuromyelitis optica spectrum disorders. However, accurately calculating ARR and estimating the 95% CI requires careful application of statistical approaches and basic familiarity with the exponential family of distributions. When the relapse rate can be regarded as constant over time or by individuals, the crude ARR can be calculated using the person-years method, which divides the number of all observed relapses among all participants by the total follow-up period of the study cohort. If the number of relapses can be modeled by the Poisson distribution, the 95% CI of ARR can be obtained by finding the 2.5% upper and lower critical values of the parameter λ as the mean. Basic familiarity with F-statistics is also required when comparing the ARR between two disease groups. It is necessary to distinguish the observed relapse rate ratio (RR) between two sample groups (sample RR) from the unobserved RR between their originating populations (population RR). The ratio of population RR to sample RR roughly follows the F distribution, with degrees of freedom obtained by doubling the number of observed relapses in the two sample groups. Based on this, a 95% CI of the population RR can be estimated. When the count data of the response variable is overdispersed, the negative binomial distribution would be a better fit than the Poisson. Adjusted ARR and the 95% CI can be obtained by using the generalized linear regression models after selecting appropriate error structures (e.g., Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial) according to the overdispersion and zero-inflation in the response variable.
- Published
- 2022
- Full Text
- View/download PDF
29. Ultrasound image segmentation using an active contour model and learning-structured inference.
- Author
-
Fang, Lingling, Zhang, Lirong, Yao, Yibo, and Chen, Le
- Subjects
ULTRASONIC imaging ,IMAGE segmentation ,MACHINE learning ,PROBLEM solving ,COMPUTER-assisted image analysis (Medicine) - Abstract
Automated segmentation of medical ultrasound (US) images is a challenging problem due to the complicated features of lesions, inconsistent lesions across individuals, and the high segmentation accuracy requirement. From recently published papers in this area, the active contour model (ACM) and machine learning method produce more accurate lesion segmentation results than previous methods. This paper proposes a novel image segmentation approach that integrates an ACM with a generalized linear model (GLM) and forms learning-structured inference. Compared with the GLM, the proposed method can solve the problems of initialization and the local minimum of the ACM. Furthermore, rather than using the ACM as a postprocessing tool, we integrate it into the training phase to fine-tune the GLM. This step allows the use of unlabeled data during training in a semisupervised setting. The integrated model requires only one image as the training set and is not as sensitive to labeled data as other methods. The proposed method is verified using US images, and the results show that the proposed method can produce accurate segmentation results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Identification and utilization of copy number information for correcting Hi-C contact map of cancer cell lines
- Author
-
Ahmed Ibrahim Samir Khalil, Siti Rawaidah Binte Mohammad Muzaki, Anupam Chattopadhyay, and Amartya Sanyal
- Subjects
Chromosome conformation capture (3C) ,Hi-C normalization tool ,Read depth ,Copy number variation (CNV) ,Generalized linear model (GLM) ,Poisson regression ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Biology (General) ,QH301-705.5 - Abstract
Background: Hi-C and its variant techniques have been developed to capture the spatial organization of chromatin. Normalization of the Hi-C contact map is essential for accurate modeling and interpretation of high-throughput chromatin conformation capture (3C) experiments. Hi-C correction tools were originally developed to normalize systematic biases of karyotypically normal cell lines. However, a vast majority of available Hi-C datasets are derived from cancer cell lines that carry multi-level DNA copy number variations (CNVs). CNV regions display over- or under-representation of interaction frequencies compared to CN-neutral regions. Therefore, it is necessary to remove CNV-driven bias from the chromatin interaction data of cancer cell lines to generate a euploid-equivalent contact map. Results: We developed the HiCNAtra framework to compute high-resolution CNV profiles from Hi-C or 3C-seq data of cancer cell lines and to correct chromatin contact maps for systematic biases, including CNV-associated bias. First, we introduce a novel 'entire-fragment' counting method for better estimation of the read depth (RD) signal from Hi-C reads that recapitulates the whole-genome sequencing (WGS)-derived coverage signal. Second, HiCNAtra employs a multimodal-based hierarchical CNV calling approach, which outperformed the OneD and HiNT tools, to accurately identify CNVs of cancer cell lines. Third, incorporating CNV information with other systematic biases, HiCNAtra simultaneously estimates the contribution of each bias and explicitly corrects the interaction matrix using Poisson regression. HiCNAtra normalization abolishes CNV-induced artifacts from the contact map, generating a heatmap with homogeneous signal. When benchmarked against the OneD, CAIC, and ICE methods using the MCF7 cancer cell line, the HiCNAtra-corrected heatmap achieves the least 1D signal variation without deforming the inherent chromatin interaction signal. Additionally, HiCNAtra-corrected contact frequencies have minimal correlations with each of the systematic bias sources compared to OneD's explicit method. Visual inspection of CNV profiles and contact maps of cancer cell lines reveals that HiCNAtra is the most robust Hi-C correction tool for ameliorating CNV-induced bias. Conclusions: HiCNAtra is a Hi-C-based computational tool that provides an analytical and visualization framework for DNA copy number profiling and chromatin contact map correction of karyotypically abnormal cell lines. HiCNAtra is open-source software implemented in MATLAB and is available at https://github.com/AISKhalil/HiCNAtra .
- Published
- 2020
- Full Text
- View/download PDF
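A toy sketch of the Poisson-regression style of bias correction described in the entry above: observed counts are regressed on bias covariates, and observed/expected ratios give a corrected signal. This is an illustration on synthetic data with hypothetical covariates (GC content, mappability, copy number), not the HiCNAtra implementation.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
bins = pd.DataFrame({
    "gc": rng.uniform(0.3, 0.6, n),
    "mappability": rng.uniform(0.5, 1.0, n),
    "log_cn": np.log(rng.choice([1, 2, 3, 4], n) / 2.0),   # copy number relative to diploid
})
mu = np.exp(1.0 + 2.0 * bins["gc"] + 1.5 * bins["mappability"] + bins["log_cn"])
bins["count"] = rng.poisson(mu)                            # synthetic observed counts

fit = smf.glm("count ~ gc + mappability + log_cn", data=bins,
              family=sm.families.Poisson()).fit()
bins["corrected"] = bins["count"] / fit.fittedvalues       # observed / expected ratio
print(bins[["count", "corrected"]].head())
```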
31. Application of Logistic Regression on Passenger Survival Data of the Titanic Liner
- Author
-
Sajjida Reza, Bilal Sarwar, Raja Rub Nawaz, and S. M. Nabeel Ul Haq
- Subjects
Binary ,Dichotomous ,Generalized Linear Model (GLM) ,Logistic Regression ,Finance ,HG1-9999 - Abstract
Purpose: This empirical research aims to predict the distinguishing variables of passengers who did or did not survive while traveling in the famous Titanic liner, which sunk in 1912. Design/Methodology/Approach: The binary logistic regression analysis empirically analyzes the secondary dataset available for 1046 passengers. Variables such as passenger’s gender, age, family composition, ticket class, number of parents with/without children, and number of siblings and/or spouses were opted to examine the differences between the binary dependent variable (Passenger Survived/ Not Survived). Findings: The study results indicate that all the variables are statistically significant in the model, with passenger's gender being the most significant predictor followed by passenger’s ticket class. The survival chances of passengers decreased for male passengers compared to their counterparts (female passengers) for the sample data [Exp(β)=0.080], for the passengers of age more than 21 years compared to passengers of age less than and equal to 21 years [Exp(β)=0.576], and for passengers with ticket class second and third compared to first-class ticket holders [Exp(β)=0.412]. In contrast, there was a greater chance of survival for families traveling together with parents, siblings, spouses compared to single travelers [Exp(β)=1.823]. Implications/Originality/Value: The study is a classic example of the application of binary logistic regression analysis using EVIEWS software.
- Published
- 2022
- Full Text
- View/download PDF
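A minimal binomial-GLM (binary logistic regression) sketch matching the analysis described in the entry above, using the public seaborn copy of the Titanic data (downloaded on first use). The study itself used EViews and a 1046-passenger dataset, so this only illustrates the Exp(β) odds-ratio interpretation.
```python
import numpy as np
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf

titanic = sns.load_dataset("titanic").dropna(subset=["age"])
fit = smf.glm("survived ~ C(sex) + C(pclass) + age + sibsp + parch",
              data=titanic, family=sm.families.Binomial()).fit()
# Exp(beta): odds ratios relative to the reference levels (female, first class).
print(np.exp(fit.params))
```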
32. DEM resolution effects on machine learning performance for flood probability mapping.
- Author
-
Avand, Mohammadtaghi, Kuriqi, Alban, Khazaei, Majid, and Ghorbanzadeh, Omid
- Subjects
MACHINE performance ,MACHINE learning ,RECEIVER operating characteristic curves ,FLOODS ,ARTIFICIAL neural networks ,NATURAL disasters ,CONCEPT mapping ,DIGITAL elevation models - Abstract
Floods are among the devastating natural disasters that occurred very frequently in arid regions during the last decades. Accurate flood susceptibility mapping is crucial for sustainable development, as it helps the responsible authorities prevent, as far as possible, irreversible consequences. The spatial resolution of the Digital Elevation Model (DEM) is one of the most crucial base-layer factors for modeling Flood Probability Maps (FPMs). Therefore, the main objective of this study was to assess the influence of DEM spatial resolution, 12.5 m (ALOS PALSAR) versus 30 m (ASTER), on the accuracy of flood probability prediction using three machine learning models (MLMs): Random Forest (RF), Artificial Neural Network (ANN), and Generalized Linear Model (GLM). This study selected 14 flood-causative factors as independent variables, and 220 flood locations were selected as dependent variables. The dependent variables were divided into training (70%) and validation (30%) sets for flood susceptibility modeling. The Receiver Operating Characteristic (ROC) curve, Kappa index, accuracy, and other statistical criteria were used to evaluate model accuracy. The results showed that DEM resolution alone does not significantly affect the accuracy of flood probability prediction, regardless of the applied MLM and independently of the statistical measure used to assess performance. In contrast, factors such as altitude, precipitation, and distance from the river have a considerable impact on floods in this region. The evaluation also showed that the RF model (AUC = 0.983 and 0.975 for the 12.5 m and 30 m DEMs, respectively) is more accurate in preparing the FPM than the ANN (AUC = 0.949 and 0.930) and GLM (AUC = 0.965 and 0.949) models. This study's solution-oriented findings may help water managers and decision-makers take the most effective adaptation and mitigation measures against potential flooding. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Localizing Epileptic Foci Using Simultaneous EEG-fMRI Recording: Template Component Cross-Correlation.
- Author
-
Ebrahimzadeh, Elias, Shams, Mohammad, Seraji, Masoud, Sadjadi, Seyyed Mostafa, Rajabion, Lila, and Soltanian-Zadeh, Hamid
- Subjects
PEOPLE with epilepsy ,BRAIN anatomy ,MEDICAL records ,INDEPENDENT component analysis ,TIME series analysis - Abstract
Conventional EEG-fMRI methods have been proven to be of limited use in the sense that they cannot reveal the information existing in between the spikes. To resolve this issue, the current study obtains the epileptic components time series detected on EEG and uses them to fit the Generalized Linear Model (GLM), as a substitution for classical regressors. This approach allows for a more precise localization, and equally importantly, the prediction of the future behavior of the epileptic generators. The proposed method approaches the localization process in the component domain, rather than the electrode domain (EEG), and localizes the generators through investigating the spatial correlation between the candidate components and the spike template, as well as the medical records of the patient. To evaluate the contribution of EEG-fMRI and concordance between fMRI and EEG, this method was applied on the data of 30 patients with refractory epilepsy. The results demonstrated the significant numbers of 29 and 24 for concordance and contribution, respectively, which mark improvement as compared to the existing literature. This study also shows that while conventional methods often fail to properly localize the epileptogenic zones in deep brain structures, the proposed method can be of particular use. For further evaluation, the concordance level between IED-related BOLD clusters and Seizure Onset Zone (SOZ) has been quantitatively investigated by measuring the distance between IED/SOZ locations and the BOLD clusters in all patients. The results showed the superiority of the proposed method in delineating the spike-generating network compared to conventional EEG-fMRI approaches. In all, the proposed method goes beyond the conventional methods by breaking the dependency on spikes and using the outside-the-scanner spike templates and the selected components, achieving an accuracy of 97%. Doing so, this method contributes to improving the yield of EEG-fMRI and creates a more realistic perception of the neural behavior of epileptic generators which is almost without precedent in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Localizing Epileptic Foci Using Simultaneous EEG-fMRI Recording: Template Component Cross-Correlation
- Author
-
Elias Ebrahimzadeh, Mohammad Shams, Masoud Seraji, Seyyed Mostafa Sadjadi, Lila Rajabion, and Hamid Soltanian-Zadeh
- Subjects
simultaneous EEG-fMRI ,epileptogenic zone ,independent component analysis (ICA) ,generalized linear model (GLM) ,blood-oxygen-level dependent imaging (BOLD) ,epilepsy ,Neurology. Diseases of the nervous system ,RC346-429 - Abstract
Conventional EEG-fMRI methods have been proven to be of limited use in the sense that they cannot reveal the information existing in between the spikes. To resolve this issue, the current study obtains the epileptic components time series detected on EEG and uses them to fit the Generalized Linear Model (GLM), as a substitution for classical regressors. This approach allows for a more precise localization, and equally importantly, the prediction of the future behavior of the epileptic generators. The proposed method approaches the localization process in the component domain, rather than the electrode domain (EEG), and localizes the generators through investigating the spatial correlation between the candidate components and the spike template, as well as the medical records of the patient. To evaluate the contribution of EEG-fMRI and concordance between fMRI and EEG, this method was applied on the data of 30 patients with refractory epilepsy. The results demonstrated the significant numbers of 29 and 24 for concordance and contribution, respectively, which mark improvement as compared to the existing literature. This study also shows that while conventional methods often fail to properly localize the epileptogenic zones in deep brain structures, the proposed method can be of particular use. For further evaluation, the concordance level between IED-related BOLD clusters and Seizure Onset Zone (SOZ) has been quantitatively investigated by measuring the distance between IED/SOZ locations and the BOLD clusters in all patients. The results showed the superiority of the proposed method in delineating the spike-generating network compared to conventional EEG-fMRI approaches. In all, the proposed method goes beyond the conventional methods by breaking the dependency on spikes and using the outside-the-scanner spike templates and the selected components, achieving an accuracy of 97%. Doing so, this method contributes to improving the yield of EEG-fMRI and creates a more realistic perception of the neural behavior of epileptic generators which is almost without precedent in the literature.
- Published
- 2021
- Full Text
- View/download PDF
35. Integrating a learned probabilistic model with energy functional for ultrasound image segmentation.
- Author
-
Fang, Lingling, Zhang, Lirong, and Yao, Yibo
- Subjects
- *
IMAGE segmentation , *ULTRASONIC imaging , *IMAGE quality analysis , *PROBABILISTIC generative models , *MEAN square algorithms - Abstract
Segmentation of ultrasound (US) images is steadily growing in popularity, owing to the need for computer-aided diagnosis (CAD) systems and the advantages this technique offers, such as safety and efficiency. The objective of this work is to separate the lesion from its background in US images. However, most US images are of poor quality, affected by noise, ambiguous boundaries, and heterogeneity. Moreover, the lesion region may not be salient amid the other normal tissues, which makes its segmentation a challenging problem. In this paper, a US image segmentation algorithm that combines a learned probabilistic model with energy functionals is proposed. First, a learned probabilistic model based on the generalized linear model (GLM) reduces false positives and increases the likelihood energy term of the lesion region. It yields a new probability projection that attracts the energy functional toward the desired region of interest. Then, a boundary indicator and a probability-statistics-based energy functional are used to provide a reliable boundary for the lesion. Integrating probabilistic information into the energy functional framework can effectively overcome the impact of poor image quality and further improve segmentation accuracy. To verify the performance of the proposed algorithm, 40 images were randomly selected from three databases for evaluation. The values of the DICE coefficient, the Jaccard distance, the root-mean-square error, and the mean absolute error are 0.96, 0.91, 0.059, and 0.042, respectively. The initialization of the segmentation algorithm and the influence of noise are also analyzed. The experiments show a significant improvement in performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
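The record above uses a GLM-based learned probabilistic model to produce a probability map that steers an energy functional. The probabilistic step can be illustrated as a binomial GLM (logistic regression) over per-pixel features, evaluated with the Dice coefficient that the abstract reports. The feature choice, threshold, and synthetic data below are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def pixel_features(img):
    """Simple per-pixel features: raw intensity and a 5x5 local mean (illustrative)."""
    return np.stack([img.ravel(), uniform_filter(img, size=5).ravel()], axis=1)

def probability_map(train_img, train_mask, test_img):
    """Binomial GLM mapping pixel features to lesion probability."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(pixel_features(train_img), train_mask.ravel().astype(int))
    return clf.predict_proba(pixel_features(test_img))[:, 1].reshape(test_img.shape)

def dice(pred_mask, true_mask):
    """DICE coefficient between two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# Illustrative synthetic usage
rng = np.random.default_rng(1)
img = rng.random((64, 64)); mask = img > 0.7
prob = probability_map(img, mask, img)
print(dice(prob > 0.5, mask))
```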
36. Estimating soil N2O emissions induced by organic and inorganic fertilizer inputs using a Tier-2, regression-based meta-analytic approach for U.S. agricultural lands.
- Author
-
Xia, Yushu, Kwon, Hoyoung, and Wander, Michelle
- Published
- 2024
- Full Text
- View/download PDF
37. Lung Radiomics Features Selection for COPD Stage Classification Based on Auto-Metric Graph Neural Network
- Author
-
Yingjian Yang, Shicong Wang, Nanrong Zeng, Wenxin Duan, Ziran Chen, Yang Liu, Wei Li, Yingwei Guo, Huai Chen, Xian Li, Rongchang Chen, and Yan Kang
- Subjects
COPD stage (GOLD) ,auto-metric graph neural network (AMGNN) ,multi-classification ,lung radiomics features ,Lasso algorithm ,generalized linear model (GLM) ,Medicine (General) ,R5-920 - Abstract
Chronic obstructive pulmonary disease (COPD) is a preventable, treatable, progressive chronic disease characterized by persistent airflow limitation. Patients with COPD are a fragile population who deserve special consideration in treatment and preclinical health management. Therefore, this paper proposes a novel lung radiomics combination vector, generated by a generalized linear model (GLM) and the Lasso algorithm, for COPD stage classification based on an auto-metric graph neural network (AMGNN) with a meta-learning strategy. First, the parenchyma images were segmented from chest high-resolution computed tomography (HRCT) images by ResU-Net. Second, lung radiomics features were extracted from the parenchyma images by PyRadiomics. Third, a novel lung radiomics combination vector (3 + 106) was constructed by the GLM and Lasso algorithm, determining the radiomics risk factors (K = 3) and radiomics node features (d = 106). Last, the COPD stage was classified by the AMGNN. The results show that, compared with convolutional neural networks and machine learning models, the AMGNN based on the constructed lung radiomics combination vector performs best, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984. These findings indicate that the method is effective for COPD stage classification.
- Published
- 2022
- Full Text
- View/download PDF
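The record above selects radiomics risk factors and node features with a GLM plus the Lasso before feeding them to the AMGNN. A minimal sketch of Lasso-based feature selection on a radiomics matrix follows; the feature matrix, labels, and alpha value are placeholders, and the exact K = 3 / d = 106 split is the paper's, not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def lasso_select(X, y, alpha=0.05):
    """Fit a Lasso and keep features with non-zero coefficients.
    X: (n_patients, n_radiomics_features); y: numeric stage labels."""
    Xs = StandardScaler().fit_transform(X)
    model = Lasso(alpha=alpha, max_iter=10000).fit(Xs, y)
    keep = np.flatnonzero(model.coef_)          # indices of selected features
    return keep, model.coef_[keep]

# Illustrative synthetic usage: 100 patients x 1316 radiomics features
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 1316))
y = X[:, :5] @ np.array([1.0, -0.8, 0.6, 0.5, -0.4]) + 0.1 * rng.standard_normal(100)
selected, weights = lasso_select(X, y)
print(f"{selected.size} features kept out of {X.shape[1]}")
```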
38. Prediction of Software Effort in the Early Stage of Software Development: A Hybrid Model.
- Author
-
Rai, Prerana, Kumar, Shishir, and Verma, Dinesh Kumar
- Subjects
COMPUTER software development ,COMPUTER software ,FORECASTING ,PROJECT managers - Abstract
The key challenge that project managers face during software development is the accurate prediction of software effort. Improper prediction leads either to overestimation or underestimation of the effort, which can have disastrous consequences for the stakeholders. This article designs a model that gives an accurate prediction of effort in the initial phase of the software development lifecycle. The proposed model uses a multilayer perceptron (MLP) and the generalized linear model (GLM) with an ensemble technique for learning. The model is trained and validated using the ISBSG dataset and compared for performance with two baseline models, MLP and GLM. The results show that the proposed model outperforms the baseline models on most performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
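The record above ensembles an MLP with a GLM for early effort prediction. One simple way to realize such an ensemble is to average the two models' predictions; the sketch below uses scikit-learn's VotingRegressor with a Gaussian GLM and an MLP on synthetic, ISBSG-style numeric features. The features, targets, and hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import TweedieRegressor       # GLM with configurable family
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.random((300, 6))                                 # synthetic project attributes
y = 50 * X[:, 0] + 20 * X[:, 1] + 5 * rng.standard_normal(300) + 100  # synthetic effort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = VotingRegressor([
    ("glm", TweedieRegressor(power=0, alpha=0.0)),       # power=0 -> Gaussian GLM
    ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
])
ensemble.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, ensemble.predict(X_te)))
```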
39. Localizing confined epileptic foci in patients with an unclear focus or presumed multifocality using a component-based EEG-fMRI method.
- Author
-
Ebrahimzadeh, Elias, Shams, Mohammad, Rahimpour Jounghani, Ali, Fayaz, Farahnaz, Mirbagheri, Mahya, Hakimi, Naser, Rajabion, Lila, and Soltanian-Zadeh, Hamid
- Abstract
Precise localization of epileptic foci is an unavoidable prerequisite in epilepsy surgery. Simultaneous EEG-fMRI recording has recently created new horizons to locate foci in patients with epilepsy and, in comparison with single-modality methods, has yielded more promising results, although it is still subject to limitations such as lack of access to information between interictal events. This study assesses its potential added value in the presurgical evaluation of patients with complex source localization. Adult candidates considered ineligible for surgery on account of an unclear focus and/or presumed multifocality on the basis of EEG underwent EEG-fMRI. Adopting a component-based approach, this study attempts to identify the neural behavior of the epileptic generators and detect the components of interest that are later used as input to the GLM, substituting for the classical linear regressor. Twenty-eight sets of interictal epileptiform discharges (IEDs) from nine patients were analyzed. In eight patients, at least one BOLD response was significant, positive, and topographically related to the IEDs. These patients were rejected for surgery because of an unclear focus in four, presumed multifocality in three, and a combination of the two conditions in two. Component-based EEG-fMRI improved localization in five out of six patients with unclear foci. In patients with presumed multifocality, component-based EEG-fMRI advocated one of the foci in five patients and confirmed multifocality in one of the patients. In seven patients, component-based EEG-fMRI opened new prospects for surgery, and in two of these patients, intracranial EEG supported the EEG-fMRI results. In these complex cases, component-based EEG-fMRI either improved source localization or corroborated a negative decision regarding surgical candidacy. As supported by the statistical findings, the developed EEG-fMRI method leads to a more realistic estimation of localization compared to the conventional EEG-fMRI approach, making it a tool of high value in pre-surgical evaluation of patients with refractory epilepsy. To ensure proper implementation, we have included guidelines for the application of component-based EEG-fMRI in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
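The record above detects components of interest by comparing candidate EEG components against a spike template. A minimal sketch of one plausible way to do this, ranking components by their peak normalized cross-correlation with a template, is shown below. The toy spike shape, threshold, and data are assumptions, not the authors' selection procedure.

```python
import numpy as np

def rank_components(components, template):
    """Rank candidate components by peak normalized cross-correlation with a spike template.
    components: (n_components, n_samples); template: (n_template_samples,)."""
    t = (template - template.mean()) / (template.std() * len(template))
    scores = []
    for comp in components:
        c = (comp - comp.mean()) / comp.std()
        scores.append(np.abs(np.correlate(c, t, mode="valid")).max())
    order = np.argsort(scores)[::-1]
    return order, np.asarray(scores)[order]

# Illustrative usage: the 2nd component hides a scaled copy of the template
rng = np.random.default_rng(4)
template = np.exp(-np.linspace(-3, 3, 50) ** 2)          # toy spike shape
comps = rng.standard_normal((5, 1000))
comps[1, 300:350] += 4 * template
order, scores = rank_components(comps, template)
print("most template-like component:", order[0])
```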
40. Capturing Multiple Timescales of Adaptation to Second-Order Statistics With Generalized Linear Models: Gain Scaling and Fractional Differentiation
- Author
-
Kenneth W. Latimer and Adrienne L. Fairhall
- Subjects
adaptation ,gain scaling ,fractional differentiation ,generalized linear model (GLM) ,Hodgkin and Huxley model ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Single neurons can dynamically change the gain of their spiking responses to take into account shifts in stimulus variance. Moreover, gain adaptation can occur across multiple timescales. Here, we examine the ability of a simple statistical model of spike trains, the generalized linear model (GLM), to account for these adaptive effects. The GLM describes spiking as a Poisson process whose rate depends on a linear combination of the stimulus and recent spike history. The GLM successfully replicates gain scaling observed in Hodgkin-Huxley simulations of cortical neurons that occurs when the ratio of spike-generating potassium and sodium conductances approaches one. Gain scaling in the GLM depends on the length and shape of the spike history filter. Additionally, the GLM captures adaptation that occurs over multiple timescales as a fractional derivative of the stimulus envelope, which has been observed in neurons that include long timescale afterhyperpolarization conductances. Fractional differentiation in GLMs requires a long spike history that spans several seconds. Together, these results demonstrate that the GLM provides a tractable statistical approach for examining single-neuron adaptive computations in response to changes in stimulus variance.
- Published
- 2020
- Full Text
- View/download PDF
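The record above describes the spiking GLM: a Poisson process whose rate depends linearly on the stimulus and recent spike history. The sketch below builds lagged stimulus and spike-history design columns and fits a Poisson GLM with statsmodels. The lag lengths, filter used to generate the synthetic rate, and data are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def lagged(x, n_lags):
    """Design-matrix columns x[t-1], ..., x[t-n_lags]."""
    return np.column_stack([np.roll(x, k) for k in range(1, n_lags + 1)])

rng = np.random.default_rng(5)
T = 5000
stim = rng.standard_normal(T)                              # white-noise stimulus
rate = np.exp(-2.5 + 0.8 * np.convolve(stim, np.ones(5) / 5, mode="same"))
spikes = rng.poisson(rate)                                 # synthetic spike counts per bin

X = np.column_stack([lagged(stim, 10), lagged(spikes, 10)])  # stimulus + spike-history covariates
X, y = X[10:], spikes[10:]                                 # drop bins affected by np.roll wrap-around
glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.params[:3])                                      # intercept + first stimulus lags
```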
41. Identification and utilization of copy number information for correcting Hi-C contact map of cancer cell lines.
- Author
-
Khalil, Ahmed Ibrahim Samir, Muzaki, Siti Rawaidah Binte Mohammad, Chattopadhyay, Anupam, and Sanyal, Amartya
- Subjects
- *
CELL lines , *DNA copy number variations , *CANCER cells , *NUCLEOTIDE sequencing , *POISSON regression - Abstract
Background: Hi-C and its variant techniques have been developed to capture the spatial organization of chromatin. Normalization of the Hi-C contact map is essential for accurate modeling and interpretation of high-throughput chromatin conformation capture (3C) experiments. Hi-C correction tools were originally developed to normalize the systematic biases of karyotypically normal cell lines. However, the vast majority of available Hi-C datasets are derived from cancer cell lines that carry multi-level DNA copy number variations (CNVs). CNV regions display over- or under-representation of interaction frequencies compared to CN-neutral regions. Therefore, it is necessary to remove CNV-driven bias from chromatin interaction data of cancer cell lines to generate a euploid-equivalent contact map. Results: We developed the HiCNAtra framework to compute high-resolution CNV profiles from Hi-C or 3C-seq data of cancer cell lines and to correct chromatin contact maps for systematic biases, including CNV-associated bias. First, we introduce a novel 'entire-fragment' counting method for better estimation of the read depth (RD) signal from Hi-C reads that recapitulates the whole-genome sequencing (WGS)-derived coverage signal. Second, HiCNAtra employs a multimodal-based hierarchical CNV calling approach, which outperformed the OneD and HiNT tools, to accurately identify CNVs of cancer cell lines. Third, incorporating CNV information with other systematic biases, HiCNAtra simultaneously estimates the contribution of each bias and explicitly corrects the interaction matrix using Poisson regression. HiCNAtra normalization abolishes CNV-induced artifacts from the contact map, generating a heatmap with a homogeneous signal. When benchmarked against the OneD, CAIC, and ICE methods using the MCF7 cancer cell line, the HiCNAtra-corrected heatmap achieves the least 1D signal variation without deforming the inherent chromatin interaction signal. Additionally, HiCNAtra-corrected contact frequencies have minimum correlations with each of the systematic bias sources compared to OneD's explicit method. Visual inspection of CNV profiles and contact maps of cancer cell lines reveals that HiCNAtra is the most robust Hi-C correction tool for ameliorating CNV-induced bias. Conclusions: HiCNAtra is a Hi-C-based computational tool that provides an analytical and visualization framework for DNA copy number profiling and chromatin contact map correction of karyotypically abnormal cell lines. HiCNAtra is open-source software implemented in MATLAB and is available at https://github.com/AISKhalil/HiCNAtra. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
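The record above corrects Hi-C contact maps by regressing contact counts on systematic biases, including copy number, with Poisson regression. The sketch below shows that general idea on bin-pair level covariates, returning observed/expected ratios as a corrected signal. The covariate names and the observed/expected step are assumptions about the general approach, not HiCNAtra's exact formulation (which is implemented in MATLAB).

```python
import numpy as np
import statsmodels.api as sm

def correct_contacts(counts, gc, mappability, copy_number):
    """Poisson-regress bin-pair contact counts on bias covariates and
    return observed/expected ratios as a bias-corrected signal.
    All inputs are 1-D arrays over bin pairs."""
    X = sm.add_constant(np.column_stack([np.log(gc), np.log(mappability),
                                         np.log(copy_number)]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    expected = fit.fittedvalues                   # fitted (expected) counts per bin pair
    return counts / expected

# Illustrative synthetic usage
rng = np.random.default_rng(6)
n = 2000
gc, mapp = rng.uniform(0.3, 0.6, n), rng.uniform(0.5, 1.0, n)
cn = rng.choice([1, 2, 3, 4], n)                  # multi-level copy number
counts = rng.poisson(5 * cn * mapp)
print(correct_contacts(counts, gc, mapp, cn)[:5])
```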
42. Capturing Multiple Timescales of Adaptation to Second-Order Statistics With Generalized Linear Models: Gain Scaling and Fractional Differentiation.
- Author
-
Latimer, Kenneth W. and Fairhall, Adrienne L.
- Subjects
LINEAR statistical models ,MODELS & modelmaking ,POISSON processes ,STATISTICAL models - Abstract
Single neurons can dynamically change the gain of their spiking responses to take into account shifts in stimulus variance. Moreover, gain adaptation can occur across multiple timescales. Here, we examine the ability of a simple statistical model of spike trains, the generalized linear model (GLM), to account for these adaptive effects. The GLM describes spiking as a Poisson process whose rate depends on a linear combination of the stimulus and recent spike history. The GLM successfully replicates gain scaling observed in Hodgkin-Huxley simulations of cortical neurons that occurs when the ratio of spike-generating potassium and sodium conductances approaches one. Gain scaling in the GLM depends on the length and shape of the spike history filter. Additionally, the GLM captures adaptation that occurs over multiple timescales as a fractional derivative of the stimulus envelope, which has been observed in neurons that include long timescale afterhyperpolarization conductances. Fractional differentiation in GLMs requires a long spike history that spans several seconds. Together, these results demonstrate that the GLM provides a tractable statistical approach for examining single-neuron adaptive computations in response to changes in stimulus variance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. Long-Term Tolerance Acquisition and Changes in Acetylcholinesterase Activity in Three Cladoceran Species After a 48-H Pulsed Exposure to Pirimicarb.
- Author
-
Ishimota, Makoto, Tajiki-Nishino, Risako, Fukuyama, Tomoki, Tomiyama, Naruto, Sakamoto, Masaki, and Ohyama, Kazutoshi
- Subjects
ACETYLCHOLINESTERASE ,DAPHNIA magna ,AQUATIC organisms - Abstract
We investigated (1) whether rapid acquisition of tolerance to pirimicarb would develop in three cladocerans (Daphnia magna, Scapholeberis kingi, and Ceriodaphnia cornuta) after short-term exposure and whether this tolerance was maintained in their descendants over four generations; (2) whether tolerance implies fitness costs, and whether these costs quantitatively correlated with tolerance levels; and (3) how AChE activity and AChE mRNA levels were altered by short-term exposure to pirimicarb. After 48 h of exposure to 0, 1.3, 2.5, 5.0, 10.0, 20.0, and 40.0 μg/L pirimicarb only in the F0 generation, the surviving cladocerans from the 0, 1.3, 2.5, 5.0, and 10.0 μg/L treatments were subsequently kept for an additional three generations (F1, F2, and F3) in the absence of pirimicarb. Among the three tested cladocerans, the EC50 value (50% effective concentration for 48 h exposure, using immobility as the endpoint) of only C. cornuta increased significantly. The increased tolerance in C. cornuta was retained in F1, F2, and F3. In C. cornuta, AChE activity and AChE mRNA levels in F0 decreased significantly, but these values in F1 were comparable to those in the controls, suggesting these changes may be related to the tolerance. Direct exposure of F0 C. cornuta to 2.5 μg/L or 5.0 μg/L pirimicarb induced a decrease in intrinsic population growth rate. However, this effect disappeared as soon as exposure was removed in F1. Thus, tolerance to pirimicarb, as observed in C. cornuta, involves no fitness cost. Our findings will contribute to clarifying adaptation of aquatic organisms to chemicals. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
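The record above reports 48-h EC50 values with immobility as the endpoint. EC50s of this kind are commonly estimated with a binomial GLM on a logit dose-response curve; the sketch below shows that generic approach on synthetic data, and is not necessarily the statistical workflow used by the authors.

```python
import numpy as np
import statsmodels.api as sm

def ec50_logit(conc, n_immobile, n_total):
    """Estimate EC50 from a logit dose-response: logit(p) = b0 + b1*log10(conc).
    EC50 is the concentration where p = 0.5, i.e. log10(EC50) = -b0/b1."""
    X = sm.add_constant(np.log10(conc))
    y = np.column_stack([n_immobile, n_total - n_immobile])   # successes / failures
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    b0, b1 = fit.params
    return 10 ** (-b0 / b1)

# Illustrative synthetic usage (concentrations in ug/L, 20 animals per level; control excluded)
conc = np.array([1.3, 2.5, 5.0, 10.0, 20.0, 40.0])
immobile = np.array([1, 3, 8, 14, 18, 20])
print("EC50 ~", round(ec50_logit(conc, immobile, np.full(6, 20)), 2), "ug/L")
```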
44. Building a Credit Scoring Model Based on Data Mining Approaches.
- Author
-
Nalić, Jasmina and Martinovic, Goran
- Subjects
CREDIT ratings ,DATA mining ,DECISION trees ,DATABASES ,CLASSIFICATION algorithms ,SUPPORT vector machines - Abstract
Nowadays, one of the biggest challenges in the banking sector is the assessment of a client's creditworthiness. In order to improve decision-making and risk management, banks resort to data mining techniques to recognize hidden patterns within large volumes of data. The main objective of this study is to build a high-performance customized credit scoring model. The model, named Reliable client, is based on the Bank's real dataset and was originally built by applying four different classification algorithms: decision tree (DT), naive Bayes (NB), generalized linear model (GLM), and support vector machine (SVM). Since it produced the best results and also appeared to be the most appropriate algorithm, the adopted model is based on the GLM algorithm. The results of this model are presented using many performance measures, which showed great predictive confidence and accuracy; we also demonstrate the significant impact of data pre-processing on model performance. Statistical analysis of the model identified the parameters with the most significant influence on the model outcome. In the end, the created credit scoring model was evaluated using another set of real data from the same Bank. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
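The record above adopts the GLM classifier for its credit scoring model after comparing it with DT, NB, and SVM. A minimal sketch of the GLM part as a logistic regression with basic pre-processing and held-out evaluation is shown below; the features and data are placeholders, and the abstract itself stresses that real pre-processing strongly affects performance.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 8))        # synthetic client attributes (e.g. income, debt ratio)
y = (X[:, 0] - 1.2 * X[:, 1] + 0.3 * rng.standard_normal(1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scorecard = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scorecard.fit(X_tr, y_tr)
proba = scorecard.predict_proba(X_te)[:, 1]
pred = (proba > 0.5).astype(int)
print("accuracy:", accuracy_score(y_te, pred), "AUC:", roc_auc_score(y_te, proba))
```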
45. The Impact of the COVID-19 Pandemic on Cross-Border Mergers and Acquisitions’ Determinants: New Empirical Evidence from Quasi-Poisson and Negative Binomial Regression Models
- Author
-
Han-Sol Lee, Ekaterina A. Degtereva, and Alexander M. Zobov
- Subjects
entry mode ,mergers and acquisitions (M&As) ,sustainable development goal (SDG) index ,COVID-19 ,generalized linear model (GLM) ,Economics as a science ,HB71-74 - Abstract
The cross-border movement of capital has suffered due to the COVID-19 pandemic since December 2019. Nevertheless, it is unrealistic for multinational companies to withdraw giant global value chains (GVCs) overnight because of the pandemic. Instead, active discussions and completions of deals in cross-border mergers and acquisitions (M&As) are expected in the post-COVID-19 era among various other market entry modes, considering the growing demand for high technologies in societies. This paper analyzes particular determinants of cross-border M&As during the pandemic year (2020) based on cross-sectional datasets, employing quasi-Poisson and negative binomial regression models. According to the empirical evidence, COVID-19 indices do not hamper M&A deals in general. This indicates that managerial capability in handling the coronavirus, not the outbreak itself, determined locational decisions for M&A deals during the pandemic. In this vein, it is expected that the vaccination rate will become a key factor in locational decisions for M&A deals in the near future. Furthermore, countries that have been outstanding in coping with COVID-19, and thus serve as a good example for other nations, may seize more opportunities to take a leap forward. In addition, as hypothesized, the results present positive and significant associations between M&A deals and the SDG index, confirming the resource-based theory of internationalization. In particular, the achievement of SDGs seems to exert considerable influence in developing countries for M&A bidders during the pandemic year. This indicates that the pandemic demands a new zeitgeist that pursues growth while resolving existing but disregarded environmental issues and cherishes humanitarian values, for all countries, without exception, standing at the start line of the post-COVID-19 era.
- Published
- 2021
- Full Text
- View/download PDF
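The record above models country-level M&A deal counts with quasi-Poisson and negative binomial regressions. A minimal negative binomial sketch with statsmodels on synthetic country covariates is shown below; the covariate names (SDG index, a COVID stringency index, log GDP) echo the abstract, but the numbers, dispersion parameter, and coefficients are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 120                                                    # countries (synthetic)
sdg_index = rng.uniform(40, 90, n)
covid_stringency = rng.uniform(0, 100, n)
gdp_log = rng.normal(25, 1.5, n)
mu = np.exp(-20 + 0.03 * sdg_index + 0.8 * gdp_log)        # expected deal count
deals = rng.negative_binomial(n=2, p=2 / (2 + mu))         # overdispersed counts

X = sm.add_constant(np.column_stack([sdg_index, covid_stringency, gdp_log]))
nb = sm.GLM(deals, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.params)                                           # intercept and covariate effects
```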
46. Supervised Component Generalized Linear Regression with Multiple Explanatory Blocks: THEME-SCGLR
- Author
-
Bry, Xavier, Trottier, Catherine, Mortier, Fréderic, Cornu, Guillaume, Verron, Thomas, Abdi, Hervé, editor, Esposito Vinzi, Vincenzo, editor, Russolillo, Giorgio, editor, Saporta, Gilbert, editor, and Trinchera, Laura, editor
- Published
- 2016
- Full Text
- View/download PDF
47. Distribution Modeling of Protective and Valuable Plant Species in the Tourist Area of Polour Using Generalized Linear Model (GLM) and Generalized Additive Model (GAM)
- Author
-
Zeinab Jafarian and Mansoureh Kargar
- Subjects
environment factors ,generalized additive model (gam) ,generalized linear model (glm) ,polour rangelands ,Geography (General) ,G1-922 - Abstract
Models predicting the geographical distribution of plant species are probabilistic, static models: they determine the mathematical equations governing the geographical distribution of species from their current environment and the environmental factors that affect that distribution. The aim of the current study is to review the performance of the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM) in determining the relationship between vegetation and environmental factors in the Polour rangeland. A stratified random sampling method was used, and five dominant species were identified: Astragalus ochrodeucus, Ferula gumosa, Thymus kotschyanus, Onobrychis cornata, and Agropyron sp. The studied environmental factors included 13 soil characteristics, 3 topographic factors, and 3 climatic factors. The analyses were performed on presence-absence data using the GRASP package of the R software. To evaluate the performance of the prediction models, the statistical measures AUC, AIC, RMSE, and R2 were used. The results showed that, in the GLM, the highest R2 (0.98) was obtained for the presence of Agropyron sp., and the lowest RMSE and AIC (0.29 and 12, respectively) for Astragalus ochrodeucus. In the GAM, the highest R2 (0.98) was obtained for Thymus kotschyanus, and the lowest RMSE and AIC (0.22 and 18.12, respectively) for Astragalus ochrodeucus and Ferula gumosa. The highest AUC in the GLM was obtained for Onobrychis cornata and in the GAM for Agropyron sp., with 0.86. Given the protective and economic value of the plant species studied, the results of these models can be used in programs for conservation and improvement of the tourist area.
- Published
- 2017
- Full Text
- View/download PDF
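The record above fits GLM and GAM models to presence-absence data against soil, topographic, and climatic predictors and scores them with AUC, RMSE, and AIC. The original analysis used R's GRASP package; the sketch below shows only the GLM half of such a workflow in Python, with hypothetical predictors and synthetic data for illustration.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 200                                                    # sampling plots (synthetic)
elevation = rng.uniform(2000, 3000, n)
soil_ph = rng.uniform(6.0, 8.5, n)
precip = rng.uniform(300, 700, n)
logit = -30 + 0.01 * elevation + 0.8 * soil_ph + 0.002 * precip
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # presence-absence response

X = sm.add_constant(np.column_stack([elevation, soil_ph, precip]))
glm = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
pred = glm.fittedvalues                                    # predicted presence probability
print("AIC:", round(glm.aic, 1), "AUC:", round(roc_auc_score(presence, pred), 3))
```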
48. Non-motor Brain Regions in Non-dominant Hemisphere Are Influential in Decoding Movement Speed
- Author
-
Macauley Smith Breault, Zachary B. Fitzgerald, Pierre Sacré, John T. Gale, Sridevi V. Sarma, and Jorge A. González-Martínez
- Subjects
movement speed ,StereoelEctroencEphalography (SEEG) ,Local Field Potential (LFP) ,generalized linear model (GLM) ,regression ,non-dominant hemisphere ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Sensorimotor control studies have predominantly focused on how motor regions of the brain relay basic movement-related information such as position and velocity. However, motor control is often complex, involving the integration of sensory information, planning, visuomotor tracking, spatial mapping, retrieval and storage of memories, and may even be emotionally driven. This suggests that many more regions in the brain are involved beyond premotor and motor cortices. In this study, we exploited an experimental setup wherein activity from over 87 non-motor structures of the brain was recorded in eight human subjects executing a center-out motor task. The subjects were implanted with depth electrodes for clinical purposes. Using training data, we constructed subject-specific models that related the spectral power of neural activity in six different frequency bands, as well as a combined model containing the aggregation of multiple frequency bands, to movement speed. We then tested the models by evaluating their ability to decode movement speed from neural activity in the test data set. The best models achieved a correlation of 0.38 ± 0.03 (mean ± standard deviation). Further, the decoded speeds matched the categorical representation of the test trials as correct or incorrect with an accuracy of 70 ± 2.75% across subjects. These models included features from regions such as the right hippocampus, left and right middle temporal gyrus, intraparietal sulcus, and left fusiform gyrus across multiple frequency bands. Perhaps more interestingly, we observed that the non-dominant hemisphere (ipsilateral to the dominant hand) was most influential in decoding movement speed.
- Published
- 2019
- Full Text
- View/download PDF
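The record above builds subject-specific models that map spectral power across frequency bands to movement speed and reports the decoding correlation on held-out trials. A minimal sketch of that regression-and-correlation evaluation follows; the band count, channel count, and synthetic features are placeholders rather than the study's actual SEEG features.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
n_trials, n_channels, n_bands = 240, 20, 6                 # synthetic trial counts and bands
power = rng.standard_normal((n_trials, n_channels * n_bands))   # band power per channel
speed = power[:, :4] @ np.array([0.5, -0.3, 0.2, 0.4]) + 0.8 * rng.standard_normal(n_trials)

X_tr, X_te, y_tr, y_te = train_test_split(power, speed, random_state=0)
decoder = LinearRegression().fit(X_tr, y_tr)               # Gaussian GLM on band-power features
r, _ = pearsonr(y_te, decoder.predict(X_te))
print("decoding correlation on held-out trials:", round(r, 2))
```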
49. Prediction of the Number of Defects in Image Sensors by VM Using Equipment QC Data.
- Author
-
Okazaki, Toshiya, Okusa, Kosuke, and Yoshida, Kyo
- Subjects
- *
IMAGE sensors , *DUST , *REGRESSION trees , *REGRESSION analysis , *DATA , *SEMICONDUCTOR devices - Abstract
This paper describes methods and evaluation results of predicting the number of defects in image sensors using equipment QC data. Virtual metrology (VM) models are mainly used for measurable values such as dimensions and electrical characteristics. Herein, to predict countable values, we used a regression tree and stepwise AIC for variable selection as well as the “hockey-stick regression model” and generalized linear model for regression, instead of the partial least squares (PLS) regression. The results showed an improved prediction performance in comparison with the conventional method. This method can be used to predict other countable values such as defects or dust particles. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
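The record above combines a "hockey-stick regression model" with a GLM to predict defect counts from equipment QC data. The hockey-stick part (piecewise-linear with a breakpoint) can be sketched as a nonlinear least-squares fit; the breakpoint parameterization and data below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(x, b0, b1, knot):
    """Flat baseline below the knot, linear increase above it."""
    return b0 + b1 * np.maximum(0.0, x - knot)

# Synthetic QC parameter vs. defect count: defects rise once the parameter exceeds ~3
rng = np.random.default_rng(11)
qc = rng.uniform(0, 6, 150)
defects = hockey_stick(qc, 2.0, 4.0, 3.0) + rng.normal(0, 0.8, 150)

params, _ = curve_fit(hockey_stick, qc, defects, p0=[1.0, 1.0, 2.0])
b0, b1, knot = params
print(f"baseline={b0:.2f}, slope={b1:.2f}, knot={knot:.2f}")
```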
50. Quantitative determination of concordance in localizing epileptic focus by component-based EEG-fMRI.
- Author
-
Ebrahimzadeh, Elias, Shams, Mohammad, Fayaz, Farahnaz, Rajabion, Lila, Mirbagheri, Mahya, Nadjar Araabi, Babak, and Soltanian-Zadeh, Hamid
- Subjects
- *
ELECTROENCEPHALOGRAPHY , *PARTIAL epilepsy , *PSYCHODIAGNOSTICS , *HIGH resolution imaging - Abstract
• A novel systematic, quantitative approach is presented to evaluate concordance between the component-related BOLD clusters and the IED location using simultaneous EEG-fMRI.
• This study demonstrated that BOLD changes are related to epileptic spikes in different brain structures.
• The component-based EEG-fMRI is considered a reliable predictor of the spike onset zone.
• The proposed method improves localization accuracy to 97%, which marks a dramatic rise compared to conventional work.
• The concordance level is determined based on the distance between the center of the cluster with the maximum BOLD and the dipole, using our component-based EEG-fMRI method.
Accurate seizure onset zone (SOZ) localization is an essential step in the pre-surgical assessment of patients with refractory focal epilepsy. The complex pathophysiology of epileptic cerebral structures, seizure types, and frequencies has not previously been considered as influential features for accurate identification of the SOZ using EEG-fMRI. There is a crucial need to quantitatively measure the concordance between the presumed SOZ and the IED-related BOLD response in different brain regions to improve SOZ delineation. A novel component-based EEG-fMRI approach is proposed to measure the physical distance between BOLD clusters and the selected component dipole location using patient-specific high-resolution anatomical images. The method is applied to 18 patients with refractory focal epilepsy to localize the epileptic focus, quantitatively determine concordance, and compare the maximum BOLD cluster with the identified component dipole. To measure concordance, the distance from the voxel with the maximal z-score of the maximum BOLD cluster to the center of the extracted component dipole is measured. BOLD cluster-to-spike distances for the concordant (<25 mm), partially concordant (25–50 mm), and discordant (>50 mm) groups were significantly different (p < 0.0001). The results showed full concordance in 17 IED types (17.85 ± 4.69 mm), partial concordance in 4 (36.47 ± 8.84 mm), and no discordance, which marks a significant improvement over the existing literature. The proposed method is premised on the cross-correlation between the spike template recorded outside the scanner and the highly ranked extracted components. It successfully surpasses the limitations of conventional EEG-fMRI studies, which are largely dependent on inside-scanner spikes. More significantly, the proposed method improves localization accuracy to 97%, which marks a dramatic rise compared to conventional works. This study demonstrated that BOLD changes were related to epileptic spikes in different brain regions in patients with refractory focal epilepsy. In a systematic quantitative approach, concordance levels based on the distance between the center of the maximum BOLD cluster and the dipole were determined by the component-based EEG-fMRI method. Therefore, component-based EEG-fMRI can be considered a reliable predictor of the SOZ in patients with focal epilepsy and included as part of the clinical evaluation of patients with medically resistant epilepsy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
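The record above quantifies concordance as the distance between the voxel with the maximal z-score of the largest BOLD cluster and the center of the selected component's dipole, binned as concordant (<25 mm), partially concordant (25–50 mm), or discordant (>50 mm). The sketch below shows that distance-and-binning step, assuming both coordinates are already in the same millimetre space (e.g. scanner or MNI coordinates).

```python
import numpy as np

def concordance_level(bold_peak_mm, dipole_center_mm):
    """Euclidean distance (mm) between the max-z BOLD voxel and the dipole center,
    mapped to the concordance categories used in the abstract."""
    d = float(np.linalg.norm(np.asarray(bold_peak_mm) - np.asarray(dipole_center_mm)))
    if d < 25:
        label = "concordant"
    elif d <= 50:
        label = "partially concordant"
    else:
        label = "discordant"
    return d, label

# Illustrative coordinates (mm)
print(concordance_level([42.0, -18.0, 55.0], [30.0, -10.0, 48.0]))
```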