71,434 results for "Selection (genetic algorithm)"
Search Results
2. Holobiont Evolution: Population Theory for the Hologenome
- Author
-
Joan Roughgarden
- Subjects
Mathematical theory, Genetic theory, Holobiont, Evolutionary biology, Hologenome theory of evolution, Population, Microbiome, Biology, Selection (genetic algorithm), Ecology, Evolution, Behavior and Systematics - Abstract
This article develops mathematical theory for the population dynamics of microbiomes with their hosts and for holobiont evolution caused by holobiont selection. The objective is to account for the formation of microbiome-host integration. Microbial population-dynamic parameters must mesh with the host's for coexistence. A horizontally transmitted microbiome is a genetic system with "collective inheritance". The microbial source pool in the environment corresponds to the gamete pool for nuclear genes. Poisson sampling of the microbial source pool corresponds to binomial sampling of the gamete pool. However, holobiont selection on the microbiome leads neither to a counterpart of the Hardy-Weinberg Law nor to directional selection that always fixes microbial genes conferring the highest holobiont fitness. A microbe might strike an optimal fitness balance by lowering its within-host fitness while increasing holobiont fitness. Such microbes are replaced by otherwise identical microbes that contribute nothing to holobiont fitness. This replacement can be reversed by hosts that initiate immune responses to non-helpful microbes. This discrimination leads to microbial species sorting. Host-orchestrated species sorting (HOSS) followed by microbial competition, rather than co-evolution or multi-level selection, is predicted to be the cause of microbiome-host integration.
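The sampling correspondence described in the abstract can be illustrated with a small simulation (a hedged sketch: the population sizes and parameter values below are illustrative choices, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nuclear genes: each of N zygotes draws 2 alleles from the gamete pool,
# so the count of allele A per zygote is binomially distributed.
N = 10_000
p_allele = 0.3
alleles_per_zygote = rng.binomial(n=2, p=p_allele, size=N)

# Horizontally transmitted microbiome: each juvenile host is colonized by
# a Poisson-distributed number of microbes from the environmental pool.
mean_colonizers = 2 * p_allele  # match the means for comparison
microbes_per_host = rng.poisson(lam=mean_colonizers, size=N)

# The means agree, but the variances differ: binomial variance is
# n*p*(1-p), whereas Poisson variance equals its mean.
print(alleles_per_zygote.mean(), microbes_per_host.mean())
print(alleles_per_zygote.var(), microbes_per_host.var())
```

The comparison shows why "collective inheritance" via an environmental source pool behaves statistically differently from Mendelian gamete sampling even when the expected transmission rates match.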
- Published
- 2023
3. A Credibilistic Multiobjective Multiperiod Efficient Portfolio Selection Approach Using Data Envelopment Analysis
- Author
-
Mukesh Kumar Mehlawat, Arun Kumar, Pankaj Gupta, and Sanjay Yadav
- Subjects
Mathematical optimization, Computer science, Strategy and Management, Data envelopment analysis, Portfolio, Electrical and Electronic Engineering, Selection (genetic algorithm) - Published
- 2023
4. A novel pure data-selection framework for day-ahead wind power forecasting
- Author
-
Jiancheng Qin, Zili Zhang, Ying Chen, Jingjing Zhao, and Hua Li
- Subjects
Multidisciplinary, Computer science, Wind power forecasting, Numerical weather prediction, Computer experiment, Wind speed, Metamodeling, Support vector machine, Pure Data, Data mining, Selection (genetic algorithm) - Abstract
Numerical weather prediction (NWP) data possess internal inaccuracies, such as low NWP wind speed corresponding to high actual wind power generation. This study aims to reduce the negative effects of such inaccuracies by proposing a pure data-selection framework (PDF) to choose useful data prior to modeling, thus improving the accuracy of day-ahead wind power forecasting. Briefly, we convert an entire NWP training dataset into many small subsets and then select the best subset combination via a validation set to build a forecasting model. Although a small subset can increase selection flexibility, it can also produce billions of subset combinations, resulting in computational issues. To address this problem, we incorporated metamodeling and optimization steps into PDF, proposing a design and analysis of computer experiments (DACE)-based metamodeling algorithm and a heuristic-exhaustive search optimization algorithm, respectively. Experimental results demonstrate that (1) it is necessary to select data before constructing a forecasting model; (2) using a smaller subset will likely increase selection flexibility, leading to a more accurate forecasting model; (3) PDF can generate a better training dataset than similarity-based data selection methods (e.g., K-means and support vector classification); and (4) choosing data before building a forecasting model produces a more accurate model than using a machine learning method to construct a model directly.
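The core idea of selecting training subsets against a validation set can be sketched with a toy greedy search (a simplified stand-in: the data, the cubic-polynomial "forecasting model", and the greedy loop are illustrative assumptions, not the paper's DACE metamodel or heuristic-exhaustive search):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for NWP wind-speed -> power data (illustrative only).
x = rng.uniform(0, 25, 600)
y = np.clip(x, 3, 15) ** 3 / 1000 + rng.normal(0, 0.5, 600)
y[::7] += rng.normal(0, 5, len(y[::7]))          # inject "inaccurate" samples

x_train, y_train = x[:480], y[:480]
x_val, y_val = x[480:], y[480:]

def fit_and_score(xs, ys):
    """Fit a cubic polynomial and return its validation MSE."""
    coef = np.polyfit(xs, ys, 3)
    pred = np.polyval(coef, x_val)
    return np.mean((pred - y_val) ** 2)

# Split the training set into small subsets, then greedily add whichever
# subset most improves validation error (a cheap proxy for searching
# over subset combinations).
subsets = np.array_split(np.arange(len(x_train)), 16)
chosen = []
best_mse = np.inf
improved = True
while improved:
    improved = False
    for s in subsets:
        if any(s is c for c in chosen):
            continue
        idx = np.concatenate(chosen + [s])
        mse = fit_and_score(x_train[idx], y_train[idx])
        if mse < best_mse:
            best_mse, best_subset, improved = mse, s, True
    if improved:
        chosen.append(best_subset)

full_mse = fit_and_score(x_train, y_train)
print(f"all data: {full_mse:.3f}, selected subsets: {best_mse:.3f}")
```

The greedy pass makes the combinatorial problem tractable, which is the same motivation the abstract gives for replacing exhaustive subset enumeration with metamodeling and heuristic search.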
- Published
- 2023
5. Deciding What to Replicate
- Author
-
Daniel Lakens, Ivan Ropovik, Marco Perugini, Joachim I. Krueger, Marek A. Vranka, Anna van 't Veer, K. Andrew DeSoto, Peder M. Isager, Robbie C. M. van Aert, Roger Giner-Sorolla, Mark J. Brandt, Štěpán Bahník, Human Technology Interaction, and Department of Methodology and Statistics
- Subjects
Expected utility, Computer science, Decision theory, Replication, Replication value, Replication (computing), Resource (project management), Risk analysis (engineering), Study selection, Psychology (miscellaneous), Function (engineering), Decision model, Selection (genetic algorithm), Expected utility hypothesis, Causal model - Abstract
Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which of all the candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim and (b) the uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built.
Translational Abstract: Replication, that is, redoing a study using the same procedures, is an important part of checking the robustness of claims in the psychological literature. The practice of replicating original studies has been woefully devalued for many years, but this is now changing. Recent calls for improving the quality of research in psychology have generated a surge of interest in funding, conducting, and publishing replication studies. Because many studies have never been replicated, and researchers have limited time and money to perform replication studies, researchers must decide which studies are the most important to replicate. This way scientists learn the most, given limited resources. In this article, we lay out what it means to think about what is the most important thing to replicate, and we propose a general decision rule for picking a study to replicate. That rule depends on a concept we call replication value. Replication value is a function of the importance of a study and how uncertain we are about its findings. In this article we explain how researchers can think precisely about the value of replication studies. We then discuss when and how it makes sense to use replication value as a measure of how valuable a replication study would be, and we discuss factors that funders, journals, or scientists could consider when determining how valuable a replication study is.
- Published
- 2023
6. Efficient Decomposition Selection for Multi-class Classification
- Author
-
Zeyi Wen, Bingsheng He, Yawen Chen, and Jian Chen
- Subjects
Multiclass classification, Distribution (mathematics), Computational Theory and Mathematics, Degree (graph theory), Computer science, Decomposition (computer science), Decomposition method (constraint satisfaction), Divergence (statistics), Algorithm, Selection (genetic algorithm), Computer Science Applications, Information Systems - Abstract
Choosing a decomposition method for multi-class classification involves an important trade-off between efficiency and predictive accuracy. Trying all the decomposition methods to find the best one is too time-consuming for many applications, while choosing the wrong one may result in a large loss in predictive accuracy. In this paper, we propose an automatic decomposition method selection approach called "D-Chooser", which is lightweight and can choose the best decomposition method accurately. D-Chooser is equipped with our proposed difficulty index, which consists of sub-metrics including distribution divergence, overlapping regions, unevenness degree and the relative size of the solution space. The difficulty index has two intriguing properties: 1) it is fast to compute and 2) it measures multi-class problems comprehensively. Extensive experiments on real-world multi-class problems show that D-Chooser achieves an accuracy of 83.3% in choosing the best decomposition method. It can choose the best method in just a few seconds, whereas verifying the effectiveness of a decomposition method with existing approaches often takes a few hours. We also provide case studies on Kaggle competitions, and the results confirm that D-Chooser is able to choose a better decomposition method than the winning solutions.
- Published
- 2023
7. Niche Specificity, Polygeny, and Pleiotropy in Herbivorous Insects
- Author
-
Matthew L. Forister and Nate B. Hardy
- Subjects
Pleiotropy, Evolutionary biology, Specialization (functional), Niche, Population, Genetic model, Biology, Allele, Generalist and specialist species, Selection (genetic algorithm), Ecology, Evolution, Behavior and Systematics - Abstract
What causes host-use specificity in herbivorous insects? Population genetic models predict specialization when habitat preference can evolve and there is antagonistic pleiotropy at a performance-affecting locus. But empirically for herbivorous insects, host-use performance is governed by many genetic loci, and antagonistic pleiotropy seems to be rare. Here, we use individual-based quantitative genetic simulation models to investigate the role of pleiotropy in the evolution of sympatric host-use specialization when performance and preference are quantitative traits. We look first at pleiotropies affecting only host-use performance. We find that when the host environment changes slowly the evolution of host-use specialization requires levels of antagonistic pleiotropy much higher than what has been observed in nature. On the other hand, with rapid environmental change or pronounced asymmetries in productivity across host species, the evolution of host-use specialization readily occurs without pleiotropy. When pleiotropies affect preference as well as performance, even with slow environmental change and host species of equal productivity, we observe fluctuations in host-use breadth, with mean specificity increasing with the pervasiveness of antagonistic pleiotropy. So, our simulations show that pleiotropy is not necessary for specialization, although it can be sufficient, provided it is extensive or multifarious.
- Published
- 2023
8. Insights into the genetic covariation between harvest survival and growth rate in olive flounder (Paralichthys olivaceus) under commercial production environment
- Author
-
Yangzhen Li, Weiwei Zheng, Yingming Yang, and Yuanri Hu
- Subjects
Ecology, Paralichthys, Biology, Population, Aquatic Science, Heritability, Selective breeding, Genetic correlation, Olive flounder, Animal science, Growth rate, Ecology, Evolution, Behavior and Systematics, Selection (genetic algorithm) - Abstract
In aquaculture, selective breeding for survival to harvest has become an alternative strategy for improving disease resistance and production. However, knowledge of the genetic parameters of harvest survival, e.g., its heritability and its genetic correlations with growth rate traits, is still scarce. The aim of this study was to estimate genetic parameters for harvest survival and growth rate traits under commercial farming conditions in olive flounder (Paralichthys olivaceus). Harvest survival was defined as a binary trait; growth was measured as average daily gain (ADG), specific growth rate (SGR), daily growth coefficient (DGC) and body weight (BW). Data from a population of 241 full-sib families (39,904 individuals across four generations) were used. Heritabilities of survival were low but significant: 0.15 ± 0.04 on the observed scale and 0.22 ± 0.01 on the underlying scale. Heritability estimates for ADG, SGR and DGC were medium to high (0.33 ± 0.06, 0.83 ± 0.07 and 0.58 ± 0.07, respectively), whereas the heritability of BW was low (0.17 ± 0.08). The genetic correlations between harvest survival and the three growth rate traits (ADG, SGR and DGC) were strong (ranging from 0.66 to 0.79), whereas the genetic correlation between harvest survival and BW was much lower (0.17 ± 0.08). These results suggest that selection for harvest survival would produce a concomitant increase in growth rate, and vice versa. Our findings provide novel insights into the genetic improvement of growth rate and harvest survival through genetic selection in olive flounder.
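The distinction between observed-scale and underlying (liability)-scale heritability for a binary trait such as harvest survival is conventionally handled with the transformation of Dempster and Lerner (a standard formula; whether the authors used exactly this variant is an assumption):

```latex
h^2_{\text{liability}} \;=\; h^2_{\text{observed}} \cdot \frac{p\,(1-p)}{z^2}
```

where \(p\) is the observed survival proportion and \(z\) is the height of the standard normal density at the liability threshold that cuts off proportion \(p\). Because \(p(1-p)/z^2 > 1\) for realistic survival rates, the liability-scale estimate (0.22) exceeds the observed-scale estimate (0.15), consistent with the values quoted above.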
- Published
- 2023
9. Interchromosomal linkage disequilibrium and linked fitness cost loci associated with selection for herbicide resistance
- Author
-
Anah Soble, Megan L. Van Etten, Regina S. Baucom, Sonal Gupta, Alex Harkess, and Jim Leebens-Mack
- Subjects
Whole genome sequencing, Genetics, Genetic hitchhiking, Linkage disequilibrium, Physiology, Epistasis, Plant Science, Biology, Adaptation, Allele, Gene, Selection (genetic algorithm) - Abstract
The adaptation of weedy plants to herbicide is both a significant problem in agriculture and a model for the study of rapid adaptation under regimes of strong selection. Despite recent advances in our understanding of simple genetic changes that lead to resistance, a significant gap remains in our knowledge of resistance controlled by many loci and of the evolutionary factors that influence the maintenance of resistance over time. Here, we perform a multi-level analysis involving whole genome sequencing and assembly, resequencing and gene expression analysis both to uncover putative loci involved in nontarget-site herbicide resistance (NTSR) and to examine the evolutionary forces underlying the maintenance of resistance in natural populations. We found loci involved in herbicide detoxification, stress sensing, and alterations in the shikimate pathway to be under selection, and confirmed that detoxification is responsible for glyphosate resistance using a functional assay. Furthermore, we found interchromosomal linkage disequilibrium (ILD), most likely associated with epistatic selection, to influence NTSR loci found on separate chromosomes, thus potentially mediating resistance through generations. Additionally, by combining the selection screen, differential expression and LD analyses, we identified fitness cost loci that are strongly linked to resistance alleles, indicating the role of genetic hitchhiking in maintaining the cost. Overall, our work strongly suggests that NTSR glyphosate resistance in I. purpurea is conferred by multiple genes which are maintained through generations via ILD, and that the fitness cost associated with resistance in this species is a by-product of genetic hitchhiking.
- Published
- 2023
10. Lean Six Sigma Project Selection in a Manufacturing Environment Using Hybrid Methodology Based on Intuitionistic Fuzzy MADM Approach
- Author
-
Jose Arturo Garza-Reyes, Rajeev Rathi, Mahipal Singh, and Jiju Antony
- Subjects
Operations research, Computer science, Strategy and Management, Six Sigma, TOPSIS, Robustness (computer science), Credibility, Entropy (information theory), Electrical and Electronic Engineering, Lean Six Sigma, Reliability (statistics), Selection (genetic algorithm) - Abstract
Project selection has a critical role in the successful execution of a lean six sigma (LSS) program in any industry. The poor selection of LSS projects leads to limited results and diminishes the credibility of LSS initiatives. For this reason, in this article, we propose a method for the assessment and effective selection of LSS projects. Intuitionistic fuzzy sets based on the weighted average were adopted for aggregating the individual suggestions of decision makers. The weights of the selection criteria were computed using entropy measures, and the available projects were prioritized using multiattribute decision making approaches, i.e., modified TOPSIS and VIKOR. The proposed methodology is validated through a case example of LSS project selection in a manufacturing organization. The results of the case study reveal that, out of eight LSS projects, the assembly section (A8) is the best LSS project: A8 is the ideal LSS project for swift gains and manufacturing sustainability. The robustness and reliability of the obtained results are checked through a sensitivity analysis. The proposed methodology will help manufacturing organizations select the best opportunities in complex situations, resulting in sustainable development. Engineering managers and LSS consultants can also adopt the proposed methodology for LSS project selection.
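The entropy-weighting and TOPSIS steps mentioned above can be sketched for a crisp (non-fuzzy) decision matrix (a simplified illustration with made-up scores; the article's intuitionistic fuzzy aggregation and its modified TOPSIS/VIKOR details are not reproduced here):

```python
import numpy as np

# Rows = candidate LSS projects, columns = benefit criteria (illustrative).
X = np.array([
    [7.0, 5.0, 8.0],
    [6.0, 9.0, 4.0],
    [8.0, 6.0, 7.0],
    [5.0, 7.0, 6.0],
])

# 1) Entropy weights: criteria whose scores vary more carry more information.
P = X / X.sum(axis=0)
m = len(X)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1 - entropy) / (1 - entropy).sum()

# 2) TOPSIS: rank by relative closeness to the ideal solution.
V = w * X / np.linalg.norm(X, axis=0)        # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria treated as benefit
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

best = int(np.argmax(closeness))
print("closeness:", np.round(closeness, 3), "best project index:", best)
```

Cost criteria would use the column minimum as the ideal; the article additionally carries intuitionistic fuzzy memberships through these steps before ranking.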
- Published
- 2023
11. Security-reliability tradeoff of MIMO TAS/SC networks using harvest-to-jam cooperative jamming methods with random jammer location
- Author
-
Ha Duy Hung, Tran Trung Duy, Le-Tien Thuong, and Pham Minh Nam
- Subjects
Computer Networks and Communications, Computer science, Reliability (computer networking), MIMO, Jamming, Artificial Intelligence, Hardware and Architecture, Wireless, Antenna (radio), Algorithm, Software, Selection (genetic algorithm), Energy (signal processing), Information Systems, Rayleigh fading - Abstract
This paper evaluates the outage probability (OP) and intercept probability (IP) of physical-layer-security-based MIMO networks adopting cooperative jamming (Coop-Jam). In the considered scenario, a multi-antenna source communicates with a multi-antenna destination employing transmit antenna selection (TAS)/selection combining (SC), in the presence of a multi-antenna eavesdropper using SC. One of the jammers located near the destination is selected to generate jamming noise against the eavesdropper. Moreover, the destination wirelessly supplies energy to the chosen jammer and cooperates with it to remove the jamming noise. We consider two jammer selection approaches, named RAND and SHORT: in RAND, the destination selects the jammer at random, while in SHORT, the jammer nearest to the destination is chosen. We derive exact and asymptotic expressions for OP and IP over Rayleigh fading, and perform Monte-Carlo simulations to verify the correctness of our derivations. The results demonstrate the advantages of the proposed RAND and SHORT methods compared with the corresponding scheme without Coop-Jam.
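The TAS/SC outage behaviour over Rayleigh fading can be checked with a short Monte-Carlo sketch (illustrative parameters; the jammer, energy harvesting, and eavesdropper link of the paper's system model are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_prob(n_tx: int, n_rx: int, snr_db: float, rate: float,
                trials: int = 200_000) -> float:
    """Outage probability of TAS/SC over i.i.d. Rayleigh fading.

    TAS at the source plus SC at the destination keeps only the single
    best of the n_tx * n_rx channel gains, so the effective gain is the
    maximum of n_tx * n_rx unit-mean exponential random variables.
    """
    snr = 10 ** (snr_db / 10)
    gains = rng.exponential(1.0, size=(trials, n_tx * n_rx)).max(axis=1)
    capacity = np.log2(1 + snr * gains)
    return float(np.mean(capacity < rate))

# More antennas -> steeper diversity slope -> lower outage probability.
op_1x1 = outage_prob(1, 1, snr_db=10, rate=2.0)
op_2x2 = outage_prob(2, 2, snr_db=10, rate=2.0)
print(op_1x1, op_2x2)
```

The simulated values can be checked against the closed form P_out = (1 - exp(-(2^R - 1)/SNR))^(n_tx * n_rx), which is the kind of exact expression the paper derives before adding jamming.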
- Published
- 2023
12. Projects Selection In Knapsack Problem By Using Artificial Bee Colony Algorithm
- Author
-
Armaneesa Naaman Hasoon
- Subjects
Mathematical optimization, Computer science, Combinatorial optimization problem, Investment plan, General Medicine, Field (computer science), Artificial bee colony algorithm, Knapsack problem, Genetic algorithm, MATLAB, Selection (genetic algorithm) - Abstract
The knapsack problem is a combinatorial optimization problem that aims to maximize the benefit of the selected objects while keeping their total weight within the capacity of the knapsack. This paper applies the artificial bee colony algorithm to select a subset of projects, represented as a knapsack problem, in order to build the investment plan that achieves the highest profit within a given cost budget; such planning is a typical application in the financial field. Results from the proposed algorithm, implemented in MATLAB 8.3, show that it finds the best solution more precisely and rapidly than a genetic algorithm. http://dx.doi.org/10.25130/tjps.23.2018.039
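The underlying 0/1 knapsack formulation can be solved exactly by dynamic programming for small instances, which gives the optimum that metaheuristics such as the artificial bee colony try to reach (a generic sketch with made-up profit/cost numbers, not the paper's ABC implementation):

```python
def knapsack(values: list, weights: list, capacity: int) -> int:
    """Exact 0/1 knapsack via dynamic programming over capacities."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Example project-selection instance: profits, costs, and a cost budget.
profits = [60, 100, 120]
costs = [10, 20, 30]
budget = 50
print(knapsack(profits, costs, budget))  # items 2 and 3 fit: 100 + 120 = 220
```

The DP runs in O(n * capacity) time, which is why population-based heuristics become attractive when the number of projects or the budget grows large.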
- Published
- 2023
13. Guiding secondary school students during task selection
- Author
-
Liesbeth Kester, Jeroen J. G. van Merriënboer, Michelle L. Nugteren, Halszka Jarodzka, Department of Online Learning and Instruction, RS-Theme Cognitive Processes in Education, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
STRATEGIES, feedback, Conformity, Education, Task (project management), strategic guidance, Task selection, Cognitive skill, Selection (genetic algorithm), feed forward, ADVICE, SKILL, PERFORMANCE, Computer Science Applications, SELF-REGULATION, procedural guidance, Psychology, Cognitive psychology - Abstract
Secondary school students often learn new cognitive skills by practicing with tasks that vary in difficulty, amount of support and/or content. Occasionally, they have to select these tasks themselves. Studies on task-selection guidance investigated either procedural guidance (specific rules for selecting tasks) or strategic guidance (general rules and explanations for task selection), but never directly compared them. Experiment 1 aimed to replicate these studies by comparing procedural guidance and strategic guidance to a no-guidance condition, in an electronic learning environment in which participants practiced eight self-selected tasks. Results showed no differences in selected tasks during practice and domain-specific skill acquisition between the experimental groups. A possible explanation for this is an ineffective combination of feedback and feed forward (i.e. the task-selection advice). The second experiment compared inferential guidance (which combines procedural feedback with strategic feed forward), to a no-guidance condition. Results showed that participants selected more difficult, less-supported tasks after receiving inferential guidance than after no guidance. Differences in domain-specific skill acquisition were not significant, but higher conformity to inferential guidance did significantly predict higher domain-specific skill acquisition. Hence, we conclude that inferential guidance can positively affect task selections and domain-specific skill acquisition, but only when conformity is high.
- Published
- 2023
14. Choose Appropriate Subproblems for Collaborative Modeling in Expensive Multiobjective Optimization
- Author
-
Zhenkun Wang, Yew-Soon Ong, Qingfu Zhang, Haitao Liu, Shunyu Yao, and Jianping Luo
- Subjects
Mathematical optimization, Computer science, Multi-objective optimization, Computer Science Applications, Human-Computer Interaction, Multiobjective optimization problem, Control and Systems Engineering, Benchmark (computing), Leverage (statistics), Electrical and Electronic Engineering, Function (engineering), Gaussian process, Software, Selection (genetic algorithm), Information Systems - Abstract
In dealing with expensive multiobjective optimization problems, some algorithms convert the problem into a number of single-objective subproblems for optimization. At each iteration, these algorithms conduct surrogate-assisted optimization on one or multiple subproblems. However, some of these subproblems may be unnecessary or already resolved, and operating on them can cause severe inefficiencies, especially in the case of expensive optimization. To overcome this shortcoming, we propose an adaptive subproblem selection (ASS) strategy to identify the most promising subproblems for further modeling. To better leverage the cross information between the subproblems, we use a collaborative multioutput Gaussian process (CoMOGP) surrogate to model them jointly. Moreover, the commonly used acquisition functions (also known as infill criteria) are investigated in this article. Our analysis reveals that these acquisition functions may cause severe imbalances between exploitation and exploration in multiobjective optimization scenarios. Consequently, we develop a new acquisition function, namely the adaptive lower confidence bound (ALCB), to cope with this. The experimental results on three different sets of benchmark problems indicate that our proposed algorithm is competitive. Beyond that, we also quantitatively validate the effectiveness of the ASS strategy, the CoMOGP model, and the ALCB acquisition function.
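A lower-confidence-bound acquisition of the kind discussed above can be sketched as follows (for minimization; the fixed kappa values are illustrative stand-ins for the article's adaptive ALCB schedule, which is not reproduced here):

```python
import numpy as np

def lcb(mean: np.ndarray, std: np.ndarray, kappa: float) -> np.ndarray:
    """Lower confidence bound: small values are promising when minimizing.

    kappa trades off exploitation (small predicted mean) against
    exploration (large predictive standard deviation).
    """
    return mean - kappa * std

# Surrogate predictions at three candidate points (illustrative numbers).
mu = np.array([0.9, 1.0, 1.4])
sigma = np.array([0.05, 0.40, 0.90])

for kappa in (0.1, 1.0, 3.0):
    pick = int(np.argmin(lcb(mu, sigma, kappa)))
    print(f"kappa={kappa}: evaluate candidate {pick}")
```

Small kappa favors the candidate with the best mean; large kappa shifts the choice to the most uncertain candidate, which is exactly the exploitation/exploration imbalance an adaptive schedule tries to control.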
- Published
- 2023
15. Efficient Performance Technical Selection of Positive Buck-Boost Converter
- Author
-
Abadal-Salam T. Hussain and Ahmed K. Abbas
- Subjects
Computer science, Boost converter, Selection (genetic algorithm), Reliability engineering - Abstract
The need for a stable DC voltage in both portable and fixed systems has grown considerably in recent years. Such systems must be implemented efficiently so that they respond rapidly to voltage variations, and efficient power conversion can extend the lifetime of the batteries they employ. This efficiency is achieved by combining buck and boost stages into what is called a positive buck-boost converter. The main functions of the positive buck-boost converter are to keep the output voltage polarity unchanged and to regulate the output voltage level relative to the input. The positive buck-boost converter circuit was simulated in Proteus software, and the hardware was also built and tested. Finally, the microcontroller employed in the proposed system is a PIC16F877A, which senses the input voltage and generates pulse-width modulation (PWM) signals to drive the MOSFET.
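In continuous conduction mode, the ideal non-inverting (positive) buck-boost converter relates output to input voltage through the PWM duty cycle \(D\) (a textbook relation, not a result specific to the article's circuit):

```latex
\frac{V_{\text{out}}}{V_{\text{in}}} \;=\; \frac{D}{1 - D}
```

so the converter bucks for \(D < 0.5\) and boosts for \(D > 0.5\) while preserving the output polarity, which is the regulation behaviour the microcontroller's PWM signal controls.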
- Published
- 2022
16. Novel Single-Valued Neutrosophic Combined Compromise Solution Approach for Sustainable Waste Electrical and Electronics Equipment Recycling Partner Selection
- Author
-
Arunodaya Raj Mishra and Pratibha Rani
- Subjects
Risk analysis (engineering), Robustness (computer science), Computer science, Strategy and Management, Similarity (psychology), Stability (learning theory), Context (language use), Sensitivity (control systems), Electronics, Electrical and Electronic Engineering, Similarity measure, Selection (genetic algorithm) - Abstract
Waste electrical and electronics equipment (WEEE) recyclers have become a boon for countries, as they help reduce carbon emissions by recycling WEEE in the most ecofriendly way. The evaluation and choice of an optimal WEEE recycling partner is a significant and multifaceted decision for managerial experts because of the involvement of several qualitative and quantitative criteria. Thus, the aim of this article is to propose a novel methodology integrating a combined compromise solution approach and a similarity measure within the context of single-valued neutrosophic sets (SVNSs), which is then used to solve the decision-making problem. In this approach, the criteria weights are evaluated by a new procedure based on a similarity measure. For this, a novel similarity measure is introduced for SVNSs, and its efficiency over existing similarity measures is demonstrated. To investigate the efficiency and practicality of the introduced approach, a case study of WEEE recycling partner selection is conducted in the SVNS environment. Finally, comparison and sensitivity analyses are performed to check the robustness and stability of the developed methodology. The outcomes of this article show that the developed approach is suitable and consistent with existing approaches.
- Published
- 2022
17. Conditional Joint Distribution-Based Test Selection for Fault Detection and Isolation
- Author
-
Yang Li, Xiuli Wang, Ningyun Lu, and Bin Jiang
- Subjects
Mathematical optimization, Dependency (UML), Computer science, Fault (power engineering), Fault detection and isolation, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Joint probability distribution, Bernoulli distribution, Electrical network, Measurement uncertainty, Electrical and Electronic Engineering, Algorithms, Software, Selection (genetic algorithm), Information Systems - Abstract
Data-driven fault detection and isolation (FDI) depends on complete, comprehensive, and accurate fault information. Optimal test selection can substantially improve the information available for FDI and reduce the detection cost and the maintenance cost of engineering systems. Considerable effort has been devoted to modeling the test selection problem (TSP), but few studies have considered the impact of measurement uncertainty and fault occurrence. In this article, a conditional joint distribution (CJD)-based test selection method is proposed to construct an accurate TSP model. In addition, we propose a deep copula function that can describe the dependency among the tests. Afterward, an improved discrete binary particle swarm optimization (IBPSO) algorithm is proposed to deal with the TSP. Finally, an application to an electrical circuit is used to illustrate the efficiency of the proposed method over two available methods: 1) joint distribution-based IBPSO and 2) Bernoulli distribution-based IBPSO.
- Published
- 2022
18. A Multiobjective Framework for Many-Objective Optimization
- Author
-
Si-Chen Liu, Jun Zhang, Kay Chen Tan, and Zhi-Hui Zhan
- Subjects
Mathematical optimization, Optimization problem, Computer science, Pareto principle, Space (commercial competition), Evolutionary computation, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Differential evolution, Convergence (routing), Electrical and Electronic Engineering, Cluster analysis, Software, Selection (genetic algorithm), Information Systems - Abstract
Many-objective optimization problems (MaOPs) often face difficulty maintaining good diversity and convergence in the search process due to the high-dimensional objective space. To address this issue, this article proposes a novel multiobjective framework for many-objective optimization (Mo4Ma), which transforms the many-objective space into a multiobjective space. First, the many objectives are transformed into two indicative objectives of convergence and diversity. Second, a clustering-based sequential selection strategy is put forward in the transformed multiobjective space to guide the evolutionary search process. Specifically, selection is performed circularly on the clustered subpopulations to maintain population diversity. In each round of selection, solutions with good performance in the transformed multiobjective space are chosen to improve overall convergence. Mo4Ma is a generic framework into which any type of evolutionary computation algorithm can be incorporated. In this article, differential evolution (DE) is adopted as the optimizer in the Mo4Ma framework, resulting in the Mo4Ma-DE algorithm. Experimental results show that the Mo4Ma-DE algorithm can obtain well-converged and widely distributed Pareto solutions along the many-objective Pareto sets of the original MaOPs. Compared with seven state-of-the-art MaOP algorithms, the proposed Mo4Ma-DE algorithm shows strong competitiveness and generally better performance.
- Published
- 2022
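The two-objective transformation at the heart of Mo4Ma can be illustrated with a minimal sketch. The indicator definitions below (convergence as distance to the ideal point, diversity as negated distance to the nearest neighbor in objective space) are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def transform_to_two_objectives(F):
    """Map an (n_solutions, n_objectives) matrix F into two indicative
    objectives, both to be minimized: a convergence indicator (distance to
    the ideal point) and a diversity indicator (negated distance to the
    nearest other solution, so crowded solutions score worse)."""
    F = np.asarray(F, dtype=float)
    ideal = F.min(axis=0)                      # component-wise best values
    convergence = np.linalg.norm(F - ideal, axis=1)
    # pairwise distances in objective space
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    diversity = -d.min(axis=1)
    return np.column_stack([convergence, diversity])

F = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0], [1.1, 2.1, 3.1]]
G = transform_to_two_objectives(F)
```

A selection scheme could then cluster solutions in this two-dimensional space and pick, per cluster, those with the best transformed objectives.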
19. Deep Reinforcement Learning for Solving the Heterogeneous Capacitated Vehicle Routing Problem
- Author
-
Jingwen Li, Zhiguang Cao, Wen Song, Andrew Lim, Ruize Gao, Jie Zhang, and Yining Ma
- Subjects
Computer Science - Machine Learning ,Mathematical optimization ,Computer science ,Heuristic ,Node (networking) ,String (computer science) ,Computer Science Applications ,Rendering (computer graphics) ,Human-Computer Interaction ,Control and Systems Engineering ,Vehicle routing problem ,Reinforcement learning ,Electrical and Electronic Engineering ,Heuristics ,Mathematics - Optimization and Control ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
Existing deep reinforcement learning (DRL) based methods for solving the capacitated vehicle routing problem (CVRP) intrinsically cope with a homogeneous vehicle fleet, in which the fleet is assumed to be repetitions of a single vehicle. Hence, the key to constructing a solution lies solely in the selection of the next node (customer) to visit, excluding the selection of a vehicle. However, vehicles in real-world scenarios are likely to be heterogeneous, with different characteristics that affect their capacity (or travel speed), rendering existing DRL methods less effective. In this paper, we tackle the heterogeneous CVRP (HCVRP), where vehicles are mainly characterized by different capacities. We consider both min-max and min-sum objectives for the HCVRP, which aim to minimize the longest or total travel time of the vehicle(s) in the fleet. To solve these problems, we propose a DRL method based on the attention mechanism with a vehicle selection decoder accounting for the heterogeneous fleet constraint and a node selection decoder accounting for the route construction, which learns to construct a solution by automatically selecting both a vehicle and a node for this vehicle at each step. Experimental results based on randomly generated instances show that, with desirable generalization to various problem sizes, our method outperforms the state-of-the-art DRL method and most of the conventional heuristics, and also delivers competitive performance against the state-of-the-art heuristic method, i.e., SISR. Additionally, the results of extended experiments demonstrate that our method is also able to solve CVRPLib instances with satisfactory performance. (Comment: this paper has been accepted at IEEE Transactions on Cybernetics.)
- Published
- 2022
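The min-max and min-sum HCVRP objectives described above are straightforward to state in code. This is a hedged sketch with illustrative names (`routes`, `dist`, `speeds`); vehicle heterogeneity is modeled only through per-vehicle speeds:

```python
def route_travel_times(routes, dist, speeds):
    """Travel time of each vehicle's route (depot = node 0), where each
    vehicle may have its own speed."""
    times = []
    for route, v in zip(routes, speeds):
        tour = [0] + list(route) + [0]          # leave and return to depot
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        times.append(length / v)
    return times

def min_max_objective(times):
    """Longest travel time of any vehicle in the fleet."""
    return max(times)

def min_sum_objective(times):
    """Total travel time over the whole fleet."""
    return sum(times)
```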
20. Conservation and Convergence of Genetic Architecture in the Adaptive Radiation of Anolis Lizards
- Author
-
Megan E. Kobiela, Jonathan B. Losos, Jason J. Kolbe, Joel W. McGlothlin, Edmund D. Brodie, and Helen V. Wright
- Subjects
Constraint (information theory) ,biology ,Phylogenetic tree ,Evolutionary biology ,Phylogenetics ,Adaptive radiation ,Convergence (relationship) ,biology.organism_classification ,Anolis ,Genetic architecture ,Selection (genetic algorithm) ,Ecology, Evolution, Behavior and Systematics - Abstract
The G matrix, which quantifies the genetic architecture of traits, is often viewed as an evolutionary constraint. However, G can evolve in response to selection and may also be viewed as a product of adaptive evolution. Convergent evolution of G in similar environments would suggest that G evolves adaptively, but it is difficult to disentangle such effects from phylogeny. Here, we use the adaptive radiation of Anolis lizards to ask whether convergence of G accompanies the repeated evolution of habitat specialists, or ecomorphs, across the Greater Antilles. We measured G in seven species representing three ecomorphs (trunk-crown, trunk-ground, and grass-bush). We found that the overall structure of G does not converge. Instead, the structure of G is well conserved and displays a phylogenetic signal consistent with Brownian motion. However, several elements of G showed signatures of convergence, indicating that some aspects of genetic architecture have been shaped by selection. Most notably, genetic correlations between limb traits and body traits were weaker in long-legged trunk-ground species, suggesting effects of recurrent selection on limb length. Our results demonstrate that common selection pressures may have subtle but consistent effects on the evolution of G, even as its overall structure remains conserved.
- Published
- 2022
21. Detecting Compiler Warning Defects Via Diversity-Guided Program Mutation
- Author
-
Zhilei Ren, Weiqiang Kong, Xiaochen Li, Zhide Zhou, He Jiang, and Yixuan Tang
- Subjects
Dead code ,Programming language ,Computer science ,Mutation (genetic algorithm) ,Program compilation ,Test program ,Compiler ,Construct (python library) ,computer.software_genre ,Abstract syntax tree ,computer ,Software ,Selection (genetic algorithm) - Abstract
Compiler diagnostic warnings help developers identify potential programming mistakes during program compilation. However, these warnings could be erroneous due to the defects of compiler warning diagnostics. Although many techniques have been proposed to automatically generate test programs for compiler warning defect detection, the effectiveness of these techniques on defect-finding is still limited, due to their limited ability to generate warning-sensitive test program structures. Therefore, in this paper, we propose a DIversity-guided PROgram Mutation approach, called DIPROM, to construct diverse warning-sensitive programs for effective compiler warning defect detection. Given a seed test program, DIPROM first removes its dead code to reduce false positive warning defects. Then, the abstract syntax tree (AST) of the test program is constructed; DIPROM iteratively mutates the structures of the AST to generate warning-sensitive program variants. To improve the diversity of program variants, DIPROM applies a novel diversity function to guide the selection of the best program variants in each iteration. With the selected program variants, differential testing is conducted to effectively detect warning defects in different compilers. In the experiments, we evaluate DIPROM with two popular C compilers (i.e., GCC and Clang). Experimental results show that DIPROM can detect 75.36% and 218.42% more warning defects than two state-of-the-art approaches (i.e., Epiphron and Csmith), respectively. Meanwhile, DIPROM is efficient, spending on average only 61.14% of the time of comparative approaches to find the same number of warning defects. Finally, we applied DIPROM to the latest development versions of GCC and Clang. After two months of running, DIPROM reported 12 new warning defects; nine of them have been confirmed/fixed by developers.
- Published
- 2022
22. Improving artificial bee colony algorithm using modified nearest neighbor sequence
- Author
-
Kai Li, Zhihua Cui, Hui Wang, Feng Wang, and Wenjun Wang
- Subjects
Artificial bee colony algorithm ,Sequence ,Roulette ,General Computer Science ,Computer science ,Process (computing) ,Benchmark (computing) ,Evolutionary algorithm ,Algorithm ,Selection (genetic algorithm) ,k-nearest neighbors algorithm - Abstract
Nearest neighbor (NN) is a simple machine learning algorithm, which is often used in classification problems. In this paper, a concept of modified nearest neighbor (MNN) is proposed to strengthen the optimization capability of artificial bee colony (ABC) algorithm. The new approach is called ABC based on modified nearest neighbor sequence (NNSABC). Firstly, MNN is used to construct solution sequences. Unlike the original roulette selection, NNSABC randomly chooses a solution from the corresponding nearest neighbor sequence to generate offspring. Then, two novel search strategies based on the modified nearest neighbor sequence are employed to build a strategy pool. In the optimization process, different search strategies are dynamically chosen from the strategy pool according to the current search status. In order to study the optimization capability of NNSABC, two benchmark sets including 22 classical problems and 28 CEC 2013 complex problems are tested. Experimental results show NNSABC obtains competitive performance when compared with twenty-three other ABCs and evolutionary algorithms.
- Published
- 2022
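The core departure of NNSABC from standard ABC, drawing the partner solution from a nearest-neighbor sequence rather than by roulette selection, can be sketched as follows. The neighborhood size `k` and plain Euclidean distance are assumptions for illustration:

```python
import random

def nearest_neighbor_sequence(foods, i, k):
    """Indices of the k food sources closest (squared Euclidean distance)
    to food source i."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(foods[i], foods[j])), j)
               for j in range(len(foods)) if j != i)
    return [j for _, j in d[:k]]

def pick_partner(foods, i, k, rng=random):
    """Instead of roulette selection over the whole swarm, draw the partner
    solution uniformly from the nearest-neighbor sequence of solution i."""
    return rng.choice(nearest_neighbor_sequence(foods, i, k))
```

The chosen partner would then feed the usual ABC search equation to generate an offspring candidate.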
23. Foraging for the self: Environment selection for agency inference
- Author
-
Kelsey Perrykkad, Jakob Hohwy, and Robinson Je
- Subjects
Computer science ,business.industry ,Self ,Foraging ,Inference ,Experimental and Cognitive Psychology ,Machine learning ,computer.software_genre ,Arts and Humanities (miscellaneous) ,Agency (sociology) ,Developmental and Educational Psychology ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) - Abstract
Sometimes agents choose to occupy environments that are neither traditionally rewarding nor worth exploring, but which rather promise to help minimise uncertainty related to what they can control. Selecting environments that afford inferences about agency seems a foundational aspect of environment selection dynamics – if an agent can’t form reliable beliefs about what they can and can’t control, then they can’t act efficiently to achieve rewards. This relatively neglected aspect of environment selection is important to study so that we can better understand why agents occupy certain environments over others – something that may also be relevant for mental and developmental conditions, such as autism. This online experiment investigates the impact of uncertainty about agency on the way participants choose to freely move between two environments, one that has greater irreducible variability and one that is more complex to model. We hypothesise that increasingly erroneous predictions about the expected outcome of agency-exploring actions can be a driver of switching environments, and we explore which type of environment agents prefer. Results show that participants actively switch between the two environments following increases in prediction error, and that the tolerance for prediction error before switching is modulated by individuals’ autism traits. Further, we find that participants more frequently occupy the variable environment, which is predicted by greater accuracy and higher confidence than the complex environment. This is the first online study to investigate relatively unconstrained ongoing foraging dynamics in support of judgements of agency, and in doing so represents a significant methodological advance.
- Published
- 2022
24. Un método para la selección de aves bioindicadoras con base en sus posibilidades de monitoreo
- Author
-
Marco Antonio, Luis Enrique Domínguez Velázquez, Jaqueline Guzmán Hernández, Martin Gómez, and Altamirano Gonzalez Ortega
- Subjects
monitoreo ,Geography ,Indicator species ,lcsh:Zoology ,método de selección ,Sampling (statistics) ,Forestry ,lcsh:QL1-991 ,General Medicine ,indicadores biológicos ,Chiapas ,Cartography ,Selection (genetic algorithm) - Abstract
Abstract: A method to select bird indicator species taking into account their monitoring possibilities. Based on criteria proposed by specialists to characterize terrestrial birds that respond to environmental changes, we designed a numeric matrix for the selection of indicator species, with the objective of recognizing those with the greatest monitoring potential. We assigned weighted values that allowed us to evaluate, through this matrix, each species recorded at sampling sites in an area of north-western Chiapas visited during 2002. Applying the matrix identified 14 species with monitoring potential out of a universe of 272 recorded species. The proposed method, besides taking into account most of the basic considerations for the selection of indicator species, can discriminate among the recorded species by using weighted numeric values. We describe the method and give some recommendations for its use.
- Published
- 2022
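The weighted selection matrix described above can be sketched as a simple scoring routine. All names, criteria, and the threshold rule are illustrative assumptions, not the authors' exact weighting scheme:

```python
def score_species(matrix, weights, threshold):
    """matrix: {species: {criterion: raw score}}; weights: {criterion: weight}.
    Returns (species meeting the monitoring threshold, ranked by descending
    weighted total, and the totals themselves)."""
    totals = {sp: sum(weights[c] * v for c, v in crit.items())
              for sp, crit in matrix.items()}
    selected = sorted((sp for sp, t in totals.items() if t >= threshold),
                      key=lambda sp: -totals[sp])
    return selected, totals
```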
25. Early Lessons Learned with the Independent IR Residency Selection Process: Similarities and Differences From the Vascular and Interventional Radiology Fellowship
- Author
-
M. Victoria Marx, Shantanu Warhadpande, Paul J. Rochon, S Sabri, Claire Kaufman, and Minhaj S. Khaja
- Subjects
medicine.medical_specialty ,Career Choice ,medicine.diagnostic_test ,Process (engineering) ,business.industry ,Internship and Residency ,Interventional radiology ,Radiology, Interventional ,United States ,Education, Medical, Graduate ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Fellowships and Scholarships ,business ,Selection (genetic algorithm) - Published
- 2022
26. Conspicuous consumption: A meta-analytic review of its antecedents, consequences, and moderators
- Author
-
Lalita A. Manrai, Ajay K. Manrai, Bipul Kumar, and Richard P. Bagozzi
- Subjects
Marketing ,Econometrics ,Conspicuous consumption ,Psychology ,Selection (genetic algorithm) ,Structural equation modeling ,Stock (geology) ,Independent research - Abstract
This paper documents a comprehensive theoretical framework that has been developed to understand conspicuous consumption behavior. The proposed framework identifies three antecedents and two consequences of conspicuous consumption. We tested hypotheses concerning this framework using a meta-analytic approach. We also meta-analytically tested the effect of contextual, methodological, and individual-level moderators on the relationship between conspicuous consumption and its consequences. Additionally, we examined the mediating role of conspicuous consumption behavior in the relationship between its antecedents and consequences using meta-analytic structural equation modeling. After an extensive literature search based on multiple selection criteria, we use 59 independent research studies and 97 unique effect sizes to test hypotheses. The findings theoretically contribute to the stock of knowledge on conspicuous consumption and provide new insights for practitioners.
- Published
- 2022
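A meta-analysis of the kind described pools effect sizes across studies. Below is a minimal fixed-effect (inverse-variance) pooling sketch; the authors' procedure is more elaborate (moderator analyses and meta-analytic structural equation modeling), so this only illustrates the basic pooling step:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect size and its
    standard error across studies."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se
```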
27. Hash Bit Selection Based on Collaborative Neurodynamic Optimization
- Author
-
Jun Wang, Sam Kwong, and Xinqi Li
- Subjects
Mathematical optimization ,Computer science ,Hash function ,Particle swarm optimization ,Binary number ,Models, Theoretical ,Computer Science Applications ,Human-Computer Interaction ,Cardinality ,Control and Systems Engineering ,Mutation (genetic algorithm) ,Astrophysics::Solar and Stellar Astrophysics ,Quadratic programming ,Electrical and Electronic Engineering ,Algorithms ,Software ,Selection (genetic algorithm) ,Information Systems ,Premature convergence - Abstract
Hash bit selection determines an optimal subset of hash bits from a candidate bit pool. It is formulated as a zero-one quadratic programming problem subject to binary and cardinality constraints. In this article, the problem is equivalently reformulated as a global optimization problem. A collaborative neurodynamic optimization (CNO) approach is applied to solve the problem by using a group of neurodynamic models initialized with particle swarm optimization iteratively in the CNO. Lévy mutation is used in the CNO to avoid premature convergence by ensuring initial state diversity. A theoretical proof is given to show that the CNO with the Lévy mutation operator is almost surely convergent to global optima. Experimental results are discussed to substantiate the efficacy and superiority of the CNO-based hash bit selection method to the existing methods on three benchmarks.
- Published
- 2022
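The underlying formulation, maximizing a zero-one quadratic objective under a cardinality constraint, can be approached greedily as a simple baseline; the collaborative neurodynamic approach in the article is far more sophisticated. The matrix `Q` here is an illustrative stand-in:

```python
import numpy as np

def greedy_bit_selection(Q, k):
    """Greedy baseline for: maximize x^T Q x subject to sum(x) = k,
    x in {0,1}^n. Diagonal entries of Q can encode per-bit quality and
    off-diagonal entries (negated) mutual redundancy."""
    Q = np.asarray(Q, dtype=float)
    n = len(Q)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            x = np.zeros(n)
            x[chosen + [i]] = 1.0               # tentative selection
            gain = x @ Q @ x
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return sorted(chosen)
```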
28. Handling Constrained Multiobjective Optimization Problems via Bidirectional Coevolution
- Author
-
Ke Tang, Bing-Chuan Wang, and Zhi-Zhong Liu
- Subjects
0209 industrial biotechnology ,education.field_of_study ,Mathematical optimization ,Computer science ,Feasible region ,Population ,Evolutionary algorithm ,Sorting ,Boundary (topology) ,02 engineering and technology ,Multi-objective optimization ,Computer Science Applications ,Human-Computer Interaction ,020901 industrial engineering & automation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,education ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
Constrained multiobjective optimization problems (CMOPs) involve both conflicting objective functions and various constraints. Due to the presence of constraints, CMOPs' Pareto-optimal solutions are very likely lying on constraint boundaries. The experience from the constrained single-objective optimization has shown that to quickly obtain such an optimal solution, the search should surround the boundary of the feasible region from both the feasible and infeasible sides. In this article, we extend this idea to cope with CMOPs and, accordingly, we propose a novel constrained multiobjective evolutionary algorithm with bidirectional coevolution, called BiCo. BiCo maintains two populations, that is: 1) the main population and 2) the archive population. To update the main population, the constraint-domination principle is equipped with an NSGA-II variant to move the population into the feasible region and then to guide the population toward the Pareto front (PF) from the feasible side of the search space. While for updating the archive population, a nondominated sorting procedure and an angle-based selection scheme are conducted in sequence to drive the population toward the PF within the infeasible region while maintaining good diversity. As a result, BiCo can get close to the PF from two complementary directions. In addition, to coordinate the interaction between the main and archive populations, in BiCo, a restricted mating selection mechanism is developed to choose appropriate mating parents. Comprehensive experiments have been conducted on three sets of CMOP benchmark functions and six real-world CMOPs. The experimental results suggest that BiCo can obtain quite competitive performance in comparison to eight state-of-the-art-constrained multiobjective evolutionary optimizers.
- Published
- 2022
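The constraint-domination principle used to update BiCo's main population can be sketched as a comparator. This is the standard formulation; details such as tie handling are assumptions:

```python
def violation(g):
    """Total constraint violation, with g_i(x) <= 0 meaning feasible."""
    return sum(max(0.0, gi) for gi in g)

def dominates(f1, f2):
    """Pareto dominance for minimization objectives."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def cdp_better(sol1, sol2):
    """Constraint-domination principle: a feasible solution beats an
    infeasible one; two infeasible solutions compare by violation; two
    feasible ones by Pareto dominance. sol = (objectives, constraints)."""
    f1, g1 = sol1
    f2, g2 = sol2
    v1, v2 = violation(g1), violation(g2)
    if v1 == 0 and v2 > 0:
        return True
    if v1 > 0 and v2 > 0:
        return v1 < v2
    if v1 > 0 and v2 == 0:
        return False
    return dominates(f1, f2)
```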
29. A Ranking Model for the Selection and Ranking of Commercial Off-the-Shelf Components
- Author
-
Rakesh Garg
- Subjects
Basis (linear algebra) ,Computer science ,Strategy and Management ,Fuzzy set ,computer.software_genre ,Fuzzy logic ,Ranking (information retrieval) ,Compatibility (mechanics) ,Data mining ,Electrical and Electronic Engineering ,Commercial off-the-shelf ,computer ,Selection (genetic algorithm) ,Reliability (statistics) - Abstract
In this article, a deterministic model for the selection and ranking of commercial off-the-shelf (COTS) components is developed on the basis of the fuzzy modified distance-based approach (FMDBA). The COTS selection and ranking problem is modeled as a multicriteria decision-making problem due to the involvement of multiple ranking criteria such as functionality, reliability, and compatibility. FMDBA combines fuzzy set theory with the modified distance-based approach. To show the working of the developed ranking model, a case study of an e-payment system is demonstrated, which aims at the selection and ranking of eight COTS components over four major categories of ranking criteria. FMDBA provides a comprehensive ranking of the components based on their calculated composite distance values. To demonstrate the applicability of the FMDBA method, the results obtained are also compared with existing decision-making methodologies.
- Published
- 2022
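A crisp (non-fuzzy) sketch of a distance-based ranking in the spirit of FMDBA follows; the actual method operates on fuzzy numbers, so the crisp formulation and all names below are simplifying assumptions:

```python
def composite_distances(scores, weights):
    """scores: {alternative: [criterion scores, larger is better]}.
    Weighted distance of each alternative from the ideal (criterion-wise
    best) profile; a smaller composite distance ranks higher."""
    ideal = [max(col) for col in zip(*scores.values())]
    d = {alt: sum(w * (i - s) ** 2
                  for w, i, s in zip(weights, ideal, vals)) ** 0.5
         for alt, vals in scores.items()}
    ranking = sorted(d, key=d.get)
    return ranking, d
```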
30. Application of Novel MCDM for Location Selection of Surface Water Treatment Plant
- Author
-
Sudipa Choudhury, Apu Kumar Saha, Mrinmoy Majumder, and Prasenjit Howladar
- Subjects
education.field_of_study ,Relation (database) ,Treated water ,Operations research ,Computer science ,Process (engineering) ,Strategy and Management ,Population ,Multiple-criteria decision analysis ,Surface water treatment ,Polynomial neural network ,Electrical and Electronic Engineering ,education ,Selection (genetic algorithm) - Abstract
Surface water treatment plants (SWTPs) are responsible for supplying treated water to urban or rural consumers to satisfy the proximal population's demand for drinking water. One important reason why an SWTP may fail is the poor selection of its location. This article proposes an automated decision-making framework, developed empirically, to identify a location objectively and cognitively, with consideration of all relevant indicators, selected in relation to their role in ensuring the optimality of SWTP performance. Thus, a new multicriteria decision-making method was developed to identify the most common and significant indicators and to develop an index that captures the feasibility of a given location for the installation of an SWTP. In addition to this method, another novel predictive model was developed, based on a polynomial neural network architecture and bagged modeling, to automate the feasibility assessment of the location. These two models were developed and utilized for the first time for the automatic assessment of location feasibility for an SWTP installation. The prototype application of the two new models concerned a few test locations of a peri-urban metro city in northeast India, where the results obtained from the model matched the real scenario for the test locations. The results encourage further application of this process.
- Published
- 2022
31. Multi criteria decision making through TOPSIS and COPRAS on drilling parameters of magnesium AZ91
- Author
-
G. Jayaprakash, N. Baskar, M. Bhuvanesh Kumar, M. Varatharajulu, and Muthukannan Duraiselvam
- Subjects
010302 applied physics ,Mathematical optimization ,Materials science ,Metals and Alloys ,Drilling ,TOPSIS ,02 engineering and technology ,Ideal solution ,021001 nanoscience & nanotechnology ,Multiple-criteria decision analysis ,01 natural sciences ,Cost reduction ,Mechanics of Materials ,0103 physical sciences ,Drill bit ,Minification ,0210 nano-technology ,Selection (genetic algorithm) - Abstract
Magnesium (Mg) alloys are extensively used in the automotive and aircraft industries due to their prominent properties. The selection of appropriate process parameters is an important decision because of its impact on cost reduction and quality improvement. This decision entails selecting suitable process parameters under various conflicting factors, so it has to be addressed with a Multiple Criteria Decision Making (MCDM) method. Therefore, this work addresses the MCDM problem through the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and COPRAS (COmplex PRoportional ASsessment) methods. The assessment was carried out on Mg AZ91 with a Solid Carbide (SC) drill bit. The dependent parameters (drilling time, burr height, burr thickness, and roughness) are considered together with the independent parameters (spindle speed and feed rate). Drilling alternatives are ranked using the two methods, and the results are evaluated. The optimum combination for simultaneous minimization of all the responses, found by both TOPSIS and COPRAS, is a spindle speed of 4540 rpm and a feed rate of 0.076 mm/rev. The two methods produced identical ranking orders. An empirical model was developed through a Box-Behnken design for each response; the model for drilling time is 3.959 times more accurate than the conventional equation. The trends of the dependent responses under varying independent parameters are not identical; these complex mechanisms are identified and reported. The optimized results of the Desirability Function Approach are in close accordance with the top ranks from TOPSIS and COPRAS. The confirmation results show little deviation, supporting the selection of the above independent parameters.
- Published
- 2022
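TOPSIS itself is a standard procedure and can be sketched compactly. The weights and criterion directions in the test below are illustrative, not the study's values:

```python
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria matrix; w: criterion weights;
    benefit[j]: True if criterion j is to be maximized, False if minimized.
    Returns the relative closeness to the ideal solution (larger is better)."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)          # vector normalisation
    V = R * w                                  # weighted normalised matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)
```

Ranking the alternatives by descending closeness reproduces the TOPSIS ordering used in the study.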
32. Challenges in KNN Classification
- Author
-
Shichao Zhang
- Subjects
Computer science ,business.industry ,Nearest neighbor search ,Sample (statistics) ,Resolution (logic) ,Machine learning ,computer.software_genre ,Computer Science Applications ,k-nearest neighbors algorithm ,Variety (cybernetics) ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Lazy learning ,Classification rule ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) ,Information Systems - Abstract
The KNN algorithm is one of the most popular data mining algorithms. It has been widely and successfully applied to data analysis applications across a variety of research topics in computer science. This paper illustrates that, despite its success, there remain many challenges in KNN classification, including K computation, nearest neighbor selection, nearest neighbor search and classification rules. Having established these issues, recent approaches to their resolution are examined in more detail, thereby providing a potential roadmap for ongoing KNN-related research, as well as some new classification rules regarding how to tackle the issue of training sample imbalance. To evaluate the new approaches, some experiments were conducted with 15 datasets.
- Published
- 2022
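The basic KNN classification rule discussed above can be sketched in a few lines. This uses brute-force search and an unweighted majority vote; efficient nearest-neighbor search and imbalance-aware rules, two of the challenges the paper raises, are deliberately not addressed here:

```python
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance, brute force)."""
    dist = sorted((sum((a - b) ** 2 for a, b in zip(p, x)), y)
                  for p, y in zip(X_train, y_train))
    votes = Counter(y for _, y in dist[:k])
    return votes.most_common(1)[0][0]
```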
33. Neuro-Evolutionary Direct Policy Search for Multiobjective Optimal Control
- Author
-
Andrea Castelletti, Marta Zaniolo, and Matteo Giuliani
- Subjects
Mathematical optimization ,Noise measurement ,Computer Networks and Communications ,Computer science ,Fitness landscape ,Population ,Crossover ,Aerospace electronics ,Network topology ,Topology ,Artificial Intelligence ,Search problems ,Direct policy search (DPS) ,Reinforcement learning ,education ,multiobjective (MO) control ,Selection (genetic algorithm) ,education.field_of_study ,neuroevolution ,Optimal control ,Computer Science Applications ,neural networks architecture ,Task analysis ,Markov decision process ,Software - Abstract
Direct policy search (DPS) is emerging as one of the most effective and widely applied reinforcement learning (RL) methods to design optimal control policies for multiobjective Markov decision processes (MOMDPs). Traditionally, DPS defines the control policy within a preselected functional class and searches its optimal parameterization with respect to a given set of objectives. The functional class should be tailored to the problem at hand and its selection is crucial, as it determines the search space within which solutions can be found. In MOMDP problems, a different objective tradeoff determines a different fitness landscape, requiring a tradeoff-dynamic functional class selection. Yet, in state-of-the-art applications, the policy class is generally selected a priori and kept constant across the multidimensional objective space. In this work, we present a novel policy search routine called neuro-evolutionary multiobjective DPS (NEMODPS), which extends the DPS problem formulation to conjunctively search the policy functional class and its parameterization in a hyperspace containing policy architectures and coefficients. NEMODPS begins with a population of minimally structured approximating networks and progressively builds more sophisticated architectures by topological and parametrical mutation and crossover, and selection of the fittest individuals with respect to multiple objectives. We tested NEMODPS on the problem of designing the control policy of a multipurpose water system. Numerical results show that the tradeoff-dynamic structural and parametrical policy search of NEMODPS is consistent across multiple runs, and outperforms the solutions designed via traditional DPS with predefined policy topologies.
- Published
- 2022
34. Subjective selection and the evolution of complex culture
- Author
-
Manvir Singh
- Subjects
business.industry ,Anthropology ,Sociology ,Artificial intelligence ,General Medicine ,Machine learning ,computer.software_genre ,business ,computer ,Selection (genetic algorithm) - Abstract
Why is culture the way it is? Here I argue that a major force shaping culture is what I call subjective (cultural) selection, or the selective retention of cultural variants that people subjectively perceive as satisfying their goals. I show that people evaluate behaviors and beliefs according to how useful they are, especially for achieving goals. As they adopt and pass on those variants that seem best, they iteratively craft culture into increasingly effective-seeming forms. I argue that this process drives the development of many cumulatively complex cultural products, including effective technology, magic and ritual, aesthetic traditions, and institutions. I show that it can explain cultural dependencies, such as how certain beliefs create corresponding new practices, and I outline how it interacts with other cultural evolutionary processes. Cultural practices everywhere, from spears to shamanism, develop because people subjectively evaluate them to be effective means of satisfying regular goals.
- Published
- 2022
35. Outage analysis of cognitive communication system using opportunistic relay selection and secondary users as relays
- Author
-
Hojjat Javadzadeh and Abdulhamid Zahedi
- Subjects
Computer science ,business.industry ,Computer Networks and Communications ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Cognitive communication ,Data_CODINGANDINFORMATIONTHEORY ,law.invention ,Electronic, Optical and Magnetic Materials ,Relay ,law ,Electrical and Electronic Engineering ,business ,Instrumentation ,Selection (genetic algorithm) ,Computer network - Abstract
Efficient bandwidth utilization is significant in new communication systems, where secondary users can operate alongside primary users, subject to interference constraints and the idle states of the primary users. Using secondary users as relays that transmit their own signals in addition to the primary signals improves system reliability, and opportunistic relay selection can significantly enhance system performance. The best-condition secondary user is selected as the optimum relay for retransmission of the primary/secondary signal. Outage probability is analyzed in this paper based on decode-and-forward relaying at the secondary users; a closed-form expression for the outage probability is provided and verified by numerical evaluations.
- Published
- 2022
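Opportunistic relay selection under decode-and-forward is commonly formulated as picking the relay with the best bottleneck hop. A minimal sketch follows; treating instantaneous SNRs as plain numbers is a simplifying assumption:

```python
def select_relay(snr_sr, snr_rd):
    """Decode-and-forward: a relay's end-to-end quality is limited by its
    weaker hop, so pick the relay index maximizing
    min(source->relay SNR, relay->destination SNR)."""
    return max(range(len(snr_sr)), key=lambda i: min(snr_sr[i], snr_rd[i]))
```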
36. Why estimation alone causes Markowitz portfolio selection to fail and what we might do about it
- Author
-
Yuan Zhao, Elmira Mynbayeva, and John D. Lamb
- Subjects
Information Systems and Management ,General Computer Science ,Computer science ,business.industry ,Bootstrap aggregating ,Estimator ,Variance (accounting) ,Management Science and Operations Research ,Covariance ,Industrial and Manufacturing Engineering ,Hedge fund ,Core (game theory) ,Modeling and Simulation ,Econometrics ,Portfolio ,business ,Selection (genetic algorithm) - Abstract
Markowitz optimisation is well known to work poorly in practice, but it has not been clear why this happens. We show both theoretically and empirically that Markowitz optimisation is likely to fail badly, even with normally-distributed data, with no time series or correlation effects, and even with shrinkage estimators to reduce estimation risk. A core problem is that very often we cannot confidently distinguish between the mean returns of most assets. We develop a method, based on a sequentially rejective test procedure, to help remedy this problem by identifying subsets of assets indistinguishable in mean or variance. We test our method against naive Markowitz and compare it to other methods, including bootstrap aggregation, proposed to remedy the poor practical performance of Markowitz optimisation. We use out-of-sample and bootstrap tests on data from several market indices and hedge funds. We find our method is more robust than naive Markowitz and outperforms equally weighted portfolios but bootstrap aggregation works, as expected, better when we cannot distinguish among means. We also find evidence that covariance shrinkage improves performance.
- Published
- 2022
37. Distributed learning algorithm with synchronized epochs for dynamic spectrum access in unknown environment using multi-user restless multi-armed bandit
- Author
-
Himanshu Agrawal and Krishna Asawa
- Subjects
General Computer Science ,Stochastic process ,Computer science ,Markov process ,020206 networking & telecommunications ,Regret ,02 engineering and technology ,Multi-user ,Multi-armed bandit ,symbols.namesake ,Cognitive radio ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Algorithm ,Selection (genetic algorithm) ,Communication channel - Abstract
Dynamic spectrum access using cognitive radio has many application areas, such as smart grids, the Internet of Things, and various device-to-device communication paradigms. In dynamic spectrum access, a user picks one of N channels to transmit on during each time-slot. The user then receives an arbitrary reward from a limited set of reward states, and the selected channel is termed the active channel. The reward state of the active channel evolves according to an unknown Markov chain, whereas the reward states of the passive channels evolve as arbitrary unknown random processes. The objective of a channel selection strategy is to minimize regret by selecting the best channel in terms of mean availability. A strategy based on consecutive selections (epochs) of channels, dubbed Adaptive Sequencing of Exploration and Exploitation for Channel Selection in Unknown Environment (ASEE-CSUE), is proposed. By suitably planning the sequencing of epochs, ASEE-CSUE achieves regret that is logarithmic in time. Furthermore, extensive simulation results indicate that collisions and switching costs remain below 7% and 2%, respectively, and that the best channels are selected in more than 90% of the time-slots.
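ASEE-CSUE itself is epoch-based; as a simpler, generic illustration of regret-minimizing channel selection in an unknown environment, the sketch below uses the classical UCB1 index on i.i.d. channel availabilities (the probabilities are made up, and this is not the paper's algorithm):

```python
import math
import random

def ucb1_channel_selection(channel_probs, horizon, seed=42):
    """UCB1-style channel selection: balance exploring each channel's
    unknown availability against exploiting the best one seen so far.

    `channel_probs` are the hidden per-slot availability probabilities;
    the learner only observes a 0/1 reward for the channel it picks.
    """
    rng = random.Random(seed)
    n = len(channel_probs)
    counts = [0] * n
    means = [0.0] * n
    picks = []
    for t in range(1, horizon + 1):
        if t <= n:     # play each channel once to initialize its estimate
            k = t - 1
        else:          # UCB index: empirical mean + exploration bonus
            k = max(range(n),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1 if rng.random() < channel_probs[k] else 0
        counts[k] += 1
        means[k] += (reward - means[k]) / counts[k]   # running average
        picks.append(k)
    return picks

picks = ucb1_channel_selection([0.3, 0.8, 0.5], horizon=2000)
best_share = picks.count(1) / len(picks)   # fraction of slots on the best channel
```

Over time the exploration bonus shrinks and the learner concentrates on the channel with the highest mean availability, which is what keeps the cumulative regret logarithmic.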
- Published
- 2022
38. Learning Improvement Heuristics for Solving Routing Problems
- Author
-
Yaoxin Wu, Wen Song, Andrew Lim, Zhiguang Cao, and Jie Zhang
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Mathematical optimization ,Computer Science - Artificial Intelligence ,Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Travelling salesman problem ,Machine Learning (cs.LG) ,Computer Science Applications ,Artificial Intelligence (cs.AI) ,Artificial Intelligence ,Vehicle routing problem ,Reinforcement learning ,Limit (mathematics) ,Artificial intelligence ,Routing (electronic design automation) ,Heuristics ,business ,Software ,Selection (genetic algorithm) - Abstract
Recent studies in using deep learning to solve routing problems focus on construction heuristics, whose solutions are still far from optimal. Improvement heuristics have great potential to narrow this gap by iteratively refining a solution. However, classic improvement heuristics are all guided by hand-crafted rules, which may limit their performance. In this paper, we propose a deep reinforcement learning framework to learn improvement heuristics for routing problems. We design a self-attention based deep architecture as the policy network to guide the selection of the next solution. We apply our method to two important routing problems, i.e., the travelling salesman problem (TSP) and the capacitated vehicle routing problem (CVRP). Experiments show that our method outperforms state-of-the-art deep learning based approaches. The learned policies are more effective than traditional hand-crafted ones and can be further enhanced by simple diversifying strategies. Moreover, the policies generalize well to different problem sizes, initial solutions, and even a real-world dataset.
- Published
- 2022
39. Deep learning driven beam selection for orthogonal beamforming with limited feedback
- Author
-
Moldir Yerzhanova, Jinho Choi, Jihong Park, and Yun Hee Kim
- Subjects
Beamforming ,Artificial neural network ,Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Set (abstract data type) ,Artificial Intelligence ,Hardware and Architecture ,Robustness (computer science) ,Rician fading ,Physics::Accelerator Physics ,Artificial intelligence ,business ,Algorithm ,Software ,Selection (genetic algorithm) ,Beam (structure) ,Computer Science::Information Theory ,Information Systems - Abstract
This letter studies deep learning methods for beam selection in multiuser beamforming with limited feedback. We construct a set of orthogonal random beams and allocate the beams to users to maximize the sum rate, based on limited feedback regarding the channel power on the orthogonal beams. We formulate the beam allocation problem as a classification or a regression task using a deep neural network (DNN). The results demonstrate that the DNN-based methods achieve higher sum rates than a conventional limited feedback solution in the low signal-to-noise ratio regime under Rician fading, thanks to their robustness to noisy limited feedback.
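As a rough baseline for the allocation task the DNN is trained to approximate (not the letter's method), the sketch below exhaustively assigns a small set of orthogonal beams to distinct users so as to maximize the total reported channel power, a simple proxy for the sum rate; the feedback values are made up:

```python
import itertools

def allocate_beams(feedback):
    """Assign each orthogonal beam to a distinct user so the total
    reported channel power is maximized (exhaustive over assignments,
    feasible here because the beam set is small).

    `feedback[u][b]` is user u's limited-feedback power report on beam b.
    """
    n_users = len(feedback)
    n_beams = len(feedback[0])
    best_assign, best_sum = None, float("-inf")
    for users in itertools.permutations(range(n_users), n_beams):
        total = sum(feedback[u][b] for b, u in enumerate(users))
        if total > best_sum:
            best_sum, best_assign = total, users
    return best_assign, best_sum

feedback = [
    [0.9, 0.1, 0.2],   # user 0
    [0.2, 0.8, 0.3],   # user 1
    [0.1, 0.3, 0.7],   # user 2
    [0.4, 0.4, 0.4],   # user 3
]
assign, power = allocate_beams(feedback)  # beam b is served by user assign[b]
```

A trained DNN replaces this exhaustive search with a single forward pass, which is what makes the approach attractive when the number of beams and users grows.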
- Published
- 2022
40. Automatically Designing Network-Based Deep Transfer Learning Architectures Based on Genetic Algorithm for In-Situ Tool Condition Monitoring
- Author
-
Yuekai Liu, Liang Guo, Yongwen Tan, Yaoxiang Yu, and Hongli Gao
- Subjects
Schedule ,Computer science ,business.industry ,Deep learning ,Real-time computing ,Control and Systems Engineering ,Robustness (computer science) ,Genetic algorithm ,Artificial intelligence ,Electrical and Electronic Engineering ,Architecture ,Transfer of learning ,business ,Selection (genetic algorithm) ,Edge computing - Abstract
In-situ tool condition monitoring (in-situ TCM) is vital for metal-removal manufacturing, as it realizes on-machine diagnosis in real time. In-situ TCM based on traditional deep learning is limited in several respects: it requires sufficient labeled data for the health conditions, its architecture is designed manually from experience, and its hyper-parameter tuning is labor-intensive. Network-based deep transfer learning (NDTL) partially solves the problem of limited labeled data; however, time-consuming architecture design and hyper-parameter tuning still have a great impact on the schedule of real in-situ TCM projects. In this paper, a new NDTL method is proposed whose architecture is designed automatically by a genetic algorithm (NDTL-GA). A degradation-monitoring experiment is conducted on edge computing devices under multiple working conditions, in which the texture of the machined surface is collected over the whole life cycle of the milling cutters. The experimental results suggest that the proposed method delivers competitive performance on both robustness metrics (e.g., the area under the precision-recall curve (AUC-PR)) and efficiency metrics (e.g., multiply-accumulates (MACC)), while trial-and-error is reduced significantly by the automatic architecture design and hyper-parameter selection.
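The genetic-algorithm component can be illustrated with a minimal, generic GA (tournament selection, one-point crossover, bit-flip mutation) over bit-string genomes. In an NDTL-GA-like setting the genome would encode architecture and hyper-parameter choices and the fitness would be validation performance; here a toy stand-in fitness is used instead:

```python
import random

def genetic_search(fitness, genome_len, pop=20, gens=40, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and bit-flip mutation over bit-string genomes."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(popn, 2)           # size-2 tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:               # bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)

# Toy fitness: prefer genomes with many 1s (a stand-in for the validation
# accuracy that a real architecture search would measure).
best = genetic_search(lambda g: sum(g), genome_len=12)
```

Swapping the toy fitness for a train-and-evaluate routine is what turns this skeleton into an automatic architecture search, at the cost of one model training per fitness evaluation.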
- Published
- 2022
41. Top data mining tools for the healthcare industry
- Author
-
Jorge Bernardino, Judith Santos-Pereira, and Le Gruenwald
- Subjects
Complex data type ,General Computer Science ,Computer science ,business.industry ,Healthcare ,Open-source data mining tools ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Health care ,Spark (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Healthcare industry ,020201 artificial intelligence & image processing ,Data mining ,Healthcare data ,business ,computer ,Selection (genetic algorithm) - Abstract
The healthcare industry has become increasingly challenging, requiring the retrieval of knowledge from large amounts of complex data to find the best treatments. Several works have suggested the use of data mining tools to overcome these challenges; however, none of them has identified the best tool for doing so. To fill this gap, this paper presents a survey of popular open-source data mining tools, proposes tool selection criteria based on healthcare application requirements, and identifies the best tools according to those criteria. The following popular open-source data mining tools are assessed: KNIME, R, RapidMiner, Scikit-learn, and Spark. The study shows that KNIME and RapidMiner provide the largest coverage of healthcare data mining requirements.
- Published
- 2022
42. Voting for a Political Candidate under Conditions of Minimal Information
- Author
-
Harold H. Kassarjian, Lee G. Cooper, and Masao Nakanishi
- Subjects
Marketing ,Economics and Econometrics ,Focus (computing) ,Computer science ,media_common.quotation_subject ,Field (computer science) ,Tourism ,Microeconomics ,Politics ,Arts and Humanities (miscellaneous) ,Anthropology ,Voting ,Voting behavior ,Psychology ,Business and International Management ,Market share ,Selection (genetic algorithm) ,Consumer behaviour ,media_common - Abstract
Until very recently, the major focus of research in the field of consumer behavior has been on the selection of products, brands and decision choices primarily in the sphere of marketing. The purpose of this paper was to modify a model developed to measure market share to account for the variables that enter into the selection of a political candidate and predict voting behavior.
- Published
- 2023
43. User Selection for NOMA-Based MIMO With Physical-Layer Network Coding in Internet of Things Applications
- Author
-
Jonathan Gonzalez, Bismark Okyere, Berna Ozbek, Leila Musavian, Mert Ilguy, and Saadet Simay Yilmaz
- Subjects
Physical layer network coding ,Computer Networks and Communications ,Computer science ,business.industry ,MIMO ,Spectral efficiency ,medicine.disease ,Computer Science Applications ,Noma ,Hardware and Architecture ,Signal Processing ,Computer Science::Networking and Internet Architecture ,Bit error rate ,medicine ,Wireless ,business ,Selection algorithm ,Selection (genetic algorithm) ,Information Systems ,Computer network - Abstract
Non-orthogonal multiple access (NOMA) based multiple-input multiple-output (MIMO), which has the potential to provide both massive connectivity and high spectrum efficiency, is considered as one of the efficient techniques for sixth generation (6G) wireless systems. In massive Internet of Things (IoT) networks, user-set selection is crucial for enhancing the overall performance of NOMA based systems when compared with orthogonal multiple access (OMA) techniques. In this paper, we propose a user-set selection algorithm for IoT uplink transmission to improve the sum data rate of the NOMA based MIMO systems. In order to exchange data between the selected IoT pairs, we propose to employ wireless physical layer network coding (PNC) to further improve the spectral efficiency and reduce the delay to fulfill the requirements of future IoT applications. Performance evaluations are provided based on both sum data rate and bit error rate for the proposed NOMA based MIMO with PNC in the considered massive IoT scenarios.
- Published
- 2022
44. Knapsack problems with dependencies through non-additive measures and Choquet integral
- Author
-
Gleb Beliakov
- Subjects
Mathematical optimization ,Information Systems and Management ,General Computer Science ,Computer science ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Choquet integral ,Knapsack problem ,Modeling and Simulation ,Scalability ,Portfolio ,Pairwise comparison ,Selection (genetic algorithm) ,Integer (computer science) - Abstract
In portfolio selection problems the items often depend on each other, and their synergies and redundancies need to be taken into account. We consider the knapsack problem in which the objective is modelled as the Choquet integral with respect to a supermodular capacity which quantifies possible synergies. We provide various formulations which lead to the standard linear mixed integer programs, applicable to small and large portfolios. We also study scalability of the solution methods and compare large problems defined with respect to 2-additive capacities which model pairwise interactions, and linear knapsack with respect to the Shapley values of these capacities.
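For 0/1 selections, the Choquet integral with respect to a 2-additive capacity reduces to singleton worths plus pairwise interaction terms, so a small instance can be solved by exhaustive search. A minimal sketch with made-up values (this is not the paper's mixed-integer-programming formulation):

```python
from itertools import combinations

def choquet_value(items, singles, pairs):
    """Value of a selection under a 2-additive model: singleton worths
    plus pairwise interaction terms (positive = synergy, negative =
    redundancy). For 0/1 selections the Choquet integral w.r.t. a
    2-additive capacity takes exactly this quadratic form.
    """
    total = sum(singles[i] for i in items)
    total += sum(pairs.get(frozenset(p), 0.0)
                 for p in combinations(sorted(items), 2))
    return total

def best_knapsack(singles, pairs, weights, capacity):
    """Exhaustive search over item subsets (fine for small portfolios)."""
    n = len(singles)
    best_set, best_val = frozenset(), 0.0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                v = choquet_value(subset, singles, pairs)
                if v > best_val:
                    best_val, best_set = v, frozenset(subset)
    return best_set, best_val

singles = [6.0, 5.0, 4.0]
pairs = {frozenset({0, 1}): -3.0,   # items 0 and 1 are redundant
         frozenset({1, 2}): +2.0}   # items 1 and 2 are synergistic
weights = [2, 2, 2]
chosen, value = best_knapsack(singles, pairs, weights, capacity=4)
```

Here the capacity admits only two items: despite item 0 having the largest singleton worth, the synergy makes {1, 2} (value 5 + 4 + 2 = 11) beat {0, 1} (6 + 5 - 3 = 8) and {0, 2} (6 + 4 = 10), which is exactly the kind of interaction effect an additive knapsack objective cannot capture.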
- Published
- 2022
45. Green Energy Sources Selection for Sustainable Planning: A Case Study
- Author
-
Sumit Bhowmik, Chiranjib Bhowmik, and Amitava Ray
- Subjects
Computer science ,business.industry ,Strategy and Management ,Photovoltaic system ,Analytic hierarchy process ,Environmental economics ,Energy policy ,Renewable energy ,Electricity generation ,Sustainability ,Electrical and Electronic Engineering ,Dimension (data warehouse) ,business ,Selection (genetic algorithm) - Abstract
The aim of this article is to select the optimum green energy sources for the sustainable planning of a region. This research presents an integrated model, based on the theoretical framework of benefits, opportunities, costs, and risks together with a well-known multicriteria decision-making technique, the analytic hierarchy process, to evaluate green energy sources from northeast India along with 16 local factors. The results show that solar photovoltaic is the optimum green energy source, with the highest score among the sources appraised by the integrated model. Based on these results, we suggest policies for energy managers, policymakers, and decision makers. The article has both theoretical and practical implications: theoretically, it contributes holistic measures for designing and managing a green energy source selection framework for sustainability; practically, it helps organizations operating in the green energy sector to improve their sustainability dimension for a cleaner future. The article considers not only various cost criteria but also criteria such as power generation, implementation period, and useful life in selecting the optimum green energy sources. The findings can provide useful information to energy decision makers and serve as a reference for Tripura's energy policy.
- Published
- 2022
46. Distributed Primal–Dual Splitting Algorithm for Multiblock Separable Optimization Problems
- Author
-
Zheng Wang, Tingwen Huang, Huaqing Li, and Wu Xiangzhao
- Subjects
Range (mathematics) ,Optimization problem ,Control and Systems Engineering ,Asynchronous communication ,Computer science ,Convergence (routing) ,Electrical and Electronic Engineering ,Convex function ,Algorithm ,Selection (genetic algorithm) ,Computer Science Applications ,Block (data storage) ,Separable space - Abstract
This paper considers the distributed structured optimization problem of collaboratively minimizing a global objective function composed of the sum of local cost functions, where each local objective involves a Lipschitz-differentiable convex function, a nonsmooth convex function, and a linear composite nonsmooth convex function. For such problems, we derive the synchronous distributed primal-dual splitting (S-DPDS) algorithm with uncoordinated stepsizes, and we develop an asynchronous version of the algorithm based on the randomized block-coordinate method (A-DPDS). The convergence results give a relaxed range and a concise form for the acceptable parameters, which makes the algorithms amenable to parameter selection in practical applications. Finally, we demonstrate the efficiency of the S-DPDS and A-DPDS algorithms through numerical experiments.
- Published
- 2022
47. Financing constraints, home equity and selection into entrepreneurship
- Author
-
Thais Lærkholm Jensen, Søren Leth-Petersen, and Ramana Nanda
- Subjects
Home equity ,Economics and Econometrics ,Entrepreneurship ,Labour economics ,Exploit ,Earnings ,Collateral ,Strategy and Management ,1502 Banking, Finance and Investment ,Work experience ,1606 Political Science ,Accounting ,Credit rationing ,Business ,1402 Applied Economics ,Finance ,Selection (genetic algorithm) - Abstract
We exploit a mortgage reform that differentially unlocked home equity across the Danish population and study how this impacted selection into entrepreneurship. We find that increased entry was concentrated among entrepreneurs whose firms were founded in industries where they had no prior work experience. Nevertheless, we find that marginal entrants benefiting from the reform had higher pre-entry earnings and a significant share of these entrants started longer-lasting firms. Our results are most consistent with a view that housing collateral enabled higher ability individuals with less-well-established track records to overcome credit rationing and start new firms, rather than only leading to ‘frivolous entry’ by those without prior industry experience.
- Published
- 2022
48. PosgenPy: An Automated and Reproducible Approach to Assessing the Validity of Cluster Search Parameters in Atom Probe Tomography Datasets
- Author
-
James Famelton, H. M. Gardner, Benjamin M. Jenkins, Paul A. J. Bagot, Michael P. Moody, Daniel Haley, Przemysław Klupś, Andrew J. London, and Jonathan M. Hyde
- Subjects
DBSCAN ,Computer science ,Material system ,Atom probe ,computer.software_genre ,law.invention ,Determining the number of clusters in a data set ,law ,Cluster (physics) ,Noise (video) ,Data mining ,Cluster analysis ,Instrumentation ,computer ,Selection (genetic algorithm) - Abstract
One of the main capabilities of atom probe tomography (APT) is the ability not only to identify but also to characterize early stages of precipitation at length scales that are not achievable by other techniques. One of the most popular methods for identifying nanoscale clustering in APT data, based on density-based spatial clustering of applications with noise (DBSCAN), is used extensively in many branches of research. However, it is common that not all of the steps leading to the selection of the parameters used in the analysis are reported. Without knowing the rationale behind parameter selection, it may be difficult to compare cluster parameters obtained by different researchers. In this work, a simple open-source tool, PosgenPy, is used to justify cluster search parameter selection by providing a systematic sweep through parameter values with multiple randomizations to minimize the false-positive cluster ratio. The tool is applied to several different microstructures: a simulated material system and two experimental datasets from a low-alloy steel. The analyses show how values for the various parameters can be selected to ensure that the calculated cluster number density and cluster composition are accurate.
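The idea of validating cluster search parameters against randomized data can be sketched generically (this is not PosgenPy itself): sweep the maximum separation distance and compare the cluster count in the data with that of a size-matched random reference, which estimates how many clusters each setting would report by chance:

```python
import random

def count_clusters(points, dmax, nmin):
    """Friends-of-friends clustering (the core idea behind DBSCAN-style
    cluster searches): points within `dmax` of one another join the same
    cluster; clusters with fewer than `nmin` members are discarded."""
    unvisited = set(range(len(points)))
    clusters = 0
    while unvisited:
        stack = [unvisited.pop()]
        members = 1
        while stack:
            p = stack.pop()
            near = [q for q in unvisited
                    if sum((a - b) ** 2
                           for a, b in zip(points[p], points[q])) <= dmax ** 2]
            for q in near:
                unvisited.discard(q)
            members += len(near)
            stack.extend(near)
        if members >= nmin:
            clusters += 1
    return clusters

rng = random.Random(3)
# Synthetic dataset: two tight "solute clusters" plus a sparse random matrix.
cluster_a = [(rng.gauss(1, 0.03), rng.gauss(1, 0.03), rng.gauss(1, 0.03)) for _ in range(20)]
cluster_b = [(rng.gauss(4, 0.03), rng.gauss(4, 0.03), rng.gauss(4, 0.03)) for _ in range(20)]
matrix = [(rng.uniform(0, 5), rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(30)]
data = cluster_a + cluster_b + matrix

# Sweep dmax; a size-matched uniform reference estimates the number of
# false-positive clusters each parameter setting would produce.
sweep = {}
for dmax in (0.2, 0.5, 1.0):
    reference = [(rng.uniform(0, 5), rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(70)]
    sweep[dmax] = (count_clusters(data, dmax, nmin=10),
                   count_clusters(reference, dmax, nmin=10))
```

A parameter setting is defensible when the real data yield clearly more clusters than the randomized reference; at very large `dmax` the reference begins to percolate into spurious clusters, which is the false-positive regime the sweep is designed to expose.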
- Published
- 2022
49. Genetic diversity of the LILRB1 and LILRB2 coding regions in an admixed Brazilian population sample
- Author
-
Silvana Giuliatti, Luciana C. Veiga-Castelli, Erick C. Castelli, D. Courtin, L. Marcorin, A. L. E. Pereira, Guimarães de Oliveira Ml, Celso T. Mendes-Junior, Andreia S. Souza, Aguinaldo Luiz Simões, Eduardo Antônio Donadi, Audrey Sabbagh, Carratto Tmt, and Heloisa S. Andrade
- Subjects
Genetics ,Genetic diversity ,Membrane Glycoproteins ,Natural selection ,Directional selection ,Haplotype ,Immunology ,Genetic Variation ,Human leukocyte antigen ,Biology ,Negative selection ,Leukocyte Immunoglobulin-like Receptor B1 ,Antigens, CD ,Humans ,Immunology and Allergy ,Amino Acids ,Receptors, Immunologic ,Allele ,Alleles ,Brazil ,Selection (genetic algorithm) - Abstract
Leukocyte Immunoglobulin (Ig)-like Receptors (LILR) LILRB1 and LILRB2 play a pivotal role in maintaining self-tolerance and modulating the immune response through interaction with classical and non-classical Human Leukocyte Antigen (HLA) molecules. Although both the diversity and the natural selection patterns of HLA genes have been extensively evaluated, little information is available concerning genetic diversity and selection signatures in the LILRB1/2 regions. We therefore characterized LILRB1/2 genetic diversity using next-generation sequencing in a population sample comprising 528 healthy control individuals from São Paulo State, Brazil. We identified 58 LILRB1 single nucleotide variants (SNVs), which gave rise to 13 haplotypes with a frequency of at least 1%. For LILRB2, we identified 41 SNVs arranged into 11 haplotypes with frequencies above 1%. We found evidence of either positive or purifying selection on the LILRB1/2 coding regions. Some residues in both proteins were shown to be under positive selection, suggesting that amino acid replacements in these proteins resulted in beneficial functional changes. Finally, we showed that allelic variation (six and five amino acid exchanges in LILRB1 and LILRB2, respectively) affects the structure and/or stability of both molecules. Nonetheless, LILRB2 showed higher average stability, with no D1/D2 residue affecting protein structure. Taken together, our findings demonstrate that LILRB1 and LILRB2 are highly polymorphic and provide strong evidence supporting the directional selection regime hypothesis.
- Published
- 2022
50. Dynamics of the most common pathogenic mtDNA variant m.3243A > G demonstrate frequency-dependency in blood and positive selection in the germline
- Author
-
Konstantin Khrapko, Zoe Fleischmann, Maxim Braverman, Markuzon N, Jonathan L. Tilly, Douglass M. Turnbull, Sarah J Pickett, David Stein, Dori C. Woods, Konstantin Popadin, Mark Khrapko, D. Aidlen, and Melissa Franco
- Subjects
Genetics ,Mitochondrial DNA ,Mitochondrial Diseases ,Somatic cell ,Point mutation ,Mutant ,General Medicine ,Biology ,Human mitochondrial genetics ,DNA, Mitochondrial ,Germline ,Mitochondria ,Germ Cells ,Mutation (genetic algorithm) ,Mutation ,Humans ,Point Mutation ,Molecular Biology ,Selection (genetic algorithm) ,Genetics (clinical) - Abstract
The A-to-G point mutation at position 3243 in the human mitochondrial genome (m.3243A>G) is the most common pathogenic mtDNA variant responsible for disease in humans. It is widely accepted that m.3243A>G levels decrease in blood with age, and an age correction representing a ~2% annual decline is often applied to account for this change in mutation level. Here we report recent data indicating that the dynamics of m.3243A>G are more complex and depend on the mutation level in blood in a biphasic way. Consequently, the traditional 2% correction, which is adequate 'on average', creates opposite predictive biases at high and low mutation levels. An unbiased age correction is needed to circumvent these drawbacks of the standard model. We propose to eliminate both biases by using an approach in which the age correction depends on the mutation level in a biphasic way, to account for the dynamics of m.3243A>G in blood. The utility of this approach was further tested in estimating germline selection of m.3243A>G. The biphasic approach permitted us to uncover patterns consistent with the possibility of positive selection for m.3243A>G. Germline selection of m.3243A>G shows an 'arching' profile, by which selection is positive at intermediate mutant fractions and declines at high and low mutant fractions. We conclude that use of this biphasic approach will greatly improve the accuracy of modelling changes in mtDNA mutation frequencies in the germline and in somatic cells during aging.
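The two styles of correction can be contrasted in a few lines; the biphasic rates and threshold below are hypothetical placeholders for illustration, not the paper's fitted parameters:

```python
def age_corrected_level(observed, age_years, rate=0.02):
    """Classical correction: assume the blood heteroplasmy level of
    m.3243A>G declines by ~2% per year, so project the observed level
    back to birth by undoing that exponential decline."""
    return observed / (1 - rate) ** age_years

def biphasic_corrected_level(observed, age_years,
                             low_rate=0.01, high_rate=0.03, threshold=0.3):
    """Illustrative biphasic correction (hypothetical rates and threshold,
    not the paper's fitted model): the annual decline rate depends on
    whether the current mutation level is below or above a threshold."""
    level = observed
    for _ in range(age_years):           # step back one year at a time
        rate = low_rate if level < threshold else high_rate
        level = level / (1 - rate)
    return level

flat = age_corrected_level(0.20, 30)       # fixed 2%/year correction
biphasic = biphasic_corrected_level(0.20, 30)
```

For a level of 20% measured at age 30, the fixed 2% rule projects a higher birth level than the level-dependent rule, illustrating how a single average rate can over- or under-correct depending on where the observed level sits relative to the threshold.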
- Published
- 2022
Discovery Service for Jio Institute Digital Library