150,037 results for "Selection (genetic algorithm)"
Search Results
2. Holobiont Evolution: Population Theory for the Hologenome
- Author
-
Joan Roughgarden
- Subjects
Mathematical theory ,Genetic theory ,Holobiont ,education.field_of_study ,Evolutionary biology ,Hologenome theory of evolution ,Population ,Microbiome ,Biology ,education ,Selection (genetic algorithm) ,Ecology, Evolution, Behavior and Systematics - Abstract
This article develops mathematical theory for the population dynamics of microbiomes with their hosts and for holobiont evolution caused by holobiont selection. The objective is to account for the formation of microbiome-host integration. Microbial population-dynamic parameters must mesh with the host's for coexistence. A horizontally transmitted microbiome is a genetic system with "collective inheritance". The microbial source pool in the environment corresponds to the gamete pool for nuclear genes. Poisson sampling of the microbial source pool corresponds to binomial sampling of the gamete pool. However, holobiont selection on the microbiome does not lead to a counterpart of the Hardy-Weinberg Law, nor to directional selection that always fixes microbial genes conferring the highest holobiont fitness. A microbe might strike an optimal fitness balance between lowering its within-host fitness and increasing holobiont fitness. Such microbes are replaced by otherwise identical microbes that contribute nothing to holobiont fitness. This replacement can be reversed by hosts that initiate immune responses to non-helpful microbes. This discrimination leads to microbial species sorting. Host-orchestrated species sorting (HOSS) followed by microbial competition, rather than co-evolution or multi-level selection, is predicted to be the cause of microbiome-host integration.
- Published
- 2023
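The holobiont abstract above contrasts Poisson sampling of an environmental microbial source pool with binomial sampling of a gamete pool. The sketch below is only a toy illustration of that contrast, not Roughgarden's model; the host count, colonization intensity, and allele frequency are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): 1,000 hosts, an average of
# 50 colonizing microbes per host, and a nuclear locus with allele frequency 0.3.
n_hosts = 1_000
mean_colonizers = 50          # Poisson intensity for the microbial source pool
allele_freq = 0.3             # frequency of an allele in the gamete pool
gametes_per_host = 2          # diploid host: two gametes per offspring

# "Collective inheritance": each host's microbiome is re-assembled every
# generation by Poisson sampling from the environmental source pool.
microbes_per_host = rng.poisson(mean_colonizers, size=n_hosts)

# Nuclear inheritance: allele copies per offspring follow a binomial draw
# from the gamete pool.
allele_copies = rng.binomial(gametes_per_host, allele_freq, size=n_hosts)

print("mean/var of microbe counts:", microbes_per_host.mean(), microbes_per_host.var())
print("mean/var of allele copies:", allele_copies.mean(), allele_copies.var())
```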
3. A Credibilistic Multiobjective Multiperiod Efficient Portfolio Selection Approach Using Data Envelopment Analysis
- Author
-
Mukesh Kumar Mehlawat, Arun Kumar, Pankaj Gupta, and Sanjay Yadav
- Subjects
Mathematical optimization ,Computer science ,Strategy and Management ,Data envelopment analysis ,Portfolio ,Electrical and Electronic Engineering ,Selection (genetic algorithm) - Published
- 2023
4. A novel pure data-selection framework for day-ahead wind power forecasting
- Author
-
Jiancheng Qin, Zili Zhang, Ying Chen, Jingjing Zhao, and Hua Li
- Subjects
Multidisciplinary ,Computer science ,Wind power forecasting ,Numerical weather prediction ,computer.software_genre ,Computer experiment ,Wind speed ,Metamodeling ,Support vector machine ,Pure Data ,Data mining ,computer ,Selection (genetic algorithm) ,computer.programming_language - Abstract
Numerical weather prediction (NWP) data possess internal inaccuracies, such as a low NWP wind speed corresponding to high actual wind power generation. This study aims to reduce the negative effects of such inaccuracies by proposing a pure data-selection framework (PDF) that chooses useful data prior to modeling, thus improving the accuracy of day-ahead wind power forecasting. Briefly, we convert an entire NWP training dataset into many small subsets and then select the best subset combination via a validation set to build a forecasting model. Although small subsets increase selection flexibility, they can also produce billions of subset combinations, resulting in computational issues. To address this problem, we incorporated metamodeling and optimization steps into PDF, proposing a design-and-analysis-of-computer-experiments-based metamodeling algorithm and a heuristic-exhaustive search optimization algorithm, respectively. Experimental results demonstrate that (1) it is necessary to select data before constructing a forecasting model; (2) using smaller subsets is likely to increase selection flexibility, leading to a more accurate forecasting model; (3) PDF can generate a better training dataset than similarity-based data selection methods (e.g., K-means and support vector classification); and (4) choosing data before building a forecasting model produces a more accurate forecasting model compared with using a machine learning method to construct a model directly.
- Published
- 2023
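The abstract above (entry 4) builds a forecasting model by splitting the NWP training data into small subsets and picking the best subset combination on a validation set. The following sketch illustrates that selection step on invented data with a plain least-squares model; the paper's metamodeling and heuristic-exhaustive search steps are not reproduced.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for NWP wind-speed features and actual power output.
X = rng.uniform(0, 20, size=(240, 1))
y = 0.5 * X[:, 0] + rng.normal(0, 2.0, size=240)
X_val = rng.uniform(0, 20, size=(60, 1))
y_val = 0.5 * X_val[:, 0] + rng.normal(0, 2.0, size=60)

# Split the training data into small subsets (here 6 chronological chunks).
subsets = np.array_split(np.arange(len(y)), 6)

def fit_and_score(idx):
    """Fit a least-squares line on the selected rows, score by validation RMSE."""
    A = np.c_[X[idx], np.ones(len(idx))]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    pred = np.c_[X_val, np.ones(len(X_val))] @ coef
    return np.sqrt(np.mean((pred - y_val) ** 2))

# Exhaustively score every non-empty combination of subsets (feasible only
# because there are just 6 chunks; the paper uses metamodeling + heuristics).
best_rmse, best_combo = np.inf, None
for r in range(1, len(subsets) + 1):
    for combo in itertools.combinations(range(len(subsets)), r):
        idx = np.concatenate([subsets[i] for i in combo])
        rmse = fit_and_score(idx)
        if rmse < best_rmse:
            best_rmse, best_combo = rmse, combo

print("best subset combination:", best_combo, "validation RMSE:", round(best_rmse, 3))
```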
5. The narcissism spectrum and its effects on self-selection into the teaching profession and on the effort-reward imbalance
- Author
-
Hanna-Therese Schmitt
- Subjects
Self-efficacy ,media_common.quotation_subject ,Developmental and Educational Psychology ,Narcissism ,medicine ,Person–environment fit ,Personality ,medicine.symptom ,Psychology ,Social psychology ,Selection (genetic algorithm) ,Effort reward imbalance ,media_common - Abstract
Abstract: Based on person-environment fit theory and the dynamic self-regulatory model of narcissism, occupational self-selection into the teaching profession is analysed. The examination draws on two comparison groups: student teachers are compared with management students, and practising teachers are compared with business leaders. After a theoretical analysis of the narcissism phenomenon from a social-personality perspective, the relation between the narcissism spectrum – extreme, healthy and insufficient narcissism – and the effort-reward imbalance is examined. The sample consists of n = 958 test persons from Austria. Self-selection tendencies into the teaching profession are largely confirmed. Teachers show lower levels of extreme and healthy narcissism, and higher levels of insufficient narcissism, than business leaders. Student teachers show lower levels of healthy and extreme narcissism than management students. Compared with student teachers, practising teachers exhibit higher levels of insufficient narcissism. This difference can be traced back to stressful classroom conditions. Teachers obtain less reward from their work than business leaders. Lower levels of healthy narcissism lead to more overcommitment and a reinforcement of the effort-reward imbalance, and increase the risk of gratification crises in the teaching profession.
- Published
- 2023
6. Numerical simulation of triaxial tests on cement-treated clays using Hoek–Brown criterion
- Author
-
A. S. Azneb, Retnamony G. Robinson, and Subhadeep Banerjee
- Subjects
Cement ,Computer simulation ,Mechanics of Materials ,Constitutive equation ,Soil Science ,Geotechnical engineering ,Building and Construction ,Geotechnical Engineering and Engineering Geology ,Selection (genetic algorithm) ,Mathematics - Abstract
Triaxial tests are generally conducted to establish the mechanical behaviour of cement-treated clays. Numerical simulation of these tests involves selection of the correct constitutive model, one that can capture the key features of the stress–strain response of the geomaterial. However, because cement-treated clays behave differently from untreated clay in its natural state, classic constitutive models may not give reasonable results, and the failure envelope has been found to be non-linear. In this work, an attempt was made to simulate the behaviour of cement-treated clays using the Hoek–Brown model, and the experimental and numerical simulation results were compared.
- Published
- 2023
7. Adaptive Relay Selection Strategy in Underwater Acoustic Cooperative Networks: A Hierarchical Adversarial Bandit Learning Approach
- Author
-
Song Han, Haihong Zhao, Lei Yan, Xinbin Li, and Junzhi Yu
- Subjects
Computer Science::Machine Learning ,Computer Networks and Communications ,Heuristic ,business.industry ,Computer science ,Throughput ,Kalman filter ,law.invention ,Complete information ,Relay ,law ,Convergence (routing) ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Selection algorithm ,Software ,Selection (genetic algorithm) - Abstract
Relay selection solutions for underwater acoustic cooperative networks suffer significant performance degradation because they fail to adapt to incomplete information, noisy interference and overwhelming dynamics. To address this challenge, a hierarchical adversarial multi-armed bandit learning framework with an online reward estimation layer is designed to improve adaptive relay decision control. In the online reward estimation layer, an adaptive Kalman filter estimator is developed to properly handle noisy observations and support accurate reward estimates. Meanwhile, an online prediction mechanism is proposed for all relays to enrich the learning information. Furthermore, based on the estimation error variance, an adaptive exploration structure is developed to accelerate the balance between exploration and exploitation. All gathered information is exploited to learn relay quality for decision-making. Accordingly, we present a Hierarchical Adversarial Bandit Learning (HABL) algorithm to fully exploit the interaction between the layers of the hierarchical framework. HABL carefully integrates reward estimation, information prediction, adaptive exploration and decision making in a holistic algorithm to maximize learning efficiency. Thereby, the HABL-based relay selection algorithm achieves higher system throughput and lower communication cost. Further, we rigorously analyze the convergence of the HABL algorithm and give an upper bound on its cumulative regret. Finally, extensive simulations elucidate the effectiveness of HABL.
- Published
- 2023
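Entry 7 above casts relay selection as an adversarial multi-armed bandit with an online reward-estimation layer. The sketch below is not the HABL algorithm: it is a generic EXP3-style selector over hypothetical relays, with a simple exponential smoother standing in for the Kalman-filter reward estimator, and all relay qualities and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

n_relays, horizon, gamma = 4, 2000, 0.1
weights = np.ones(n_relays)
smoothed = np.zeros(n_relays)          # crude stand-in for Kalman-filtered rewards
alpha = 0.2                            # smoothing factor (assumed)

# Hypothetical "true" relay qualities plus observation noise.
true_quality = np.array([0.3, 0.5, 0.7, 0.6])

for t in range(horizon):
    probs = (1 - gamma) * weights / weights.sum() + gamma / n_relays
    chosen = rng.choice(n_relays, p=probs)

    # Noisy observation of the chosen relay's reward (e.g., normalized throughput).
    obs = np.clip(true_quality[chosen] + rng.normal(0, 0.2), 0, 1)
    smoothed[chosen] = (1 - alpha) * smoothed[chosen] + alpha * obs

    # EXP3 update with an importance-weighted reward estimate.
    est = smoothed[chosen] / probs[chosen]
    weights[chosen] *= np.exp(gamma * est / n_relays)
    weights /= weights.max()           # keep weights numerically stable

print("final normalized weights:", np.round(weights / weights.sum(), 3))
```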
8. Deciding What to Replicate
- Author
-
Daniel Lakens, Ivan Ropovik, Marco Perugini, Joachim I. Krueger, Marek A. Vranka, Anna van 't Veer, K. Andrew DeSoto, Peder M. Isager, Robbie C. M. van Aert, Roger Giner-Sorolla, Mark J. Brandt, Štěpán Bahník, Human Technology Interaction, and Department of Methodology and Statistics
- Subjects
bepress|Physical Sciences and Mathematics ,MetaArXiv|Physical Sciences and Mathematics|Statistics and Probability|Design of Experiments and Sample Surveys ,Expected utility ,Computer science ,Decision theory ,media_common.quotation_subject ,Replication ,Replication value ,Replication (computing) ,MetaArXiv|Physical Sciences and Mathematics ,Resource (project management) ,Risk analysis (engineering) ,Study selection ,bepress|Physical Sciences and Mathematics|Statistics and Probability ,MetaArXiv|Physical Sciences and Mathematics|Statistics and Probability ,Psychology (miscellaneous) ,Function (engineering) ,Decision model ,bepress|Physical Sciences and Mathematics|Statistics and Probability|Design of Experiments and Sample Surveys ,Selection (genetic algorithm) ,Expected utility hypothesis ,media_common ,Causal model - Abstract
Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which of all the candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal researchers wish to achieve is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim and (b) uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built. Translational Abstract: Replication, redoing a study using the same procedures, is an important part of checking the robustness of claims in the psychological literature. The practice of replicating original studies has been woefully devalued for many years, but this is now changing. Recent calls for improving the quality of research in psychology have generated a surge of interest in funding, conducting, and publishing replication studies. Because many studies have never been replicated, and researchers have limited time and money to perform replication studies, researchers must decide which studies are the most important to replicate. This way scientists learn the most, given limited resources. In this article, we lay out what it means to think about what is the most important thing to replicate, and we propose a general decision rule for picking a study to replicate. That rule depends on a concept we call replication value. Replication value is a function of the importance of the study and how uncertain we are about the findings. In this article we explain how researchers can think precisely about the value of replication studies. We then discuss when and how it makes sense to use replication value as a measure of how valuable a replication study would be, and we discuss factors that funders, journals, or scientists could consider when determining how valuable a replication study is.
- Published
- 2023
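Entry 8 above defines the replication value of a claim as a function of (a) the value of being certain about the claim and (b) the remaining uncertainty about it. As a rough toy illustration only (the paper's causal decision model is far richer, and the citation counts and standard errors below are invented proxies), one crude score multiplies an importance proxy by an uncertainty proxy:

```python
# Toy replication-value ranking: value-of-certainty proxied by citation count,
# uncertainty proxied by the standard error of the original effect estimate.
# These proxies are illustrative assumptions, not the paper's recommendation.
claims = [
    {"id": "claim_A", "citations": 850, "std_error": 0.05},
    {"id": "claim_B", "citations": 120, "std_error": 0.20},
    {"id": "claim_C", "citations": 400, "std_error": 0.15},
]

def replication_value(claim):
    value_of_certainty = claim["citations"]      # how much we care about the claim
    uncertainty = claim["std_error"]             # how unsure current evidence leaves us
    return value_of_certainty * uncertainty

# Rank claims from most to least worth replicating under this crude score.
for c in sorted(claims, key=replication_value, reverse=True):
    print(c["id"], round(replication_value(c), 1))
```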
9. A Fast, Reliable, Opportunistic Broadcast Scheme With Mitigation of Internal Interference in VANETs
- Author
-
Dan Keun Sung, Xinming Zhang, and Hui Zhang
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Retransmission ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Latency (audio) ,Data_CODINGANDINFORMATIONTHEORY ,Interference (wave propagation) ,law.invention ,Relay ,law ,Metric (mathematics) ,Redundancy (engineering) ,Overhead (computing) ,Electrical and Electronic Engineering ,business ,Software ,Selection (genetic algorithm) ,Computer network - Abstract
In VANETs, it is important to support fast and reliable multi-hop broadcast for safety-related applications. The performance of multi-hop broadcast schemes is greatly affected by relay selection strategies. However, the relationship between the relay selection strategies and the expected broadcast performance has not been fully characterized yet. Furthermore, conventional broadcast schemes usually attempt to minimize the waiting time difference between adjacent relay candidates to reduce the waiting time overhead, which makes the relay selection process vulnerable to internal interference, occurring due to retransmissions from previous forwarders and transmissions from redundant relays. In this paper, we jointly take both of the relay selection and the internal interference mitigation into account and propose a fast, reliable, opportunistic multi-hop broadcast scheme, in which we utilize a novel metric called the expected broadcast speed in relay selection and propose a delayed retransmission mechanism to mitigate the adverse effect of retransmissions from previous forwarders and an expected redundancy probability based mechanism to mitigate the adverse effect of redundant relays. The performance evaluation results show that the proposed scheme yields the best broadcast performance among the four schemes in terms of the broadcast coverage ratio and the end-to-end delivery latency.
- Published
- 2023
10. Efficient Decomposition Selection for Multi-class Classification
- Author
-
Zeyi Wen, Bingsheng He, Yawen Chen, and Jian Chen
- Subjects
Multiclass classification ,Distribution (mathematics) ,Computational Theory and Mathematics ,Degree (graph theory) ,Computer science ,Decomposition (computer science) ,Decomposition method (constraint satisfaction) ,Divergence (statistics) ,Algorithm ,Selection (genetic algorithm) ,Computer Science Applications ,Information Systems - Abstract
Choosing a decomposition method for multi-class classification involves an important trade-off between efficiency and predictive accuracy. Trying all the decomposition methods to find the best one is too time-consuming for many applications, while choosing the wrong one may result in a large loss of predictive accuracy. In this paper, we propose an automatic decomposition method selection approach called "D-Chooser", which is lightweight and can choose the best decomposition method accurately. D-Chooser is equipped with our proposed difficulty index, which consists of sub-metrics including distribution divergence, overlapping regions, unevenness degree and relative size of the solution space. The difficulty index has two intriguing properties: 1) it is fast to compute and 2) it measures multi-class problems comprehensively. Extensive experiments on real-world multi-class problems show that D-Chooser achieves an accuracy of 83.3% in choosing the best decomposition method. It can choose the best method in just a few seconds, while verifying the effectiveness of a decomposition method with existing approaches often takes a few hours. We also provide case studies on Kaggle competitions, and the results confirm that D-Chooser is able to choose a better decomposition method than the winning solutions.
- Published
- 2023
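Entry 10 above scores multi-class problems with a difficulty index composed of sub-metrics such as distribution divergence, overlap, and unevenness. The sketch below computes two simple stand-in sub-metrics (class unevenness and a class-mean overlap proxy) on synthetic data; it is not the paper's actual index, and the equal weighting is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 3-class data set with uneven class sizes.
sizes = [300, 120, 30]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n, 2)) for i, n in enumerate(sizes)])
y = np.concatenate([np.full(n, i) for i, n in enumerate(sizes)])

# Sub-metric 1: unevenness degree = 1 - normalized entropy of class proportions.
p = np.bincount(y.astype(int)) / len(y)
unevenness = 1.0 - (-(p * np.log(p)).sum() / np.log(len(p)))

# Sub-metric 2: overlap proxy = within-class spread relative to the smallest
# distance between class means (smaller separation -> more overlap).
means = np.array([X[y == c].mean(axis=0) for c in np.unique(y)])
spread = np.mean([X[y == c].std() for c in np.unique(y)])
min_sep = min(np.linalg.norm(means[i] - means[j])
              for i in range(len(means)) for j in range(i + 1, len(means)))
overlap = spread / (min_sep + 1e-9)

difficulty_index = 0.5 * unevenness + 0.5 * overlap   # equal weights, purely illustrative
print("unevenness:", round(unevenness, 3), "overlap:", round(overlap, 3),
      "difficulty index:", round(difficulty_index, 3))
```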
11. Niche Specificity, Polygeny, and Pleiotropy in Herbivorous Insects
- Author
-
Matthew L. Forister and Nate B. Hardy
- Subjects
education.field_of_study ,Pleiotropy ,Evolutionary biology ,Specialization (functional) ,Niche ,Population ,Genetic model ,Biology ,Allele ,Generalist and specialist species ,education ,Selection (genetic algorithm) ,Ecology, Evolution, Behavior and Systematics - Abstract
What causes host-use specificity in herbivorous insects? Population genetic models predict specialization when habitat preference can evolve and there is antagonistic pleiotropy at a performance-affecting locus. But empirically for herbivorous insects, host-use performance is governed by many genetic loci, and antagonistic pleiotropy seems to be rare. Here, we use individual-based quantitative genetic simulation models to investigate the role of pleiotropy in the evolution of sympatric host-use specialization when performance and preference are quantitative traits. We look first at pleiotropies affecting only host-use performance. We find that when the host environment changes slowly the evolution of host-use specialization requires levels of antagonistic pleiotropy much higher than what has been observed in nature. On the other hand, with rapid environmental change or pronounced asymmetries in productivity across host species, the evolution of host-use specialization readily occurs without pleiotropy. When pleiotropies affect preference as well as performance, even with slow environmental change and host species of equal productivity, we observe fluctuations in host-use breadth, with mean specificity increasing with the pervasiveness of antagonistic pleiotropy. So, our simulations show that pleiotropy is not necessary for specialization, although it can be sufficient, provided it is extensive or multifarious.
- Published
- 2023
12. Insights into the genetic covariation between harvest survival and growth rate in olive flounder (Paralichthys olivaceus) under commercial production environment
- Author
-
Yangzhen Li, Weiwei Zheng, Yingming Yang, and Yuanri Hu
- Subjects
education.field_of_study ,Ecology ,Paralichthys ,biology ,Population ,Aquatic Science ,Heritability ,biology.organism_classification ,Selective breeding ,Genetic correlation ,Olive flounder ,Animal science ,Growth rate ,education ,Ecology, Evolution, Behavior and Systematics ,Selection (genetic algorithm) - Abstract
In aquaculture, selective breeding for survival to harvest has become an alternative strategy for improving disease resistance and production. However, knowledge of the genetic parameters of harvest survival, e.g., its heritability and genetic correlations with growth rate traits, is still scarce. The aims of this study were to estimate genetic parameters for harvest survival and growth rate traits under commercial farming conditions in olive flounder (Paralichthys olivaceus). Harvest survival was defined as a binary trait; growth traits were measured as average daily gain (ADG), specific growth rate (SGR), daily growth coefficient (DGC) and body weight (BW). Data from a population of 241 full-sib families (involving 39,904 individuals across four generations) were used. Heritabilities of survival were low but significant: 0.15 ± 0.04 and 0.22 ± 0.01 on the observed and underlying scales, respectively. Heritability estimates for ADG, SGR and DGC were medium to high (0.33 ± 0.06, 0.83 ± 0.07 and 0.58 ± 0.07, respectively), whereas the heritability of BW was low (0.17 ± 0.08). The genetic correlations between harvest survival and the three growth rate traits (ADG, SGR and DGC) were very strong, ranging from 0.66 to 0.79, whereas the genetic correlation between harvest survival and BW was much lower (0.17 ± 0.08). These results suggest that selection for harvest survival would result in a concomitant increase in growth rate, and vice versa. Our findings provide novel insights into the genetic improvement of growth rate and harvest survival through genetic selection in olive flounder.
- Published
- 2023
13. Interchromosomal linkage disequilibrium and linked fitness cost loci associated with selection for herbicide resistance
- Author
-
Anah Soble, Megan L. Van Etten, Regina S. Baucom, Sonal Gupta, Alex Harkess, and Jim Leebens-Mack
- Subjects
Whole genome sequencing ,Genetics ,Genetic hitchhiking ,Linkage disequilibrium ,Physiology ,Epistasis ,Plant Science ,Biology ,Adaptation ,Allele ,Gene ,Selection (genetic algorithm) - Abstract
The adaptation of weedy plants to herbicide is both a significant problem in agriculture and a model for the study of rapid adaptation under regimes of strong selection. Despite recent advances in our understanding of simple genetic changes that lead to resistance, a significant gap remains in our knowledge of resistance controlled by many loci and of the evolutionary factors that influence the maintenance of resistance over time. Here, we perform a multi-level analysis involving whole genome sequencing and assembly, resequencing and gene expression analysis both to uncover putative loci involved in non-target-site herbicide resistance (NTSR) and to examine the evolutionary forces underlying the maintenance of resistance in natural populations. We found loci involved in herbicide detoxification, stress sensing, and alterations in the shikimate acid pathway to be under selection, and confirmed that detoxification is responsible for glyphosate resistance using a functional assay. Furthermore, we found interchromosomal linkage disequilibrium (ILD), most likely associated with epistatic selection, to influence NTSR loci found on separate chromosomes, thus potentially mediating resistance through generations. Additionally, by combining the selection screen, differential expression and LD analyses, we identified fitness cost loci that are strongly linked to resistance alleles, indicating the role of genetic hitchhiking in maintaining the cost. Overall, our work strongly suggests that NTSR glyphosate resistance in I. purpurea is conferred by multiple genes that are maintained through generations via ILD, and that the fitness cost associated with resistance in this species is a by-product of genetic hitchhiking.
- Published
- 2023
14. Lean Six Sigma Project Selection in a Manufacturing Environment Using Hybrid Methodology Based on Intuitionistic Fuzzy MADM Approach
- Author
-
Jose Arturo Garza-Reyes, Rajeev Rathi, Mahipal Singh, and Jiju Antony
- Subjects
Operations research ,Computer science ,Strategy and Management ,05 social sciences ,Six Sigma ,TOPSIS ,Robustness (computer science) ,0502 economics and business ,Credibility ,Entropy (information theory) ,Electrical and Electronic Engineering ,Lean Six Sigma ,050203 business & management ,Reliability (statistics) ,Selection (genetic algorithm) - Abstract
Project selection has a critical role in the successful execution of a lean six sigma (LSS) program in any industry. Poor selection of LSS projects leads to limited results and diminishes the credibility of LSS initiatives. For this reason, in this article, we propose a method for the assessment and effective selection of LSS projects. Intuitionistic fuzzy sets based on the weighted average were adopted for aggregating the individual suggestions of decision makers. The weights of the selection criteria were computed using entropy measures, and the available projects were prioritized using multiattribute decision making approaches, i.e., modified TOPSIS and VIKOR. The proposed methodology is validated through a case example of LSS project selection in a manufacturing organization. The results of the case study reveal that, out of eight LSS projects, the assembly section (A8) is the best LSS project: A8 is the ideal LSS project for swift gains and manufacturing sustainability. The robustness and reliability of the obtained results are checked through a sensitivity analysis. The proposed methodology will help manufacturing organizations select the best opportunities in complex situations, resulting in sustainable development. Engineering managers and LSS consultants can also adopt the proposed methodology for LSS project selection.
- Published
- 2023
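Entry 14 above weights criteria by entropy and ranks LSS projects with modified TOPSIS and VIKOR. The sketch below runs a plain (non-fuzzy) entropy-weighted TOPSIS on an invented decision matrix; the intuitionistic-fuzzy aggregation and the VIKOR step are omitted.

```python
import numpy as np

# Invented decision matrix: 4 candidate projects x 3 benefit criteria.
D = np.array([
    [7.0, 0.60, 8.0],
    [9.0, 0.45, 6.5],
    [6.0, 0.80, 7.0],
    [8.0, 0.70, 9.0],
])

# Entropy weights: criteria that discriminate more among projects get more weight.
P = D / D.sum(axis=0)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(D.shape[0])
weights = (1 - entropy) / (1 - entropy).sum()

# TOPSIS: vector-normalize, weight, and measure distance to ideal/anti-ideal points.
R = D / np.sqrt((D ** 2).sum(axis=0))
V = R * weights
ideal, anti = V.max(axis=0), V.min(axis=0)      # all criteria treated as benefits here
d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)

ranking = np.argsort(-closeness)
print("closeness scores:", np.round(closeness, 3), "best project index:", ranking[0])
```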
15. Security-reliability tradeoff of MIMO TAS/SC networks using harvest-to-jam cooperative jamming methods with random jammer location
- Author
-
Ha Duy Hung, Tran Trung Duy, Le-Tien Thuong, and Pham Minh Nam
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Reliability (computer networking) ,MIMO ,Jamming ,Quantitative Biology::Subcellular Processes ,Artificial Intelligence ,Hardware and Architecture ,Computer Science::Networking and Internet Architecture ,Wireless ,Antenna (radio) ,business ,Algorithm ,Software ,Selection (genetic algorithm) ,Energy (signal processing) ,Computer Science::Cryptography and Security ,Computer Science::Information Theory ,Information Systems ,Rayleigh fading - Abstract
This paper evaluates the outage probability (OP) and intercept probability (IP) of physical-layer-security-based MIMO networks adopting cooperative jamming (Coop-Jam). In the considered scenario, a multi-antenna source communicates with a multi-antenna destination employing transmit antenna selection (TAS)/selection combining (SC), in the presence of a multi-antenna eavesdropper using SC. One of the jammers appearing near the destination is selected to generate jamming noise against the eavesdropper. Moreover, the destination supplies wireless energy to the chosen jammer and cooperates with it to remove the jamming noise. We consider two jammer selection approaches, named RAND and SHORT. In RAND, the destination randomly selects the jammer, and in SHORT, the jammer nearest to the destination is chosen. We derive exact and asymptotic expressions of OP and IP over Rayleigh fading, and perform Monte-Carlo simulations to verify the correctness of our derivations. The results show the advantages of the proposed RAND and SHORT methods compared with the corresponding scheme without Coop-Jam.
- Published
- 2023
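Entry 15 above verifies closed-form outage-probability expressions with Monte-Carlo simulation over Rayleigh fading. The sketch below is a much simpler setting (a single TAS link with no jammer or eavesdropper, and invented SNR and rate values), shown only to illustrate how such a simulation is checked against an analytic formula.

```python
import numpy as np

rng = np.random.default_rng(4)

n_trials = 200_000
n_tx_antennas = 4          # transmit antenna selection over 4 antennas (assumed)
avg_snr = 10.0             # average SNR per branch, linear scale (assumed)
target_rate = 2.0          # bits/s/Hz outage threshold (assumed)

# Rayleigh fading: per-antenna channel power gains are exponentially distributed.
gains = rng.exponential(scale=1.0, size=(n_trials, n_tx_antennas))

# TAS picks the antenna with the largest instantaneous gain.
snr = avg_snr * gains.max(axis=1)
capacity = np.log2(1.0 + snr)
outage_prob = np.mean(capacity < target_rate)

# Closed form for TAS over i.i.d. Rayleigh: P_out = (1 - exp(-g/avg_snr))^Nt,
# with g = 2^R - 1.
g = 2 ** target_rate - 1
analytic = (1 - np.exp(-g / avg_snr)) ** n_tx_antennas
print("simulated:", round(outage_prob, 4), "analytic:", round(analytic, 4))
```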
16. Online Radio Access Technology Selection Algorithms in a 5G Multi-RAT Network
- Author
-
Abhay Karandikar, Pranav Jha, Prasanna Chaporkar, and Arghyadip Roy
- Subjects
Computer Networks and Communications ,Wireless network ,Computer science ,Radio access technology ,Throughput ,Markov decision process ,Electrical and Electronic Engineering ,Heuristics ,Algorithm ,Software ,Blocking (computing) ,5G ,Selection (genetic algorithm) - Abstract
In today's wireless networks, a variety of Radio Access Technologies (RATs) are present. However, controlling each RAT individually leads to suboptimal utilization of network resources. Due to the remarkable growth of data traffic, interworking among different RATs is becoming necessary to overcome this suboptimal resource utilization. Users can be offloaded from one RAT to another based on the loads of the different networks, channel conditions and user priority. We consider the optimal RAT selection problem in a Fifth Generation (5G) New Radio (NR)-Wireless Fidelity (WiFi) network, where we aim to maximize the total system throughput subject to constraints on the blocking probability of high-priority users and the offloading probability of low-priority users. The problem is formulated as a Constrained Markov Decision Process (CMDP). We reduce the effective dimensionality of the action space by eliminating provably suboptimal actions. We propose low-complexity online heuristics for RAT selection that can operate without knowledge of the statistics of the system dynamics. Network Simulator-3 (ns-3) simulations reveal that the proposed algorithms offer near-optimal performance and outperform traditional RAT selection algorithms under realistic network scenarios including user mobility.
- Published
- 2023
17. Projects Selection In Knapsack Problem By Using Artificial Bee Colony Algorithm
- Author
-
Armaneesa Naaman Hasoon
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Computer science ,Combinatorial optimization problem ,Investment plan ,02 engineering and technology ,General Medicine ,Field (computer science) ,Artificial bee colony algorithm ,020901 industrial engineering & automation ,Knapsack problem ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,MATLAB ,computer ,Selection (genetic algorithm) ,computer.programming_language - Abstract
The knapsack problem is a combinatorial optimization problem that aims to maximize the benefit of the selected objects while keeping their total weight within the knapsack capacity. This paper applies the artificial bee colony algorithm to select a subset of projects, represented as a knapsack problem, in order to construct the investment plan that achieves the highest profit within a given cost budget; such planning is one application in the financial field. The results of the proposed algorithm, implemented in MATLAB (8.3), show its ability to find the best solution more precisely and rapidly than a genetic algorithm. http://dx.doi.org/10.25130/tjps.23.2018.039
- Published
- 2023
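Entry 17 above selects investment projects by solving a 0/1 knapsack problem with an artificial bee colony algorithm. The sketch below is a stripped-down, bee-colony-flavored bit-flip search on an invented project list (profits, costs, and budget are made up); it is not the author's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented projects: expected profit and required cost, plus a total budget.
profit = np.array([60, 100, 120, 75, 90, 40])
cost = np.array([10, 20, 30, 15, 25, 8])
budget = 60

def fitness(x):
    """Total profit of a selection; infeasible selections score 0."""
    return profit @ x if cost @ x <= budget else 0

# Bee-colony-flavored local search: a population of binary "food sources",
# each repeatedly improved by flipping one random bit (employed/onlooker phases
# collapsed into a single greedy step for brevity).
n_bees, n_iters = 20, 300
pop = rng.integers(0, 2, size=(n_bees, len(profit)))

for _ in range(n_iters):
    for i in range(n_bees):
        trial = pop[i].copy()
        j = rng.integers(len(profit))
        trial[j] ^= 1                      # neighbor solution: flip one project in/out
        if fitness(trial) > fitness(pop[i]):
            pop[i] = trial

best = max(pop, key=fitness)
print("selected projects:", np.flatnonzero(best), "profit:", fitness(best),
      "cost:", int(cost @ best))
```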
18. Cyclic Selection: Auxiliaries Are Merged, Not Inserted
- Author
-
Asia Pietraszko
- Subjects
Linguistics and Language ,Computer science ,business.industry ,Artificial intelligence ,computer.software_genre ,business ,computer ,Language and Linguistics ,Selection (genetic algorithm) ,Linguistics ,Natural language processing - Abstract
Traditional approaches to verbal periphrasis (compound tenses) treat auxiliary verbs as lexical items that enter syntactic derivation like any other lexical item, via Selection/Merge. An alternative view is that auxiliary verbs are inserted into a previously built structure (e.g., Bach 1967, Arregi 2000, Embick 2000, Cowper 2010, Bjorkman 2011, Arregi and Klecha 2015). Arguments for the insertion approach include auxiliaries’ last-resort distribution and the fact that, in many languages, auxiliaries are not systematically associated with a given inflectional category (Bjorkman’s (2011) “overflow” distribution). Here, I argue against the insertion approach. I demonstrate that the overflow pattern and last-resort distribution follow from Cyclic Selection (Pietraszko 2017)—a Merge counterpart of Cyclic Agree (Béjar and Rezac 2009). I also show that the insertion approach makes wrong predictions about compound tenses in Swahili, a language with overflow periphrasis. Under my approach, an auxiliary verb is a verbal head externally merged as a specifier of a functional head, such as T. It then undergoes m-merger with that head, instantiating an External-Merge version of Matushansky’s (2006) conception of head movement.
- Published
- 2023
19. Guiding secondary school students during task selection
- Author
-
Liesbeth Kester, Jeroen J. G. van Merriënboer, Michelle L. Nugteren, Halszka Jarodzka, Department of Online Learning and Instruction, RS-Theme Cognitive Processes in Education, RS: SHE - R1 - Research (OvO), and Onderwijsontw & Onderwijsresearch
- Subjects
050101 languages & linguistics ,STRATEGIES ,media_common.quotation_subject ,feedback ,Conformity ,Education ,Task (project management) ,strategic guidance ,Task selection ,ComputingMilieux_COMPUTERSANDEDUCATION ,0501 psychology and cognitive sciences ,Cognitive skill ,Selection (genetic algorithm) ,media_common ,conformity ,feed forward ,05 social sciences ,050301 education ,ADVICE ,SKILL ,PERFORMANCE ,Computer Science Applications ,SELF-REGULATION ,procedural guidance ,Psychology ,0503 education ,Cognitive psychology - Abstract
Secondary school students often learn new cognitive skills by practicing with tasks that vary in difficulty, amount of support and/or content. Occasionally, they have to select these tasks themselves. Studies on task-selection guidance investigated either procedural guidance (specific rules for selecting tasks) or strategic guidance (general rules and explanations for task selection), but never directly compared them. Experiment 1 aimed to replicate these studies by comparing procedural guidance and strategic guidance to a no-guidance condition, in an electronic learning environment in which participants practiced eight self-selected tasks. Results showed no differences in selected tasks during practice and domain-specific skill acquisition between the experimental groups. A possible explanation for this is an ineffective combination of feedback and feed forward (i.e. the task-selection advice). The second experiment compared inferential guidance (which combines procedural feedback with strategic feed forward), to a no-guidance condition. Results showed that participants selected more difficult, less-supported tasks after receiving inferential guidance than after no guidance. Differences in domain-specific skill acquisition were not significant, but higher conformity to inferential guidance did significantly predict higher domain-specific skill acquisition. Hence, we conclude that inferential guidance can positively affect task selections and domain-specific skill acquisition, but only when conformity is high.
- Published
- 2023
20. Identifying habitat selection via Fauna of Hor Al-Dalmaj and its surrounding terrestrial Areas, Iraq by using ArcGIS
- Author
-
Ahmed Abbas Awad and Salwan Ali Abed
- Subjects
Fishery ,Hotspot (Wi-Fi) ,Geography ,Habitat ,Fauna ,Threatened species ,Ecosystem ,General Medicine ,Selection (genetic algorithm) - Abstract
The current study aims to shed light on the most important hotspots in Hor Al-Dalmaj, southern Iraq. As a result of increased human growth, over-hunting of birds, and ongoing threats to the ecosystem, there is an urgent need to identify habitat selection by fauna in Hor Al-Dalmaj, as the fauna there are the most threatened by hunters and the area serves as a stopover for thousands of migratory birds. From September 2020 to March 2021, an ecological survey was conducted to collect data on fauna and review the results. Using Kernel Density Estimation (KDE) modeling in ArcGIS, 200 checklists were generated and divided into 349 points in the LFC analysis file. The maximum observed KDE value of habitat selection by fauna covers 24.84 km2, which is identified as core habitat. This core habitat is not well guarded or securely shielded from hunters.
- Published
- 2023
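Entry 20 above locates core habitat as the peak of a kernel density estimate over fauna observation points in ArcGIS. The sketch below reproduces only the density-peak idea with scipy's gaussian_kde on invented coordinates; it is not the ArcGIS workflow or the Hor Al-Dalmaj data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Invented observation points (projected easting/northing in km):
# one dense cluster plus scattered background records.
cluster = rng.normal(loc=[10, 20], scale=0.8, size=(120, 2))
background = rng.uniform(low=[0, 0], high=[30, 40], size=(80, 2))
points = np.vstack([cluster, background]).T        # shape (2, n) for gaussian_kde

kde = gaussian_kde(points)

# Evaluate the density on a grid and report the densest cell as the "core habitat".
xs, ys = np.meshgrid(np.linspace(0, 30, 150), np.linspace(0, 40, 200))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
iy, ix = np.unravel_index(density.argmax(), density.shape)
print("density peak near (km):", round(xs[iy, ix], 2), round(ys[iy, ix], 2))
```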
21. Beta autoregressive moving average model selection with application to modeling and forecasting stored hydroelectric energy
- Author
-
Vinícius Teodoro Scher, Francisco Cribari-Neto, and Fábio M. Bayer
- Subjects
Series (mathematics) ,BETA (programming language) ,Hydroelectricity ,Computer science ,Model selection ,Statistics ,Information Criteria ,Autoregressive–moving-average model ,Business and International Management ,computer ,Selection (genetic algorithm) ,computer.programming_language ,Unit interval - Abstract
We evaluate the accuracy of model selection and associated short-run forecasts using beta autoregressive moving average (βARMA) models, which are tailored for modeling and forecasting time series that assume values in the standard unit interval, (0, 1), such as rates, proportions, and concentration indices. Different model selection strategies are considered, including one that uses data resampling. Simulation evidence on the frequency of correct model selection favors the bootstrap-based approach. Model selection based on information criteria outperforms that based on forecasting accuracy measures. A forecasting analysis of the proportion of stored hydroelectric energy in South Brazil is presented and discussed. The empirical evidence shows that model selection based on data resampling typically leads to more accurate out-of-sample forecasts.
- Published
- 2023
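Entry 21 above compares βARMA model-selection strategies, including information criteria and bootstrap resampling. As a hedged illustration of criterion-based selection only, the sketch below picks an ordinary Gaussian ARMA order by AIC with statsmodels on a simulated series; it does not implement the βARMA model for (0, 1)-valued data or the bootstrap strategy.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)

# Simulate a simple AR(1) series as toy data (the paper models rates in (0, 1)
# with beta-distributed errors; here we use a plain Gaussian ARMA for brevity).
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)

# Select the (p, q) order with the smallest AIC over a small candidate grid.
best_aic, best_order = np.inf, None
for p in range(3):
    for q in range(3):
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue                      # skip orders that fail to converge
        if res.aic < best_aic:
            best_aic, best_order = res.aic, (p, q)

print("selected (p, q):", best_order, "AIC:", round(best_aic, 1))
```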
22. Data-Driven Many-Objective Crowd Worker Selection for Mobile Crowdsourcing in Industrial IoT
- Author
-
Chunxiao Mu, Xiangrong Tong, Yingshu Li, Zhuoran Lu, Yingjie Wang, and Chen Yu
- Subjects
Data collection ,business.industry ,Computer science ,Distributed computing ,Scale (chemistry) ,Crowdsourcing ,Computer Science Applications ,Data-driven ,Hotspot (Wi-Fi) ,Control and Systems Engineering ,Data integrity ,Electrical and Electronic Engineering ,Internet of Things ,business ,Selection (genetic algorithm) ,Information Systems - Abstract
With the development of mobile networks and intelligent equipment, mobile crowd sensing (MCS), a new intelligent data sensing paradigm for large-scale sensor applications such as the industrial Internet of Things (IIoT), assigns industrial sensing tasks to workers for data collection and sharing, which has created a bright future for building a strong industrial system and improving industrial services. How to design an effective worker selection mechanism to maximize the utility of crowdsourcing is a research hotspot in mobile sensing technologies. This paper studies the problem of selecting the fewest workers needed for a large MCS system to perform sensing tasks effectively and achieve the required coverage while meeting certain constraints. A many-objective worker selection method (MaOWS) is proposed to achieve the desired tradeoff, and an optimization mechanism is designed based on an enhanced differential evolution algorithm (EDEA) to ensure data integrity and the optimality of the searched solutions. The effectiveness of the proposed method is verified through a large-scale experimental evaluation on data sets collected from the real world.
- Published
- 2023
23. EEG Feature Selection via Global Redundancy Minimization for Emotion Recognition
- Author
-
Long Ye, Xia Wu, Fulin Wei, Xueyuan Xu, Qing Li, and Tianyuan Jia
- Subjects
medicine.diagnostic_test ,business.industry ,Computer science ,Feature extraction ,Feature selection ,Pattern recognition ,Electroencephalography ,Human-Computer Interaction ,Correlation ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminative model ,Feature (machine learning) ,medicine ,Redundancy (engineering) ,Artificial intelligence ,business ,Software ,Selection (genetic algorithm) - Abstract
A common drawback of EEG-based emotion recognition is that volume conduction effects of the human head introduce interchannel dependence and result in highly correlated information among most EEG features. These highly correlated EEG features cannot provide extra useful information, and they actually reduce the performance of emotion recognition. However, the existing feature selection methods, commonly used to remove redundant EEG features for emotion recognition, ignore the correlation between the EEG features or utilize a greedy strategy to evaluate the interdependence, which leads to the algorithms retaining the correlated and redundant features with similar feature scores in the EEG feature subset. To solve this problem, we propose a novel EEG feature selection method for emotion recognition, termed global redundancy minimization in orthogonal regression (GRMOR). GRMOR can effectively evaluate the dependence among all EEG features from a global view and then select a discriminative and nonredundant EEG feature subset for emotion recognition. To verify the performance of GRMOR, we utilized three EEG emotional data sets (DEAP, SEED, and HDED) with different numbers of channels (32, 62, and 128). The experimental results demonstrate that GRMOR is a promising tool for redundant feature removal and informative feature selection from highly correlated EEG features.
- Published
- 2023
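Entry 23 above selects EEG features by balancing relevance against global redundancy. The sketch below is a generic correlation-filter baseline (relevance as |correlation with the label|, redundancy as mean |correlation| with already-selected features) on synthetic data; it is not the GRMOR orthogonal-regression formulation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic stand-in for EEG features: 200 trials x 12 features, where
# features 0-3 are noisy copies of each other (highly redundant).
n, d = 200, 12
base = rng.normal(size=n)
X = rng.normal(size=(n, d))
X[:, :4] = base[:, None] + 0.1 * rng.normal(size=(n, 4))
y = (base + 0.5 * X[:, 5] > 0).astype(float)       # label driven by features 0-3 and 5

corr = np.corrcoef(np.c_[X, y], rowvar=False)
relevance = np.abs(corr[:-1, -1])                  # |corr(feature, label)|
feat_corr = np.abs(corr[:-1, :-1])                 # |corr(feature, feature)|

# Greedy mRMR-style selection: maximize relevance minus mean redundancy
# with respect to the features chosen so far.
selected = [int(relevance.argmax())]
while len(selected) < 4:
    redundancy = feat_corr[:, selected].mean(axis=1)
    score = relevance - redundancy
    score[selected] = -np.inf                      # never re-pick a chosen feature
    selected.append(int(score.argmax()))

print("selected features:", selected)
```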
24. Container selection processing implementing extensive neural learning in cloud services
- Author
-
S Muthakshi and K Mahesh
- Subjects
Stream processing ,business.industry ,Computer science ,Reliability (computer networking) ,Distributed computing ,Container (abstract data type) ,Batch processing ,Cloud computing ,General Medicine ,Service provider ,business ,Bottleneck ,Selection (genetic algorithm) - Abstract
Container selection processing analyses huge volumes of data with minimal resources and the lowest latency. Elastic management applications execute containers within a specific hierarchy of virtual processing and machine management. When a container suffers performance degradation, the implicit provision of the least expensive containers with minimal resources helps to increase overall container performance. Specific data and stream processing workloads call for scrutinizing data through the container selection process using different methodologies. To eliminate the bottleneck problem, the selection targets the required container size, processing speed, and the reliability needed to guide the batch processing of containers. Extensive neural learning handles container optimality and involves a dynamic selection of appropriate containers from cloud service providers. The cloud service providers, together with container selection, combine batch processing and stream processing to allocate an efficient, container-specific selection appropriately, emphasizing multiple rounds of data scrutiny when segregating huge data sets.
- Published
- 2023
25. Choose Appropriate Subproblems for Collaborative Modeling in Expensive Multiobjective Optimization
- Author
-
Zhenkun Wang, Yew-Soon Ong, Qingfu Zhang, Haitao Liu, Shunyu Yao, and Jianping Luo
- Subjects
Mathematical optimization ,Computer science ,media_common.quotation_subject ,Multi-objective optimization ,Computer Science Applications ,Human-Computer Interaction ,symbols.namesake ,Multiobjective optimization problem ,Control and Systems Engineering ,Benchmark (computing) ,symbols ,Leverage (statistics) ,Electrical and Electronic Engineering ,Function (engineering) ,Gaussian process ,Software ,Selection (genetic algorithm) ,Information Systems ,media_common - Abstract
In dealing with expensive multiobjective optimization problems, some algorithms convert the problem into a number of single-objective subproblems for optimization. At each iteration, these algorithms conduct surrogate-assisted optimization on one or multiple subproblems. However, these subproblems may be unnecessary or already resolved, and operating on such subproblems can cause severe inefficiencies, especially in the case of expensive optimization. To overcome this shortcoming, we propose an adaptive subproblem selection (ASS) strategy to identify the most promising subproblems for further modeling. To better leverage the cross information between the subproblems, we use a collaborative multioutput Gaussian process surrogate to model them jointly. Moreover, the commonly used acquisition functions (also known as infill criteria) are investigated in this article. Our analysis reveals that these acquisition functions may cause severe imbalances between exploitation and exploration in multiobjective optimization scenarios. Consequently, we develop a new acquisition function, namely, the adaptive lower confidence bound (ALCB), to cope with this. The experimental results on three different sets of benchmark problems indicate that our proposed algorithm is competitive. Beyond that, we also quantitatively validate the effectiveness of the ASS strategy, the CoMOGP model, and the ALCB acquisition function.
- Published
- 2023
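Entry 25 above proposes an adaptive lower confidence bound (ALCB) acquisition function for surrogate-assisted subproblem optimization. The sketch below shows only a standard, non-adaptive LCB over hypothetical surrogate predictions; the paper's adaptive scheduling of the trade-off coefficient is not reproduced.

```python
import numpy as np

def lower_confidence_bound(mean, std, beta=2.0):
    """Standard LCB for minimization: prefer low predicted mean and high uncertainty."""
    return mean - beta * std

# Hypothetical surrogate (e.g., Gaussian process) predictions for five candidates.
mean = np.array([1.20, 0.95, 1.10, 1.40, 1.00])
std = np.array([0.05, 0.10, 0.40, 0.02, 0.25])

lcb = lower_confidence_bound(mean, std)
print("LCB values:", np.round(lcb, 3), "-> evaluate candidate", int(lcb.argmin()))
```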
26. Sparse spatio-temporal autoregressions by profiling and bagging
- Author
-
Shaojun Guo, Hansheng Wang, and Yingying Ma
- Subjects
Economics and Econometrics ,Computer science ,Applied Mathematics ,05 social sciences ,Estimator ,Feature selection ,01 natural sciences ,Moment (mathematics) ,010104 statistics & probability ,Variable (computer science) ,Autoregressive model ,0502 economics and business ,Endogeneity ,0101 mathematics ,Greedy algorithm ,Algorithm ,Selection (genetic algorithm) ,050205 econometrics - Abstract
We consider a new class of spatio-temporal models with sparse autoregressive coefficient matrices and an exogenous variable. To estimate the model, we first profile the exogenous variable out of the response, which leads to a profiled model structure. Next, to overcome the endogeneity issue, we propose a class of generalized method of moments (GMM) estimators for the autoregressive coefficient matrices. A novel bagging-based estimator is further developed to address the over-determination issue that also occurs in Chang et al. (2015) and Dou et al. (2016). An adaptive forward-backward greedy algorithm is proposed to learn the sparse structure of the autoregressive coefficient matrices. A new BIC-type selection criterion is further developed to conduct variable selection for the GMM estimators. Asymptotic properties are also studied. The proposed methodology is illustrated with extensive simulation studies, and a social network dataset is analyzed for illustration.
- Published
- 2023
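Entry 26 above learns sparse autoregressive structure with an adaptive forward-backward greedy algorithm. The sketch below is a plain forward-backward greedy selector for sparse least-squares regression on synthetic data (the stopping threshold is tuned for this toy example); it does not implement the spatio-temporal GMM setting.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic sparse regression: only 3 of 15 predictors are active.
n, d = 200, 15
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[[1, 4, 9]] = [1.5, -2.0, 1.0]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def rss(active):
    """Residual sum of squares of least squares restricted to the active set."""
    if not active:
        return float(y @ y)
    coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    r = y - X[:, active] @ coef
    return float(r @ r)

threshold = 5.0                      # minimum forward gain to keep going (toy value)
active = []
for _ in range(d):
    # Forward step: add the predictor that reduces RSS the most.
    gains = {j: rss(active) - rss(active + [j]) for j in range(d) if j not in active}
    j_best, gain = max(gains.items(), key=lambda kv: kv[1])
    if gain < threshold:
        break
    active.append(j_best)
    # Backward step: drop any predictor whose removal costs less than half the gain.
    for j in list(active):
        if rss([k for k in active if k != j]) - rss(active) < 0.5 * gain:
            active.remove(j)

print("recovered support:", sorted(active))
```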
27. Efficient Performance Technical Selection of Positive Buck-Boost Converter
- Author
-
Abadal-Salam T. Hussain and Ahmed K. Abbas
- Subjects
Computer science ,Boost converter ,Pharmacology (medical) ,Selection (genetic algorithm) ,Reliability engineering - Abstract
The need for a stable DC voltage in both removable and non-removable systems has grown considerably in recent years. These systems have to be implemented efficiently in order to respond rapidly to voltage variations. Under this requirement, efficient power conversion can extend the lifetime of the batteries employed in such systems. The presented efficiency is realized using buck and boost components, which are combined to form a positive Buck-Boost converter circuit. The main functions of the positive Buck-Boost converter are to keep the output voltage polarity unchanged and to regulate the voltage level between the input and the output. The positive Buck-Boost converter circuit was simulated in Proteus software, and the hardware components were assembled in practice. Finally, the microcontroller employed in the proposed system is a PIC16F877A, which senses the input voltage and generates Pulse Width Modulation (PWM) signals to drive the MOSFET.
- Published
- 2022
28. Novel Single-Valued Neutrosophic Combined Compromise Solution Approach for Sustainable Waste Electrical and Electronics Equipment Recycling Partner Selection
- Author
-
Arunodaya Raj Mishra and Pratibha Rani
- Subjects
Risk analysis (engineering) ,Robustness (computer science) ,Computer science ,Strategy and Management ,Similarity (psychology) ,Stability (learning theory) ,Context (language use) ,Sensitivity (control systems) ,Electronics ,Electrical and Electronic Engineering ,Similarity measure ,Selection (genetic algorithm) - Abstract
Waste electrical and electronics equipment (WEEE) recyclers have become a boon for countries, as they help reduce carbon emissions by recycling WEEE in the most eco-friendly way. The evaluation and choice of the optimal WEEE recycling partner is a significant and multifaceted decision for managerial experts because of the involvement of several qualitative and quantitative criteria. Thus, the aim of this article is to propose a novel methodology that integrates a combined compromise solution approach and a similarity measure within the context of single-valued neutrosophic sets (SVNSs), and to use it to solve the decision-making problem. In this approach, the criteria weights are evaluated by a new procedure based on a similarity measure. For this, a novel similarity measure is introduced for SVNSs, and its efficiency over existing similarity measures is demonstrated. To investigate the efficiency and practicality of the introduced approach, a case study of WEEE recycling partner selection is conducted under the SVNS environment. Finally, comparison and sensitivity analyses are made to check the robustness and stability of the developed methodology. The outcomes of this article show that the developed approach is more suitable than, and well consistent with, existing approaches.
- Published
- 2022
29. Selection of PI + Notch Voltage Controller Coefficients to Attain Desired Steady-State and Transient Performance in PFC Rectifiers
- Author
-
Alon Kuperman and Pavel Strajnikov
- Subjects
Physics ,Steady state (electronics) ,Voltage controller ,Control theory ,Energy Engineering and Power Technology ,Transient (oscillation) ,Electrical and Electronic Engineering ,Selection (genetic algorithm) - Published
- 2022
30. Conditional Joint Distribution-Based Test Selection for Fault Detection and Isolation
- Author
-
Yang Li, Xiuli Wang, Ningyun Lu, and Bin Jiang
- Subjects
Mathematical optimization ,Dependency (UML) ,Computer science ,Fault (power engineering) ,Fault detection and isolation ,Computer Science Applications ,law.invention ,Human-Computer Interaction ,Control and Systems Engineering ,law ,Joint probability distribution ,Bernoulli distribution ,Electrical network ,Measurement uncertainty ,Electrical and Electronic Engineering ,Algorithms ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
Data-driven fault detection and isolation (FDI) depends on complete, comprehensive, and accurate fault information. Optimal test selection can substantially improve the information available for FDI and reduce the detection and maintenance costs of engineering systems. Considerable effort has been devoted to modeling the test selection problem (TSP), but few studies have considered the impact of measurement uncertainty and fault occurrence. In this article, a conditional joint distribution (CJD)-based test selection method is proposed to construct an accurate TSP model. In addition, we propose a deep copula function that can describe the dependency among the tests. Afterward, an improved discrete binary particle swarm optimization (IBPSO) algorithm is proposed to solve the TSP. Then, an application to an electrical circuit is used to illustrate the efficiency of the proposed method over two available methods: 1) joint distribution-based IBPSO and 2) Bernoulli distribution-based IBPSO.
- Published
- 2022
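Entry 30 above solves the test selection problem with an improved binary particle swarm optimizer. The sketch below is a vanilla binary PSO that minimizes test cost under a simple "every fault covered by at least one selected test" penalty on an invented detectability matrix; the paper's conditional-joint-distribution model and IBPSO refinements are not included.

```python
import numpy as np

rng = np.random.default_rng(10)

# Invented fault/test detectability matrix D[f, t] = 1 if test t can detect fault f,
# plus per-test costs. Goal: cover every fault at minimum total cost.
D = rng.integers(0, 2, size=(8, 12))
D[np.arange(8), rng.integers(0, 12, size=8)] = 1   # ensure every fault is detectable
cost = rng.uniform(1.0, 5.0, size=12)

def objective(x):
    """Total cost plus a large penalty for every uncovered fault."""
    uncovered = np.sum((D @ x) == 0)
    return cost @ x + 100.0 * uncovered

n_particles, n_iters = 30, 200
X = rng.integers(0, 2, size=(n_particles, 12)).astype(float)
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), np.array([objective(x) for x in X])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    prob = 1.0 / (1.0 + np.exp(-V))                 # sigmoid transfer to bit probabilities
    X = (rng.random(X.shape) < prob).astype(float)
    vals = np.array([objective(x) for x in X])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("selected tests:", np.flatnonzero(gbest), "cost:", round(float(cost @ gbest), 2))
```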
31. A Multiobjective Framework for Many-Objective Optimization
- Author
-
Si-Chen Liu, Jun Zhang, Kay Chen Tan, and Zhi-Hui Zhan
- Subjects
Mathematical optimization ,Optimization problem ,Computer science ,Pareto principle ,Space (commercial competition) ,Evolutionary computation ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Differential evolution ,Convergence (routing) ,Electrical and Electronic Engineering ,Cluster analysis ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
It is known that many-objective optimization problems (MaOPs) often face the difficulty of maintaining good diversity and convergence in the search process due to the high-dimensional objective space. To address this issue, this article proposes a novel multiobjective framework for many-objective optimization (Mo4Ma), which transforms the many-objective space into multiobjective space. First, the many objectives are transformed into two indicative objectives of convergence and diversity. Second, a clustering-based sequential selection strategy is put forward in the transformed multiobjective space to guide the evolutionary search process. Specifically, the selection is circularly performed on the clustered subpopulations to maintain population diversity. In each round of selection, solutions with good performance in the transformed multiobjective space will be chosen to improve the overall convergence. The Mo4Ma is a generic framework that any type of evolutionary computation algorithm can incorporate compatibly. In this article, the differential evolution (DE) is adopted as the optimizer in the Mo4Ma framework, thus resulting in an Mo4Ma-DE algorithm. Experimental results show that the Mo4Ma-DE algorithm can obtain well-converged and widely distributed Pareto solutions along with the many-objective Pareto sets of the original MaOPs. Compared with seven state-of-the-art MaOP algorithms, the proposed Mo4Ma-DE algorithm shows strong competitiveness and general better performance.
- Published
- 2022
32. Deep Reinforcement Learning for Solving the Heterogeneous Capacitated Vehicle Routing Problem
- Author
-
Jingwen Li, Zhiguang Cao, Wen Song, Andrew Lim, Ruize Gao, Jie Zhang, and Yining Ma
- Subjects
Computer Science - Machine Learning ,Mathematical optimization ,Computer science ,Heuristic ,Node (networking) ,String (computer science) ,Computer Science Applications ,Rendering (computer graphics) ,Human-Computer Interaction ,Control and Systems Engineering ,Vehicle routing problem ,Reinforcement learning ,Electrical and Electronic Engineering ,Heuristics ,Mathematics - Optimization and Control ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
Existing deep reinforcement learning (DRL) based methods for solving the capacitated vehicle routing problem (CVRP) intrinsically cope with homogeneous vehicle fleet, in which the fleet is assumed as repetitions of a single vehicle. Hence, their key to construct a solution solely lies in the selection of the next node (customer) to visit excluding the selection of vehicle. However, vehicles in real-world scenarios are likely to be heterogeneous with different characteristics that affect their capacity (or travel speed), rendering existing DRL methods less effective. In this paper, we tackle heterogeneous CVRP (HCVRP), where vehicles are mainly characterized by different capacities. We consider both min-max and min-sum objectives for HCVRP, which aim to minimize the longest or total travel time of the vehicle(s) in the fleet. To solve those problems, we propose a DRL method based on the attention mechanism with a vehicle selection decoder accounting for the heterogeneous fleet constraint and a node selection decoder accounting for the route construction, which learns to construct a solution by automatically selecting both a vehicle and a node for this vehicle at each step. Experimental results based on randomly generated instances show that, with desirable generalization to various problem sizes, our method outperforms the state-of-the-art DRL method and most of the conventional heuristics, and also delivers competitive performance against the state-of-the-art heuristic method, i.e., SISR. Additionally, the results of extended experiments demonstrate that our method is also able to solve CVRPLib instances with satisfactory performance.
- Published
- 2022
33. A Multi-Branch Decoder Network Approach to Adaptive Temporal Data Selection and Reconstruction for Big Scientific Simulation Data
- Author
-
Lanyu Shang, Tom Peterka, Hanqi Guo, Dong Wang, and Yang Zhang
- Subjects
Information Systems and Management ,Computer science ,Iterative reconstruction ,computer.software_genre ,Temporal database ,Data modeling ,Scientific simulation ,Data integrity ,Adaptive system ,Data mining ,computer ,Network approach ,Selection (genetic algorithm) ,Information Systems - Published
- 2022
34. RL-Recruiter+: Mobility-Predictability-Aware Participant Selection Learning for From-Scratch Mobile Crowdsensing
- Author
-
Bo Wu, Jiangtao Wang, Yunfan Hu, and Sumi Helal
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Process (engineering) ,Machine learning ,computer.software_genre ,Electronic mail ,Software deployment ,Task analysis ,Reinforcement learning ,Artificial intelligence ,Electrical and Electronic Engineering ,Predictability ,business ,Decision model ,computer ,Software ,Selection (genetic algorithm) - Abstract
Participant selection is a fundamental research issue in Mobile Crowdsensing (MCS). Previous approaches commonly assume that sufficiently long histories of candidate participants' mobility trajectories are available for modeling their movement patterns before the selection process, which is not realistic for some new MCS applications or platforms. The sparsity, or even absence, of mobility traces leads to inaccurate location prediction, thus undermining the deployment of new MCS applications. To this end, this paper investigates a novel problem called From-Scratch MCS (FS-MCS for short), in which we study how to intelligently select participants to minimize such cold-start effects. Specifically, we propose a novel framework based on reinforcement learning, named RL-Recruiter+. With the gradual accumulation of mobility trajectories over time, RL-Recruiter+ is able to make a good sequence of participant selection decisions for each sensing slot. Compared to its previous version, RL-Recruiter, RL-Recruiter+ jointly considers both previous coverage and current mobility predictability when training the participant selection decision model. We evaluate our approach experimentally on two real-world mobility datasets. The results demonstrate that RL-Recruiter+ outperforms the baseline approaches, including RL-Recruiter, under various settings.
- Published
- 2022
35. Conservation and Convergence of Genetic Architecture in the Adaptive Radiation of Anolis Lizards
- Author
-
Megan E. Kobiela, Jonathan B. Losos, Jason J. Kolbe, Joel W. McGlothlin, Edmund D. Brodie, and Helen V. Wright
- Subjects
Constraint (information theory) ,biology ,Phylogenetic tree ,Evolutionary biology ,Phylogenetics ,Adaptive radiation ,Convergence (relationship) ,biology.organism_classification ,Anolis ,Genetic architecture ,Selection (genetic algorithm) ,Ecology, Evolution, Behavior and Systematics - Abstract
The G matrix, which quantifies the genetic architecture of traits, is often viewed as an evolutionary constraint. However, G can evolve in response to selection and may also be viewed as a product of adaptive evolution. Convergent evolution of G in similar environments would suggest that G evolves adaptively, but it is difficult to disentangle such effects from phylogeny. Here, we use the adaptive radiation of Anolis lizards to ask whether convergence of G accompanies the repeated evolution of habitat specialists, or ecomorphs, across the Greater Antilles. We measured G in seven species representing three ecomorphs (trunk-crown, trunk-ground, and grass-bush). We found that the overall structure of G does not converge. Instead, the structure of G is well conserved and displays a phylogenetic signal consistent with Brownian motion. However, several elements of G showed signatures of convergence, indicating that some aspects of genetic architecture have been shaped by selection. Most notably, genetic correlations between limb traits and body traits were weaker in long-legged trunk-ground species, suggesting effects of recurrent selection on limb length. Our results demonstrate that common selection pressures may have subtle but consistent effects on the evolution of G, even as its overall structure remains conserved.
- Published
- 2022
36. Detecting Compiler Warning Defects Via Diversity-Guided Program Mutation
- Author
-
Zhilei Ren, Weiqiang Kong, Xiaochen Li, Zhide Zhou, He Jiang, and Yixuan Tang
- Subjects
Dead code ,Programming language ,Computer science ,Mutation (genetic algorithm) ,Program compilation ,Test program ,Compiler ,Construct (python library) ,computer.software_genre ,Abstract syntax tree ,computer ,Software ,Selection (genetic algorithm) - Abstract
Compiler diagnostic warnings help developers identify potential programming mistakes during program compilation. However, these warnings can be erroneous due to defects in the compilers' warning diagnostics. Although many techniques have been proposed to automatically generate test programs for compiler warning defect detection, their defect-finding effectiveness is still limited by their weak ability to generate warning-sensitive test program structures. Therefore, in this paper, we propose a DIversity-guided PROgram Mutation approach, called DIPROM, to construct diverse warning-sensitive programs for effective compiler warning defect detection. Given a seed test program, DIPROM first removes its dead code to reduce false positive warning defects. Then, the abstract syntax tree (AST) of the test program is constructed; DIPROM iteratively mutates the structures of the AST to generate warning-sensitive program variants. To improve the diversity of the program variants, DIPROM applies a novel diversity function to guide the selection of the best program variants in each iteration. With the selected program variants, differential testing is conducted to effectively detect warning defects in different compilers. In our experiments, we evaluate DIPROM with two popular C compilers (i.e., GCC and Clang). Experimental results show that DIPROM detects 75.36% and 218.42% more warning defects than two state-of-the-art approaches (i.e., Epiphron and Csmith), respectively. Meanwhile, DIPROM is efficient: on average it spends only 61.14% of the time of the comparative approaches to find the same number of warning defects. We finally applied DIPROM to the latest development versions of GCC and Clang. After two months of running, DIPROM reported 12 new warning defects; nine of them have been confirmed or fixed by developers.
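The differential-testing step can be pictured with a small harness like the one below, which compiles the same program variant with GCC and Clang and diffs the emitted warnings. The command-line flags are common defaults and the warning normalization is a simplification of whatever DIPROM actually does; one-sided reports are only candidates for inspection, since the two compilers legitimately word many warnings differently.

```python
import re
import subprocess
import tempfile

def collect_warnings(compiler, source_path):
    """Compile with warnings enabled and return the set of warning messages."""
    proc = subprocess.run(
        [compiler, "-Wall", "-Wextra", "-c", source_path, "-o", "/dev/null"],
        capture_output=True, text=True)
    pattern = re.compile(r"warning: (.+)$")
    # Keep only the message text, dropping the file/line prefix.
    return {m.group(1) for line in proc.stderr.splitlines()
            if (m := pattern.search(line))}

def differential_test(variant_source):
    """Report warnings that only one of the two compilers emits for a variant."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(variant_source)
        path = f.name
    gcc_warnings = collect_warnings("gcc", path)
    clang_warnings = collect_warnings("clang", path)
    return {"gcc_only": gcc_warnings - clang_warnings,
            "clang_only": clang_warnings - gcc_warnings}

# Example variant: an unused variable should warn in both compilers, so a
# one-sided report here would be a candidate warning defect to inspect.
print(differential_test("int main(void) { int unused = 0; return 0; }"))
```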
- Published
- 2022
37. Improving artificial bee colony algorithm using modified nearest neighbor sequence
- Author
-
Kai Li, Zhihua Cui, Hui Wang, Feng Wang, and Wenjun Wang
- Subjects
Artificial bee colony algorithm ,Sequence ,Roulette ,General Computer Science ,Computer science ,Process (computing) ,Benchmark (computing) ,Evolutionary algorithm ,Algorithm ,Selection (genetic algorithm) ,k-nearest neighbors algorithm - Abstract
Nearest neighbor (NN) is a simple machine learning algorithm that is often used in classification problems. In this paper, the concept of a modified nearest neighbor (MNN) is proposed to strengthen the optimization capability of the artificial bee colony (ABC) algorithm. The new approach is called ABC based on the modified nearest neighbor sequence (NNSABC). First, MNN is used to construct solution sequences. Unlike the original roulette selection, NNSABC randomly chooses a solution from the corresponding nearest neighbor sequence to generate offspring. Then, two novel search strategies based on the modified nearest neighbor sequence are employed to build a strategy pool. During optimization, different search strategies are dynamically chosen from the strategy pool according to the current search status. To study the optimization capability of NNSABC, two benchmark sets comprising 22 classical problems and 28 CEC 2013 complex problems are tested. Experimental results show that NNSABC obtains competitive performance compared with twenty-three other ABC variants and evolutionary algorithms.
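For intuition, the standard ABC update perturbs one dimension of a solution using a randomly chosen partner. The sketch below draws that partner from the solution's k nearest neighbors instead of from a roulette wheel over the whole swarm; the neighborhood size and the rest of the update are assumptions for illustration, not the exact NNSABC operators.

```python
import random

def nearest_neighbor_sequence(population, i, k=5):
    """Indices of the k solutions closest (Euclidean) to population[i]."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    others = [j for j in range(len(population)) if j != i]
    return sorted(others, key=lambda j: dist(population[i], population[j]))[:k]

def abc_candidate(population, i, k=5):
    """Classic ABC perturbation, but the partner comes from the NN sequence."""
    x = population[i]
    partner = population[random.choice(nearest_neighbor_sequence(population, i, k))]
    d = random.randrange(len(x))              # dimension to perturb
    phi = random.uniform(-1.0, 1.0)
    candidate = list(x)
    candidate[d] = x[d] + phi * (x[d] - partner[d])
    return candidate

# Example: propose a candidate for solution 0 in a random 10-member swarm.
swarm = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(10)]
print(abc_candidate(swarm, 0))
```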
- Published
- 2022
38. Bubbly firm dynamics and aggregate fluctuations
- Author
-
Donghai Zhang and Haozhou Tang
- Subjects
History ,Transmission channel ,Economics and Econometrics ,Polymers and Plastics ,Bubble ,Aggregate (data warehouse) ,Impulse (physics) ,Industrial and Manufacturing Engineering ,Econometrics ,Economics ,Asset (economics) ,Business and International Management ,Selection (genetic algorithm) ,Finance - Abstract
This study generalizes a standard heterogeneous firm model with endogenous entry and exit by allowing for asset bubbles. We highlight the selection effect of bubbles that incentivizes low-productivity firms to enter or remain in the market. We show that a rise in the aggregate bubble can boost real economic activities by increasing the number of entrants and decreasing the number of exits. Using firm-level data, we find that an overvalued firm is less likely to exit the market, which supports the novel transmission channel of bubbles. Moreover, we show that the model-implied impulse responses are consistent with those identified in the data. Finally, we demonstrate that a model without bubbles fails to reproduce our empirical findings.
- Published
- 2022
39. The selection of transcatheter heart valves in transcatheter aortic valve replacement
- Author
-
Peter Nguyen, Sameer Arora, Zachary Tugaoen, and John P. Vavalle
- Subjects
medicine.medical_specialty ,Transcatheter aortic ,business.industry ,medicine.medical_treatment ,Aortic Valve Stenosis ,Prosthesis Design ,United States ,Transcatheter Aortic Valve Replacement ,Treatment Outcome ,Balloon expandable stent ,medicine.anatomical_structure ,Valve replacement ,Heart Valve Prosthesis ,Aortic Valve ,Internal medicine ,medicine ,Cardiology ,Humans ,Heart valve ,Cardiology and Cardiovascular Medicine ,business ,Selection (genetic algorithm) - Abstract
Transcatheter heart valve technology has progressed rapidly since its initial approval in the United States. Two transcatheter heart valve delivery systems are currently widely available and approved in the US; however, limited data exist on optimal device selection for various patient populations. This review explores the characteristics of the currently approved transcatheter heart valve systems and the scenarios in which one valve system may be favored over others. We provide a simplified decision tree for selecting the optimal transcatheter valve system based on specific patient-centered characteristics.
- Published
- 2022
40. Foraging for the self: Environment selection for agency inference
- Author
-
Kelsey Perrykkad, Jakob Hohwy, and Robinson Je
- Subjects
Computer science ,business.industry ,Self ,Foraging ,Inference ,Experimental and Cognitive Psychology ,Machine learning ,computer.software_genre ,Arts and Humanities (miscellaneous) ,Agency (sociology) ,Developmental and Educational Psychology ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) - Abstract
Sometimes agents choose to occupy environments that are neither traditionally rewarding nor worth exploring, but which rather promise to help minimise uncertainty related to what they can control. Selecting environments that afford inferences about agency seems a foundational aspect of environment selection dynamics – if an agent can’t form reliable beliefs about what they can and can’t control, then they can’t act efficiently to achieve rewards. This relatively neglected aspect of environment selection is important to study so that we can better understand why agents occupy certain environments over others – something that may also be relevant for mental and developmental conditions, such as autism. This online experiment investigates the impact of uncertainty about agency on the way participants choose to freely move between two environments, one that has greater irreducible variability and one that is more complex to model. We hypothesise that increasingly erroneous predictions about the expected outcome of agency-exploring actions can be a driver of switching environments, and we explore which type of environment agents prefer. Results show that participants actively switch between the two environments following increases in prediction error, and that the tolerance for prediction error before switching is modulated by individuals’ autism traits. Further, we find that participants more frequently occupy the variable environment, which is predicted by greater accuracy and higher confidence than the complex environment. This is the first online study to investigate relatively unconstrained ongoing foraging dynamics in support of judgements of agency, and in doing so represents a significant methodological advance.
- Published
- 2022
41. A method for the selection of bioindicator bird species based on their monitoring possibilities
- Author
-
Marco Antonio, Luis Enrique Domínguez Velázquez, Jaqueline Guzmán Hernández, Martin Gómez, and Altamirano Gonzalez Ortega
- Subjects
monitoring ,Geography ,Indicator species ,lcsh:Zoology ,selection method ,Sampling (statistics) ,Forestry ,lcsh:QL1-991 ,General Medicine ,biological indicators ,Chiapas ,Cartography ,Selection (genetic algorithm)
A method to select bird indicator species taking into account their monitoring possibilities. Based on criteria proposed by specialists to characterize terrestrial birds that respond to environmental changes, we designed a numeric matrix for the selection of indicator species, with the objective of recognizing those with the best prospects for monitoring. We assigned weighted values that allowed each species recorded at sampling sites in an area of northwestern Chiapas, visited during 2002, to be evaluated through this matrix. Applying the method identified 14 species with good monitoring prospects out of a universe of 272 recorded species. Besides taking into account most of the basic considerations for the selection of indicator species, the proposed method can discriminate within the recorded species community by using weighted numeric values. We describe the method and give some recommendations for its use.
- Published
- 2022
42. Early Lessons Learned with the Independent IR Residency Selection Process: Similarities and Differences From the Vascular and Interventional Radiology Fellowship
- Author
-
M. Victoria Marx, Shantanu Warhadpande, Paul J. Rochon, S Sabri, Claire Kaufman, and Minhaj S. Khaja
- Subjects
medicine.medical_specialty ,Career Choice ,medicine.diagnostic_test ,Process (engineering) ,business.industry ,Internship and Residency ,Interventional radiology ,Radiology, Interventional ,United States ,Education, Medical, Graduate ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Fellowships and Scholarships ,business ,Selection (genetic algorithm) - Published
- 2022
43. Conspicuous consumption: A meta-analytic review of its antecedents, consequences, and moderators
- Author
-
Lalita A. Manrai, Ajay K. Manrai, Bipul Kumar, and Richard P. Bagozzi
- Subjects
Marketing ,Econometrics ,Conspicuous consumption ,Psychology ,Selection (genetic algorithm) ,Structural equation modeling ,Stock (geology) ,Independent research - Abstract
This paper documents a comprehensive theoretical framework developed to understand conspicuous consumption behavior. The proposed framework identifies three antecedents and two consequences of conspicuous consumption. We tested hypotheses concerning this framework using a meta-analytic approach. We also meta-analytically tested the effects of contextual, methodological, and individual-level moderators on the relationship between conspicuous consumption and its consequences. Additionally, we examined the mediating role of conspicuous consumption in the relationship between its antecedents and consequences using meta-analytic structural equation modeling. After an extensive literature search based on multiple selection criteria, we used 59 independent research studies and 97 unique effect sizes to test the hypotheses. The findings contribute theoretically to the stock of knowledge on conspicuous consumption and provide new insights for practitioners.
- Published
- 2022
44. Hash Bit Selection Based on Collaborative Neurodynamic Optimization
- Author
-
Jun Wang, Sam Kwong, and Xinqi Li
- Subjects
Mathematical optimization ,Computer science ,Hash function ,Particle swarm optimization ,Binary number ,Models, Theoretical ,Computer Science Applications ,Human-Computer Interaction ,Cardinality ,Control and Systems Engineering ,Mutation (genetic algorithm) ,Astrophysics::Solar and Stellar Astrophysics ,Quadratic programming ,Electrical and Electronic Engineering ,Algorithms ,Software ,Selection (genetic algorithm) ,Information Systems ,Premature convergence - Abstract
Hash bit selection determines an optimal subset of hash bits from a candidate bit pool. It is formulated as a zero-one quadratic programming problem subject to binary and cardinality constraints. In this article, the problem is equivalently reformulated as a global optimization problem. A collaborative neurodynamic optimization (CNO) approach is applied to solve it, using a group of neurodynamic models whose initial states are repeatedly reset by particle swarm optimization during the CNO process. Lévy mutation is used in the CNO to avoid premature convergence by ensuring diversity of the initial states. A theoretical proof shows that the CNO with the Lévy mutation operator is almost surely convergent to global optima. Experimental results substantiate the efficacy and superiority of the CNO-based hash bit selection method over existing methods on three benchmarks.
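To make the formulation concrete, a zero-one quadratic program with a cardinality constraint can be written as min x^T Q x - c^T x subject to sum(x) = m and x in {0,1}^n, where Q penalizes redundant bit pairs and c rewards individually informative bits. The greedy swap search below is only a stand-in for the collaborative neurodynamic approach, and the way Q and c are built from hash codes is an assumption.

```python
import numpy as np

def greedy_bit_selection(Q, c, m, n_restarts=20, seed=0):
    """Approximately minimize x^T Q x - c^T x with exactly m bits selected.

    Q (n x n) encodes pairwise redundancy between candidate hash bits and
    c (n,) encodes each bit's individual quality. This multi-start greedy swap
    search is a simple baseline, not the collaborative neurodynamic optimizer.
    """
    rng = np.random.default_rng(seed)
    n = len(c)

    def cost(x):
        return x @ Q @ x - c @ x

    best_x, best_val = None, np.inf
    for _ in range(n_restarts):
        x = np.zeros(n)
        x[rng.choice(n, size=m, replace=False)] = 1.0   # random feasible start
        improved = True
        while improved:
            improved = False
            ones, zeros = np.flatnonzero(x == 1), np.flatnonzero(x == 0)
            for i in ones:                 # try swapping a selected bit...
                for j in zeros:            # ...with an unselected one
                    y = x.copy()
                    y[i], y[j] = 0.0, 1.0
                    if cost(y) < cost(x):
                        x, improved = y, True
                        break
                if improved:
                    break
        if cost(x) < best_val:
            best_x, best_val = x, cost(x)
    return best_x, best_val

# Toy example: 8 candidate bits, select 3.
rng = np.random.default_rng(1)
B = rng.integers(0, 2, size=(100, 8)) * 2 - 1      # pretend +/-1 hash codes
Q = np.abs(np.corrcoef(B, rowvar=False))           # redundancy ~ |correlation|
np.fill_diagonal(Q, 0.0)
c = rng.random(8)                                  # per-bit quality scores
print(greedy_bit_selection(Q, c, m=3))
```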
- Published
- 2022
45. Handling Constrained Multiobjective Optimization Problems via Bidirectional Coevolution
- Author
-
Ke Tang, Bing-Chuan Wang, and Zhi-Zhong Liu
- Subjects
0209 industrial biotechnology ,education.field_of_study ,Mathematical optimization ,Computer science ,Feasible region ,Population ,Evolutionary algorithm ,Sorting ,Boundary (topology) ,02 engineering and technology ,Multi-objective optimization ,Computer Science Applications ,Human-Computer Interaction ,020901 industrial engineering & automation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,education ,Software ,Selection (genetic algorithm) ,Information Systems - Abstract
Constrained multiobjective optimization problems (CMOPs) involve both conflicting objective functions and various constraints. Due to the presence of constraints, the Pareto-optimal solutions of CMOPs are very likely to lie on constraint boundaries. Experience from constrained single-objective optimization has shown that, to quickly obtain such an optimal solution, the search should surround the boundary of the feasible region from both the feasible and infeasible sides. In this article, we extend this idea to cope with CMOPs and, accordingly, propose a novel constrained multiobjective evolutionary algorithm with bidirectional coevolution, called BiCo. BiCo maintains two populations: 1) the main population and 2) the archive population. To update the main population, the constraint-domination principle is combined with an NSGA-II variant to move the population into the feasible region and then guide it toward the Pareto front (PF) from the feasible side of the search space. To update the archive population, a nondominated sorting procedure and an angle-based selection scheme are conducted in sequence to drive the population toward the PF within the infeasible region while maintaining good diversity. As a result, BiCo can approach the PF from two complementary directions. In addition, to coordinate the interaction between the main and archive populations, BiCo employs a restricted mating selection mechanism to choose appropriate mating parents. Comprehensive experiments have been conducted on three sets of CMOP benchmark functions and six real-world CMOPs. The experimental results suggest that BiCo obtains highly competitive performance in comparison with eight state-of-the-art constrained multiobjective evolutionary optimizers.
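The constraint-domination principle mentioned above is commonly implemented as in the sketch below, following Deb's widely used rule: a feasible solution beats an infeasible one, two infeasible solutions are compared by total constraint violation, and two feasible solutions are compared by ordinary Pareto dominance. The exact variant used inside BiCo may differ in details.

```python
def pareto_dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization)."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def constraint_dominates(f_a, cv_a, f_b, cv_b):
    """Constraint-domination; cv_* is the total constraint violation (0 = feasible)."""
    if cv_a == 0 and cv_b > 0:         # feasible beats infeasible
        return True
    if cv_a > 0 and cv_b == 0:
        return False
    if cv_a > 0 and cv_b > 0:          # both infeasible: smaller violation wins
        return cv_a < cv_b
    return pareto_dominates(f_a, f_b)  # both feasible: ordinary Pareto dominance
```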
- Published
- 2022
46. A Ranking Model for the Selection and Ranking of Commercial Off-the-Shelf Components
- Author
-
Rakesh Garg
- Subjects
Basis (linear algebra) ,Computer science ,Strategy and Management ,Fuzzy set ,computer.software_genre ,Fuzzy logic ,Ranking (information retrieval) ,Compatibility (mechanics) ,Data mining ,Electrical and Electronic Engineering ,Commercial off-the-shelf ,computer ,Selection (genetic algorithm) ,Reliability (statistics) - Abstract
In this article, a deterministic model for the selection and ranking of commercial off-the-shelf (COTS) components is developed on the basis of a fuzzy modified distance-based approach (FMDBA). The COTS selection and ranking problem is modeled as a multicriteria decision-making problem because of the involvement of multiple ranking criteria such as functionality, reliability, and compatibility. FMDBA combines fuzzy set theory with the modified distance-based approach. To illustrate the developed ranking model, a case study of an e-payment system is presented, in which eight COTS components are selected and ranked over four major categories of ranking criteria. FMDBA provides a comprehensive ranking of the components based on their calculated composite distance values. To demonstrate the tractability of the FMDBA method, the results obtained are also compared with existing decision-making methodologies.
- Published
- 2022
47. Application of Novel MCDM for Location Selection of Surface Water Treatment Plant
- Author
-
Sudipa Choudhury, Apu Kumar Saha, Mrinmoy Majumder, and Prasenjit Howladar
- Subjects
education.field_of_study ,Relation (database) ,Treated water ,Operations research ,Computer science ,Process (engineering) ,Strategy and Management ,Population ,Multiple-criteria decision analysis ,Surface water treatment ,Polynomial neural network ,Electrical and Electronic Engineering ,education ,Selection (genetic algorithm) - Abstract
Surface water treatment plants (SWTPs) supply treated water to urban or rural consumers to satisfy the drinking-water demand of the nearby population. One important reason why an SWTP may fail is poor selection of its location. This article proposes an automated decision-making framework, developed empirically, to identify a location objectively and cognitively, taking into consideration all relevant indicators selected for their role in ensuring optimal SWTP performance. To this end, a new multicriteria decision-making method was developed to identify the most common and significant indicators and to build an index that captures the feasibility of a given location for the installation of an SWTP. In addition, a novel predictive model based on a polynomial neural network architecture and bagged modeling was developed to automate the feasibility assessment of candidate locations. These two models were developed and used for the first time for the automatic assessment of location feasibility for SWTP installation. The prototype application of the two models considered several test locations in a peri-urban metro city in northeast India, where the results obtained from the model corroborated the real scenario for the test locations. The results encourage further application of this process.
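A rough stand-in for the bagged predictive model described above could be assembled with scikit-learn as shown below. The indicator matrix, polynomial degree, and ensemble size are placeholders, and the real model uses a polynomial neural network rather than the plain bagged polynomial regression sketched here.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical indicator matrix: each row is a candidate location described by
# indicators such as raw-water quality, distance to demand center, land cost, etc.
rng = np.random.default_rng(0)
X = rng.random((60, 4))                      # 60 candidate locations, 4 indicators
y = X @ np.array([0.4, 0.3, -0.2, 0.5]) + 0.1 * rng.standard_normal(60)  # toy index

# Bagging over polynomial regressors approximates the "bagged polynomial" idea.
model = BaggingRegressor(
    make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    n_estimators=25, random_state=0)
model.fit(X, y)

new_site = np.array([[0.8, 0.6, 0.3, 0.7]])  # indicators for a new candidate site
print("predicted feasibility index:", model.predict(new_site)[0])
```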
- Published
- 2022
48. Multi criteria decision making through TOPSIS and COPRAS on drilling parameters of magnesium AZ91
- Author
-
G. Jayaprakash, N. Baskar, M. Bhuvanesh Kumar, M. Varatharajulu, and Muthukannan Duraiselvam
- Subjects
010302 applied physics ,Mathematical optimization ,Materials science ,Metals and Alloys ,Drilling ,TOPSIS ,02 engineering and technology ,Ideal solution ,021001 nanoscience & nanotechnology ,Multiple-criteria decision analysis ,01 natural sciences ,Cost reduction ,Mechanics of Materials ,0103 physical sciences ,Drill bit ,Minification ,0210 nano-technology ,Selection (genetic algorithm) - Abstract
Magnesium (Mg) alloys are extensively used in the automotive and aircraft industries due to their prominent properties. The selection of appropriate process parameters is an important decision because of its impact on cost reduction and quality improvement. This decision entails selecting suitable process parameters with respect to various conflicting factors, so it has to be addressed with a Multiple Criteria Decision Making (MCDM) method. Therefore, this work addresses the MCDM problem through the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and COPRAS (COmplex PRoportional ASsessment) methods. The assessment was carried out on Mg AZ91 with a solid carbide (SC) drill bit. Dependent parameters such as drilling time, burr height, burr thickness, and roughness are considered together with independent parameters such as spindle speed and feed rate. The drilling alternatives are ranked using the two methods and the results are evaluated. The optimum combination for simultaneous minimization of all the responses, identified by both TOPSIS and COPRAS, is a spindle speed of 4540 rpm and a feed rate of 0.076 mm/rev. An identical ranking order was observed for the TOPSIS and COPRAS methods. An empirical model was developed through a Box-Behnken design for each response; the best empirical model was obtained for drilling time, being 3.959 times more accurate than the conventional equation. The trends of the various dependent parameters across different settings of the independent parameters are not identical; these complex relationships are identified and reported. The optimized results of the Desirability Function Approach are in close agreement with the top-ranked TOPSIS and COPRAS alternatives. The confirmation experiments show little deviation, supporting the selection of the above independent parameter settings.
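Since TOPSIS is a standard procedure, the ranking step the abstract relies on can be sketched as follows. The decision matrix, weights, and cost/benefit labels below are illustrative placeholders, not the paper's measured data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (higher closeness = better).

    matrix  : (alternatives x criteria) decision matrix
    weights : criterion weights summing to 1
    benefit : True for criteria to maximize, False for criteria to minimize
    """
    M = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1) vector-normalize each criterion column, then apply the weights
    V = w * M / np.linalg.norm(M, axis=0)
    # 2) ideal and anti-ideal points depend on whether a criterion is benefit or cost
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3) closeness coefficient to the ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Illustrative alternatives: rows = parameter settings, columns =
# [drilling time, burr height, burr thickness, roughness]; all are minimized.
alts = [[12.1, 0.42, 0.11, 1.8],
        [10.4, 0.55, 0.14, 2.1],
        [11.3, 0.38, 0.10, 1.9]]
closeness = topsis(alts, weights=[0.4, 0.2, 0.2, 0.2],
                   benefit=np.array([False, False, False, False]))
print("ranking (best first):", np.argsort(-closeness))
```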
- Published
- 2022
49. A Diversity-Enhanced Subset Selection Framework for Multimodal Multiobjective Optimization
- Author
-
Yiming Peng and Hisao Ishibuchi
- Subjects
Mathematical optimization ,Modal ,Optimization problem ,Computational Theory and Mathematics ,Computer science ,Space (commercial competition) ,Implementation ,Multi-objective optimization ,Software ,Field (computer science) ,Selection (genetic algorithm) ,Theoretical Computer Science ,Diversity (business) - Abstract
Multi-modality is commonly seen in real-world multi-objective optimization problems. In such optimization problems, namely, multi-modal multi-objective optimization problems (MMOPs), multiple decision vectors can be projected to the same solution in the objective space (i.e., there are multiple implementations corresponding to that solution). Therefore, the diversity in the decision space is very important for the decision maker when tackling MMOPs. Subset selection methods have been widely used in the field of evolutionary multi-objective optimization for selecting well-distributed solutions (in the objective space) to be presented to the decision maker. However, since most subset selection methods do not consider the diversity of solutions in the decision space, they are not suitable for MMOPs. In this paper, we aim to clearly demonstrate the usefulness of subset selection for multi-modal multi-objective optimization. We propose a novel subset selection framework that can be easily integrated into existing multi-modal multi-objective optimization algorithms. By selecting a pre-specified number of solutions with good diversity in both the objective and decision spaces from all the examined solutions, the proposed framework significantly improves the performance of state-of-the-art multi-modal multi-objective optimization algorithms on various test problems.
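One simple way to realize "diversity in both spaces", in the spirit of the framework described above, is a greedy max-min selection over a combined distance, as sketched below. The equal default weighting of the two spaces and the min-max normalization are assumptions, not the paper's actual selection rule.

```python
import numpy as np

def select_subset(decision_vars, objectives, k, alpha=0.5):
    """Greedy max-min subset selection using a blend of decision-space and
    objective-space distances (alpha weights the decision space)."""
    X = np.asarray(decision_vars, dtype=float)
    F = np.asarray(objectives, dtype=float)
    # Normalize each space to [0, 1] so the two distance scales are comparable.
    norm = lambda A: (A - A.min(axis=0)) / (A.max(axis=0) - A.min(axis=0) + 1e-12)
    Xn, Fn = norm(X), norm(F)

    def blended_dist(i, j):
        return (alpha * np.linalg.norm(Xn[i] - Xn[j])
                + (1 - alpha) * np.linalg.norm(Fn[i] - Fn[j]))

    chosen = [0]                               # start from an arbitrary solution
    while len(chosen) < k:
        # Pick the candidate farthest from its nearest already-chosen solution.
        best, best_d = None, -1.0
        for i in range(len(X)):
            if i in chosen:
                continue
            d = min(blended_dist(i, j) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# Example: pick 3 of 8 random solutions (2 decision variables, 2 objectives).
rng = np.random.default_rng(0)
print(select_subset(rng.random((8, 2)), rng.random((8, 2)), k=3))
```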
- Published
- 2022
50. Applying Markov decision process to adaptive dynamic route selection model
- Author
-
Ali Edrisi, Koosha Bagherzadeh, and Ali Nadi
- Subjects
Transportation planning ,Operations research ,Computer science ,Traffic engineering ,business.industry ,ComputerSystemsOrganization_MISCELLANEOUS ,Road networks ,Transportation ,Markov decision process ,Routing (electronic design automation) ,business ,Selection (genetic algorithm) ,Civil and Structural Engineering - Abstract
Routing technologies have long been available in many automobiles and smartphones, but the nearly random nature of traffic on road networks continues to motivate efforts to improve the reliability of navigation systems. Given this network uncertainty, an adaptive dynamic route selection model based on reinforcement learning is proposed. In the proposed method, the Markov decision process (MDP) is used to train simulated agents in a network so that they are able to make independent decisions under random conditions and, accordingly, determine the set of routes with the shortest travel time. The aim of the research was to integrate the MDP with a multinomial logit model (a widely used stochastic discrete-choice model) to improve the search for the stochastic shortest path by computing the probability of selecting an arc from several interconnected arcs, based on observations made at the arc location. The proposed model was tested with real data from part of the road network in Isfahan, Iran, and the results demonstrated good performance under 100 randomly applied stochastic scenarios.
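To make the MDP framing concrete, the sketch below runs value iteration on a tiny road graph with fixed expected travel times and turns the resulting values into multinomial-logit arc-choice probabilities. The graph, the travel times, and the logit scale parameter are all illustrative assumptions rather than the paper's calibrated model.

```python
import math

# States are intersections; graph[s] maps each outgoing arc to
# (next intersection, expected travel time in minutes). Illustrative network.
graph = {
    "A": {"A->B": ("B", 4.0), "A->C": ("C", 2.0)},
    "B": {"B->D": ("D", 3.0)},
    "C": {"C->B": ("B", 1.5), "C->D": ("D", 6.0)},
    "D": {},                                    # destination (absorbing state)
}

def value_iteration(graph, goal, n_iters=100):
    """Expected remaining travel time from each intersection to the goal."""
    V = {s: 0.0 if s == goal else 1e9 for s in graph}
    for _ in range(n_iters):
        for s, arcs in graph.items():
            if s == goal or not arcs:
                continue
            V[s] = min(t + V[nxt] for nxt, t in arcs.values())
    return V

def logit_arc_probabilities(graph, V, state, theta=1.0):
    """Multinomial-logit choice over outgoing arcs, favoring lower cost-to-go."""
    utilities = {arc: -(t + V[nxt]) for arc, (nxt, t) in graph[state].items()}
    z = sum(math.exp(theta * u) for u in utilities.values())
    return {arc: math.exp(theta * u) / z for arc, u in utilities.items()}

V = value_iteration(graph, goal="D")
print("cost-to-go:", V)
print("arc choice at A:", logit_arc_probabilities(graph, V, "A"))
```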
- Published
- 2022