333 results
Search Results
2. Improved FunkSVD Algorithm Based on RMSProp.
- Author
- Yue, Xiaochen and Liu, Qicheng
- Subjects
- ALGORITHMS, DEEP learning, MACHINE learning, MATHEMATICAL optimization, PROBLEM solving
- Abstract
To address the low accuracy of the traditional FunkSVD recommendation algorithm, an improved FunkSVD algorithm (RM-FS) is proposed. RM-FS augments traditional FunkSVD with RMSProp, a deep learning optimization algorithm. RM-FS not only counters the accuracy loss that the traditional FunkSVD algorithm suffers from iterative oscillation, but also alleviates the impact of data sparseness on accuracy, thereby improving on the traditional algorithm. Experimental results show that RM-FS effectively improves recommendation accuracy, outperforming both the traditional FunkSVD recommendation algorithm and other improved FunkSVD variants. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
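The abstract above describes replacing FunkSVD's plain SGD update with an RMSProp-scaled step. A minimal sketch of that idea follows; it is not the paper's RM-FS implementation, and the latent dimension, learning rate, decay, and regularization values are assumed for illustration:

```python
import math
import random

def rmsprop_funksvd(ratings, n_users, n_items, k=2, lr=0.05,
                    beta=0.9, eps=1e-8, reg=0.02, epochs=200, seed=0):
    """Factorize sparse (user, item, rating) triples into latent factors
    P, Q, scaling each gradient step by an RMSProp running average of
    squared gradients to damp iterative oscillation."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    # per-parameter running averages of squared gradients
    sP = [[0.0] * k for _ in range(n_users)]
    sQ = [[0.0] * k for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                gP = -e * Q[i][f] + reg * P[u][f]
                gQ = -e * P[u][f] + reg * Q[i][f]
                sP[u][f] = beta * sP[u][f] + (1 - beta) * gP * gP
                sQ[i][f] = beta * sQ[i][f] + (1 - beta) * gQ * gQ
                P[u][f] -= lr * gP / (math.sqrt(sP[u][f]) + eps)
                Q[i][f] -= lr * gQ / (math.sqrt(sQ[i][f]) + eps)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0)]
P, Q = rmsprop_funksvd(ratings, n_users=3, n_items=3)
rmse = math.sqrt(sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
                     for u, i, r in ratings) / len(ratings))
```

The RMSProp denominator keeps the effective step size roughly constant regardless of raw gradient magnitude, which is what tames the oscillation the abstract mentions.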
3. Literature Review on Hybrid Evolutionary Approaches for Feature Selection.
- Author
- Piri, Jayashree, Mohapatra, Puspanjali, Dey, Raghunath, Acharya, Biswaranjan, Gerogiannis, Vassilis C., and Kanavos, Andreas
- Subjects
- FEATURE selection, METAHEURISTIC algorithms, LITERATURE reviews, MACHINE learning, MATHEMATICAL optimization, ALGORITHMS
- Abstract
The efficiency and the effectiveness of a machine learning (ML) model are greatly influenced by feature selection (FS), a crucial preprocessing step in machine learning that seeks out the ideal set of characteristics with the maximum accuracy possible. Due to their dominance over traditional optimization techniques, researchers are concentrating on a variety of metaheuristic (or evolutionary) algorithms and trying to suggest cutting-edge hybrid techniques to handle FS issues. The use of hybrid metaheuristic approaches for FS has thus been the subject of numerous research works. The purpose of this paper is to critically assess the existing hybrid FS approaches and to give a thorough literature review on the hybridization of different metaheuristic/evolutionary strategies that have been employed for supporting FS. This article reviews pertinent documents on hybrid frameworks that were published in the period from 2009 to 2022 and offers a thorough analysis of the used techniques, classifiers, datasets, applications, assessment metrics, and schemes of hybridization. Additionally, new open research issues and challenges are identified to pinpoint the areas that have to be further explored for additional study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. A fixed structure learning automata‐based optimization algorithm for structure learning of Bayesian networks.
- Author
- Asghari, Kayvan, Masdari, Mohammad, Soleimanian Gharehchopogh, Farhad, and Saneifard, Rahim
- Subjects
- ANT algorithms, BEES algorithm, MATHEMATICAL optimization, MACHINE learning, ALGORITHMS, PROBLEM solving, KNOWLEDGE representation (Information theory), METAHEURISTIC algorithms
- Abstract
One of the useful knowledge representation tools is the Bayesian network (BN): a graphical model that describes the joint probability distribution over a set of random variables and can be trained from a dataset. A BN is composed of a network structure and a conditional probability distribution table for each node. Discovering an optimal BN structure is an NP-hard optimization problem, to which researchers have applied various meta-heuristic algorithms, including genetic algorithms, ant colony optimization, evolutionary programming, artificial bee colony, and bacterial foraging optimization. Most of these methods apply a scoring metric to select the best network structure from a set of candidates. A Fixed Structure Learning Automata-Based (FSLA-B) algorithm is presented in this paper to solve the structure learning problem of BNs. In the proposed algorithm, there is a fixed structure learning automaton for each pair of vertices in the BN's graph structure; the action of this automaton determines the presence and direction of an edge between the vertices. The proposed algorithm performs a guided search using the FSLA and escapes from local optima. Several datasets are utilised to evaluate the performance of the proposed algorithm, and various experiments compare it against multiple meta-heuristic algorithms. The results show that the proposed algorithm produces competitive results and finds near-optimal solutions for the BN structure learning problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. A note ‘On a single machine-scheduling problem with separated position and resource effects’.
- Author
- Rudek, Radosław and Rudek, Agnieszka
- Subjects
- MATHEMATICAL optimization, RESOURCE allocation, ALGORITHMS, PRODUCTION scheduling, MACHINE learning
- Abstract
This note concerns the paper [Janiak A, Kovalyov MY, Lichtenstein M. On a single machine-scheduling problem with separated position and resource effects. Optimization; 2013. doi:10.1080/02331934.2013.804077], which presents an analysis, a counterexample and a pseudocode related to our proof of optimality for a resource allocation algorithm given in [Rudek A, Rudek R. A note on optimization in deteriorating systems using scheduling problems with the aging effect and resource allocation models. Comput. Math. Appl. 2011;62:1870–1878]. We show that the discussed analysis is based on only one part of our proof, omitting its integral second part, which is the source of the misunderstanding. The counterexample is applied to an algorithm that was not the method presented in our paper, whereas our algorithm provides the correct result for the mentioned counterexample. The provided pseudocode of the resource allocation algorithm, which is presented as the correct method, is the pseudocode of the algorithm described in our paper. Therefore, we show that the results presented in our paper are correct. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
6. Classroom education effect evaluation model based on MFO intelligent optimization algorithm.
- Author
- Weimin, Ouyang
- Subjects
- MATHEMATICAL optimization, ASSESSMENT of education, ALGORITHMS, CLASSROOMS, MACHINE learning
- Abstract
In order to improve the evaluation of classroom education, this paper proposes an MFO intelligent optimization algorithm based on machine learning ideas and builds a classroom education effect evaluation model on top of it. The algorithm uses a logarithmic spiral to simulate the path of a moth toward a flame and to invert the model's undetermined parameters, and adds crisscross (vertical and horizontal crossover) operations and chaos operators on this basis. The crisscross operations allow different moth individuals, and the same moth across different dimensions, to perform crossover calculations, increasing the diversity of the moth population so that moths can traverse the search space as fully as possible to find a better solution. Moreover, to address problems of the BP neural network such as low fitting accuracy, this paper applies the CCMFO algorithm to improve the BP neural network, forming the CCMFO-BP algorithm, and improves the weight and threshold update process of the BP neural network to make the network more efficient and accurate. Finally, this paper designs experiments to analyze the performance of the constructed model. The results show that the model meets the expected requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
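The logarithmic-spiral moth-flame update the abstract refers to can be sketched as follows. This is a simplified single MFO step with an assumed spiral constant `b` and a uniformly drawn path parameter `t`; the paper's CCMFO variant adds crisscross and chaos operators on top, and full MFO also shrinks the range of `t` over iterations:

```python
import math
import random

def spiral_step(moth, flame, b=1.0, rng=random):
    """One moth-flame update: the moth moves toward the flame along a
    logarithmic spiral, D * exp(b*t) * cos(2*pi*t) + flame, where D is the
    per-dimension distance to the flame and t in [-1, 1]."""
    t = rng.uniform(-1.0, 1.0)  # controls where on the spiral the moth lands
    return [abs(f - m) * math.exp(b * t) * math.cos(2 * math.pi * t) + f
            for m, f in zip(moth, flame)]

rng = random.Random(1)
moth, flame = [4.0, -3.0], [0.0, 0.0]
for _ in range(150):                 # repeated steps spiral in on the flame
    moth = spiral_step(moth, flame, rng=rng)
dist = math.hypot(moth[0] - flame[0], moth[1] - flame[1])
```

Individual steps can overshoot (the factor exp(b*t)*cos(2πt) can exceed 1 in magnitude), but on average the spiral contracts toward the flame, which is the search behavior the algorithm exploits.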
7. Riemannian stochastic fixed point optimization algorithm.
- Author
- Iiduka, Hideaki and Sakai, Hiroyuki
- Subjects
- MATHEMATICAL optimization, RIEMANNIAN manifolds, CONVEX sets, ALGORITHMS, MACHINE learning, CONVEX functions, GEODESICS
- Abstract
This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm, which combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithms with formulas based on the adaptive learning rate optimization algorithms, such as Adam and AMSGrad. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. Training circuit-based quantum classifiers through memetic algorithms.
- Author
- Acampora, Giovanni, Chiatto, Angela, and Vitiello, Autilia
- Subjects
- OPTIMIZATION algorithms, EVOLUTIONARY algorithms, ALGORITHMS, EVOLUTIONARY computation, MACHINE learning, MATHEMATICAL optimization
- Abstract
• Variational Quantum Circuits (VQCs) play a key role in several applications.
• VQCs are parameterized quantum circuits trained by using classical optimizers.
• The paper proposes to apply memetic algorithms to train VQCs.
• The designed memetic algorithm outperforms the state-of-the-art classical optimizers.

Among the ready-to-implement quantum algorithms, Variational Quantum Circuits (VQCs) play a key role in several applications, including machine learning. Their strength lies in the use of a parameterized quantum circuit that is trained by means of an optimization algorithm run on a classical computer. In such a scenario, there is a strong need to design appropriate classical optimization schemes that deal efficiently with VQCs and pave the way for quantum advantage in machine learning. Among possible optimization schemes, those based on evolutionary computation are finding increasing interest, given the unconventional and nonanalytical nature of the problem to be solved. This paper proposes to apply memetic algorithms to train VQCs used as quantum classifiers and shows the benefits of exploiting this evolutionary optimization technique through a comparative experimental session. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. A stochastic topology optimization algorithm for improved fluid dynamics systems.
- Author
- Furrokh, Fox and Zhang, Nic
- Subjects
- FLUID dynamics, MATHEMATICAL optimization, SYSTEM dynamics, TOPOLOGY, INTERNAL combustion engines, ADJOINT differential equations
- Abstract
The use of topology optimization in the design of fluid dynamics systems is still in its infancy. With the decreasing cost of additive manufacture, the application of topology optimization in the design of structural components has begun to increase. This paper provides a method for using topology optimization to reduce the power dissipation of fluid dynamics systems; its novelty is the first application of stochastic mechanisms to the design of 3D fluid–solid geometrical interfaces. The optimization algorithm uses the continuous adjoint method for sensitivity analysis and optimizes against an objective function for fluid power dissipation. The paper details the methodology behind a vanilla gradient descent approach before introducing stochastic behavior through a minibatch-based system. Both algorithms are then applied to a novel case study of an internal combustion engine's piston cooling gallery, and the geometry produced by each algorithm is analyzed and compared. The vanilla gradient descent algorithm achieves an 8.9% improvement in pressure loss through the case study; this is surpassed by the stochastic descent algorithm, which achieves a 9.9% improvement, though at a large time cost. Both approaches produced similarly unintuitive geometry solutions that successfully improve the performance of the cooling gallery. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Quantum beetle swarm algorithm optimized extreme learning machine for intrusion detection.
- Author
- Dong, Yumin, Hu, Wanbin, Zhang, Jinlei, Chen, Min, Liao, Wei, and Chen, Zhengquan
- Subjects
- MACHINE learning, PARTICLE swarm optimization, BEETLES, ALGORITHMS, QUANTUM mechanics, MATHEMATICAL optimization
- Abstract
To address low accuracy in intrusion detection, a model of an extreme learning machine optimized by a quantum beetle swarm algorithm is proposed. First, this paper proposes a quantum beetle swarm optimization algorithm, which introduces quantum mechanics and combines the advantages of beetle antennae search and particle swarm optimization. In this way, each individual can learn from both its own experience and the group's experience, which enables the beetles to move purposefully and improves the convergence performance of the algorithm. Because the extreme learning machine struggles with high-dimensional data, this paper also proposes an improved extreme learning machine that decomposes the matrix with the least squares QR algorithm, reducing the computational complexity of the traditional extreme learning machine. The improved extreme learning machine optimized by the quantum beetle swarm optimization algorithm is applied to intrusion detection, and simulation results show that the proposed model significantly improves detection accuracy and convergence rate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
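The extreme learning machine core the abstract builds on — a random, untrained hidden layer whose output weights are found in one least-squares solve — can be sketched as below. NumPy's SVD-based `lstsq` stands in for the paper's least-squares-QR decomposition, and the layer size and toy data are assumptions:

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Extreme learning machine: hidden weights are random and never
    trained; only the output weights beta are solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]               # smooth toy target
W, b, beta = elm_train(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because training reduces to a single linear solve, the dominant cost is the decomposition of `H`, which is exactly where a cheaper factorization (as in the paper) pays off for high-dimensional data.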
11. Better trees: an empirical study on hyperparameter tuning of classification decision tree induction algorithms.
- Author
- Gomes Mantovani, Rafael, Horváth, Tomáš, Rossi, André L. D., Cerri, Ricardo, Barbon Junior, Sylvio, Vanschoren, Joaquin, and Carvalho, André C. P. L. F. de
- Subjects
- DECISION trees, MACHINE learning, ALGORITHMS, EMPIRICAL research, MATHEMATICAL optimization, CLASSIFICATION
- Abstract
Machine learning algorithms often contain many hyperparameters whose values affect the predictive performance of the induced models in intricate ways. Due to the high number of possible hyperparameter configurations and their complex interactions, it is common to use optimization techniques to find settings that lead to high predictive performance. However, insights into efficiently exploring this vast space of configurations and dealing with the trade-off between predictive and runtime performance remain challenging. Furthermore, there are cases where the default hyperparameters already constitute a suitable configuration. Additionally, for many reasons, including model validation and compliance with new legislation, there is an increasing interest in interpretable models, such as those created by decision tree (DT) induction algorithms. This paper provides a comprehensive approach for investigating the effects of hyperparameter tuning for the two most frequently used DT induction algorithms, CART and C4.5. DT induction algorithms present high predictive performance and interpretable classification models, though many hyperparameters need to be adjusted. Experiments were carried out with different tuning strategies to induce models and to evaluate the relevance of hyperparameters using 94 classification datasets from OpenML. The experimental results show that different hyperparameter profiles for the tuning of each algorithm provide statistically significant improvements in most of the datasets for CART, but only in one-third for C4.5. Although different algorithms may present different tuning scenarios, the tuning techniques generally required few evaluations to find accurate solutions. Furthermore, the best tuning technique for all the algorithms was Irace. Finally, we found that tuning a specific small subset of hyperparameters is a good alternative for achieving optimal predictive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
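The kind of tuning loop the study benchmarks can be illustrated with a simple random search. The hyperparameter names mirror common CART-style implementations, and `mock_cv_score` is a hypothetical stand-in for a real cross-validated accuracy (the paper itself compares more elaborate tuners such as Irace):

```python
import random

# Hypothetical CART-style search space (names are illustrative)
SPACE = {
    "max_depth": list(range(2, 21)),
    "min_samples_split": list(range(2, 41)),
    "min_samples_leaf": list(range(1, 21)),
}

def mock_cv_score(cfg):
    """Stand-in for cross-validated accuracy: peaks at a mid-size tree."""
    return (1.0
            - 0.002 * abs(cfg["max_depth"] - 8)
            - 0.001 * abs(cfg["min_samples_split"] - 10)
            - 0.001 * abs(cfg["min_samples_leaf"] - 4))

def random_search(space, score, n_iter=200, seed=0):
    """Draw n_iter random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best = None, float("-inf")
    for _ in range(n_iter):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        s = score(cfg)
        if s > best:
            best_cfg, best = cfg, s
    return best_cfg, best

best_cfg, best = random_search(SPACE, mock_cv_score)
```

Even this naive tuner usually needs few evaluations to get close to the optimum on low-dimensional spaces, consistent with the paper's observation that accurate solutions are found cheaply.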
12. A relative labeling importance estimation algorithm based on global-local label correlations for multi-label learning.
- Author
- Liu, Yilu and Cao, Fuyuan
- Subjects
- MACHINE learning, ALGORITHMS, COSINE function, MATHEMATICAL optimization, GRAPH labelings, LEARNING, FOOD labeling
- Abstract
In multi-label learning, considering the relative importance between labels can yield better performance than assuming equal importance. To explore relative labeling importance, many existing algorithms introduce global label correlations. However, global correlations can only reflect the semantic relation between labels, while ignoring the label correlation differences between instances. In practical applications, labels with high semantic relevance may not be highly relevant in all instances. In this paper, we consider both global and local label correlations to estimate relative labeling importance. Firstly, we calculate a global label correlation matrix in the whole label space. Secondly, each instance subset is assigned a local label correlation matrix, which is learned from the cosine similarity of labels within the cluster. Based on the assumption that label correlations can be transferred from the original categorical space to the numerical label space, we add global and local label correlation regularization terms. Finally, we integrate the importance estimation and the model training into a unified framework, and propose an alternating optimization algorithm to solve it. To validate the effectiveness of the proposed algorithm, we conduct experiments on thirteen multi-label datasets. Experimental results show that the proposed algorithm outperforms existing multi-label learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
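The cosine-similarity label correlation matrix mentioned in the abstract is straightforward to compute from a 0/1 label matrix. A small sketch (the toy label matrix is made up; in the paper this is computed per cluster for the local matrices and over all instances for the global one):

```python
import math

def cosine_label_matrix(Y):
    """Pairwise cosine similarity between the label columns of a 0/1
    instance-by-label matrix: labels that co-occur often score near 1."""
    n_labels = len(Y[0])
    cols = [[row[j] for row in Y] for j in range(n_labels)]

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    return [[cos(cols[i], cols[j]) for j in range(n_labels)]
            for i in range(n_labels)]

# four instances, three labels (illustrative data)
Y = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 1],
     [1, 1, 0]]
C = cosine_label_matrix(Y)
```

Here labels 0 and 1 co-occur in two of four instances (similarity 2/3), while labels 0 and 2 never co-occur (similarity 0).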
13. Advanced metaheuristic optimization techniques in applications of deep neural networks: a review.
- Author
- Abd Elaziz, Mohamed, Dahou, Abdelghani, Abualigah, Laith, Yu, Liyang, Alshinwan, Mohammad, Khasawneh, Ahmad M., and Lu, Songfeng
- Subjects
- METAHEURISTIC algorithms, MATHEMATICAL optimization, SWARM intelligence, MACHINE learning, DEEP learning, TASK performance
- Abstract
Deep neural networks (DNNs) have evolved into a beneficial machine learning method that has been successfully used in various applications. DNNs are currently a superior technique for extracting information from massive data sets in a self-organized manner. DNNs have different structures and parameters, which are usually produced for particular applications. Nevertheless, the training procedures of DNNs can be protracted depending on the given application and the size of the training set. Further, determining the most accurate and practical structure of a deep learning method in a reasonable time remains an open problem in this procedure. Meta-heuristic techniques, such as swarm intelligence (SI) and evolutionary computing (EC), represent optimization frameworks with specific theories and objective functions. These methods are adjustable and have demonstrated their effectiveness in various applications; hence, they can be used to optimize DNN models. This paper presents a comprehensive survey of the recent optimization methods (i.e., SI and EC) employed to enhance DNN performance on various tasks. This paper also analyzes the importance of optimization methods in generating optimal hyper-parameters and structures of DNNs for massive-scale data. Finally, several potential directions that still need improvement and open problems in evolutionary DNNs are identified. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. A novel Q-learning algorithm based on improved whale optimization algorithm for path planning.
- Author
- Li, Ying, Wang, Hanyu, Fan, Jiahao, and Geng, Yanyu
- Subjects
- MATHEMATICAL optimization, REINFORCEMENT learning, ROBOTIC path planning, MACHINE learning, MOBILE robots, ALGORITHMS
- Abstract
Q-learning is a classical reinforcement learning algorithm and one of the most important methods of mobile robot path planning without a prior environmental model. Nevertheless, Q-learning initializes its Q-table too simply and wastes too much time in the exploration process, causing slow convergence. This paper proposes a new Q-learning algorithm called the Paired Whale Optimization Q-learning Algorithm (PWOQLA), which includes four improvements. Firstly, to accelerate the convergence speed of Q-learning, a whale optimization algorithm is used to initialize the values of the Q-table, so that a Q-table containing prior experience is learned before exploration begins, improving algorithm efficiency. Secondly, to improve the local exploitation capability of the whale optimization algorithm, a paired whale optimization algorithm is proposed in combination with a pairing strategy to speed up the search for prey. Thirdly, to improve the exploration efficiency of Q-learning and reduce the number of useless explorations, a new selective exploration strategy is introduced which considers the relationship between the current position and the target position. Fourthly, to balance the exploration and exploitation capabilities of Q-learning so that it focuses on exploration in the early stage and exploitation in the later stage, a nonlinear function is designed which dynamically changes the value of ε in ε-greedy Q-learning based on the number of iterations. Experimental results comparing PWOQLA with other path planning algorithms demonstrate that it achieves higher accuracy and faster convergence than existing counterparts in mobile robot path planning. The code will be released at https://github.com/wanghanyu0526/improveQL.git. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
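The fourth improvement — a nonlinear ε schedule for ε-greedy Q-learning — can be sketched as below. The exponential decay curve and its constants are illustrative stand-ins; the abstract does not give PWOQLA's exact function:

```python
import math
import random

def epsilon(it, max_it, eps_hi=0.9, eps_lo=0.05, k=6.0):
    """Nonlinear exploration schedule: stays near eps_hi early (explore),
    decays smoothly toward eps_lo late (exploit)."""
    return eps_lo + (eps_hi - eps_lo) * math.exp(-k * it / max_it)

def choose_action(q_row, eps, rng):
    """Standard epsilon-greedy selection over one Q-table row."""
    if rng.random() < eps:
        return rng.randrange(len(q_row))                       # explore
    return max(range(len(q_row)), key=q_row.__getitem__)       # exploit

rng = random.Random(0)
e_start = epsilon(0, 1000)      # ~0.9: almost always explore
e_mid = epsilon(500, 1000)      # already close to eps_lo
e_end = epsilon(1000, 1000)     # ~0.05: almost always exploit
a = choose_action([0.1, 0.9, 0.3], eps=0.0, rng=rng)  # pure greedy pick
```

Because the decay is nonlinear, most of the drop in ε happens early, which front-loads exploration exactly as the abstract describes.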
15. Investigators from University of Galatzi Target Photobioreactors (Machine Learning Algorithms That Emulate Controllers Based On Particle Swarm Optimization-an Application To a Photobioreactor for Algal Growth)
- Subjects
- Mathematical optimization, Machine learning, Data mining, Algorithms, Data warehousing/data mining, Algorithm, Biotechnology industry, Pharmaceuticals and cosmetics industries
- Abstract
2024 JUN 19 (NewsRx) -- By a News Reporter-Staff News Editor at Biotech Week -- Current study results on Biotechnology - Photobioreactors have been published. According to news reporting originating [...]
- Published
- 2024
16. Unsupervised feature selection based on kernel Fisher discriminant analysis and regression learning.
- Author
- Shang, Ronghua, Meng, Yang, Liu, Chiyang, Jiao, Licheng, Esfahani, Amir M. Ghalamzan, and Stolkin, Rustam
- Subjects
- FEATURE selection, FISHER discriminant analysis, MACHINE learning, CLUSTER analysis (Statistics), MATHEMATICAL optimization, ALGORITHMS
- Abstract
In this paper, we propose a new feature selection method called kernel Fisher discriminant analysis and regression learning based algorithm for unsupervised feature selection. The existing feature selection methods are based on either manifold learning or discriminative techniques, each of which has some shortcomings. Although some studies show the advantages of two-step methods benefiting from both manifold learning and discriminative techniques, a joint formulation has been shown to be more efficient. To do so, we construct a global discriminant objective term of a clustering framework based on the kernel method. We add another term of regression learning into the objective function, which constrains the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm of the features to impose a sparse structure upon features, which can result in more discriminative features. We propose an algorithm to solve the optimization problem introduced in this paper. We further discuss convergence, parameter sensitivity, and computational complexity, as well as the clustering and classification accuracy of the proposed algorithm. In order to demonstrate the effectiveness of the proposed algorithm, we perform a set of experiments with different available datasets. The results obtained by the proposed algorithm are compared against the state-of-the-art algorithms. These results show that our method outperforms the existing state-of-the-art methods in many cases on different datasets, but the improved performance comes with the cost of increased time complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
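The L2,1-norm regularizer used in the abstract is simply the sum of the Euclidean norms of a matrix's rows. A minimal sketch with a made-up weight matrix:

```python
import math

def l21_norm(W):
    """L2,1 norm: sum over rows of each row's L2 norm. Penalizing it
    drives entire rows of W to zero, so the features those rows map
    drop out, giving the row-sparse structure used for feature selection."""
    return sum(math.sqrt(sum(w * w for w in row)) for row in W)

W = [[3.0, 4.0],   # row norm 5
     [0.0, 0.0],   # an entirely zeroed (discarded) feature
     [1.0, 0.0]]   # row norm 1
val = l21_norm(W)
```

Unlike an element-wise L1 penalty, the row-grouped L2 norms make sparsity act at the feature level: a feature is either kept (nonzero row) or dropped (zero row) as a whole.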
17. A novel improved whale optimization algorithm for optimization problems with multi-strategy and hybrid algorithm.
- Author
- Deng, Huaijun, Liu, Linna, Fang, Jianyin, Qu, Boyang, and Huang, Quanzhen
- Subjects
- MACHINE learning, LEARNING strategies, ALGORITHMS, MATHEMATICAL optimization, ANT algorithms
- Abstract
The whale optimization algorithm (WOA), an advanced optimization algorithm with a simple structure, has been favored in various fields. However, WOA has some disadvantages, such as slow convergence speed, low precision, and a tendency to fall into local optima. In this paper, a novel improved whale optimization algorithm (IWOA) with multi-strategy and hybrid algorithm is proposed to overcome these shortcomings. Firstly, IWOA initializes the population by chaotic mapping to avoid an initial population distribution that deviates from the optimal value. Secondly, IWOA combines the pheromone of the black widow algorithm and an opposition-based learning strategy to modify the population, which improves the convergence speed and the global performance of WOA respectively. Finally, adaptive coefficients and new update modes replace the original update modes, which makes the structure of WOA simpler and more accurate. In addition, the convergence of IWOA is proved in this paper. To demonstrate the effectiveness of IWOA, 23 benchmark functions are used to test various aspects of the algorithm's performance; to prove its superiority, the experimental results are compared and analyzed against other optimization algorithms. Simulation results show that IWOA has excellent performance in convergence speed, stability, accuracy, and global performance compared with other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
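Two of the ingredients the abstract names — chaotic-map initialization and opposition-based learning — are easy to sketch. The logistic map with μ = 4 is a common choice for the chaotic map, though the abstract does not say which map IWOA uses; the bounds and population size here are assumptions:

```python
import random

def logistic_map_population(n, dim, lo, hi, mu=4.0, seed=0):
    """Chaotic initialization: iterate the logistic map x <- mu*x*(1-x)
    (values stay in [0, 1]) and scale each value into [lo, hi], spreading
    the initial population more evenly than plain uniform sampling."""
    x = random.Random(seed).uniform(0.01, 0.99)
    pop = []
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)
            ind.append(lo + x * (hi - lo))
        pop.append(ind)
    return pop

def opposition(ind, lo, hi):
    """Opposition-based learning: mirror a candidate as lo + hi - x,
    giving the search a 'second look' at the other side of the space."""
    return [lo + hi - x for x in ind]

pop = logistic_map_population(n=10, dim=3, lo=-5.0, hi=5.0)
opp = opposition(pop[0], -5.0, 5.0)
```

In practice each candidate competes with its opposite and only the fitter of the pair is kept, which is where the convergence-speed gain comes from.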
18. Stochastic Allocation of Photovoltaic Energy Resources in Distribution Systems Considering Uncertainties Using New Improved Meta-Heuristic Algorithm.
- Author
- Alanazi, Abdulaziz, Alanazi, Mohana, Abdelaziz, Almoataz Y., Kotb, Hossam, Milyani, Ahmad H., and Azhari, Abdullah Ahmed
- Subjects
- POWER resources, MACHINE learning, DISTRIBUTION (Probability theory), ALGORITHMS, MATHEMATICAL optimization, PARTICLE swarm optimization
- Abstract
In this paper, a stochastic-metaheuristic model is performed for multi-objective allocation of photovoltaic (PV) resources in 33-bus and 69-bus distribution systems to minimize power losses of the distribution system lines and improve the voltage profile and voltage stability of the distribution system buses, considering the uncertainty of PV units' power and network demand. The decision variables, namely the installation location and the size of the PVs, are determined optimally via an improved human learning optimization algorithm (IHLOA). The conventional human learning optimization algorithm (HLOA) is improved with Gaussian mutation to enhance exploration capability and avoid getting trapped in local optima. The methodology is implemented in two cases, deterministic and stochastic, without and with uncertainties respectively. Monte Carlo Simulation (MCS) based on probability distribution functions (PDFs) is used for uncertainty modeling. The deterministic results proved the superiority of the IHLOA over the conventional HLOA and particle swarm optimization (PSO) in obtaining better values of the different objectives with faster convergence speed and accuracy. The results make clear that enhancing the conventional HLOA increases the algorithm's ability to explore and reach the global optimum with higher convergence accuracy. Moreover, the stochastic results show that considering the uncertainties leads to correct and robust decision-making against existing uncertainties and gives the network operator accurate knowledge of the exact values of the various objectives compared to the deterministic case. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
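The Monte Carlo uncertainty modeling the abstract describes can be sketched with a toy loop: sample PV output and load from assumed PDFs and average an objective over the draws. The Beta and normal distributions, their parameters, and the quadratic loss proxy below are illustrative assumptions, not values from the paper:

```python
import random

def mcs_expected_loss(n=5000, seed=0):
    """Toy MCS: draw PV output (Beta-shaped irradiance) and demand
    (normal), then average a simple quadratic loss proxy over all draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pv = rng.betavariate(2.0, 2.0) * 100.0     # kW, uncertain PV output
        load = max(0.0, rng.gauss(120.0, 15.0))    # kW, uncertain demand
        net = load - pv                            # power imported from the feeder
        total += 0.001 * net * net                 # loss grows with net import
    return total / n

expected_loss = mcs_expected_loss()
```

The deterministic case would evaluate the loss once at the mean PV and load values; the MCS average differs because the loss is nonlinear in the uncertain inputs, which is why the stochastic results can change the optimal PV siting and sizing.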
19. Regularized logistic regression and multiobjective variable selection for classifying MEG data.
- Author
- Santana, Roberto, Bielza, Concha, and Larrañaga, Pedro
- Subjects
- LOGISTIC regression analysis, MATHEMATICAL variables, ACCURACY, MAGNETOENCEPHALOGRAPHY, MACHINE learning, ALGORITHMS, MATHEMATICAL optimization, PROBABILISTIC generative models
- Abstract
This paper addresses the question of maximizing classifier accuracy for classifying task-related mental activity from magnetoencephalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal is able to improve classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
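The regularized logistic regression at the core of the abstract's pipeline can be sketched as below. This shows only the L2-regularized classifier fit by batch gradient descent; the paper's channel selection and multiobjective layers sit on top of it, and the toy two-feature dataset is an assumption:

```python
import math

def train_logreg(X, y, lam=0.01, lr=0.5, epochs=500):
    """L2-regularized logistic regression via batch gradient descent.
    The ridge term lam*||w||^2/2 shrinks the weights (bias excluded)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [lam * wj for wj in w], 0.0    # start with the ridge gradient
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # sigmoid(z) - label
            for j in range(d):
                gw[j] += err * xi[j] / n
            gb += err / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# tiny linearly separable toy set (think: two "channels")
X = [[0.0, 0.2], [0.3, 0.1], [1.0, 0.9], [0.8, 1.1]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
preds = [int(sum(wj * xj for wj, xj in zip(w, x)) + b > 0) for x in X]
```

On separable data the regularizer is what keeps the weights finite; without it, plain logistic regression would push them toward infinity.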
20. The Significance of Parameters' Optimization in Fair Benchmarking of Software Defects' Prediction Performances.
- Author
- Ghunaim, Hussam and Dichter, Julius
- Subjects
- SOFTWARE engineering, COMPUTER performance, ALGORITHMS, SUPPORT vector machines, MULTILAYER perceptrons, MATHEMATICAL optimization
- Abstract
Software engineering research in general, and software defect prediction research in particular, faces serious challenges to its reliability and validity. The major reason is that many published research outcomes contradict each other. This phenomenon is mainly caused by the lack of research standards such as exist in many well-established scientific and engineering disciplines. The scope of this paper is fair benchmarking of defect prediction models. By experimenting with three prediction algorithms, we found that the quality of the resulting predictions fluctuates significantly as parameter values change. Therefore, any published research results not based on optimized prediction algorithms can lead to inaccurate and misleading benchmarking and recommendations. Thus, we propose parameter optimization as an essential research standard for conducting reliable and valid benchmarking. We believe that if this standard were adopted by interested software quality practitioners and research communities, it would play a vital role in reducing the severity of this phenomenon. The three prediction algorithms used in our analysis were Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Naïve Bayes (NB). We used KNIME as a data mining platform to design and run all optimization loops on the open source Eclipse 2.0 data set. [ABSTRACT FROM AUTHOR]
- Published
- 2016
21. Investigators from Taiyuan University Have Reported New Data on Machine Translation (Intelligent English Translation System Based On Evolutionary Multi-objective Optimization Algorithm)
- Subjects
Translating and interpreting ,Mathematical optimization ,Machine learning ,Data mining ,Algorithms ,Data warehousing/data mining ,Algorithm ,Computers - Abstract
2021 JUN 15 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- A new study on Machine Translation is now available. According to news reporting originating [...]
- Published
- 2021
22. Simultaneous Feature Selection and Support Vector Machine Optimization Using an Enhanced Chimp Optimization Algorithm.
- Author
-
Wu, Di, Zhang, Wanying, Jia, Heming, and Leng, Xin
- Subjects
FEATURE selection ,MATHEMATICAL optimization ,CHIMPANZEES ,SUPPORT vector machines ,ALGORITHMS ,PARTICLE swarm optimization ,SEARCH algorithms ,MACHINE learning - Abstract
The Chimp Optimization Algorithm (ChOA) is a novel meta-heuristic algorithm proposed in recent years. It divides the population into four different levels for the purpose of hunting. However, it still has some defects that cause it to fall into local optima. To overcome these defects, an Enhanced Chimp Optimization Algorithm (EChOA) is developed in this paper. Highly Disruptive Polynomial Mutation (HDPM) is introduced to further explore the population space and increase population diversity. Then, Spearman's rank correlation coefficient between the chimps with the highest and lowest fitness is calculated. To avoid local optima, the chimps with low fitness values are equipped with the Beetle Antennae Search (BAS) algorithm to gain visual ability. Through these three strategies, the population's exploration and exploitation abilities are enhanced. On this basis, this paper proposes an EChOA-SVM model, which optimizes SVM parameters while selecting features, so that the maximum classification accuracy can be achieved with as few features as possible. To verify its effectiveness, the proposed method is compared with seven common methods, including the original algorithm. Seventeen benchmark datasets from the UCI machine learning repository are used to evaluate the accuracy, number of features, and fitness of these methods. Experimental results show that the classification accuracy of the proposed method is better than that of the other methods on most datasets, and the number of features it requires is also smaller than that of the other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Quantum Architecture Search with Meta‐Learning.
- Author
-
He, Zhimin, Chen, Chuangtao, Li, Lvzhou, Zheng, Shenggen, and Situ, Haozhen
- Subjects
METAHEURISTIC algorithms ,QUANTUM gates ,MATHEMATICAL optimization ,MACHINE learning ,ALGORITHMS - Abstract
Variational quantum algorithms (VQAs) have been successfully applied to quantum approximate optimization algorithms, variational quantum compiling and quantum machine learning models. The performances of VQAs largely depend on the architecture of parameterized quantum circuits (PQCs). Quantum architecture search (QAS) aims to automate the design of PQCs in different VQAs with classical optimization algorithms. However, current QAS algorithms do not use prior experiences and search the quantum architecture from scratch for each new task, which is inefficient and time consuming. In this paper, a meta quantum architecture search (MetaQAS) algorithm is proposed, which learns good initialization heuristics of the architecture (i.e., meta‐architecture), along with the meta‐parameters of quantum gates from a number of training tasks such that they can adapt to new tasks with fewer gradient updates, which leads to fast learning on new tasks. The proposed MetaQAS can be used with arbitrary gradient‐based QAS algorithms. Simulation results on variational quantum compiling (VQC) and quantum approximate optimization algorithm (QAOA) show that the architectures optimized by MetaQAS converge faster than a state‐of‐the‐art gradient‐based QAS algorithm, namely DQAS. MetaQAS also achieves a better solution than DQAS after fine‐tuning of gate parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Design Method for a Wideband Non-Uniformly Spaced Linear Array Using the Modified Reinforcement Learning Algorithm.
- Author
-
Kang, Seyoung, Kim, Seonkyo, Park, Cheolsun, and Chung, Wonzoo
- Subjects
REINFORCEMENT learning ,MACHINE learning ,VECTOR spaces ,COST functions ,MATHEMATICAL optimization ,ALGORITHMS ,ANGLES - Abstract
In this paper, we present a design method for a wideband non-uniformly spaced linear array (NUSLA), with both symmetric and asymmetric geometries, using the modified reinforcement learning algorithm (MORELA). We designed a cost function that gives the beam pattern freedom by setting limits only on the beam width (BW) and side-lobe level (SLL), in order to satisfy the desired BW and SLL across the wide band. Because the ability to scan a beam in a desired direction is important in various applications, we added a scan angle condition to the cost function to design the scanned beam pattern. To prevent possible pointing angle errors for an asymmetric NUSLA, we employed a penalty function that ensures the peak lies in the desired direction. MORELA, a reinforcement learning-based algorithm for finding a global optimum of a cost function, is applied to optimize the spacing and weights of the NUSLA by minimizing the proposed cost function. The performance of the proposed scheme was verified by comparing it with that of existing heuristic optimization algorithms via computer simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained Optimization.
- Author
-
Akhtar, Zeeshan and Rajawat, Ketan
- Subjects
STOCHASTIC orders ,MATHEMATICAL optimization ,SEMIDEFINITE programming ,NP-hard problems ,SPARSE matrices ,CONSTRAINED optimization ,DETERMINISTIC algorithms ,ALGORITHMS - Abstract
This paper considers stochastic convex optimization problems with two sets of constraints: (a) deterministic constraints on the domain of the optimization variable, which are difficult to project onto; and (b) deterministic or stochastic constraints that admit efficient projection. Problems of this form arise frequently in the context of semidefinite programming, as well as when various NP-hard problems are solved approximately via semidefinite relaxation. Since projection onto the first set of constraints is difficult, it becomes necessary to explore projection-free algorithms, such as the stochastic Frank-Wolfe (FW) algorithm. On the other hand, the second set of constraints cannot be handled in the same way and must be incorporated as an indicator function within the objective function, thereby complicating the application of FW methods. Similar problems have been studied before; however, they suffer from slow convergence rates. This work, equipped with a momentum-based gradient tracking technique, guarantees fast convergence rates on par with the best-known rates for problems without the second set of constraints. Zeroth-order variants of the proposed algorithms are also developed and again improve upon the state-of-the-art rate results. We further propose novel trimmed FW variants that enjoy the same convergence rates as their classical counterparts, but are empirically shown to require significantly fewer calls to the linear minimization oracle, speeding up the overall algorithm. The efficacy of the proposed algorithms is tested on relevant applications of sparse matrix estimation, clustering via semidefinite relaxation, and the uniform sparsest cut problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
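The stochastic variants surveyed in the abstract above build on the classical deterministic Frank-Wolfe step. A minimal sketch, illustrative and not the paper's method: minimizing f(x) = 0.5 * ||x - b||^2 over the probability simplex, where the linear minimization oracle (LMO) simply returns a vertex, so no projection is ever needed (the b, iteration count, and step-size schedule are toy choices):

```python
# Deterministic Frank-Wolfe (conditional gradient) on the probability simplex.
# The LMO over the simplex returns the vertex e_i with the smallest gradient
# coordinate, which is why the method is projection-free.

def frank_wolfe_simplex(b, iters=200):
    n = len(b)
    x = [1.0 / n] * n                                 # feasible start: center
    for t in range(iters):
        grad = [x[i] - b[i] for i in range(n)]        # gradient of f at x
        s = min(range(n), key=lambda i: grad[i])      # LMO: argmin <grad, e_i>
        gamma = 2.0 / (t + 2)                         # diminishing step size
        x = [(1 - gamma) * v for v in x]
        x[s] += gamma                                 # move toward vertex e_s
    return x

x = frank_wolfe_simplex([0.7, 0.2, 0.1])              # iterates approach b
```

Because each iterate is a convex combination of simplex vertices, feasibility holds by construction at every step.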
26. Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning.
- Author
-
Rafique, Hassan, Liu, Mingrui, Lin, Qihang, and Yang, Tianbao
- Subjects
MATHEMATICAL optimization ,SUBGRADIENT methods ,NONSMOOTH optimization ,DATA distribution ,NONCONVEX programming ,ALGORITHMS ,MACHINE learning - Abstract
Min–max problems have broad applications in machine learning, including learning with non-decomposable loss and learning with robustness to data distribution. Convex–concave min–max problem is an active topic of research with efficient algorithms and sound theoretical foundations developed. However, it remains a challenge to design provably efficient algorithms for non-convex min–max problems with or without smoothness. In this paper, we study a family of non-convex min–max problems, whose objective function is weakly convex in the variables of minimization and is concave in the variables of maximization. We propose a proximally guided stochastic subgradient method and a proximally guided stochastic variance-reduced method for the non-smooth and smooth instances, respectively, in this family of problems. We analyse the time complexities of the proposed methods for finding a nearly stationary point of the outer minimization problem corresponding to the min–max problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. State reduction for network intervention in probabilistic Boolean networks
- Author
-
Xiaoning Qian, Noushin Ghaffari, Ivan Ivanov, and Edward R. Dougherty
- Subjects
Statistics and Probability ,Mathematical optimization ,Gene regulatory network ,Machine learning ,Biochemistry ,Models, Biological ,Reduction (complexity) ,Dimension (vector space) ,State space ,Humans ,Gene Regulatory Networks ,Molecular Biology ,Mathematics ,Gastrointestinal Neoplasms ,Probability ,Models, Statistical ,Markov chain ,Models, Genetic ,Probabilistic logic ,Markov Chains ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,Key (cryptography) ,State (computer science) ,Artificial intelligence ,Algorithms - Abstract
Motivation: A key goal of studying biological systems is to design therapeutic intervention strategies. Probabilistic Boolean networks (PBNs) constitute a mathematical model which enables modeling, predicting and intervening in their long-run behavior using Markov chain theory. The long-run dynamics of a PBN, as represented by its steady-state distribution (SSD), can guide the design of effective intervention strategies for the modeled systems. A major obstacle to its application is the large state space of the underlying Markov chain, which poses a serious computational challenge. Hence, it is critical to reduce the model complexity of PBNs for practical applications. Results: We propose a strategy to reduce the state space of the underlying Markov chain of a PBN based on a criterion that the reduction least distorts the proportional change of stationary masses for critical states, for instance, the network attractors. In comparison to previous reduction methods, we reduce the state space directly, without deleting genes. We then derive stationary control policies on the reduced network that can be naturally induced back to the original network. Computational experiments study the effects of the reduction on model complexity and the performance of designed control policies, which is measured by the shift of stationary mass away from undesirable states, those associated with undesirable phenotypes. We consider randomly generated networks as well as a 17-gene gastrointestinal cancer network, which, if not reduced, has a 2^17 × 2^17 transition probability matrix. Such a dimension is too large for direct application of many previously proposed PBN intervention strategies. Contact: xqian@cse.usf.edu Supplementary information: Supplementary information is available at Bioinformatics online.
- Published
- 2010
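The steady-state distribution (SSD) central to the abstract above is a standard Markov chain quantity. A minimal sketch, not the paper's reduction method: power iteration on a tiny two-state chain, the computation whose cost explodes with state-space size and thus motivates the reduction (the transition matrix is a toy choice):

```python
# Power iteration for the steady-state distribution of an ergodic Markov
# chain: repeatedly apply pi <- pi P until pi stops changing.

def steady_state(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n                                # uniform start
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],       # row-stochastic transition matrix, 2-state chain
     [0.5, 0.5]]
pi = steady_state(P)   # converges to the stationary distribution [5/6, 1/6]
```

For a 17-gene Boolean network the chain has 2^17 states, so P alone has about 1.7 × 10^10 entries, which is why direct computation like this becomes impractical without reduction.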
28. Multi-verse optimizer algorithm: a comprehensive survey of its results, variants, and applications.
- Author
-
Abualigah, Laith
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,MACHINE learning ,BIOLOGICALLY inspired computing - Abstract
This review paper presents a comprehensive review of the multi-verse optimizer algorithm (MOA) and its main characteristics and procedures. This optimizer is one of the most recent powerful nature-inspired meta-heuristic algorithms, and it has been successfully implemented and utilized in optimization problems across a variety of fields covered in this context, such as benchmark test functions, machine learning applications, engineering applications, network applications, parameter control, and other applications of MOA. This paper covers all available publications that have used MOA, including the variants of MOA such as binary, modified, hybridized, chaotic, and multi-objective versions. These are followed by its applications and their assessment and evaluation, and finally by conclusions that recommend potential future research directions for those interested in current work on the optimization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. An improved opposition based learning firefly algorithm with dragonfly algorithm for solving continuous optimization problems.
- Author
-
Abedi, Mehdi and Gharehchopogh, Farhad Soleimanian
- Subjects
METAHEURISTIC algorithms ,MACHINE learning ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
Nowadays, the prevalence of continuous optimization problems has led researchers to devise a variety of methods to solve them. Metaheuristic algorithms are among the most popular and common ways to solve continuous optimization problems. The Firefly Algorithm (FA) is a successful metaheuristic algorithm for solving continuous optimization problems; however, although it performs very well in local search, it has weaknesses in finding solutions in global search. This causes the algorithm to be trapped in local optima, and the balance between exploration and exploitation cannot be well maintained. In this paper, three different approaches based on the Dragonfly Algorithm (DA) processes and the Opposition-Based Learning (OBL) method are proposed to improve the exploration, performance, efficiency, and information sharing of the FA and to prevent the FA from getting stuck in local traps. The first proposed method (FADA) uses the robust processes of DA to improve the exploration, performance, and efficiency of the FA; the second proposed method (OFA) uses OBL to accelerate the convergence and exploration of the FA. Finally, the third approach, referred to in this paper as OFADA, hybridizes FADA with the OBL method to improve the convergence and accuracy of the FA. The three proposed methods were implemented on functions with 2, 4, 10, and 30 dimensions. The results showed that the OFADA approach outperformed the other two proposed methods and the compared metaheuristic algorithms across different dimensions. In addition, all three proposed methods provided better results than the other metaheuristic algorithms on small-dimensional functions. While the performance of many metaheuristic algorithms decreased as the dimensionality of the functions increased, the three proposed methods, in particular OFADA, converged better toward the target on higher-dimensional optimization functions than the other metaheuristic algorithms and showed high performance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
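The OBL mechanism used by the OFA and OFADA variants above has a very simple core. A minimal sketch, with bounds and a sphere objective chosen for illustration rather than taken from the paper: for a candidate x in the box [a, b], also evaluate the opposite point a + b - x and keep whichever scores better.

```python
# Opposition-Based Learning (OBL) step: comparing a candidate against its
# opposite point doubles the chance of starting near the optimum for free.

def obl_step(x, a, b, fitness):
    opposite = [a[i] + b[i] - x[i] for i in range(len(x))]
    return x if fitness(x) <= fitness(opposite) else opposite

# toy use: minimize the sphere function on the box [0, 10]^2
a, b = [0.0, 0.0], [10.0, 10.0]
sphere = lambda p: sum(v * v for v in p)
better = obl_step([9.0, 9.0], a, b, sphere)   # opposite [1.0, 1.0] wins
```

In an OBL-accelerated metaheuristic, this comparison is typically applied to the initial population and, with some probability, to candidates generated during the run.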
30. Fault Diagnosis of Tennessee Eastman Process with XGB-AVSSA-KELM Algorithm.
- Author
-
Hu, Mingfei, Hu, Xinyi, Deng, Zhenzhou, and Tu, Bing
- Subjects
FAULT diagnosis ,CHEMICAL processes ,MACHINE learning ,ALGORITHMS ,SEARCH algorithms ,MATHEMATICAL optimization - Abstract
In fault detection and diagnosis for large industrial systems, whose chemical processes usually exhibit complex, high-dimensional, time-varying, and non-Gaussian characteristics, the classification accuracy of traditional methods is low. In this paper, a kernel extreme learning machine (KELM) based on an adaptive variation sparrow search algorithm (AVSSA) is proposed. Firstly, the dataset is optimized by removing redundant features using the eXtreme Gradient Boosting (XGBoost) model. Secondly, a new optimization algorithm, AVSSA, is proposed to automatically adjust the network hyperparameters of KELM to improve the performance of the fault classifier. Finally, the optimized feature sequences are fed into the proposed classifier to obtain the final diagnosis results. The Tennessee Eastman (TE) chemical process is used to verify the effectiveness of the proposed method through multidimensional diagnostic metrics. The results show that the proposed diagnosis method significantly improves the accuracy of TE process fault diagnosis compared with traditional optimization algorithms. The average diagnosis rate for 21 faults was 91.00%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Intrusion Detection System for Energy Efficient Cluster Based Vehicular Adhoc Networks.
- Author
-
Lavanya, R. and Kannan, S.
- Subjects
VEHICULAR ad hoc networks ,SUPPORT vector machines ,AD hoc computer networks ,FUZZY logic ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
A vehicular ad hoc network (VANET), a subclass of the mobile ad hoc network (MANET), is characterized by high node mobility and dissimilar mobility patterns. VANET clustering techniques therefore need to take the mobility parameters of nearby nodes into account in order to construct stable clusters. At the same time, security is a major design issue in VANET, which can be addressed by intrusion detection systems (IDS). In contrast to conventional IDS, a VANET-based IDS must be designed so that its operation does not affect the real-time performance of VANET applications. With this motivation, this paper presents an efficient Fuzzy Logic based Clustering with an optimal fuzzy support vector machine (FSVM), called FLC-OFSVM, as an intrusion detection system for VANET. The proposed FLC-OFSVM model involves two stages of operation, namely clustering and intrusion detection. First, the FLC technique is employed to select an appropriate set of cluster heads (CHs) and to construct the clusters. Then, a lightweight anomaly IDS model, an FSVM optimized with the krill herd (KH) optimization algorithm, is developed to detect malevolent attacks in VANET. The KH algorithm, based on the herding behavior of krill, is used to optimally tune the parameters of the FSVM model. To investigate the performance of the FLC-OFSVM model, an extensive set of simulations was carried out, and the results showed that the OFSVM model attained a maximum accuracy of 99.98%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure.
- Author
-
Zhang, Dewei, Liu, Yin, and Davanloo Tajbakhsh, Sam
- Subjects
STATISTICAL learning ,MATHEMATICAL optimization ,DIRECTED acyclic graphs ,NONSMOOTH optimization ,MACHINE learning ,GRAPH algorithms ,ALGORITHMS ,INTERIOR-point methods ,TUMOR classification - Abstract
In many statistical learning problems, it is desired that the optimal solution conform to an a priori known sparsity structure represented by a directed acyclic graph. Inducing such structures by means of convex regularizers requires nonsmooth penalty functions that exploit group overlapping. Our study focuses on evaluating the proximal operator of the latent overlapping group lasso developed by Jacob et al. in 2009. We implemented an alternating direction method of multipliers with a sharing scheme to solve large-scale instances of the underlying optimization problem efficiently. In the absence of strong convexity, global linear convergence of the algorithm is established using the error bound theory. More specifically, the paper contributes to establishing primal and dual error bounds when the nonsmooth component in the objective function does not have a polyhedral epigraph. We also investigate the effect of the graph structure on the speed of convergence of the algorithm. Detailed numerical simulation studies over different graph structures supporting the proposed algorithm and two applications in learning are provided. Summary of Contribution: The paper proposes a computationally efficient optimization algorithm to evaluate the proximal operator of a nonsmooth hierarchical sparsity-inducing regularizer and establishes its convergence properties. The computationally intensive subproblem of the proposed algorithm can be fully parallelized, which allows solving large-scale instances of the underlying problem. Comprehensive numerical simulation studies benchmarking the proposed algorithm against five other methods on the speed of convergence to optimality are provided. Furthermore, performance of the algorithm is demonstrated on two statistical learning applications related to topic modeling and breast cancer classification.
The code along with the simulation studies and benchmarks are available on the corresponding author's GitHub website for evaluation and future use. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
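A small building block behind proximal methods like the one in the abstract above, shown for orientation only and not as the paper's full ADMM solver: the proximal operator of the group-lasso penalty lam * ||x||_2 on a single group, i.e. block soft-thresholding, the kind of per-group update a sharing-scheme ADMM applies when handling overlapping groups (the inputs are toy values):

```python
# Block soft-thresholding: the proximal operator of lam * ||x||_2.
# Groups whose norm falls below lam are zeroed out entirely, which is
# what induces group-level sparsity.
import math

def prox_group_l2(x, lam):
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= lam:
        return [0.0] * len(x)              # the whole group is zeroed out
    scale = 1.0 - lam / norm               # shrink factor in (0, 1)
    return [scale * v for v in x]

shrunk = prox_group_l2([3.0, 4.0], lam=1.0)   # norm 5 -> scale 0.8
```

The all-or-nothing zeroing is what distinguishes the group penalty from the elementwise lasso, which shrinks each coordinate independently.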
33. Hybrid Reptile Search Algorithm and Remora Optimization Algorithm for Optimization Tasks and Data Clustering.
- Author
-
Almotairi, Khaled H. and Abualigah, Laith
- Subjects
SEARCH algorithms ,MATHEMATICAL optimization ,REPTILES ,MACHINE learning ,DATA mining - Abstract
Data clustering is a complex data mining problem that groups a massive number of data objects into a predefined number of clusters; in other words, it finds symmetric and asymmetric objects. Various optimization methods have been used to solve different machine learning problems, but they usually suffer from local optima and from imbalance between their search mechanisms. This paper proposes a novel hybrid optimization method, called HRSA, for solving various optimization problems. HRSA combines the original Reptile Search Algorithm (RSA) and the Remora Optimization Algorithm (ROA) and manages their search processes through a novel transition method. The proposed HRSA method aims to avoid the main weaknesses of the original methods and find better solutions. HRSA is tested on various complicated optimization problems: twenty-three benchmark test functions and eight data clustering problems. The obtained results illustrate that the proposed HRSA method performs significantly better than the original and comparative state-of-the-art methods. The proposed method outperformed all comparative methods on the mathematical problems and obtained promising results on the clustering problems. Thus, HRSA has remarkable efficacy when employed for various clustering problems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. IDENTIFICATION AND DETECTION OF THE PROCESS FAULT IN A CEMENT ROTARY KILN BY EXTREME LEARNING MACHINE AND ANT COLONY OPTIMIZATION.
- Author
-
KADRI, Ouahab and MOUSS, Leila Hayet
- Subjects
MACHINE learning ,ANT algorithms ,MATHEMATICAL optimization ,ALGORITHMS ,MANUFACTURING processes - Abstract
The aim of this paper is to propose a new fault diagnosis method for complex manufacturing systems. We use an artificial neural network (ANN) and an Ant Colony Optimization (ACO) algorithm for condition monitoring of a rotary cement kiln. The Ant Colony algorithm can find a small feature subset from the original real-time signals, and the Extreme Learning Machine (ELM) achieves good accuracy with limited learning time. Many benchmark datasets have been used to evaluate the performance of our algorithm, and the results indicate its higher efficiency and effectiveness compared to other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2017
35. Experimental analysis on Sarsa(λ) and Q(λ) with different eligibility traces strategies.
- Author
-
Leng, Jinsong, Fyfe, Colin, and Jain, Lakhmi C.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,LINEAR programming ,SYSTEMS engineering ,MATHEMATICAL programming ,MACHINE learning - Abstract
Temporal difference learning and eligibility traces are two mechanisms for solving reinforcement learning problems. The temporal difference technique bootstraps the state value or state-action value at every step, as in dynamic programming, and learns by sampling episodes from experience, as in the Monte Carlo approach. Eligibility traces are a mechanism for recording the degree to which each state is eligible to undergo the learning process. This paper investigates the underlying mechanism of eligibility trace strategies using on-policy and off-policy learning algorithms. In doing so, performance metrics are obtained by defining the learning problem in a simulation environment, in conjunction with different learning algorithms. However, measuring learning performance and analysing sensitivity are very expensive, because such performance metrics can only be obtained by running an experiment with different parameter values. This paper proposes a comparative study for analysing the mechanism of eligibility traces. The objective is to compare and investigate the influence on performance of these different approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
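The trace strategies compared in the abstract above differ mainly in one line of the backup. A hedged sketch, with toy states, reward, and hyperparameters rather than the paper's experimental setup: a single tabular Sarsa(λ) backup supporting both replacing traces (e = 1) and accumulating traces (e += 1).

```python
# One tabular Sarsa(lambda) backup: compute the TD error, bump the trace of
# the current state-action pair, then credit every eligible pair and decay
# all traces by gamma * lambda.
from collections import defaultdict

def sarsa_lambda_backup(Q, E, s, a, r, s2, a2,
                        alpha=0.1, gamma=0.9, lam=0.8, replacing=True):
    delta = r + gamma * Q[(s2, a2)] - Q[(s, a)]        # TD error
    E[(s, a)] = 1.0 if replacing else E.get((s, a), 0.0) + 1.0
    for key in list(E):                                # credit eligible pairs
        Q[key] += alpha * delta * E[key]
        E[key] *= gamma * lam                          # decay eligibility
    return delta

Q, E = defaultdict(float), {}
delta = sarsa_lambda_backup(Q, E, 's0', 'a0', 1.0, 's1', 'a1')
```

With λ = 0 the loop touches only the current pair and the backup reduces to one-step Sarsa; with λ = 1 and accumulating traces it approaches a Monte Carlo update, which is the spectrum such experimental comparisons explore.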
36. Minibatch Recursive Least Squares Q-Learning.
- Author
-
Zhang, Chunyuan, Song, Qi, and Meng, Zeng
- Subjects
- *
REINFORCEMENT learning , *MACHINE learning , *ALGORITHMS , *MATHEMATICAL optimization - Abstract
The deep Q-network (DQN) is one of the most successful reinforcement learning algorithms, but it has some drawbacks such as slow convergence and instability. In contrast, the traditional reinforcement learning algorithms with linear function approximation usually have faster convergence and better stability, although they easily suffer from the curse of dimensionality. In recent years, many improvements to DQN have been made, but they seldom make use of the advantage of traditional algorithms to improve DQN. In this paper, we propose a novel Q-learning algorithm with linear function approximation, called the minibatch recursive least squares Q-learning (MRLS-Q). Different from the traditional Q-learning algorithm with linear function approximation, the learning mechanism and model structure of MRLS-Q are more similar to those of DQNs with only one input layer and one linear output layer. It uses the experience replay and the minibatch training mode and uses the agent's states rather than the agent's state-action pairs as the inputs. As a result, it can be used alone for low-dimensional problems and can be seamlessly integrated into DQN as the last layer for high-dimensional problems as well. In addition, MRLS-Q uses our proposed average RLS optimization technique, so that it can achieve better convergence performance whether it is used alone or integrated with DQN. At the end of this paper, we demonstrate the effectiveness of MRLS-Q on the CartPole problem and four Atari games and investigate the influences of its hyperparameters experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
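For background only, and not the paper's MRLS-Q or its average-RLS technique: the classical recursive least squares (RLS) update such methods build on, fitting a linear model one sample at a time via a rank-1 Sherman-Morrison update instead of repeated batch solves. The dimensions, prior, and data below are toy choices.

```python
# Classical RLS: maintain the inverse "covariance" P and weights w; each new
# sample (x, y) updates both in O(d^2) time with no matrix inversion.

def rls_update(w, P, x, y, lam=1.0):       # lam = forgetting factor
    d = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(d)) for i in range(d)]
    denom = lam + sum(x[i] * Px[i] for i in range(d))
    k = [v / denom for v in Px]                      # gain vector
    err = y - sum(w[i] * x[i] for i in range(d))     # a priori error
    w = [w[i] + k[i] * err for i in range(d)]
    xP = [sum(x[i] * P[i][j] for i in range(d)) for j in range(d)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam             # rank-1 downdate of P
          for j in range(d)] for i in range(d)]
    return w, P

# recover y = 2*x0 - 1*x1 from a few consistent samples
w = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]                         # large prior covariance
for x, y in [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]:
    w, P = rls_update(w, P, [float(v) for v in x], float(y))
```

With lam = 1 and a large prior, RLS reproduces the batch least-squares solution up to a tiny regularization bias, which is the convergence advantage the abstract attributes to linear-function-approximation methods.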
37. FALCON OPTIMIZATION ALGORITHM FOR BAYESIAN NETWORK STRUCTURE LEARNING.
- Author
-
KAREEM, SHAHAB WAHHAB and OKUR, MEHMET CUDI
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,LEARNING strategies ,SCIENTIFIC models ,MACHINE learning ,SIMULATED annealing - Abstract
In machine learning, Bayesian networks are among the helpful scientific models for producing a structure of knowledge; they can represent the probabilistic dependency relationships among many variables. The score-and-search method is a strategy for learning the structure of a Bayesian network. The authors apply the falcon optimization algorithm (FOA) to Bayesian network structure learning, employing reversing, deleting, moving, and inserting operations so that the FOA can approach the optimal structure. Essentially, the falcon prey search strategy is used in the FOA. The results of the proposed technique are compared with pigeon-inspired optimization, greedy search, and simulated annealing, all applying the BDeu score function. The authors have also examined the confusion-matrix performance of these techniques on several benchmark data sets. As shown by the experimental evaluations, the proposed method performs more reliably than the other algorithms, producing excellent scores and accuracy values. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
38. New Information Technology Data Have Been Reported by Researchers at Siberian Federal University (Automatic design of mutation parameter adaptation for differential evolution)
- Subjects
Mathematical optimization ,Machine learning ,Algorithms ,Algorithm ,Biological sciences ,Health - Abstract
2024 FEB 20 (NewsRx) -- By a News Reporter-Staff News Editor at Life Science Weekly -- Investigators publish new report on information technology. According to news originating from Siberian Federal [...]
- Published
- 2024
39. Why Dataset Properties Bound the Scalability of Parallel Machine Learning Training Algorithms.
- Author
-
Cheng, Daning, Li, Shigang, Zhang, Hanping, Xia, Fen, and Zhang, Yunquan
- Subjects
MACHINE learning ,PARALLEL algorithms ,MATHEMATICAL optimization ,RANDOM forest algorithms ,SUPPORT vector machines ,ALGORITHMS - Abstract
As the training dataset size and the model size of machine learning increase rapidly, more computing resources are consumed to speed up the training process. However, the scalability and performance reproducibility of parallel machine learning training, which mainly uses stochastic optimization algorithms, are limited. In this paper, we demonstrate that the sample difference in the dataset plays a prominent role in the scalability of parallel machine learning algorithms. We propose to use statistical properties of the dataset to measure sample differences, including the variance of sample features, sample sparsity, sample diversity, and similarity in sampling sequences. We choose four types of parallel training algorithms as our research objects: (1) the asynchronous parallel SGD algorithm (Hogwild! algorithm), (2) the parallel model average SGD algorithm (minibatch SGD algorithm), (3) the decentralization optimization algorithm, and (4) the dual coordinate optimization (DADM algorithm). Our results show that the statistical properties of training datasets determine the scalability upper bound of these parallel training algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Scheduling parallel-batching processing machines problem with learning and deterioration effect in fuzzy environment.
- Author
-
Wang, Rui, Jia, Zhaohong, and Li, Kai
- Subjects
MACHINE learning ,ALGORITHMS ,COMBINATORIAL optimization ,MATHEMATICAL optimization ,SCHEDULING - Abstract
In this paper, a problem of scheduling jobs with different sizes and fuzzy processing times (FPT) on non-identical parallel batch machines to minimize makespan is investigated. Moreover, the processing time (PT) of each batch is subject to a location-based learning effect and a total-PT-based deterioration effect. Since this is an NP-hard combinatorial optimization problem, an improved intelligent algorithm based on the fruit fly optimization algorithm (IFOA) is proposed. To verify the performance of the algorithm, the IFOA is compared with three state-of-the-art algorithms. The comparative results demonstrate that the proposed IFOA outperforms the other compared algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
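The basic fruit fly optimization loop that the IFOA in this entry improves on can be sketched as follows; the objective, search radius, and other parameters are toy values of ours, not the paper's scheduling model.

```python
import random

# Sketch of the basic fruit fly optimization (FOA) loop: flies scatter
# randomly around the current swarm location, and the swarm moves to the
# best-smelling (lowest-objective) position found so far.

def foa(objective, start, radius=1.0, flies=25, iters=100, seed=1):
    rng = random.Random(seed)
    best = start
    for _ in range(iters):
        cand = [best + rng.uniform(-radius, radius) for _ in range(flies)]
        leader = min(cand, key=objective)      # fly with the best smell
        if objective(leader) < objective(best):
            best = leader                      # swarm relocates
    return best

x = foa(lambda v: (v - 2.0) ** 2, start=10.0)  # toy 1-D objective, optimum at 2
```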
41. Detector Distribution in the Vehicle Unit.
- Author
-
Wang Chen, Yong Liu, Jianjun Hu, and Xueyan Sun
- Subjects
MACHINE learning ,ALGORITHMS ,PARTICLE swarm optimization ,MATHEMATICAL optimization ,VEHICLES - Abstract
To meet the needs of the vehicle unit, this paper studies the sensor distribution problem. Taking full account of information such as target identity and target priority, a mathematical model of optimal sensor allocation is established based on the maximum-information-gain criterion, and the population-based incremental learning algorithm is then used to solve it. Finally, the validity of the method is verified by solving the sensor assignment problem of the vehicle unit. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
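The population-based incremental learning (PBIL) solver this abstract mentions maintains a probability vector over binary decisions and shifts it toward the fittest sample each generation. A minimal sketch, with a toy objective (maximize the number of ones) standing in for the paper's allocation model:

```python
import random

# Sketch of population-based incremental learning (PBIL). The objective
# below is a toy stand-in, not the paper's sensor-allocation model.

def pbil(fitness, n_bits, pop_size=20, rate=0.1, iters=50, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                        # probability each bit is 1
    for _ in range(iters):
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)          # fittest sampled solution
        p = [(1 - rate) * pi + rate * bi for pi, bi in zip(p, best)]
    return p

probs = pbil(lambda bits: sum(bits), n_bits=8)
```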
42. Group Sparse Multiview Patch Alignment Framework With View Consistency for Image Classification.
- Author
-
Gui, Jie, Tao, Dacheng, Sun, Zhenan, Luo, Yong, You, Xinge, and Tang, Yuan Yan
- Subjects
SPARSE approximations ,IMAGE registration ,MACHINE learning ,EMAIL systems ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting the \(l_{2,1}\)-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
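The \(l_{2,1}\)-norm this entry applies to the projection matrix is simply the sum of the Euclidean norms of the rows; minimizing it drives whole rows to zero, which is what deselects the corresponding features. A minimal sketch (the matrix values are toy data):

```python
import math

# Sketch: the l_{2,1} norm of a matrix -- sum of the l2 norms of its rows.
# A zero row means the corresponding feature is effectively deselected.

def l21_norm(W):
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

W = [[3.0, 4.0],   # row norm 5 -> feature kept
     [0.0, 0.0]]   # row norm 0 -> feature deselected
norm = l21_norm(W)
```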
43. Compressed Gradient Methods With Hessian-Aided Error Compensation.
- Author
-
Khirirat, Sarit, Magnusson, Sindri, and Johansson, Mikael
- Subjects
MATHEMATICAL optimization ,BIG data ,MACHINE learning ,APPROXIMATION algorithms ,ALGORITHMS ,HESSIAN matrices - Abstract
The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. The performance bottleneck, which used to be computations, is now often communications. Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy. Recently, it has been shown how compression errors can be compensated for in the optimization algorithm to improve the solution accuracy. Even though convergence guarantees for error-compensated algorithms have been established, there is very limited theoretical support for quantifying the observed improvements in solution accuracy. In this paper, we show that Hessian-aided error compensation, unlike other existing schemes, avoids accumulation of compression errors on quadratic problems. We also present strong convergence guarantees of Hessian-based error compensation for stochastic gradient descent. Our numerical experiments highlight the benefits of Hessian-based error compensation, and demonstrate that similar convergence improvements are attained when only a diagonal Hessian approximation is used. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
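The error-compensation idea this entry analyzes can be sketched on a one-dimensional quadratic: compress the gradient, remember what the compressor threw away, and add it back next step. The crude rounding compressor and step size below are illustrative; the paper's Hessian-aided variant additionally propagates the stored error through the Hessian.

```python
# Sketch of error-compensated compressed gradient descent on the quadratic
# f(x) = 0.5 * (x - 3)^2. Compressor and step size are illustrative choices.

def compress(g):
    return float(round(g))     # coarse quantizer standing in for a real compressor

x, e = 10.0, 0.0               # iterate and accumulated compression error
for _ in range(100):
    g = x - 3.0                # exact gradient of 0.5 * (x - 3)^2
    c = compress(g + e)        # compress the error-corrected gradient
    e = (g + e) - c            # remember what the compressor discarded
    x -= 0.1 * c               # gradient step with the compressed direction
```

Without the `e` term, plain rounding would stall as soon as the gradient drops below the quantization level; error feedback keeps small gradients accumulating until they trigger a step.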
44. A Modified Fuzzy C-Means Algorithm using Fruit Fly Optimization for Data Clustering.
- Author
-
Reddy, S. Nagarjuna, Reddy, S. Sai Satyanarayana, and Reddy, M. Babu
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,FUZZY algorithms ,MACHINE learning ,DATA analysis - Abstract
Clustering is among the most promising techniques used for data analysis. It is an unsupervised machine learning technique, meaning it does not require any target class. In recent years, many clustering algorithms with more accurate results have been proposed. This paper introduces a Modified Fuzzy C-Means clustering algorithm based on the Fruit-Fly optimization algorithm, which is used to extract useful patterns from the data; the proposed approach then generates useful clusters. The technique proves to be an exceptionally effective method for clustering numerical data with a high rate of accuracy. Performance is assessed in terms of sensitivity, specificity, and accuracy, and is compared with other prevailing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
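The standard fuzzy C-means membership update that this modified algorithm builds on assigns each point a degree of belonging to every center. A minimal sketch, with toy 1-D data and the usual fuzzifier m = 2:

```python
# Sketch of the classical fuzzy C-means membership update (not the paper's
# modified variant). m is the fuzzifier; data and centers are toy values.

def memberships(points, centers, m=2.0):
    u = []
    for x in points:
        d = [abs(x - c) + 1e-12 for c in centers]   # distances to each center
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                         for j in range(len(centers)))
               for i in range(len(centers))]        # soft assignment, rows sum to 1
        u.append(row)
    return u

u = memberships([0.0, 1.0, 9.0, 10.0], centers=[0.5, 9.5])
```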
45. Adaptive Decision Threshold-Based Extreme Learning Machine for Classifying Imbalanced Multi-label Data.
- Author
-
Gao, Shang, Dong, Wenlu, Cheng, Ke, Yang, Xibei, Zheng, Shang, and Yu, Hualong
- Subjects
MACHINE learning ,PARTICLE swarm optimization ,LABELS ,ALGORITHMS ,DATA distribution ,MATHEMATICAL optimization - Abstract
Multi-label learning is a popular area of machine learning research as it is widely applicable to many real-world scenarios. In comparison with traditional binary and multi-classification tasks, multi-label data are more easily impacted or destroyed by an imbalanced data distribution. This paper describes an adaptive decision threshold-based extreme learning machine algorithm (ADT-ELM) that addresses the imbalanced multi-label data classification problem. Specifically, the macro and micro F-measure metrics are adopted as the optimization functions for ADT-ELM, and the particle swarm optimization algorithm is employed to determine the optimal decision threshold combination. We use the optimized thresholds to make decisions for future multi-label instances. Twelve baseline multi-label data sets are used in a series of experiments to verify the effectiveness and superiority of the proposed algorithm. The experimental results indicate that the proposed ADT-ELM algorithm is significantly superior to many state-of-the-art multi-label imbalance learning algorithms, and it generally requires less training time than more sophisticated algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
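The threshold-search idea in this entry (pick the decision threshold that maximizes micro-F1 on held-out multi-label scores) can be sketched with a coarse grid search standing in for the paper's particle swarm optimizer; the scores and labels below are toy values.

```python
# Sketch: tuning a decision threshold to maximize micro-F1 on multi-label
# scores. A grid search stands in for the PSO the abstract uses.

def micro_f1(y_true, y_pred):
    pairs = [(t, p) for rt, rp in zip(y_true, y_pred) for t, p in zip(rt, rp)]
    tp = sum(1 for t, p in pairs if t and p)
    fp = sum(1 for t, p in pairs if not t and p)
    fn = sum(1 for t, p in pairs if t and not p)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(scores, y_true, grid=(0.3, 0.5, 0.7)):
    def apply(th):
        return [[s >= th for s in row] for row in scores]
    return max(grid, key=lambda th: micro_f1(y_true, apply(th)))

scores = [[0.9, 0.4], [0.2, 0.8], [0.6, 0.1]]   # toy per-label scores
y_true = [[1, 0], [0, 1], [1, 0]]
th = best_threshold(scores, y_true)
```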
46. Rate of penetration modeling using hybridization extreme learning machine and whale optimization algorithm.
- Author
-
Youcefi, Mohamed Riad, Hadjadj, Ahmed, Bentriou, Abdelhak, and Boukredera, Farouk Said
- Subjects
MACHINE learning ,MATHEMATICAL optimization ,WHALES ,ALGORITHMS ,EVOLUTIONARY algorithms - Abstract
Modeling the rate of penetration (ROP) plays a fundamental role in drilling optimization, since achieving an optimum ROP can drastically reduce the overall cost of drilling activities. An extreme learning machine (ELM) evolved with evolutionary algorithms and a multi-layer perceptron with the Levenberg-Marquardt training algorithm (MLP-LMA) were proposed in this study to predict ROP. This paper focused mainly on two aspects. The first was the investigation of the whale optimization algorithm (WOA) to optimize the weights and biases between the input and hidden layers of the ELM to enhance its prediction accuracy. The other was to adopt a prediction methodology that updates the predictive model at each formation in order to reduce the dimension of the input data and mitigate the effect of non-real-time data, such as formation properties, on bit-speed prediction. The prediction models were trained and tested using 3561 data points gathered from an Algerian field. The statistical and graphical evaluation criteria show that ELM-WOA exhibited higher accuracy and generalization performance compared with ELM-PSO and MLP-LMA. Furthermore, ELM-WOA was compared with two well-known ROP correlations in the literature, and the comparison reveals that the proposed ELM-WOA model is superior to the pre-existing correlations. The findings of this study can help achieve an optimum ROP and reduce non-productive time. In addition, the outputs of this study can be used as an objective function during real-time optimization of the drilling operation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. Novel approach with nature-inspired and ensemble techniques for optimal text classification.
- Author
-
Khurana, Anshu and Verma, Om Prakash
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,CLASSIFICATION ,MACHINE learning ,FEATURE selection ,BIOGEOGRAPHY - Abstract
Text classification reduces time and space complexity by dividing the complete task into different classes. The main problem with text classification is the vast number of features extracted from the textual data: pre-processed datasets have many features, some of which are not desirable and act only as noise. In this paper, a novel approach for optimal text classification based on a nature-inspired algorithm and an ensemble classifier is proposed. In the proposed model, feature selection is performed with the Biogeography-Based Optimization (BBO) algorithm along with ensemble classifiers (bagging). Using ensemble classifiers delivers better performance for optimal text classification than an individual classifier, and hence improves accuracy: an ensemble compensates for the weaknesses of individual classifiers, which on their own cannot match its classification results. The features selected by the BBO algorithm are classified into various classes using six machine learning classifiers. The experimental results are computed on ten text classification datasets taken from the UCI repository and one real-time dataset from an airline. Four measures, namely accuracy, precision, recall, and F-measure, are used to validate the performance of the model with ten-fold cross-validation. For the feature selection process, a comparison is performed among state-of-the-art algorithms available in the literature; the results show that BBO for feature selection outperforms other similar nature-based optimization techniques. The proposed approach of BBO with an ensemble classifier is also compared with techniques proposed by other researchers, and the results are analyzed quantitatively and qualitatively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
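The BBO migration step used here for binary feature selection can be sketched as follows: habitats (feature subsets) are ranked by fitness, poorer habitats immigrate more, and immigrating bits are copied from better-ranked habitats. The migration rates and the toy fitness (prefer fewer selected features) are our illustrative choices, not the paper's.

```python
import random

# Sketch of one BBO migration step over binary feature-selection habitats.
# Rates and the toy fitness are illustrative.

def bbo_migrate(habitats, fitness, seed=0):
    rng = random.Random(seed)
    ranked = sorted(habitats, key=fitness, reverse=True)   # best habitat first
    n = len(ranked)
    out = [ranked[0][:]]                                   # elitism: keep the best
    for rank in range(1, n):
        lam = rank / (n - 1)                               # immigration rate grows with rank
        new = ranked[rank][:]
        for j in range(len(new)):
            if rng.random() < lam:
                donor = ranked[rng.randrange(rank)]        # emigrate from a better habitat
                new[j] = donor[j]
        out.append(new)
    return out

pop = [[1, 0, 1, 1], [1, 1, 1, 1], [0, 0, 1, 0]]
new_pop = bbo_migrate(pop, fitness=lambda h: -sum(h))      # toy: fewer features is fitter
```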
48. Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee.
- Author
-
Frandi, Emanuele, Ñanculef, Ricardo, Lodi, Stefano, Sartori, Claudio, and Suykens, Johan
- Subjects
STOCHASTIC convergence ,ALGORITHMS ,MACHINE learning ,MATHEMATICAL optimization ,REGRESSION analysis ,MATHEMATICAL regularization - Abstract
Frank-Wolfe (FW) algorithms have often been proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of machine learning. The ability to work with cheap projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems where computing a sparse model is desirable. In this paper, we present a high-performance implementation of the FW method tailored to solve large-scale Lasso regression problems, based on a randomized iteration, and prove that the convergence guarantees of the standard FW method are preserved in the stochastic setting. We show experimentally that our algorithm outperforms several existing state-of-the-art methods, including the Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso solvers), on several benchmark datasets with a very large number of features, without sacrificing the accuracy of the model. Our results illustrate that the algorithm is able to generate the complete regularization path on problems of size up to four million variables in less than one minute. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
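For Lasso in its constrained form, min ||Ax - b||^2 subject to ||x||_1 <= delta, the FW linear minimization oracle just picks one signed coordinate of the l1 ball, which is what keeps the iterates sparse and the per-step cost low. A deterministic minimal sketch (the paper's variant randomizes the coordinate search; the problem data here are toy values):

```python
# Sketch of Frank-Wolfe for constrained Lasso: min ||Ax-b||^2, ||x||_1 <= delta.
# Deterministic LMO; toy problem data.

def frank_wolfe_lasso(A, b, delta, iters=200):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for k in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        jstar = max(range(n), key=lambda j: abs(grad[j]))  # LMO over the l1 ball
        s = [0.0] * n
        s[jstar] = -delta if grad[jstar] > 0 else delta    # vertex of the ball
        gamma = 2.0 / (k + 2.0)                            # standard FW step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

x = frank_wolfe_lasso(A=[[1.0, 0.0], [0.0, 1.0]], b=[1.0, 0.0], delta=1.0)
```

Each iterate is a convex combination of at most k + 1 vertices, so after k steps at most k + 1 coordinates are nonzero, which is the source of the method's built-in sparsity.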
49. Solving Multiconstraint Assignment Problems Using Learning Automata.
- Author
-
Horn, Geir and Oommen, B. John
- Subjects
PROBABILISTIC automata ,CONSTRAINTS (Physics) ,MACHINE learning ,STOCHASTIC analysis ,MATHEMATICAL optimization ,ALGORITHMS - Abstract
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
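The variable-structure automata in this entry update an action-probability vector from reward feedback. The classical linear reward-inaction (L_RI) rule, a common building block for such schemes, can be sketched as follows; the learning rate is an illustrative value.

```python
# Sketch of the linear reward-inaction (L_RI) update for a variable-structure
# learning automaton: shift probability mass toward a rewarded action, do
# nothing on a penalty. The learning rate lam is illustrative.

def lri_update(p, action, rewarded, lam=0.2):
    if not rewarded:
        return p[:]                            # inaction on penalty
    return [pi + lam * (1.0 - pi) if i == action else (1.0 - lam) * pi
            for i, pi in enumerate(p)]

p = lri_update([0.25, 0.25, 0.25, 0.25], action=1, rewarded=True)
```

Note that the update preserves the probability simplex: the rewarded action gains exactly the mass the other actions lose.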
50. Noise Enhanced Nonparametric Detection.
- Author
-
Hao Chen, Varshney, Pramod K., Kay, Steven, and Michels, James H.
- Subjects
NONPARAMETRIC signal detection ,STATISTICAL hypothesis testing ,DETECTORS ,MACHINE learning ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
This paper investigates potential improvement of non-parametric detection performance via addition of noise and evaluates the performance of noise modified nonparametric detectors. Detection performance comparisons are made between the original detectors and noise modified detectors. Conditions for improvability as well as the optimum additive noise distributions of the widely used sign detector, the Wilcoxon detector, and the dead-zone limiter detector are derived. Finally, a simple and fast learning algorithm to find the optimal noise distribution solely based on received data is presented. A near-optimal solution can be found quickly based on a relatively small dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
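The classical sign detector this entry starts from counts positive observations and compares the count to a threshold; the paper's enhancement is to add a well-chosen noise sample to each observation before taking the sign. A minimal sketch of the baseline detector (observations and threshold are toy values):

```python
# Sketch of the classical sign detector: decide "signal present" when the
# number of positive observations reaches a threshold. Data are toy values.

def sign_detect(samples, threshold):
    positives = sum(1 for x in samples if x > 0)
    return positives >= threshold              # True = "signal present"

obs = [0.4, -0.1, 0.3, 0.2, -0.2, 0.5]        # toy observations
decision = sign_detect(obs, threshold=4)
```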