11 results
Search Results
2. Improved FunkSVD Algorithm Based on RMSProp.
- Author
-
Yue, Xiaochen and Liu, Qicheng
- Subjects
ALGORITHMS ,DEEP learning ,MACHINE learning ,MATHEMATICAL optimization ,PROBLEM solving - Abstract
To solve the problem of low accuracy in the traditional FunkSVD recommendation algorithm, an improved FunkSVD algorithm (RM-FS) is proposed. RM-FS improves the traditional FunkSVD algorithm by using RMSProp, a deep learning optimization algorithm. The RM-FS algorithm not only resolves the accuracy loss that the traditional FunkSVD algorithm suffers from iterative oscillations but also alleviates the impact of data sparseness on accuracy. The experimental results show that the RM-FS algorithm proposed in this paper effectively improves recommendation accuracy, outperforming both the traditional FunkSVD algorithm and other improved FunkSVD algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
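For readers skimming this record, a minimal sketch of the core idea the abstract describes — per-parameter RMSProp step sizes inside FunkSVD-style SGD — might look as follows. The hyperparameters, initialization, and toy data are illustrative assumptions, not the authors' RM-FS implementation:

```python
import math
import random

def funk_svd_rmsprop(ratings, n_users, n_items, k=2, lr=0.02,
                     beta=0.9, eps=1e-8, epochs=500, seed=0):
    """Factorize a sparse rating dict {(user, item): rating} into latent
    factor tables P and Q, scaling each SGD step per-parameter with RMSProp."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    sP = [[0.0] * k for _ in range(n_users)]   # running mean of squared grads
    sQ = [[0.0] * k for _ in range(n_items)]
    for _ in range(epochs):
        for (u, i), r in ratings.items():
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                gP, gQ = -err * Q[i][f], -err * P[u][f]
                sP[u][f] = beta * sP[u][f] + (1 - beta) * gP * gP
                sQ[i][f] = beta * sQ[i][f] + (1 - beta) * gQ * gQ
                P[u][f] -= lr * gP / (math.sqrt(sP[u][f]) + eps)
                Q[i][f] -= lr * gQ / (math.sqrt(sQ[i][f]) + eps)
    return P, Q

# Tiny illustrative rating matrix: 3 users x 3 items, 5 known entries
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0, (2, 1): 4.0}
P, Q = funk_svd_rmsprop(ratings, n_users=3, n_items=3)
rmse = math.sqrt(sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
                     for (u, i), r in ratings.items()) / len(ratings))
```

The accumulators `sP`/`sQ` shrink the step for parameters with large recent gradients, which damps the iterative oscillations that the abstract attributes to plain fixed-step SGD.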
3. A fixed structure learning automata‐based optimization algorithm for structure learning of Bayesian networks.
- Author
-
Asghari, Kayvan, Masdari, Mohammad, Soleimanian Gharehchopogh, Farhad, and Saneifard, Rahim
- Subjects
ANT algorithms ,BEES algorithm ,MATHEMATICAL optimization ,MACHINE learning ,ALGORITHMS ,PROBLEM solving ,KNOWLEDGE representation (Information theory) ,METAHEURISTIC algorithms - Abstract
One of the useful knowledge representation tools, which can describe the joint probability distribution of a set of random variables with a graphical model and can be trained from a dataset, is the Bayesian network (BN). A BN is composed of a network structure and a conditional probability distribution table for each node. Discovering an optimal BN structure is an NP‐hard optimization problem, and researchers have applied various meta‐heuristic algorithms to solve it, including genetic algorithms, ant colony optimization, evolutionary programming, artificial bee colony, and bacterial foraging optimization. Most of these methods apply a scoring metric to select the best network structure from a set of candidates. A Fixed Structure Learning Automata‐Based (FSLA‐B) algorithm is presented in this paper to solve the structure learning problem of BNs. In the proposed algorithm, there is a fixed-structure learning automaton for each pair of vertices in the BN's graph structure, and the action of this automaton determines the presence and direction of an edge between the vertices. The proposed algorithm performs a guided search using the FSLA and escapes from local optima. Several datasets are utilised to evaluate the performance of the proposed algorithm, and various experiments compare it with multiple meta‐heuristic algorithms. The results show that the proposed algorithm produces competitive results and finds near‐optimal solutions for the BN structure learning problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
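The fixed-structure learning automaton underlying this record is, in its simplest (Tsetlin-type) form, a small state machine whose action hardens under reward and flips under repeated penalty. The sketch below is a generic two-action automaton for illustration only; the paper pairs one automaton with each vertex pair and uses its actions to encode edge presence and direction:

```python
class TwoActionFSLA:
    """Two-action fixed-structure learning automaton (Tsetlin-style).
    States 0..N-1 map to action 0; states N..2N-1 map to action 1.
    Reward moves the state deeper into the current action's half;
    penalty moves it toward the boundary and eventually flips the action."""

    def __init__(self, n_states_per_action=3):
        self.N = n_states_per_action
        self.state = 0                      # start fully committed to action 0

    def action(self):
        return 0 if self.state < self.N else 1

    def update(self, reward):
        a, N = self.action(), self.N
        if reward:                          # move deeper: toward 0 or 2N-1
            if a == 0:
                self.state = max(0, self.state - 1)
            else:
                self.state = min(2 * N - 1, self.state + 1)
        else:                               # move toward the boundary / flip
            self.state += 1 if a == 0 else -1

aut = TwoActionFSLA(n_states_per_action=3)
# Penalize action 0 three times: state 0 -> 1 -> 2 -> 3, flipping to action 1.
for _ in range(3):
    aut.update(reward=False)
```

The depth parameter trades commitment against adaptability: deeper memories resist noise but switch actions more slowly, which is the mechanism behind the "guided search" the abstract mentions.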
4. Recent advances on support vector machines research.
- Author
-
Tian, Yingjie, Shi, Yong, and Liu, Xiaohui
- Subjects
SUPPORT vector machines ,MACHINE learning ,BUSINESS forecasting ,MATHEMATICAL optimization ,PROBLEM solving ,LINEAR programming ,ALGORITHMS ,DATA mining ,BANKRUPTCY ,CREDIT risk - Abstract
Support vector machines (SVMs), with their roots in Statistical Learning Theory (SLT) and optimization methods, have become powerful tools for problem solution in machine learning. SVMs reduce most machine learning problems to optimization problems, and optimization lies at the heart of SVMs. Many SVM algorithms involve solving not only convex problems, such as linear programming, quadratic programming, second-order cone programming, and semi-definite programming, but also non-convex and more general optimization problems, such as integer programming, semi-infinite programming, and bi-level programming. The purpose of this paper is to understand SVMs from the optimization point of view and to review several representative optimization models in SVMs and their applications in economics, in order to promote research interest in both optimization-based SVM theory and economic applications. This paper starts with summarizing and explaining the nature of SVMs. It then proceeds to discuss optimization models for SVMs following three major themes. First, least squares SVM, twin SVM, AUC-maximizing SVM, and fuzzy SVM are discussed for standard problems. Second, support vector ordinal machine, semisupervised SVM, Universum SVM, robust SVM, knowledge-based SVM, and multi-instance SVM are presented for nonstandard problems. Third, we explore other important issues such as lp-norm SVM for feature selection, LOOSVM based on minimizing the LOO error bound, probabilistic outputs for SVM, and rule extraction from SVM. Finally, several applications of SVMs to financial forecasting, bankruptcy prediction, and credit risk analysis are introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
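The "SVMs reduce learning to optimization" framing of this survey can be made concrete with the primal soft-margin problem, minimizing the regularized hinge loss. The sketch below solves it by plain subgradient descent on a toy dataset; the learning rate, regularization strength, and data are illustrative choices, not anything from the paper:

```python
import random

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Primal linear SVM: minimize lam/2*||w||^2 + mean(max(0, 1 - y*(w.x + b)))
    by stochastic subgradient descent. Labels must be +1/-1."""
    rng = random.Random(seed)
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        idx = list(range(n))
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(w[j] * X[i][j] for j in range(d)) + b)
            # subgradient of the regularized hinge loss at one sample
            for j in range(d):
                g = lam * w[j] - (y[i] * X[i][j] if margin < 1 else 0.0)
                w[j] -= lr * g
            if margin < 1:
                b += lr * y[i]
    return w, b

# Linearly separable toy data
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
preds = [1 if sum(w[j] * x[j] for j in range(2)) + b > 0 else -1 for x in X]
```

The convex QP formulations the survey catalogues (least squares SVM, twin SVM, and so on) change this objective or its constraints while keeping the same optimization-centric structure.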
5. The effect of cooling functions on ensemble clustering using simulated annealing.
- Author
-
Jian Li, Swift, Stephen, and Xiaohui Liu
- Subjects
SIMULATED annealing ,ALGORITHMS ,PROBLEM solving ,MATHEMATICAL optimization ,MACHINE learning ,DATA mining - Abstract
Simulated Annealing (SA) has been adopted by many Ensemble Clustering methods to achieve global combinational optimisation. However, the performance of SA is sensitive to the settings of its parameters. Much work has been done on optimising these parameter settings over the last two decades, but little of it has analysed the behaviour of different cooling functions for Ensemble Clustering. Our work has demonstrated that the clustering results can be invalid if SA is used for Ensemble Clustering without a good understanding of the behaviour of cooling functions. Therefore, this paper presents findings on how different cooling functions may affect the performance of Ensemble Clustering methods that use SA. We analyse the effect of cooling functions from three aspects: the convergence rate, the final value of the objective function, and the accuracy of results. Ten different cooling functions are tested on two Ensemble Clustering methods, and thirteen different datasets have been used for the experiments. The findings are particularly helpful for those who are interested in Ensemble Clustering methods as well as those who want to obtain a deep understanding of the behaviour of cooling functions. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
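The cooling function studied in this record is the schedule that maps iteration count to temperature, and therefore to the probability of accepting uphill moves. A minimal SA loop parameterized by the cooling function, so that schedules can be swapped and compared, might look as follows; the two schedules and the toy objective are illustrative, not the paper's ten:

```python
import math
import random

def simulated_annealing(objective, x0, cooling, t0=10.0, steps=2000, seed=0):
    """Minimize objective over floats with SA; `cooling(t0, k)` returns the
    temperature at iteration k, so different schedules can be compared."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = cooling(t0, k)
        cand = x + rng.gauss(0, 1)
        fc = objective(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Two common cooling schedules (illustrative choices)
geometric = lambda t0, k: t0 * (0.995 ** k)           # exponential decay
linear    = lambda t0, k: max(t0 * (1 - k / 2000), 1e-12)

f = lambda x: (x - 3) ** 2 + 2                        # minimum f(3) = 2
_, fg = simulated_annealing(f, x0=-10.0, cooling=geometric)
_, fl = simulated_annealing(f, x0=-10.0, cooling=linear)
```

Fast-decaying schedules freeze the search early (risking poor local optima), while slow ones waste iterations accepting bad moves; the paper's point is that this trade-off also governs the validity of ensemble clustering results.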
6. Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning.
- Author
-
Barkalov, Konstantin, Lebedev, Ilya, and Kozinov, Evgeny
- Subjects
GLOBAL optimization ,MATHEMATICAL optimization ,MACHINE learning ,PROBLEM solving ,ALGORITHMS - Abstract
This paper features the study of global optimization problems and numerical methods for their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a "black box". This study used a deterministic algorithm for finding the global extremum. This algorithm is based neither on the concept of multistart nor on nature-inspired algorithms. The article provides computational rules for the one-dimensional algorithm and the nested optimization scheme, which can be applied to solving multidimensional problems. Note that the solution complexity of global optimization problems essentially depends on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify regions of attraction of local minima. The use of local optimization algorithms in the selected regions can significantly accelerate the convergence of global search, as it reduces the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities presented in the paper confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
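The notion of a "region of attraction" in this record can be illustrated in one dimension: label sampled trial points by the local minimum a local descent converges to, then predict the basin of any new trial point from the labelled samples. This sketch uses a crude gradient-free descent and a 1-nearest-neighbour rule purely for illustration; the paper's actual ML method is not specified here:

```python
def local_descent(f, x, step=0.01, iters=5000):
    """Crude gradient-free local descent: move to a better neighbour until stuck."""
    for _ in range(iters):
        left, right = x - step, x + step
        nxt = min((f(left), left), (f(right), right), (f(x), x))[1]
        if nxt == x:
            break
        x = nxt
    return round(x, 2)          # round so equal minima compare equal

f = lambda x: (x ** 2 - 1) ** 2           # local minima at x = -1 and x = +1
samples = [-1.8, -1.2, -0.6, 0.4, 1.1, 1.7]
labels = [local_descent(f, x) for x in samples]   # basin label per sample

def predict_basin(x):
    """1-nearest-neighbour basin prediction from the labelled samples."""
    nearest = min(range(len(samples)), key=lambda i: abs(samples[i] - x))
    return labels[nearest]
```

Once a trial point is predicted to fall in an already-explored basin, the global search can skip it, which is the source of the accelerated convergence the abstract reports.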
7. A hybrid PBIL-based harmony search method.
- Author
-
Gao, X., Wang, X., Jokinen, T., Ovaska, S., Arkkio, A., and Zenger, K.
- Subjects
NONLINEAR functions ,MATHEMATICAL optimization ,ALGORITHMS ,MACHINE learning ,COMPUTER simulation ,PROBLEM solving ,PERFORMANCE evaluation - Abstract
The harmony search (HS) method is a popular meta-heuristic optimization algorithm, which has been extensively employed to handle various engineering problems. However, it sometimes fails to offer a satisfactory convergence performance under certain circumstances. In this paper, we propose and study a hybrid HS approach, HS-PBIL, by merging the HS together with the population-based incremental learning (PBIL). Numerical simulations demonstrate that our HS-PBIL is well capable of outperforming the regular HS method in dealing with nonlinear function optimization and a practical wind generator optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
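As context for this record, the plain harmony search baseline that HS-PBIL extends works by composing each new candidate component-wise from a memory of good solutions. The sketch below is that baseline only; the paper's hybrid additionally maintains a PBIL probability vector, which is omitted here, and all parameter values are illustrative:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Basic harmony search: each new harmony draws every component either
    from harmony memory (prob. hmcr, with pitch adjustment prob. par) or
    uniformly from the bounds; it replaces the worst stored harmony if better."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    memory = [[rng.uniform(lo[j], hi[j]) for j in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[j]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw)
                v = min(max(v, lo[j]), hi[j])
            else:
                v = rng.uniform(lo[j], hi[j])
            new.append(v)
        fn = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if fn < scores[worst]:
            memory[worst], scores[worst] = new, fn
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
best, fbest = harmony_search(sphere, ([-5.0, -5.0], [5.0, 5.0]))
```

The convergence weakness the abstract mentions stems from this purely memory-driven sampling; replacing part of it with a learned PBIL distribution is the paper's remedy.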
8. A note on teaching–learning-based optimization algorithm
- Author
-
Črepinšek, Matej, Liu, Shih-Hsi, and Mernik, Luka
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,QUALITATIVE research ,COMPARATIVE studies ,MACHINE learning ,MATHEMATICAL functions ,CONSTRAINED optimization ,PROBLEM solving - Abstract
Teaching–Learning-Based Optimization (TLBO) seems to be a rising star amongst a number of metaheuristics with relatively competitive performances. It is reported to outperform some well-known metaheuristics on constrained benchmark functions, constrained mechanical design, and continuous non-linear numerical optimization problems. Such a breakthrough has steered us towards investigating the secrets of TLBO's dominance. This paper reports our findings on TLBO qualitatively and quantitatively, through code reviews and experiments, respectively. Our findings reveal three important mistakes regarding TLBO: (1) at least one unreported but important step; (2) incorrect formulae for the number of fitness function evaluations; and (3) misconceptions about parameter-less control. Additionally, unfair experimental settings/conditions were used to conduct experimental comparisons (e.g., different stopping criteria). The experimental results for constrained and unconstrained benchmark functions under fairly equal conditions failed to validate its performance supremacy. The ultimate goal of this paper is to provide reminders for metaheuristics researchers and practitioners, to help them avoid similar mistakes in both qualitative and quantitative aspects, and to allow fair comparisons of the TLBO algorithm with other metaheuristic algorithms. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
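For orientation, the commonly published form of the TLBO iteration this note critiques has a teacher phase (pull learners toward the best solution, away from the class mean) and a learner phase (pairwise learning between random classmates). The sketch below follows that textbook form with illustrative parameters; note how the greedy acceptance calls the fitness function several times per learner, which is exactly the kind of bookkeeping the paper argues is often miscounted:

```python
import random

def tlbo_step(f, population, rng):
    """One TLBO iteration (teacher phase then learner phase), greedy acceptance."""
    dim, n = len(population[0]), len(population)
    # --- teacher phase: pull everyone toward the best, away from the mean
    scores = [f(x) for x in population]
    teacher = population[scores.index(min(scores))]
    mean = [sum(x[j] for x in population) / n for j in range(dim)]
    for i in range(n):
        tf = rng.choice([1, 2])                      # teaching factor
        cand = [population[i][j] + rng.random() * (teacher[j] - tf * mean[j])
                for j in range(dim)]
        if f(cand) < f(population[i]):
            population[i] = cand
    # --- learner phase: learn pairwise from a random classmate
    for i in range(n):
        k = rng.randrange(n)
        if k == i:
            continue
        sign = 1 if f(population[i]) < f(population[k]) else -1
        cand = [population[i][j] + sign * rng.random() *
                (population[i][j] - population[k][j]) for j in range(dim)]
        if f(cand) < f(population[i]):
            population[i] = cand
    return population

rng = random.Random(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(50):
    pop = tlbo_step(sphere, pop, rng)
best = min(pop, key=sphere)
```

Counting every call to `f` here (rather than one per learner per phase) is what makes evaluation budgets comparable across metaheuristics, which is the fairness point of the paper.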
9. Preface.
- Author
-
Bertók, Botond, Süle, Zoltán, and Terlaky, Tamás
- Subjects
INFORMATION technology ,MATHEMATICAL optimization ,ALGORITHMS ,CONFERENCES & conventions ,MACHINE learning ,SUPPORT vector machines ,PROBLEM solving - Published
- 2011
- Full Text
- View/download PDF
10. A machine learning-based branch and price algorithm for a sampled vehicle routing problem.
- Author
-
Furian, Nikolaus, O'Sullivan, Michael, Walker, Cameron, and Çela, Eranda
- Subjects
VEHICLE routing problem ,ALGORITHMS ,PROBLEM solving ,MATHEMATICAL optimization ,MACHINE learning - Abstract
Planning of operations, such as routing of vehicles, is often performed repetitively in real-world settings, either by humans or by algorithms solving mathematical problems. While humans build experience over multiple executions of such planning tasks and are able to recognize common patterns in different problem instances, classical optimization algorithms solve every instance independently. Machine learning (ML) can be seen as a computational counterpart to the human ability to recognize patterns based on experience. We consider variants of the classical Vehicle Routing Problem with Time Windows and Capacitated Vehicle Routing Problem, which are based on the assumption that problem instances follow specific common patterns. For this problem, we propose an ML-based branch and price framework which explicitly utilizes those patterns. In this context, the ML models are used in two ways: (a) to predict the value of binary decision variables in the optimal solution and (b) to predict branching scores for fractional variables based on full strong branching. The prediction of decision variables is then integrated into a node selection policy, while a predicted branching score is used within a variable selection policy. These ML-based approaches for node and variable selection are integrated in a reliability-based branching algorithm that assesses their quality and allows for replacing ML approaches by other (classical) better performing approaches at the level of specific variables in each specific instance. Computational results show that our algorithms outperform benchmark branching strategies. Further, we demonstrate that our approach is robust with respect to small changes in instance sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Hamming Distance Method with Subjective and Objective Weights for Personnel Selection
- Author
-
Mohd Syafarudy Abu, Muhammad Zaini Ahmad, M. S. Jusoh, and R. Md Saad
- Subjects
MATHEMATICAL optimization ,DECISION making ,FUZZY sets ,FUZZY logic ,PERSONNEL selection ,MACHINE learning ,ENTROPY (Information theory) ,HAMMING distance ,MULTIPLE criteria decision making ,PROBLEM solving ,ALGORITHMS - Abstract
Multicriteria decision making (MCDM) is one of the methods that has been popularly used in solving the personnel selection problem. Alternatives, criteria, and weights are some of the fundamental aspects of MCDM that need to be defined clearly in order to achieve a good result. Apart from these aspects, fuzzy data has to be taken into consideration, as it may arise from unobtainable and incomplete information. In this paper, we propose a new approach for the personnel selection problem, based on a Hamming distance method with subjective and objective weights (HDMSOW's). In case of vagueness, fuzzy set theory is incorporated into the HDMSOW's. To determine the objective weight for each attribute, the fuzzy Shannon's entropy is considered, while the subjective weight is aggregated into a comparable scale. A numerical example is presented to illustrate the HDMSOW's.
- Published
- 2014
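The objective-weight step in this record rests on the standard Shannon entropy weighting idea: criteria whose scores vary more across candidates carry more information and receive more weight. The crisp (non-fuzzy) version is sketched below with invented candidate scores; the paper's fuzzy extension and subjective-weight aggregation are not reproduced:

```python
import math

def entropy_weights(matrix):
    """Objective criteria weights via Shannon entropy: for each criterion
    (column), compute the entropy of its normalized scores; weight is the
    normalized degree of divergence 1 - entropy."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)                   # normalizes entropy into [0, 1]
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence.append(1.0 - e)
    s = sum(divergence)
    return [d / s for d in divergence]

# Rows = candidates, columns = criteria scores (illustrative data)
scores = [[7.0, 9.0, 6.0],
          [8.0, 9.0, 4.0],
          [6.0, 9.0, 8.0]]
w = entropy_weights(scores)
```

A criterion on which all candidates score identically (the middle column above) has maximum entropy and gets zero weight, since it cannot discriminate between candidates.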