8 results for "Zhou, Zhi-hua"
Search Results
2. Distribution-Free One-Pass Learning.
- Author
-
Zhao, Peng, Wang, Xinqiang, Xie, Siyu, Guo, Lei, and Zhou, Zhi-Hua
- Subjects
DATA distribution, MACHINE learning, COMPUTATIONAL complexity, PRIOR learning
- Abstract
In many large-scale machine learning applications, data are accumulated over time, and thus an appropriate model should be able to update in an online fashion. In particular, it would be ideal to have storage independent of the data volume and to scan each data item only once. Meanwhile, the data distribution usually changes during the accumulation procedure, making distribution-free one-pass learning a challenging task. In this paper, we propose a simple yet effective approach for this task that requires no prior knowledge about the change and allows every data item to be discarded once scanned. We also present a variant for high-dimensional situations that exploits compressed sensing to reduce computational and storage complexity. Theoretical analysis shows that our proposal converges under mild assumptions, and the performance is validated on both synthetic and real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
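The one-pass setting described in the abstract above can be illustrated with a minimal sketch. This is not the authors' algorithm, only a generic online linear learner (constant-step SGD) showing the two constraints the abstract names: storage independent of the data volume, and each item scanned once then discarded. The class name, dimensions, and step size are all illustrative assumptions.

```python
import random

# Illustrative sketch only, NOT the paper's method: a generic one-pass
# linear learner. Storage is O(dim), independent of how many items arrive,
# and each (x, y) pair is used exactly once.
class OnePassLinear:
    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.lr = lr  # a constant step size lets the model track drift

    def update(self, x, y):
        pred = sum(wi * xi for wi, xi in zip(self.w, x))
        err = pred - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi  # squared-loss gradient step

random.seed(0)
model = OnePassLinear(dim=2)
true_w = [1.0, -2.0]
for t in range(5000):
    if t == 2500:                 # simulate a distribution change mid-stream
        true_w = [-1.0, 3.0]
    x = [random.uniform(-1.0, 1.0) for _ in range(2)]
    y = sum(a * b for a, b in zip(true_w, x))
    model.update(x, y)            # x and y can be discarded after this call

print([round(w, 1) for w in model.w])
```

With noiseless targets the constant-step update recovers the post-change weights, which is the behavior a distribution-free one-pass learner must exhibit; the paper's actual approach and its compressed-sensing variant are more sophisticated.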
3. Classification Under Streaming Emerging New Classes: A Solution Using Completely-Random Trees.
- Author
-
Mu, Xin, Ting, Kai Ming, and Zhou, Zhi-Hua
- Subjects
DATA modeling, SUPPORT vector machines, STREAMING technology, ANOMALY detection (Computer security), COMPUTATIONAL complexity
- Abstract
This paper investigates an important problem in stream mining, i.e., classification under streaming emerging new classes or SENC. The SENC problem can be decomposed into three subproblems: detecting emerging new classes, classifying known classes, and updating models to integrate each new class as part of known classes. The common approach is to treat it as a classification problem and solve it using either a supervised learner or a semi-supervised learner. We propose an alternative approach by using unsupervised learning as the basis to solve this problem. The proposed method employs completely-random trees which have been shown to work well in unsupervised learning and supervised learning independently in the literature. The completely-random trees are used as a single common core to solve all three subproblems: unsupervised learning, supervised learning, and model update on data streams. We show that the proposed unsupervised-learning-focused method often achieves significantly better outcomes than existing classification-focused methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
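The three SENC subproblems named in the abstract above can be made concrete with a toy sketch. This uses a simple nearest-neighbour distance rule, not the paper's completely-random trees; the labels, threshold, and buffer size are illustrative assumptions.

```python
import math, random

# NOT the paper's completely-random-trees method; a minimal nearest-neighbour
# illustration of the three SENC subproblems: (1) detect an emerging class,
# (2) classify known classes, (3) fold the new class into the model.
random.seed(3)

known = {"A": [(0.0, 0.0), (0.1, 0.1)], "B": [(5.0, 5.0), (5.1, 4.9)]}
threshold = 1.0   # farther than this from all known data => emerging class
buffer = []       # holds suspected new-class items until the model updates

def process(x):
    label, d = min(((lb, min(math.dist(x, p) for p in pts))
                    for lb, pts in known.items()), key=lambda t: t[1])
    if d > threshold:                 # subproblem 1: detect a new class
        buffer.append(x)
        if len(buffer) >= 3:          # subproblem 3: integrate it as known
            known["C"] = buffer[:]
            buffer.clear()
        return "new"
    return label                      # subproblem 2: classify known classes

stream = [(0.05, 0.02), (5.05, 5.0), (9.0, 9.0),
          (9.1, 8.9), (8.9, 9.1), (9.0, 9.05)]
print([process(x) for x in stream])
```

After three distant items arrive, the hypothetical class "C" becomes part of the known model and subsequent nearby items are classified normally, mirroring the decomposition the abstract describes.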
4. Switch Analysis for Running Time Analysis of Evolutionary Algorithms.
- Author
-
Yu, Yang, Qian, Chao, and Zhou, Zhi-Hua
- Subjects
EVOLUTIONARY algorithms, COMPUTATIONAL complexity, MARKOV processes, EVOLUTIONARY computation, HEURISTIC algorithms, MATHEMATICAL optimization
- Abstract
Evolutionary algorithms (EAs) are a large family of heuristic optimization algorithms. They are problem independent and have been applied in various optimization problems. Thus, general analysis tools are especially appealing for guiding the analysis of EAs in various situations. This paper develops the switch analysis approach for running time analysis of EAs, revealing their average computational complexity. Unlike previous analysis approaches that analyze an algorithm from scratch, the switch analysis makes use of another well-analyzed algorithm and, by contrasting them, can lead to better results. We investigate the power of switch analysis by comparing it with two commonly used analysis approaches, the fitness level method and the drift analysis. We define the reducibility between two analysis approaches for comparing their power. By the reducibility relationship, it is revealed that both the fitness level method and the drift analysis are reducible to the switch analysis, as they are equivalent to specific configurations of the switch analysis. We further show that the switch analysis is not reducible to the fitness level method, and compare it with the drift analysis on a concrete analysis case (the discrete linear problem). The reducibility study might shed some light on the unified view of different running time analysis approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
5. Efficient Optimization of Performance Measures by Classifier Adaptation.
- Author
-
Li, Nan, Tsang, Ivor W., and Zhou, Zhi-Hua
- Subjects
MACHINE learning, PERFORMANCE evaluation, KERNEL (Mathematics), COMPUTATIONAL complexity, MATHEMATICAL optimization, REGULARIZATION parameter
- Abstract
In practical applications, machine learning algorithms are often needed to learn classifiers that optimize domain-specific performance measures. Previous research has focused on learning the needed classifier in isolation, yet learning a nonlinear classifier for nonlinear and nonsmooth performance measures remains hard. In this paper, rather than learning the needed classifier by optimizing the specific performance measure directly, we circumvent this problem with a novel two-step approach called CAPO: first train nonlinear auxiliary classifiers with existing learning methods, then adapt the auxiliary classifiers for the specific performance measure. In the first step, auxiliary classifiers can be obtained efficiently with off-the-shelf learning algorithms. For the second step, we show that the classifier adaptation problem can be reduced to a quadratic programming problem similar to linear SVM^perf, which can be solved efficiently. By exploiting nonlinear auxiliary classifiers, CAPO can generate a nonlinear classifier that optimizes a large variety of performance measures, including all performance measures based on the contingency table as well as AUC, while keeping high computational efficiency. Empirical studies show that CAPO is effective and computationally efficient, even more efficient than linear SVM^perf. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
6. A new approach to estimating the expected first hitting time of evolutionary algorithms.
- Author
-
Yu, Yang and Zhou, Zhi-Hua
- Subjects
COMPUTER algorithms, EVOLUTIONARY computation, ARTIFICIAL neural networks, CONFIGURATIONS (Geometry), STOCHASTIC convergence, CONCENTRATION functions
- Abstract
Evolutionary algorithms (EAs) have been shown to be very effective in solving practical problems, yet many of their important theoretical issues remain unclear. The expected first hitting time is one of the most important theoretical issues of evolutionary algorithms, since it implies the average computational time complexity. In this paper, we establish a bridge between the expected first hitting time and another important theoretical issue, i.e., the convergence rate. Through this bridge, we propose a new general approach to estimating the expected first hitting time. Using this approach, we analyze EAs with different configurations, including three mutation operators, with/without population, a recombination operator, and a time-variant mutation operator, on a hard problem. The results show that the proposed approach is helpful for analyzing a broad range of evolutionary algorithms. Moreover, we give an explanation of what makes a problem hard for EAs and, based on this recognition, prove the hardness of a general problem. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
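The quantity analyzed in the abstract above, the expected first hitting time, can be illustrated by simulation. The paper derives bounds analytically via convergence rates; this sketch merely estimates the quantity empirically for a standard (1+1)-EA on the OneMax benchmark, which is an assumption of ours, not a problem the abstract names.

```python
import random

# Illustrative only: Monte Carlo estimate of the expected first hitting time
# of a (1+1)-EA on OneMax (maximize the number of 1-bits). The paper's
# contribution is an *analytical* estimation approach, not simulation.
def one_plus_one_ea(n, rng):
    x = [rng.random() < 0.5 for _ in range(n)]
    steps = 0
    while sum(x) < n:                                    # hit = optimum found
        y = [b != (rng.random() < 1.0 / n) for b in x]   # flip each bit w.p. 1/n
        if sum(y) >= sum(x):                             # accept if not worse
            x = y
        steps += 1
    return steps

rng = random.Random(1)
n = 20
runs = [one_plus_one_ea(n, rng) for _ in range(50)]
avg = sum(runs) / len(runs)
print(round(avg))
```

For this EA and problem, theory predicts an expected first hitting time of order n log n, so the simulated average should land in the low hundreds for n = 20; the general approach in the paper bounds such quantities without running the algorithm.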
7. An analysis on recombination in multi-objective evolutionary optimization.
- Author
-
Qian, Chao, Yu, Yang, and Zhou, Zhi-Hua
- Subjects
MATHEMATICAL optimization, EVOLUTIONARY algorithms, PARETO principle, ARTIFICIAL intelligence, PROBLEM solving, SPANNING trees
- Abstract
Evolutionary algorithms (EAs) are increasingly popular approaches to multi-objective optimization. One of their significant advantages is that they can directly optimize the Pareto front by evolving a population of solutions, where recombination (also called crossover) operators are usually employed to reproduce new and potentially better solutions by mixing up solutions in the population. Recombination in multi-objective evolutionary algorithms is, however, mostly applied heuristically. In this paper, we investigate, from a theoretical viewpoint, how a recombination operator affects a multi-objective EA. First, we employ artificial benchmark problems, the Weighted LPTNO problem (a generalization of the well-studied LOTZ problem) and the well-studied COCZ problem, to study the effect of recombination. Our analysis discloses that recombination may accelerate the filling of the Pareto front by recombining diverse solutions, and thus help solve multi-objective optimization. For these two problems, we find that a multi-objective EA with recombination enabled achieves a better expected running time than any known EA with recombination disabled. We further examine the effect of recombination on solving the multi-objective minimum spanning tree problem, which is NP-hard. Consistent with our finding on the artificial problems, our analysis shows that recombination also helps accelerate filling the Pareto front, and thus helps find approximate solutions faster. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
8. Maximizing submodular or monotone approximately submodular functions by multi-objective evolutionary algorithms.
- Author
-
Qian, Chao, Yu, Yang, Tang, Ke, Yao, Xin, and Zhou, Zhi-Hua
- Subjects
SUBMODULAR functions, MATROIDS, COMBINATORIAL optimization, MATHEMATICAL optimization, POLYNOMIAL approximation, EVOLUTIONARY algorithms, EVOLUTIONARY computation
- Abstract
Evolutionary algorithms (EAs) are a kind of nature-inspired general-purpose optimization algorithm, and have shown empirically good performance in solving various real-world optimization problems. During the past two decades, promising results on the running time analysis (one essential theoretical aspect) of EAs have been obtained, while most of them focused on isolated combinatorial optimization problems, which do not reflect the general-purpose nature of EAs. To provide a general theoretical explanation of the behavior of EAs, it is desirable to study their performance on general classes of combinatorial optimization problems. To the best of our knowledge, the only result in this direction is the provably good approximation guarantees of EAs for the problem class of maximizing monotone submodular functions with matroid constraints. The aim of this work is to contribute to this line of research. Considering that many combinatorial optimization problems involve non-monotone or non-submodular objective functions, we study the general problem classes of maximizing submodular functions with/without a size constraint and maximizing monotone approximately submodular functions with a size constraint. We prove that a simple multi-objective EA called GSEMO-C can generally achieve good approximation guarantees in polynomial expected running time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
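The kind of algorithm analyzed in the abstract above can be sketched concretely. This is a plain GSEMO-style multi-objective EA (not the GSEMO-C variant, whose details the abstract does not give) maximizing a monotone submodular coverage function under a size constraint; the universe, the sets, and the constraint k are illustrative assumptions.

```python
import random

# Sketch of a GSEMO-style multi-objective EA, NOT the exact GSEMO-C variant.
# Task: pick at most k of the candidate sets to maximize coverage of the
# universe (a monotone submodular function). Solutions are bit strings;
# objectives are (coverage, -size) and the population keeps a Pareto front.
random.seed(2)
sets = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {8, 9, 0}, {1, 5, 9}]
k = 3

def f(sel):  # coverage: size of the union of the chosen sets (submodular)
    covered = set()
    for i, chosen in enumerate(sel):
        if chosen:
            covered |= sets[i]
    return len(covered)

def dominates(a, b):  # a dominates b on (coverage up, size down)
    fa, sa, fb, sb = f(a), sum(a), f(b), sum(b)
    return fa >= fb and sa <= sb and (fa > fb or sa < sb)

n = len(sets)
pop = [tuple([False] * n)]            # start from the empty selection
for _ in range(2000):
    parent = random.choice(pop)
    child = tuple(b != (random.random() < 1.0 / n) for b in parent)
    if sum(child) > k:
        continue                      # respect the size constraint
    if child not in pop and not any(dominates(p, child) for p in pop):
        pop = [p for p in pop if not dominates(child, p)] + [child]

best = max(pop, key=f)
print(f(best), sum(best))
```

Keeping one non-dominated solution per size is what lets such EAs mimic the greedy argument behind the classical (1 - 1/e) guarantee for monotone submodular maximization; the paper proves guarantees for broader non-monotone and approximately submodular classes.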
Discovery Service for Jio Institute Digital Library