8 results for "Hyongsuk Kim"
Search Results
2. Accelerating projections to kernel-induced spaces by feature approximation
- Authors
- Krzysztof Ślot, Krzysztof Adamiak, and Hyongsuk Kim
- Subjects
- Training set, Computer science, Feature extraction, Kernel principal component analysis, Weighting, Kernel (linear algebra), Kernel method, Artificial Intelligence, Feature (computer vision), Kernel (statistics), Signal Processing, Computer Vision and Pattern Recognition, Cluster analysis, Algorithm, Software, Eigenvalues and eigenvectors
- Abstract
A method for speeding up data projections onto kernel-induced feature spaces (derived using, e.g., kernel Principal Component Analysis, kPCA) is presented. The proposed idea is to simplify the derived features, which are implicitly defined by all training samples and by the dominant eigenvectors of problem-specific generalized eigenproblems, through appropriate approximations. Instead of employing the whole training set, we propose to use a small pool of appropriately selected representatives, and we formulate a rule for deriving the corresponding weight vectors that replace the dominant eigenvectors. The representatives are determined by clustering the training data, whereas the weighting coefficients are chosen to minimize the approximation error of the original features. The concept has been verified experimentally for kernel PCA on both artificial and real datasets. The presented approach reduces feature-extraction complexity, and thus proportionally increases data projection speed, by one to two orders of magnitude without sacrificing data analysis accuracy. It is therefore well suited for kernel-based intelligent data analysis on resource-limited systems, such as embedded or IoT devices, and for systems where processing time is critical.
- Published
- 2020
- Full Text
- View/download PDF
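The approximation scheme this abstract describes can be sketched in a few lines: compute exact kPCA features, choose a small set of representatives by clustering, then solve a least-squares problem for the weight vectors that stand in for the dominant eigenvectors. The data, kernel width, and cluster count below are illustrative assumptions; the paper's own representative-selection and weighting rules may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row-sets A and B (gamma is an assumed width)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data: two Gaussian blobs
X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(2, 0.3, (60, 2))])
N = len(X)

# Exact kPCA: eigendecompose the centered kernel matrix
K = rbf(X, X)
J = np.eye(N) - np.ones((N, N)) / N
Kc = J @ K @ J
w, V = np.linalg.eigh(Kc)
alpha = V[:, -2:] / np.sqrt(w[-2:])   # top-2 components, standard kPCA normalization
F_exact = Kc @ alpha                  # exact projections of the training data

# Representatives via a few k-means iterations (stand-in for the clustering step)
M = 8
Z = X[rng.choice(N, M, replace=False)]
for _ in range(10):
    lab = ((X[:, None] - Z[None]) ** 2).sum(-1).argmin(1)
    Z = np.array([X[lab == j].mean(0) if (lab == j).any() else Z[j] for j in range(M)])

# Weight vectors: least squares so that k(x, Z) @ beta reproduces the exact features
beta, *_ = np.linalg.lstsq(rbf(X, Z), F_exact, rcond=None)
F_approx = rbf(X, Z) @ beta

rel_err = np.linalg.norm(F_approx - F_exact) / np.linalg.norm(F_exact)
print(round(float(rel_err), 3))
```

Projecting a new point now costs M = 8 kernel evaluations instead of N = 120, which is the one-to-two-orders-of-magnitude speedup the abstract refers to (here a 15x reduction).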
3. Hybrid no-propagation learning for multilayer neural networks
- Authors
- Krzysztof Slot, Michal Strzelecki, Hyongsuk Kim, Changju Yang, and Shyam Prasad Adhikari
- Subjects
- Computer Science::Machine Learning, Hardware architecture, Artificial neural network, Computer science, Cognitive Neuroscience, Computer Science::Neural and Evolutionary Computation, Backpropagation, Computer Science Applications, Artificial Intelligence, Delta rule, Learning rule, Benchmark (computing), Artificial intelligence, MNIST database
- Abstract
A hybrid learning algorithm suitable for hardware implementation of multilayer neural networks is proposed. Although backpropagation is a powerful learning method for multilayer neural networks, its hardware implementation is difficult because of the complexity of the neural synapses and of the operations involved in propagating errors backward. We propose a learning algorithm with performance comparable to backpropagation that is easier to implement in hardware for on-chip learning of multilayer neural networks. In the proposed algorithm, the network is trained with a hybrid of the gradient-based delta rule and a stochastic algorithm called Random Weight Change: the output-layer parameters are learned with the delta rule, whereas the inner-layer parameters are learned with Random Weight Change, so the whole multilayer network is trained without error backpropagation. Experimental results show that the hybrid rule performs better than either of its constituent algorithms alone and comparably to backpropagation on the benchmark MNIST dataset. A hardware architecture illustrating the ease of implementing the proposed rule in analog hardware, compared with backpropagation, is also presented.
- Published
- 2018
- Full Text
- View/download PDF
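The division of labor the abstract describes (delta rule on the output layer, Random Weight Change on the inner layer, no backpropagated error) can be sketched on a toy XOR task. The network size, rates, and perturbation amplitude below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: XOR, the classic test for multilayer learning
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
Xa = np.hstack([X, np.ones((4, 1))])        # append a bias input for the hidden layer

def forward(W1, W2, b2):
    H = np.tanh(Xa @ W1)                    # hidden layer
    Y = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))  # sigmoid output layer
    return H, Y

def mse(Y):
    return ((T - Y) ** 2).mean()

W1 = rng.normal(0, 0.5, (3, 6))             # inner layer: trained by Random Weight Change
W2 = rng.normal(0, 0.5, (6, 1))             # output layer: trained by the delta rule
b2 = np.zeros(1)

lr, amp = 0.5, 0.02
dW1 = rng.normal(0, amp, W1.shape)          # current RWC perturbation
prev_err = mse(forward(W1, W2, b2)[1])

for _ in range(20000):
    # Inner layer: Random Weight Change, no backpropagated error needed
    W1 += dW1
    err = mse(forward(W1, W2, b2)[1])
    if err >= prev_err:
        W1 -= dW1                           # undo and draw a fresh random perturbation
        dW1 = rng.normal(0, amp, W1.shape)
    # Output layer: gradient-based delta rule on the current hidden activations
    H, Y = forward(W1, W2, b2)
    delta = (T - Y) * Y * (1 - Y)
    W2 += lr * H.T @ delta
    b2 += lr * delta.sum(0)
    prev_err = mse(forward(W1, W2, b2)[1])

print(round(float(prev_err), 4))
```

Note that the inner layer only ever needs a forward evaluation of the loss, which is what makes the scheme attractive for analog on-chip learning.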
4. Building cellular neural network templates with a hardware friendly learning algorithm
- Authors
- Leon O. Chua, Hyongsuk Kim, Shyam Prasad Adhikari, and Changju Yang
- Subjects
- Hardware architecture, Cloning (programming), Multivariate random variable, Computer science, Cognitive Neuroscience, Image processing, Computer Science Applications, Image (mathematics), Template, Artificial Intelligence, Cellular neural network, Algorithm
- Abstract
A general solution for constructing Cellular Neural Network (CNN) weights (cloning templates) with the Random Weight Change (RWC) algorithm is proposed. For template learning, a target image is prepared for each input image via a sketch or any other image processing technique. A vector of small, randomly generated values is added to the current weights and tested on the input-target image pair. If the learning error decreases, the updated weights are kept for the next iteration and updated again with the same random vector; otherwise, a new random vector is generated. A key benefit of the proposed method is the simplicity of its learning algorithm and hence of its hardware architecture. Moreover, it provides a unified solution to learning CNN templates without modifying the original CNN structure and is applicable to all types of CNNs and input images. Successful learning of templates for various image processing tasks using different CNN structures is also demonstrated.
- Published
- 2018
- Full Text
- View/download PDF
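The RWC loop in this abstract is simple enough to sketch directly: perturb, keep the perturbation while it helps, redraw it when it stops helping. The sketch below learns a single 3x3 feedforward template against a known target, a deliberate simplification of a full CNN (no feedback template or nonlinear cell dynamics); the image size, perturbation amplitude, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2(img, tpl):
    """3x3 'same' correlation with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, float)
    for i in range(3):
        for j in range(3):
            out += tpl[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Input image and a target made with a known template (horizontal edge detector)
img = rng.random((16, 16))
true_tpl = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float)
target = conv2(img, true_tpl)

def err(tpl):
    return ((conv2(img, tpl) - target) ** 2).mean()

tpl = np.zeros((3, 3))
d = rng.normal(0, 0.05, (3, 3))      # current random perturbation vector
best = err(tpl)
for _ in range(5000):
    cand = tpl + d
    e = err(cand)
    if e < best:                     # error decreased: keep weights, reuse the same vector
        tpl, best = cand, e
    else:                            # otherwise regenerate a new random vector
        d = rng.normal(0, 0.05, (3, 3))

print(round(float(best), 4))
```

Reusing a successful perturbation acts like a crude line search, which is why RWC converges faster than fully independent random search.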
5. Seam-line determination for image mosaicking: A technique minimizing the maximum local mismatch and the global cost
- Authors
- Hyongsuk Kim, Jaechoon Chon, and Chun-Shin Lin
- Subjects
- Image mosaicking, Process (computing), Measure (mathematics), Atomic and Molecular Physics and Optics, Computer Science Applications, Image (mathematics), Line (geometry), Maximum difference, Computers in Earth Sciences, Engineering (miscellaneous), Dijkstra's algorithm, Algorithm, Selection (genetic algorithm), Mathematics
- Abstract
This paper presents a novel algorithm that selects seam-lines for mosaicking image patches. The technique uses Dijkstra's algorithm to find the seam-line that minimizes an objective function. Because a seam-line segment with significant mismatch, even a short one, is more visible than a long segment with small differences, directly summing the mismatch scores is inadequate: limiting the maximum difference along a seam-line should be part of the selection objective. The technique therefore first determines the desired level of maximum difference and then applies Dijkstra's algorithm to find the best seam-line. A quantitative measure for evaluating a seam-line, defined as the sum of a fixed number of the largest mismatch scores, is also proposed. The proposed algorithm is compared with other techniques, quantitatively and visually, on various types of images.
- Published
- 2010
- Full Text
- View/download PDF
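The two-stage idea (first bound the maximum local mismatch, then minimize the global cost under that bound) can be sketched with a binary search over a cap plus Dijkstra on the mismatch grid. The synthetic mismatch map, the cap search, and the neighbor set are assumptions for illustration; the paper's exact formulation may differ.

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)

# Mismatch map for an overlap region: absolute difference of two registered patches,
# with one column of strong mismatch that a good seam should avoid
A = rng.random((12, 12))
B = A + rng.normal(0, 0.05, A.shape)
B[:, 6] += 0.8
cost = np.abs(A - B)
H, W = cost.shape

def best_seam(cap):
    """Min-total-cost top-to-bottom seam using only cells with cost <= cap (None if impossible)."""
    dist = np.full((H, W), np.inf)
    pq = []
    for j in range(W):
        if cost[0, j] <= cap:
            dist[0, j] = cost[0, j]
            heapq.heappush(pq, (dist[0, j], 0, j))
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j]:
            continue                          # stale queue entry
        if i == H - 1:
            return d                          # first bottom-row cell popped is optimal
        for di, dj in ((1, -1), (1, 0), (1, 1), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and cost[ni, nj] <= cap:
                nd = d + cost[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, ni, nj))
    return None

# Step 1: binary-search the smallest achievable maximum local mismatch
lo, hi = 0.0, float(cost.max())
for _ in range(30):
    mid = (lo + hi) / 2
    if best_seam(mid) is not None:
        hi = mid
    else:
        lo = mid

# Step 2: among seams respecting that cap, Dijkstra gives the minimum global cost
total = best_seam(hi)
print(round(hi, 3), round(float(total), 3))
```

The cap `hi` converges to the minimax mismatch level, and the final Dijkstra pass picks the cheapest seam that never exceeds it, which is exactly the combination of local and global criteria the abstract advocates.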
6. Comparative study of Matrix exponential and Taylor series discretization methods for nonlinear ODEs
- Authors
- Dong Un An, Hyongsuk Kim, Kil To Chong, and Zheng Zhang
- Subjects
- Discretization, Mathematical analysis, Sampling (statistics), Nonlinear system, Hardware and Architecture, Simple (abstract algebra), Modeling and Simulation, Ordinary differential equation, Control system, Taylor series, Applied mathematics, Matrix exponential, Software, Mathematics
- Abstract
The discretization of nonlinear systems with input time-delay is of long-standing interest in control system studies. For this reason, two new discretization schemes are introduced, based on the matrix exponential and on the Taylor series, respectively, and analyzed to identify their advantages and differences. Extensive comparative simulations are performed on two systems, a simple first-order process and a second-order system, over various sampling rates, time-delays, and input signals. Concluding remarks are offered to facilitate the choice between the two methods for a specific system.
- Published
- 2009
- Full Text
- View/download PDF
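The difference between the two scheme families can be illustrated on the simplest case, a first-order process with zero-order-hold input: the matrix-exponential discretization reproduces the continuous solution exactly at the sample instants, while a truncated Taylor-series step carries a small per-step error. This linear, delay-free example is an assumption for illustration; the paper treats nonlinear systems with input time-delay.

```python
import numpy as np

# First-order process dx/dt = a*x + b*u, zero-order-hold step input u = 1
a, b, T = -2.0, 1.0, 0.1            # stable pole and sample time (assumed values)
u, x0, steps = 1.0, 0.0, 50

# Matrix-exponential (exact ZOH) discretization: x+ = Ad*x + Bd*u
Ad = np.exp(a * T)
Bd = (Ad - 1.0) / a * b

# Taylor-series discretization, truncated at order N
def taylor_step(x, u, N=3):
    # x+ = sum_{k=0..N} T^k/k! * x^(k), where x' = a*x + b*u and u is held constant,
    # so each further derivative is just a times the previous one
    out = x
    deriv = a * x + b * u
    fact = 1.0
    for k in range(1, N + 1):
        fact *= k
        out += (T ** k) / fact * deriv
        deriv = a * deriv
    return out

xe = xt = x0
for _ in range(steps):
    xe = Ad * xe + Bd * u
    xt = taylor_step(xt, u)

# Closed-form continuous solution at t = steps*T for comparison
xss = -b * u / a
true_final = (x0 - xss) * np.exp(a * T * steps) + xss
print(round(float(xe), 6), round(float(xt), 6), round(float(true_final), 6))
```

For nonlinear dynamics the matrix exponential is no longer exact either, which is what makes the comparative study in the paper worthwhile.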
7. Identification and Adaptive Control of Dynamic Systems Using Self-Organized Distributed Networks
- Authors
- Hyongsuk Kim, Young-Joo Moon, and Jong Soo Choi
- Subjects
- Nonlinear system, Class (computer programming), Identification (information), Adaptive control, Computer science, Distributed computing, System identification, Control engineering, Dynamical system
- Abstract
An adaptive control technique, using system identification based on Self-Organized Distributed Networks (SODNs), is presented for a class of discrete nonlinear dynamic systems with unknown dynamics. The SODN belongs to the category of distributed local learning networks and is composed of two main networks: a learning network and a distribution network. The learning network consists of subnets, each responsible for a subproblem; the distribution network decomposes the input space. Learning in the SODN is fast and precise because of its local learning mechanism. Methods for identification and for indirect adaptive control of nonlinear systems using the SODN are presented. Extensive simulations show that the SODN is effective for both identification and adaptive control of nonlinear dynamic systems.
- Published
- 1997
- Full Text
- View/download PDF
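The divide-and-conquer structure the abstract describes, a distribution network that routes each input to a region and a local subnet that learns only that region, can be sketched for a static identification problem. The nearest-center routing, the affine subnets, and the LMS updates below are illustrative assumptions; the paper's actual SODN architecture and its dynamic-system setting are richer than this.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown plant map to identify (a static stand-in; the paper treats dynamic systems)
f = lambda x: np.sin(2 * x) + 0.3 * x

# "Distribution network": decompose the input space into regions by nearest center
centers = np.linspace(-2, 2, 32)
def region(x):
    return int(np.abs(centers - x).argmin())

# "Learning network": one tiny affine subnet (slope, intercept) per region
slope = np.zeros(len(centers))
inter = np.zeros(len(centers))

lr = 0.1
for _ in range(4000):
    x = rng.uniform(-2, 2)
    r = region(x)
    y = slope[r] * (x - centers[r]) + inter[r]
    e = f(x) - y                      # only the responsible subnet is updated (local learning)
    slope[r] += lr * e * (x - centers[r])
    inter[r] += lr * e

xs = np.linspace(-1.9, 1.9, 100)
pred = np.array([slope[region(x)] * (x - centers[region(x)]) + inter[region(x)] for x in xs])
rmse = float(np.sqrt(((pred - f(xs)) ** 2).mean()))
print(round(rmse, 3))
```

Because each sample touches only one subnet's two parameters, updates are cheap and never interfere with what other regions have learned, which is the source of the fast, precise local learning claimed in the abstract.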
8. Basis function-based adaptive critic learning and its learning parameters selection
- Authors
- C. S. Lin and Hyongsuk Kim
- Subjects
- Scheme (programming language), Structure (mathematical logic), Artificial neural network, Computer science, Basis function, Machine learning, Computer Science Applications, Set (abstract data type), Connectionism, Modeling and Simulation, Adaptive system, Artificial intelligence, Selection (genetic algorithm)
- Abstract
An adaptive critic learning (ACL) structure consists of two modules, an action module and a critic module, and learning occurs in both. The critic module learns to evaluate the system status, transforming occasionally occurring failure signals into useful evaluation information; using this information, the action module learns the control technique. This paper investigates the use of basis functions (BFs) in ACL. One difficulty of the scheme is the selection of learning parameters: without a guideline, the best parameter set must be found through a large number of test simulations. This study analyzes the effects of the parameters and verifies the analytical results by simulation. In addition to parameter selection, the effects of measurement errors on CMAC-based ACL (a one-basis-function technique) are also examined and reported.
- Published
- 1995
- Full Text
- View/download PDF
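The CMAC representation named in the abstract can be sketched as tile coding trained with the delta rule, and the step-size scaling by the number of overlapping tilings illustrates the kind of learning-parameter choice the paper analyzes. The task, tiling counts, and rates below are assumptions for illustration; the full actor-critic loop and failure-signal handling are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# CMAC-style coder: several offset tilings over [0, 1); each input activates one tile per tiling
n_tilings, n_tiles = 8, 10
offsets = np.arange(n_tilings) / (n_tilings * n_tiles)

def active_tiles(x):
    return ((x + offsets) * n_tiles).astype(int) % n_tiles

W = np.zeros((n_tilings, n_tiles))           # one weight per tile (the basis-function coefficients)
rows = np.arange(n_tilings)
target = lambda x: np.sin(2 * np.pi * x)     # stand-in for the critic's evaluation function

lr = 1.0 / n_tilings                         # key parameter: step scaled by the number of active tiles
for _ in range(5000):
    x = rng.random()
    idx = active_tiles(x)
    e = target(x) - W[rows, idx].sum()
    W[rows, idx] += lr * e                   # delta-rule update on the active tiles only

xs = np.linspace(0.05, 0.95, 91)
pred = np.array([W[rows, active_tiles(x)].sum() for x in xs])
rmse = float(np.sqrt(((pred - target(xs)) ** 2).mean()))
print(round(rmse, 3))
```

With the step divided by the number of tilings, one sample corrects the local estimate exactly once; a larger step overshoots and a smaller one slows convergence, which is the trade-off a parameter-selection guideline has to resolve.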
Discovery Service for Jio Institute Digital Library