10 results for "Jiasen Wang"
Search Results
2. Task Assignment for Multivehicle Systems Based on Collaborative Neurodynamic Optimization
- Author
-
Jun Wang, Jiasen Wang, and Hangjun Che
- Subjects
Mathematical optimization, Artificial neural network, Linear programming, Artificial Intelligence, Computer Networks and Communications, Computer science, Task analysis, Software, Computer Science Applications, Task (project management)
- Abstract
This paper addresses task assignment (TA) for multivehicle systems. Multivehicle TA problems are formulated first as a combinatorial optimization problem and then as a global optimization problem. To fulfill heterogeneous tasks, cooperation among heterogeneous vehicles is incorporated into the problem formulations. A collaborative neurodynamic optimization approach is developed for solving the TA problems. Experimental results on four types of TA problems are discussed to substantiate the efficacy of the approach.
- Published
- 2020
- Full Text
- View/download PDF
3. Vocal cord lesions classification based on deep convolutional neural network and transfer learning
- Author
-
Jun Ju, Cai Sun, Yang Wang, Dongyan Huang, Yanda Wu, Qian Zhao, Jeremy Jianshuo-li Mahr, Yuqing He, and Jiasen Wang
- Subjects
Diagnostic methods, Receiver operating characteristic, Computer science, Deep learning, Laryngoscopy, Pattern recognition, General Medicine, Vocal Cords, Convolutional neural network, Machine Learning, ROC Curve, Computer-aided diagnosis, Area Under Curve, Artificial intelligence, Neural Networks, Computer, Transfer of learning, F1 score
- Abstract
PURPOSE Laryngoscopy, the most common diagnostic method for vocal cord lesions (VCLs), relies mainly on the subjective visual inspection of otolaryngologists. This study aimed to establish a highly objective computer-aided VCL diagnosis system based on a deep convolutional neural network (DCNN) and transfer learning. METHODS To classify VCLs, the method combined a DCNN backbone with transfer learning, fine-tuned specifically on a laryngoscopy image dataset. A laryngoscopy image database was collected to train the proposed system. The diagnostic performance was compared with that of other DCNN-based models. Analyses of the F1 score and receiver operating characteristic (ROC) curves were conducted to evaluate the performance of the system. RESULTS The proposed system achieved an overall accuracy of 80.23%, an F1 score of 0.7836, and an AUC of 0.9557 for four fine-grained classes of VCLs, namely normal, polyp, keratinization, and carcinoma. It also demonstrated robust classification capacity for distinguishing urgent (keratinization, carcinoma) from non-urgent (normal, polyp) cases, with an overall accuracy of 0.939, a sensitivity of 0.887, a specificity of 0.993, and an AUC of 0.9828. The proposed method also outperformed clinicians in the classification of normal, polyps, and carcinoma at an extremely low time cost. CONCLUSION The VCL diagnosis system succeeded in using a DCNN to distinguish the most common VCLs from normal cases, holding practical potential for improving the overall diagnostic efficacy of VCL examinations. The proposed system could be appropriately integrated into the conventional workflow of VCL laryngoscopy as a highly objective auxiliary method.
- Published
- 2021
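The binary urgent-vs-non-urgent figures reported above (accuracy 0.939, sensitivity 0.887, specificity 0.993) all derive from a confusion matrix. A minimal sketch, with hypothetical counts chosen only for illustration (not the paper's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, F1, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, f1, sensitivity, specificity

# Hypothetical counts for an urgent-vs-non-urgent split (illustration only)
acc, f1, sens, spec = binary_metrics(tp=85, fp=2, fn=11, tn=290)
```

With these made-up counts, sensitivity is 85/96 ≈ 0.885 and specificity is 290/292 ≈ 0.993, in the same ballpark as the reported figures.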
4. Multivehicle Task Assignment Based on Collaborative Neurodynamic Optimization With Discrete Hopfield Networks
- Author
-
Qing-Long Han, Jun Wang, and Jiasen Wang
- Subjects
Mathematical optimization, Computer Networks and Communications, Computer science, Population, Function (mathematics), Computer Science Applications, Task (project management), Hopfield network, Quadratic equation, Artificial Intelligence, Quadratic unconstrained binary optimization, Penalty method, Quadratic programming, Software
- Abstract
This article presents a collaborative neurodynamic optimization (CNO) approach to multivehicle task assignments (TAs). The original combinatorial quadratic optimization problem for TA is reformulated as a quadratic unconstrained binary optimization (QUBO) problem with a quadratic utility function and a penalty function for handling load capacity and cooperation constraints. In the framework of CNO with a population of discrete Hopfield networks (DHNs), a TA algorithm is proposed for solving the formulated QUBO problem. Superior experimental results in four typical multivehicle operation scenarios are reported to substantiate the efficacy of the proposed neurodynamics-based TA approach.
- Published
- 2021
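The QUBO-plus-penalty formulation described above can be illustrated on a toy instance. The sketch below shows only a single-network greedy descent step on a hand-built two-variable QUBO (the matrix `Q` and the bit-flip rule are illustrative assumptions, not the article's algorithm); the CNO framework additionally runs a population of discrete Hopfield networks and reinitializes them collaboratively:

```python
import random

def qubo_energy(Q, x):
    """E(x) = sum_ij Q[i][j] * x_i * x_j for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def dhn_descent(Q, x, sweeps=50, seed=0):
    """Greedy asynchronous single-bit updates: accept a flip whenever it lowers E."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(sweeps):
        for i in rng.sample(range(len(x)), len(x)):
            flipped = list(x)
            flipped[i] = 1 - flipped[i]
            if qubo_energy(Q, flipped) < qubo_energy(Q, x):
                x = flipped
    return x

# Toy instance: the penalty (x0 + x1 - 1)^2 expands (using x_i^2 = x_i for binary x)
# to E(x) = -x0 - x1 + 2*x0*x1 plus a constant, minimized when exactly one bit is on.
Q = [[-1, 2], [0, -1]]
x_star = dhn_descent(Q, [0, 0])
```

The penalty term here plays the same structural role as the load-capacity and cooperation penalties in the TA formulation: constraint violations raise the energy, so low-energy binary states are feasible assignments.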
5. Two-Timescale Multilayer Recurrent Neural Networks for Nonlinear Programming
- Author
-
Jun Wang and Jiasen Wang
- Subjects
Mathematical optimization, Quantitative Biology::Neurons and Cognition, Computer Networks and Communications, Computer science, Computer Science::Neural and Evolutionary Computation, Computer Science Applications, Nonlinear programming, Local optimum, Recurrent neural network, Artificial Intelligence, Transient (computer programming), Layer (object-oriented design), Software, Sequential quadratic programming
- Abstract
This article presents a neurodynamic approach to nonlinear programming. Motivated by the idea of sequential quadratic programming, a class of two-timescale multilayer recurrent neural networks is presented, with neuronal dynamics in their output layer operating on a longer timescale than in their hidden layers. In the two-timescale multilayer recurrent neural networks, the transient states in the hidden layer(s) undergo faster dynamics than those in the output layer. Sufficient conditions are derived for the convergence of the two-timescale multilayer recurrent neural networks to local optima of nonlinear programming problems. Simulation results of collaborative neurodynamic optimization based on the two-timescale neurodynamic approach, on global optimization problems with nonconvex objective functions or constraints, are discussed to substantiate the efficacy of the approach.
- Published
- 2020
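The two-timescale idea, hidden states relaxing much faster than the output state, can be illustrated on a toy singularly perturbed system (the specific dynamics below are an illustrative assumption, not the networks derived in the article):

```python
import math

def simulate_two_timescale(eps=0.01, dt=0.001, steps=20000):
    """Euler-integrate a toy singularly perturbed pair of states:
         eps * dz/dt = -z + tanh(y)   (fast "hidden" state)
               dy/dt = -y + z         (slow "output" state)
    The small time constant eps makes z track tanh(y) almost instantly,
    so the slow state effectively evolves on the manifold z = tanh(y).
    """
    y, z = 1.0, 0.0
    for _ in range(steps):
        z += dt / eps * (-z + math.tanh(y))
        y += dt * (-y + z)
    return y, z

y_end, z_end = simulate_two_timescale()
```

After the initial transient, the fast state sits on the slow manifold (z_end ≈ tanh(y_end)), which is the qualitative behavior the two-timescale design exploits.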
6. A Novel Cooperative Divide-and-Conquer Neural Networks Algorithm
- Author
-
Pan Wang, Yandi Zuo, Jiasen Wang, and Jian Zhang
- Subjects
Divide and conquer algorithms, Artificial neural network, Computer science, Artificial intelligence
- Abstract
Dynamic modularity is one of the fundamental characteristics of the human brain, and the cooperative divide-and-conquer strategy is a basic problem-solving approach. This chapter proposes a new subnet training method for modular neural networks, inspired by the principle of “an expert with other capabilities.” The key point of this method is that a subnet learns the neighbor data sets while fulfilling its main task: learning the objective data set. Additionally, a relative distance measure is proposed to replace the absolute distance measure used in the classical method, and its advantage is theoretically discussed. Both the methodology and an empirical study are presented. Two types of experiments, related respectively to the approximation problem and the prediction problem in nonlinear dynamic systems, are designed to verify the effectiveness of the proposed method. Compared with the classical learning method, the average testing error is dramatically decreased and more stable. The superiority of the relative distance measure is also corroborated. Finally, a mind-gut frame is proposed.
- Published
- 2020
- Full Text
- View/download PDF
7. Methodological Research for Modular Neural Networks Based on 'an Expert With Other Capabilities'
- Author
-
Jian Zhang, Pan Wang, and Jiasen Wang
- Subjects
Information Systems and Management, Artificial neural network, Computer science, Strategy and Management, Management Science and Operations Research, Modular design, Subnet, Measure (mathematics), Computer Science Applications, Task (project management), Set (abstract data type), Empirical research, Artificial intelligence, Business and International Management, Methodological research
- Abstract
This article presents a new subnet training method for modular neural networks, inspired by the principle of “an expert with other capabilities”. The key point of this method is that a subnet learns the neighbor data sets while fulfilling its main task: learning the objective data set. Additionally, a relative distance measure is proposed to replace the absolute distance measure used in the classical subnet learning method, and its advantage in the general case is theoretically discussed. Both the methodology and an empirical study of this new method are presented. Two types of experiments, related respectively to the approximation problem and the prediction problem in nonlinear dynamic systems, are designed to verify the effectiveness of the proposed method. Compared with the classical subnet learning method, the average testing error of the proposed method is dramatically decreased and more stable. The superiority of the relative distance measure is also corroborated.
- Published
- 2018
- Full Text
- View/download PDF
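The relative distance measure discussed above can be sketched in a toy 1-D setting. The centers, the normalization by the sum of distances, and the `neighbor_threshold` parameter are illustrative assumptions, not the article's exact formulation:

```python
def relative_distances(x, centers):
    """Distance of x to each subnet center, normalized by the total distance,
    making the measure invariant to the overall scale of the data (unlike the
    absolute distance used in the classical method)."""
    d = [abs(x - c) for c in centers]
    total = sum(d)
    return [di / total for di in d]

def assign(x, centers, neighbor_threshold=0.4):
    """Main subnet = smallest relative distance; a subnet also receives x as
    'neighbor' data when its relative distance falls below the threshold."""
    r = relative_distances(x, centers)
    main = min(range(len(r)), key=r.__getitem__)
    neighbors = [i for i, ri in enumerate(r) if i != main and ri < neighbor_threshold]
    return main, neighbors

main, neighbors = assign(1.0, [0.0, 5.0, 10.0])
```

Here subnet 0 takes the sample as its objective data, while subnet 1 (relative distance 4/14 ≈ 0.29) also sees it as neighbor data, matching the "expert with other capabilities" idea of subnets learning beyond their main task.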
8. Design and Analysis of Neural Networks Based on Linearly Translated Features
- Author
-
Jun Wang, Wei Zhang, and Jiasen Wang
- Subjects
Identification (information), ComputingMethodologies_PATTERNRECOGNITION, Quantitative Biology::Neurons and Cognition, Artificial neural network, Computer science, Computer Science::Neural and Evolutionary Computation, Feedforward neural network, ComputingMethodologies_GENERAL, Artificial intelligence, Translation (geometry)
- Abstract
In this paper, neural networks based on linearly translated features (LTFs) are presented. LTFs with uniform, non-uniform, and multiple translation vectors are embedded into feedforward neural networks. Learning algorithms are presented for the neural networks, and their learning capabilities are analyzed. Experimental results on approximation, identification, and evaluation problems are reported to substantiate the efficacy of the neural networks and learning algorithms.
- Published
- 2019
- Full Text
- View/download PDF
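The idea of embedding linearly translated features into a feedforward network can be sketched in a 1-D, single-output setting; the `tanh` nonlinearity and the uniform translation scheme below are illustrative assumptions, not necessarily the paper's design:

```python
import math

def ltf_features(x, translations):
    """Linearly translated features: the same nonlinearity applied to the
    input shifted by each translation value t (uniform 1-D case)."""
    return [math.tanh(x + t) for t in translations]

def ltf_network(x, translations, weights, bias=0.0):
    """Single-output feedforward network on top of the translated features."""
    feats = ltf_features(x, translations)
    return bias + sum(w * f for w, f in zip(weights, feats))

out = ltf_network(0.0, translations=[1.0, -1.0], weights=[1.0, 1.0])
```

Because only the output weights and bias need fitting once the translations are fixed, training such a network reduces to a linear least-squares problem in this sketch.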
9. Be expert in multiple aspects and good at many modular neural network with reduced subnet relative training complexity
- Author
-
Jiasen Wang, Chao Huang, and Xudong Ye
- Subjects
Artificial neural network, Generalization, Computer science, Machine learning, Modular neural network, Subnet, Identification (information), Dimension (vector space), Convergence (routing), Artificial intelligence, Performance improvement
- Abstract
This paper mainly aims at reducing the relative learning complexity of subnets originating from the “be Expert in Multiple aspects and Good at Many” (EMGM) modular neural network (MNN). Firstly, a subnet learning algorithm with a purely sequential execution style is built, and a convergence analysis is given. Secondly, in the EMGM MNN system, an equivalent learning condition, which satisfies the criterion required by the efficient training algorithm designed earlier, is found for every subnet. Three identification problems are used to test the effectiveness and efficiency of the new framework in dealing with low-dimensional data. Both theoretical and experimental results show that the new framework reduces the relative learning complexity of every subnet. The experimental results also show that the new framework achieves generalization capability comparable to that of the original one. Furthermore, bias-variance analysis shows that a maximum achievable improvement in the generalization performance of the EMGM MNN may exist, and that the improvement comes from improved accuracy of the bias estimation.
- Published
- 2013
- Full Text
- View/download PDF
10. A New MNN's Training Method with Empirical Study
- Author
-
Jiasen Wang and Pan Wang
- Subjects
Approximation theory, Artificial neural network, Computer science, Time delay neural network, Machine learning, Modular neural network, Subnet, Ensemble learning, Set (abstract data type), Task (computing), Artificial intelligence
- Abstract
Based on the idea of “to be expert in one aspect and good at many”, a new training method for modular neural networks (MNNs) is presented. The key point of this method is that a subnet learns the neighbor data sets while fulfilling its main task: learning the objective data set. Both the methodology and an empirical study of this new method are presented. Two examples (static approximation and nonlinear dynamic system prediction) are tested to show the new method's effectiveness: the average testing error is dramatically decreased compared with the original algorithm.
- Published
- 2012
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library