18 results for "Jiang, Yun"
Search Results
2. A Brain Tumor Segmentation New Method Based on Statistical Thresholding and Multiscale CNN
- Author
- Jiang, Yun, Hou, Jinquan, Xiao, Xiao, Deng, Haili, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Huang, De-Shuang, editor, Gromiha, M. Michael, editor, Han, Kyungsook, editor, and Hussain, Abir, editor
- Published
- 2018
- Full Text
- View/download PDF
3. Phylogenomic, morphological, and niche differentiation analyses unveil species delimitation and evolutionary history of endangered maples in Acer series Campestria (Sapindaceae).
- Author
- Fan, Xiao‐Kai, Wu, Jing, Comes, Hans Peter, Feng, Yu, Wang, Ting, Yang, Shu‐Zhen, Iwasaki, Takaya, Zhu, Hong, Jiang, Yun, Lee, Joongku, and Li, Pan
- Subjects
- MAPLE, ENDANGERED species, SAPINDACEAE, SPECIES, PRINCIPAL components analysis, BIODIVERSITY conservation
- Abstract
Accurate species delimitation is crucial for biodiversity conservation. The Acer series Campestria comprises four species, A. campestre L., A. miyabei Maxim., A. miaotaiense P. C. Tsoong, and A. yangjuechi Fang & P. L. Chiu. To clarify controversies over the taxonomic status of the latter three endangered species, we undertook phylogenomic, morphological, and niche differentiation analyses in series Campestria. Our coalescent species trees, based on 544 and 77 single‐copy nuclear genes, supported series Campestria as monophyletic, with A. yangjuechi having the closest relationship with A. miaotaiense. However, in the plastome‐derived tree based on 64 protein coding sequences, the four species did not cluster together, and each of them grouped with some other sympatric Acer species. Given this nuclear‐cytoplasmic conflict, we hypothesize that A. yangjuechi has been subject to nuclear gene introgression and plastid (pt) capture involving another sympatric maple, that is, A. amplum Rehder. Principal component analysis and machine learning based on morphological data could not separate A. yangjuechi and A. miaotaiense, but both could be clearly distinguished from A. miyabei. Moreover, niche overlap tests of the two more widespread species, A. miyabei and A. miaotaiense, showed that they clearly occupy distinct niches. Overall, we conclude that A. miyabei and A. miaotaiense are distinct species, while A. yangjuechi (endemic to Mt. Tianmu/East China) should be treated as a subspecies of A. miaotaiense. Our study shows that multiple lines of phylogenomic, morphological, and ecological evidence prove highly useful in species delimitation. Additionally, our results should help to inform conservation measures for endangered species of the genus Acer/series Campestria in East Asia. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Dermoscopic image segmentation based on Pyramid Residual Attention Module.
- Author
- Jiang, Yun, Cheng, Tongtong, Dong, Jinkun, Liang, Jing, Zhang, Yuan, Lin, Xin, and Yao, Huixia
- Subjects
- DEEP learning, IMAGE segmentation, PYRAMIDS, COMPUTER-aided diagnosis, CONVOLUTIONAL neural networks, DERMOSCOPY, FEATURE extraction
- Abstract
We propose a stacked convolutional neural network incorporating a novel and efficient pyramid residual attention (PRA) module for the automatic segmentation of dermoscopic images. Precise segmentation is a significant and challenging step for computer-aided diagnosis in skin lesion diagnosis and treatment. The proposed PRA has the following characteristics. First, it combines three widely used components: the pyramid structure extracts feature information of the lesion area at different scales, the residual connections ensure the efficiency of model training, and the attention mechanism screens for effective feature maps. Thanks to the PRA, our network can obtain precise boundary information that distinguishes healthy skin from diseased areas even in blurred lesion regions. Second, the proposed PRA can increase the segmentation ability of a single module for lesion regions through efficient stacking. Third, we incorporate the encoder-decoder idea into the architecture of the overall network. In contrast to traditional networks, we divide the segmentation procedure into three levels and construct the pyramid residual attention network (PRAN): the shallow layer mainly processes spatial information, the middle layer refines both spatial and semantic information, and the deep layer intensively learns semantic information. The basic module of PRAN is the PRA, which is sufficient to ensure the efficiency of the three-layer architecture. We extensively evaluate our method on the ISIC2017 and ISIC2018 datasets. The experimental results demonstrate that PRAN obtains segmentation performance comparable to state-of-the-art deep learning models under the same experimental conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
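The PRA module described in the abstract above combines three ingredients: multi-scale (pyramid) feature extraction, an attention gate, and a residual connection. A deliberately minimal, framework-free sketch of how those three pieces compose is given below; the pooling scales, the sigmoid gating, and all function names are illustrative assumptions, not the authors' implementation.

```python
import math

def avg_pool(x, k):
    """Average-pool a 2D map with a k x k window, stride 1, window clipped at borders."""
    h, w = len(x), len(x[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [x[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def pyramid_residual_attention(x, scales=(1, 3, 5)):
    """Toy PRA: pyramid of pooled maps -> sigmoid attention gate -> residual add."""
    h, w = len(x), len(x[0])
    # Pyramid branch: mean of the maps pooled at several scales.
    pooled = [avg_pool(x, k) for k in scales]
    pyr = [[sum(p[i][j] for p in pooled) / len(pooled) for j in range(w)]
           for i in range(h)]
    # Attention branch: squash pyramid features into (0, 1) gates.
    att = [[1.0 / (1.0 + math.exp(-pyr[i][j])) for j in range(w)] for i in range(h)]
    # Residual connection: input plus gated pyramid features.
    return [[x[i][j] + att[i][j] * pyr[i][j] for j in range(w)] for i in range(h)]
```

In a real network the pooling and gating would be learned convolutions over many channels; the sketch only shows why the output keeps the input's resolution while mixing in multi-scale context.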
5. Research on Prediction of Physical Fitness Test Results in Colleges and Universities Based on Deep Learning.
- Author
- Wang, Jiwen, Wu, Binghui, Jiang, Yun, and Yuan, Yidan
- Subjects
- PHYSICAL fitness testing, DEEP learning, SCHOOL children, UNIVERSITIES & colleges, SECONDARY school students, PHYSICAL mobility
- Abstract
The all-round development strategy of quality education asks primary and secondary school students not only to pursue better academic achievement but also to engage in physical exercise. Physical training is the material basis for students to study other disciplines; its core is to improve students' physical quality and strengthen their physiques, and a strong body gives students the stamina to study other courses. In recent years, college students in China have shown insufficient awareness of physical exercise and a serious decline in physical fitness, with many teenagers addicted to games and unhealthily glued to their mobile phones. To help college students break out of this lifestyle, colleges and universities carry out physical fitness tests every year to encourage contemporary college students to exercise. College students, as the main force in the future construction of the country, should not only master professional knowledge but also improve their physical fitness; good health is the greatest capital in one's life. Nevertheless, some students fail the university physical fitness test every year, even though college students are at the peak of their youth and the test should be easy for them. This paper analyzes the inconsistency between the predicted results and the actual results.
Against this background, and with the aim of improving students' training efficiency and physical performance, a deep learning model for predicting physical performance is designed and analyzed. The model is used to predict performance, to analyze its influencing factors and how to reduce their impact, and to compare the performance of various prediction models in order to find the best one, so as to bring the predicted values closer to the true values. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. MFI-Net: A multi-resolution fusion input network for retinal vessel segmentation.
- Author
- Jiang, Yun, Wu, Chao, Wang, Ge, Yao, Hui-Xia, and Liu, Wen-Huan
- Subjects
- RETINAL blood vessels, DEEP learning, FEATURE extraction, DIAGNOSIS
- Abstract
Segmentation of retinal vessels is important for doctors to diagnose some diseases, and segmentation accuracy can be effectively improved with deep learning methods. However, most existing methods extract shallow features incompletely, losing some superficial features, which results in blurred vessel boundaries and inaccurate segmentation of capillaries. At the same time, the "layer-by-layer" information fusion between encoder and decoder prevents feature information extracted in the shallow layers of the network from being smoothly transferred to the deep layers, introducing noise into the segmentation features. In this paper, we propose the MFI-Net (multi-resolution fusion input network) model to alleviate these problems to a certain extent. The multi-resolution input module in MFI-Net avoids the loss of coarse-grained feature information in the shallow layers by extracting local and global feature information at different resolutions. We have reconsidered the information fusion method between the encoder and the decoder, and use an information aggregation method to alleviate the isolation between the shallow and deep layers of the network. MFI-Net is verified on three datasets: DRIVE, CHASE_DB1, and STARE. The experimental results show that our network performs well on several metrics, with F1 higher than U-Net by 2.42%, 2.46%, and 1.61%, and higher than R2U-Net by 1.47%, 2.22%, and 0.08%, respectively. Finally, this paper demonstrates the robustness of MFI-Net through experiments and discussions of its stability and generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Image segmentation of retinal fundus vessels based on ensembled classified deep neural network.
- Author
- JIANG Yun, WANG Fa-lin, and ZHANG Hai
- Abstract
Retinal blood vessel detection has important clinical value in the diagnosis and treatment of fundus diseases. However, due to the complexity and diversity of fundus image features, most retinal segmentation methods suffer from problems such as low vessel-segmentation performance, weak resistance to noise interference, and sensitivity to lesions. Therefore, a pixel classification method based on an ensemble of classified deep neural networks is proposed. First, different residual network models are used to classify pixels and obtain vessel segmentation images. Second, through ensemble learning, the segmentation results of each model are combined to obtain the final retinal vessel segmentation image. Simulation results on the STARE, DRIVE, and CHASE datasets show segmentation accuracies of 97.36%, 95.57%, and 96.36%, specificities of 98.06%, 97.76%, and 97.84%, and F-measures of 84.98%, 82.25%, and 79.87%, with the F-measure 0.23%, 0.54%, and 0.59% higher than R2U_Net, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
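The ensemble step described in the abstract above, combining per-model vessel segmentations into a final image, can be sketched as a pixel-wise majority vote. This is one plausible reading of "the segmentation results of each model are processed", not necessarily the exact combination rule used in the paper.

```python
def ensemble_vote(masks):
    """Pixel-wise majority vote over binary segmentation masks from several models."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    # A pixel is vessel (1) if a strict majority of the models say so.
    return [[1 if sum(m[i][j] for m in masks) * 2 > n else 0
             for j in range(w)]
            for i in range(h)]
```

With an odd number of models the vote is always decisive; with an even number, ties fall to background here, a choice the abstract does not specify.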
8. Dense-sparse representation matters: A point-based method for volumetric medical image segmentation.
- Author
- Jiang, Yun, Liu, Bingxi, Zhang, Zequn, Yan, Yao, Guo, Huanting, and Li, Yuhang
- Subjects
- IMAGE segmentation, DIAGNOSTIC imaging, IMAGE analysis, DEEP learning, DIGITAL image processing, ARTIFICIAL neural networks, BRAIN tumors
- Abstract
Deep learning methods utilizing Convolutional Neural Networks (CNNs) and Transformers have achieved remarkable success in volumetric medical image analysis. Despite this success, the symmetrical structure of many networks pays insufficient attention to the encoding phase, and the large amount of memory occupied by voxels leads to unnecessary redundancy in the network. In this paper, we present a novel approach that handles volumetric medical images by converting them into point clouds, and introduce a new asymmetrical segmentation architecture. We propose a dual-path encoder that fully captures both dense and sparse representations of the input point cloud sampled from the volumes. Moreover, the two obtained representations are subtracted at the skip connection as a complementary feature during the decoding stage. Experimental results on the Brain Tumor Segmentation (BraTS) and Multi-sequence Cardiac MR Segmentation tasks demonstrate the great potential of our point-based method for volumetric medical image segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Efficient BFCN for Automatic Retinal Vessel Segmentation.
- Author
- Jiang, Yun, Wang, Falin, Gao, Jing, and Liu, Wenhuan
- Subjects
- GLAUCOMA diagnosis, ALGORITHMS, COLOR, DIABETIC retinopathy, DIAGNOSTIC imaging, EYE diseases, DIGITAL image processing, ARTIFICIAL neural networks, RECEIVER operating characteristic curves, RETINAL vein, RETINAL artery, DEEP learning, CATARACT diagnosis
- Abstract
Retinal vessel segmentation has high value for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNNs) lack large receptive fields or rich spatial information and cannot capture the global context of larger areas; as a result, it is difficult to identify the lesion area and segmentation efficiency is poor. This paper presents a butterfly fully convolutional neural network (BFCN). First, in view of the low contrast between blood vessels and the background in retinal blood vessel images, automatic color enhancement (ACE) is used to increase this contrast. Second, the multiscale information extraction (MSIE) module in the backbone network captures global contextual information over a larger area to reduce the loss of feature information. At the same time, the transfer layer (T_Layer) not only alleviates the vanishing-gradient problem and repairs information lost during downsampling, but also obtains rich spatial information. Finally, for the first time, the segmentation image is post-processed, using Laplacian sharpening to improve the accuracy of vessel segmentation. The method was verified on the DRIVE, STARE, and CHASE datasets, with accuracies of 0.9627, 0.9735, and 0.9688, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. Multi-Path Recurrent U-Net Segmentation of Retinal Fundus Image.
- Author
- Jiang, Yun, Wang, Falin, Gao, Jing, and Cao, Simin
- Subjects
- RETINAL blood vessels, RETINAL imaging, OPTIC disc, CONVOLUTIONAL neural networks, DIABETIC retinopathy
- Abstract
Diabetes can induce diseases including diabetic retinopathy, cataracts, glaucoma, etc., and the blindness caused by these diseases is irreversible. Early analysis of retinal fundus images, including optic disc and optic cup detection and retinal blood vessel segmentation, can effectively identify these diseases. Existing methods lack sufficient discrimination power for the fundus image and are easily affected by pathological regions. This paper proposes a novel multi-path recurrent U-Net architecture for the segmentation of retinal fundus images. The effectiveness of the proposed network structure was proved on two segmentation tasks: optic disc and optic cup segmentation, and retinal vessel segmentation. Our method achieved state-of-the-art results on the Drishti-GS1 dataset. For optic disc segmentation, the accuracy and Dice values reached 0.9967 and 0.9817, respectively; for optic cup segmentation, they reached 0.9950 and 0.8921, respectively. Our proposed method was also verified on the retinal blood vessel segmentation dataset DRIVE and achieved a good accuracy rate. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
11. Magnetic resonance fingerprinting review part 2: Technique and directions.
- Author
- McGivney, Debra F., Boyacıoğlu, Rasim, Jiang, Yun, Poorman, Megan E., Seiberlich, Nicole, Gulani, Vikas, Keenan, Kathryn E., Griswold, Mark A., and Ma, Dan
- Subjects
- MAGNETIC resonance, PATTERN matching, DEEP learning, RESOURCE recovery facilities, MACHINE learning
- Abstract
Magnetic resonance fingerprinting (MRF) is a general framework to quantify multiple MR-sensitive tissue properties with a single acquisition. There have been numerous advances in MRF in the years since its inception. In this work we highlight some of the recent technical developments in MRF, focusing on sequence optimization, modifications for reconstruction and pattern matching, new methods for partial volume analysis, and applications of machine and deep learning.
Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;51:993-1007. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. An image data augmentation algorithm based on convolutional neural networks.
- Author
- JIANG Yun, ZHANG Hai, CHEN Li, and TAO Sheng-xin
- Abstract
Improving generalization ability and reducing the risk of over-fitting are research focuses for deep convolutional neural networks. Occlusion is one of the critical factors affecting the generalization ability of convolutional neural networks, and it is usually hoped that trained models generalize well to occluded images. To reduce the over-fitting risk and improve model robustness in recognizing randomly occluded images, this paper proposes an activation-feature processing algorithm. During training, the input image is occluded according to the maximum activation feature map of a convolutional layer, and the occluded image is then fed back into the network as a new input to continue training the model. The experimental results show that the proposed algorithm improves the classification performance of multiple convolutional neural network models on different datasets, and the trained models are highly robust when identifying randomly occluded images. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
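The augmentation described in the abstract above, occluding the input at the location where a convolutional layer responds most strongly and then retraining on the occluded image, could look roughly like the sketch below. The patch size, fill value, and the simple coordinate mapping from activation map to image are assumptions for illustration, not the paper's exact procedure.

```python
def occlude_by_max_activation(image, act_map, patch=2, fill=0.0):
    """Blank out a patch of the input centred on the activation map's peak."""
    h, w = len(image), len(image[0])
    ah, aw = len(act_map), len(act_map[0])
    # Locate the strongest response in the (possibly smaller) activation map.
    mi, mj = max(((i, j) for i in range(ah) for j in range(aw)),
                 key=lambda p: act_map[p[0]][p[1]])
    # Map the peak back to image coordinates (nearest-cell upscaling).
    ci, cj = mi * h // ah, mj * w // aw
    out = [row[:] for row in image]
    # Overwrite a (2*patch+1)-wide square around the peak, clipped at borders.
    for i in range(max(0, ci - patch), min(h, ci + patch + 1)):
        for j in range(max(0, cj - patch), min(w, cj + patch + 1)):
            out[i][j] = fill
    return out
```

The occluded copy would then be fed through the network as an additional training input, forcing the model to rely on features outside its currently most-discriminative region.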
13. The optimization of parallel convolutional RBM based on Spark.
- Author
- Jiang, Yun, Zhuo, Junyu, Zhang, Juan, and Xiao, Xiao
- Subjects
- BOLTZMANN machine, SPEECH perception, IMAGE recognition (Computer vision), X-ray imaging, MODEL railroads, DEEP learning, AUTOMATIC speech recognition
- Abstract
With the extensive attention and research on deep learning, the convolutional restricted Boltzmann machine (CRBM) model, based on the restricted Boltzmann machine (RBM), is widely used in image recognition, speech recognition, etc. However, time-consuming training remains a non-negligible issue. To solve this problem, this paper optimizes parallel CRBM training on Spark, proposing a Spark-based parallel contrastive divergence algorithm and using it to train the CRBM model to improve training speed. The experiments show that the method is faster than the traditional sequential algorithm. We train the CRBM with this method and apply it to breast X-ray image classification; the experiments show that it improves both precision and training speed compared with the traditional algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
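The Spark-based parallelization described above follows a standard data-parallel pattern: each partition computes a local gradient estimate in a map phase, and the estimates are combined in a reduce phase before the weight update. A toy sequential sketch of that pattern is shown below; the gradient function is a stand-in so the skeleton stays self-contained, and real CRBM training would compute a contrastive-divergence gradient in its place.

```python
def parallel_update(weights, partitions, local_grad, lr=0.1):
    """One Spark-style update step: map a gradient over partitions, reduce by averaging."""
    grads = [local_grad(weights, part) for part in partitions]   # 'map' phase
    avg = [sum(g[k] for g in grads) / len(grads)                 # 'reduce' phase
           for k in range(len(weights))]
    return [wk - lr * gk for wk, gk in zip(weights, avg)]

# Stand-in gradient: pulls each weight toward the partition mean.
# (A CRBM would instead return the CD-k gradient estimated on this partition.)
def toy_grad(weights, part):
    mean = sum(part) / len(part)
    return [w - mean for w in weights]
```

On an actual Spark cluster the list comprehension over partitions would be an RDD `map`, and the averaging a `reduce`, so each partition's gradient is computed on a different worker.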
14. Multi-Scale and Multi-Branch Convolutional Neural Network for Retinal Image Segmentation.
- Author
- Jiang, Yun, Liu, Wenhuan, Wu, Chao, Yao, Huixiao, and Tiddeman, Bernard
- Subjects
- CONVOLUTIONAL neural networks, RETINAL blood vessels, IMAGE segmentation, OPTIC disc, ARTIFICIAL neural networks, RETINAL imaging, DATA mining, DEEP learning, FEATURE extraction
- Abstract
The accurate segmentation of retinal images is a basic step in screening for retinopathy and glaucoma. Most existing retinal image segmentation methods extract insufficient feature information; they are susceptible to the impact of lesion areas and poor image quality, resulting in poor recovery of contextual information and in segmentation results that are noisy and low in accuracy. Therefore, this paper proposes a multi-scale and multi-branch convolutional neural network (MSMB-Net) for retinal image segmentation. The model uses atrous convolutions with different expansion rates and skip connections to reduce the loss of feature information, while receptive fields of different sizes capture global context information. The model fully integrates shallow and deep semantic information and retains rich spatial information, and it embeds an improved attention mechanism to obtain more detailed information, which can improve segmentation accuracy. Finally, the method was validated on the fundus vascular datasets DRIVE, STARE, and CHASE, with accuracies/F1 of 0.9708/0.8320, 0.9753/0.8469, and 0.9767/0.8190, respectively. Its effectiveness was further validated on the optic disc and cup DRISHTI-GS1 dataset, with an accuracy/F1 of 0.9985/0.9770. Experimental results show that, compared with existing retinal image segmentation methods, our proposed method achieves good segmentation performance on all four benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Deep learning-based air temperature mapping by fusing remote sensing, station, simulation and socioeconomic data.
- Author
- Shen, Huanfeng, Jiang, Yun, Li, Tongwen, Cheng, Qing, Zeng, Chao, and Zhang, Liangpei
- Subjects
- DEEP learning, ATMOSPHERIC temperature, REMOTE sensing, ROBUST optimization, LAND surface temperature, SURFACE of the earth
- Abstract
Air temperature (Ta) is an essential climatological component that controls and influences various earth surface processes. In this study, we make the first attempt to employ deep learning for Ta mapping, based mainly on space remote sensing and ground station observations. Because Ta varies greatly in space and time and is sensitive to many factors, assimilation data and socioeconomic data are also included for a multi-source data fusion based estimation. Specifically, a five-layer deep belief network (DBN) is employed to better capture the complicated and non-linear relationships between Ta and the different predictor variables. A layer-wise pre-training process for essential feature extraction and a fine-tuning process for weight parameter optimization ensure robust prediction of the Ta spatio-temporal distribution. The DBN model was implemented for 0.01° daily maximum Ta mapping across China. Ten-fold cross-validation indicates that the DBN model achieves promising results, with an RMSE of 1.996 °C, MAE of 1.539 °C, and R of 0.986 at the national scale. Compared with multiple linear regression (MLR), a back-propagation neural network (BPNN), and the random forest (RF) method, the DBN model reduces the MAE by 1.340 °C, 0.387 °C, and 0.222 °C, respectively. Further analysis of the spatial distribution and temporal tendency of the prediction errors validates the great potential of DBNs for Ta estimation.
• Deep learning is utilized effectively to estimate Ta for the first time.
• Fusing remote sensing, station, simulation, and socioeconomic data makes sense.
• The 0.01° daily maximum Ta across China has been generated accurately.
[ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Decoding human brain activity with deep learning.
- Author
- Zheng, Xiao, Chen, Wanzhong, Li, Mingyang, Zhang, Tao, You, Yang, and Jiang, Yun
- Subjects
- ARTIFICIAL neural networks, BRAIN-computer interfaces, DEEP learning, ARTIFICIAL intelligence, VISUAL perception, BRAIN
- Abstract
Building a brain-computer fusion system that integrates biological intelligence and machine intelligence has become a research topic of great concern. Recent research has proved that human brain activity can be decoded from neurological data, while deep learning has become an effective way to solve practical problems. Taking advantage of these trends, in this paper we propose a novel method of decoding brain activity evoked by visual stimuli. To achieve this goal, we first introduce a combined long short-term memory and convolutional neural network (LSTM-CNN) architecture to extract compact category-dependent representations of electroencephalograms (EEG). Our approach combines the ability of the LSTM to extract sequential features with the capability of the CNN to distil local features. Next, we employ an improved spectral normalization generative adversarial network (SNGAN) to conditionally generate images from the learned EEG features. We evaluate our approach in terms of EEG classification accuracy and the quality of the generated images. The results show that the proposed LSTM-CNN algorithm, which discriminates object classes using EEG, can be more accurate than existing methods. In qualitative and quantitative tests, the improved SNGAN performs better in the task of generating conditional images from the learned EEG representations; the produced images are realistic and highly resemble the originals. Our method can reconstruct the content of visual stimuli according to the brain's response, and therefore helps to decode human brain activity via an image-EEG-image transformation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. Ensemble deep learning for automated visual classification using EEG signals.
- Author
- Zheng, Xiao, Chen, Wanzhong, You, Yang, Jiang, Yun, Li, Mingyang, and Zhang, Tao
- Subjects
- VISUAL learning, DEEP learning, BRAIN-computer interfaces, ELECTROENCEPHALOGRAPHY, CLASSIFICATION, BOOTSTRAP aggregation (Algorithms)
- Abstract
• An ensemble deep learning method is used to extract EEG features.
• Better classification performance is obtained.
• An automated visual classification project is developed.
This paper proposes an automated visual classification framework in which a novel analysis method for EEG signals, called LSTMS-B, guides the selection of multiple networks and thereby improves classification performance. LSTMS-B combines deep learning and ensemble learning to extract category-dependent representations of EEG signals. Specifically, it introduces the Swish activation function into the traditional LSTM, which reduces the effect of vanishing gradients and optimizes the training process, and it applies bagging to increase generalization. The LSTMS-B method reaches an average precision of 97.13% for learning EEG visual representations, greatly outperforming the traditional LSTM network and other contrast models. Then, to verify its application value, a ResNet-based regression is trained using the original images and the previously learned EEG representations. We use the output of the regression as features to classify the images, finally obtaining an average classification accuracy of 90.16%. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
18. Deep learning in environmental remote sensing: Achievements and challenges.
- Author
- Yuan, Qiangqiang, Shen, Huanfeng, Li, Tongwen, Li, Zhiwei, Li, Shuwen, Jiang, Yun, Xu, Hongzhang, Tan, Weiwei, Yang, Qianqian, Wang, Jiwen, Gao, Jianhao, and Zhang, Liangpei
- Subjects
- REMOTE sensing, DEEP learning, LAND surface temperature, OCEAN color, ENVIRONMENTAL monitoring, SOLAR radiation, HYDROLOGY, EARTH sciences
- Abstract
Various forms of machine learning (ML) methods have historically played a valuable role in environmental remote sensing research. With an increasing amount of "big data" from earth observation and rapid advances in ML, increasing opportunities for novel methods have emerged to aid in earth environmental monitoring. Over the last decade, a typical and state-of-the-art ML framework named deep learning (DL), which is developed from the traditional neural network (NN), has outperformed traditional models with considerable improvement in performance. Substantial progress in developing a DL methodology for a variety of earth science applications has been observed. Therefore, this review will concentrate on the use of the traditional NN and DL methods to advance the environmental remote sensing process. First, the potential of DL in environmental remote sensing, including land cover mapping, environmental parameter retrieval, data fusion and downscaling, and information reconstruction and prediction, will be analyzed. A typical network structure will then be introduced. Afterward, the applications of DL environmental monitoring in the atmosphere, vegetation, hydrology, air and land surface temperature, evapotranspiration, solar radiation, and ocean color are specifically reviewed. Finally, challenges and future perspectives will be comprehensively analyzed and discussed.
• The potential of deep learning (DL) in environmental remote sensing is analyzed.
• Typical DL network architectures in remote sensing applications are introduced.
• Progress on DL in remote sensing of ten more environmental parameters is reviewed.
• New insights on combining DL and physical/geographical laws are discussed.
[ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library