331 results for "Jechang Jeong"
Search Results
2. Despeckling algorithm for reducing speckle noise in images generated from active sensors
- Author
-
Jechang Jeong and Hyunho Choi
- Subjects
Synthetic aperture radar ,Computer science ,020208 electrical & electronic engineering ,Astrophysics::Instrumentation and Methods for Astrophysics ,Speckle noise ,02 engineering and technology ,Filter (signal processing) ,Composite image filter ,Edge detection ,Speckle pattern ,Noise ,Computer Science::Graphics ,Radar imaging ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Algorithm - Abstract
Synthetic aperture radar (SAR) images can be utilised in various fields because they are not affected by the time of day or weather conditions. However, in the process of employing active sensors to obtain a SAR image, speckle noise is generated in the image. Speckle noise degrades the ability of computer-vision systems to observe the Earth, so an algorithm for removing it is necessary to improve imaging performance. For this purpose, the authors propose a speckle-noise removal algorithm. A speckle reducing anisotropic diffusion (SRAD) filter is employed as a pre-processing filter, after which the multiplicative speckle noise is converted into additive noise using a logarithmic transformation. To remove the additive noise, they use a weighted guided image filter. Experimental results indicate that the proposed method exhibits improved speckle noise suppression and edge preservation compared with existing methods.
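As a rough illustration of the pipeline this abstract describes, the following NumPy sketch converts multiplicative speckle into additive noise with a logarithmic transform and denoises in the log domain. A plain (unweighted) guided filter stands in for the authors' weighted guided image filter, the SRAD pre-filtering step is omitted, and the function names and the `radius`/`eps` parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Plain (unweighted) guided filter; a stand-in for the weighted variant."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def despeckle(sar, eps=1e-6):
    """Multiplicative speckle -> additive noise via log, denoise, invert."""
    log_img = np.log(sar.astype(np.float64) + eps)   # additive-noise domain
    # (an SRAD pre-filter would normally be applied before this step)
    smoothed = guided_filter(log_img, log_img, radius=4, eps=0.04)
    return np.exp(smoothed) - eps
```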
- Published
- 2020
- Full Text
- View/download PDF
3. Speckle noise reduction for ultrasound images by using speckle reducing anisotropic diffusion and Bayes threshold
- Author
-
Jechang Jeong and Hyunho Choi
- Subjects
Discrete wavelet transform ,Image quality ,Anisotropic diffusion ,Computer science ,Signal-To-Noise Ratio ,030218 nuclear medicine & medical imaging ,Reduction (complexity) ,03 medical and health sciences ,Speckle pattern ,0302 clinical medicine ,Wavelet ,Humans ,Radiology, Nuclear Medicine and imaging ,Electrical and Electronic Engineering ,Instrumentation ,Ultrasonography ,Radiation ,business.industry ,Bayes Theorem ,Pattern recognition ,Speckle noise ,Filter (signal processing) ,Image Enhancement ,Condensed Matter Physics ,030220 oncology & carcinogenesis ,Anisotropy ,Artificial intelligence ,business ,Algorithms - Abstract
Ultrasound imaging has been used for diagnosing lesions in the human body. In the process of acquiring ultrasound images, speckle noise may occur, affecting image quality and auto-lesion classification. Despite the efforts to resolve this, conventional algorithms exhibit poor speckle noise removal and edge preservation performance. Accordingly, in this study, a novel algorithm is proposed based on speckle reducing anisotropic diffusion (SRAD) and a Bayes threshold in the wavelet domain. In this algorithm, SRAD is employed as a preprocessing filter, and the Bayes threshold is used to remove the residual noise in the resulting image. Experimental results showed that, compared to conventional filtering techniques, the proposed algorithm exhibited superior performance in terms of peak signal-to-noise ratio (average = 28.61 dB) and structural similarity (average = 0.778).
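A minimal sketch of the wavelet-domain part of this approach, assuming PyWavelets is available: detail subbands are soft-thresholded with the standard BayesShrink rule, with the noise power estimated from the finest diagonal subband. The SRAD pre-filtering stage and the authors' exact settings are omitted; the wavelet choice and decomposition depth are illustrative.

```python
import numpy as np
import pywt

def bayes_shrink(img, wavelet="db4", levels=3):
    """Soft-threshold wavelet detail coefficients with the BayesShrink rule."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    # Noise std estimated from the finest diagonal subband (robust MAD estimate).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sigma_y2 = np.mean(band ** 2)
            sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))
            t = sigma_n ** 2 / sigma_x                 # BayesShrink threshold
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```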
- Published
- 2019
- Full Text
- View/download PDF
4. Fast and Scalable Soft Decision Decoding of Linear Block Codes
- Author
-
Changryoul Choi and Jechang Jeong
- Subjects
Computer science ,Modeling and Simulation ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,Approximation algorithm ,020206 networking & telecommunications ,02 engineering and technology ,Electrical and Electronic Engineering ,Algorithm ,Linear code ,Decoding methods ,BCH code ,Computer Science Applications - Abstract
Ordered statistics-based decoding (OSD), which exhibits a near maximum likelihood decoding performance, suffers from huge computational complexity as the order increases. In this letter, we propose a fast and scalable OSD by considering the OSD as a fast searching problem. In the searching process, if the up-to-date minimum cost value is less than a predicted threshold value, then we can safely skip the search for the remaining higher orders. The computational complexity of the proposed algorithm converges quickly to that of order one, regardless of the maximum order. Compared with the probabilistic necessary conditions-based OSD, the proposed algorithm exhibits speed-up gains of a factor of approximately 2,740 (at 3.0 dB) for (127,64) BCH codes, with an indistinguishable decoding performance.
- Published
- 2019
- Full Text
- View/download PDF
5. NTIRE 2021 Challenge on Video Super-Resolution
- Author
-
Chen Guo, Siqian Yang, Ting Liu, Kelvin C.K. Chan, Tangxin Xie, Zekun Li, Dongliang He, Shijie Zhao, Boyuan Jiang, Ye Zhu, He Zheng, Yunhua Lu, Zhubo Ruan, Yu Li, Xueyang Fu, Junlin Li, Huanwei Liang, Jinjing Li, Chengpeng Chen, Shijie Yue, Hongying Liu, Xu Zhuo, Zhongyuan Wang, Konstantinos Konstantoudakis, Guodong Du, Ruixia Song, Seungjun Nah, Fu Li, Wenhao Zhang, Ruipeng Gang, Peng Yi, Ying Tai, Xiaozhong Ji, Yutong Wang, Donghao Luo, Kyoung Mu Lee, Chengjie Wang, Jechang Jeong, Peng Zhao, Chenghua Li, Xueyi Zou, Hanxi Liu, Junjun Jiang, Pablo Navarrete Michelini, Xueheng Zhang, Renjun Luo, Sourya Dipta Das, Xiaojie Chu, Yuchun Dong, Jie Zhang, Yuanyuan Liu, Shangchen Zhou, Yu Jia, Xinning Chai, Suyoung Lee, Xin Li, Lielin Jiang, Wenqing Chu, Qing Wang, Mengdi Sun, Qian Zheng, Mengxi Guo, Liangyu Chen, Li Chen, Chen Li, Zhiwei Xiong, Fenglong Song, Jeonghwan Heo, Qi Zhang, Li Song, Yixin Bai, Konstantinos Karageorgos, Anastasios Dimou, Yuxiang Chen, Ruisheng Gao, Zeyu Xiao, Zhen Cheng, Fanhua Shang, Petros Daras, Gen Zhan, Kui Jiang, Qingqing Dang, Xiaopeng Sun, Fanglong Liu, Jiayi Ma, Xiangyu Xu, Jia Hao, Nisarg Shah, Radu Timofte, Kassiani Zafirouli, Fanjie Shang, Zhipeng Luo, Yukai Shi, Geyingjie Wen, Feiyue Huang, Haining Li, Qichao Sun, Ruikang Xu, Yiming Li, Xin Lu, Saikat Dutta, Hao Jiang, Seungwoo Wee, Jilin Li, Xiaowei Song, Yuehan Yao, Zhiyu Chen, Chuming Lin, Longjie Shen, Sanghyun Son, Jing Lin, Fangxu Yu, Fei Chen, and Chen Change Loy
- Subjects
business.industry ,Computer science ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Frame rate ,Track (rail transport) ,Superresolution ,Task (project management) ,Challenging environment ,Pattern recognition (psychology) ,Computer vision ,Quality (business) ,Artificial intelligence ,business ,Image restoration ,media_common - Abstract
Super-Resolution (SR) is a fundamental computer vision task that aims to obtain a high-resolution clean image from the given low-resolution counterpart. This paper reviews the NTIRE 2021 Challenge on Video Super-Resolution. We present evaluation results from two competition tracks as well as the proposed solutions. Track 1 aims to develop conventional video SR methods focusing on the restoration quality. Track 2 assumes a more challenging environment with lower frame rates, casting it as a spatio-temporal SR problem. The two tracks had 247 and 223 registered participants, respectively. During the final testing phase, 14 teams competed in each track to achieve state-of-the-art performance on video SR tasks.
- Published
- 2021
- Full Text
- View/download PDF
6. Forward Warping-Based Video Frame Interpolation Using a Motion Selective Network
- Author
-
Jechang Jeong and Jeonghwan Heo
- Subjects
frame rate up-conversion ,optical flow ,flow warping ,deep learning ,Computer Networks and Communications ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,Electrical and Electronic Engineering - Abstract
Recently, deep neural networks have shown surprising results in solving most traditional image processing problems. However, video frame interpolation has not shown comparably good performance, because the receptive field must cover a vast spatio-temporal range. To reduce the computational complexity, most frame interpolation studies first estimate motion with the optical flow and then generate interpolated frames through backward warping. However, while backward warping is simple to implement, the interpolated image contains mixed-motion and ghosting defects. Therefore, we propose a new network that replaces backward warping with the proposed max-min warping. Because max-min warping generates clear warped images in advance according to the size of the motion, and the network is configured to select the warping result from the warped layers, the proposed method optimizes the computational complexity while selecting a contextually appropriate image. The video interpolation method using the proposed approach achieved a PSNR of 34.847 dB on the Vimeo90k dataset, a 0.13 dB improvement over the Quadratic Video Interpolation method, showing that it is an efficient self-supervised learning approach to frame interpolation.
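The abstract contrasts backward warping with forward warping. Below is a generic forward-warping (splatting) step in NumPy, not the authors' max-min warping: each source pixel is scattered along a scaled flow vector, colliding contributions are averaged, and a hole mask is returned for unfilled positions. The interpolation time `t` and grayscale input are illustrative assumptions.

```python
import numpy as np

def forward_warp(frame, flow, t=0.5):
    """Splat frame pixels forward along t * flow; average colliding pixels.
    frame: (H, W) grayscale, flow: (H, W, 2) with (dx, dy) per pixel."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.rint(xs + t * flow[..., 0]).astype(int)
    yt = np.rint(ys + t * flow[..., 1]).astype(int)
    valid = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    acc = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.float64)
    np.add.at(acc, (yt[valid], xt[valid]), frame[valid].astype(np.float64))
    np.add.at(cnt, (yt[valid], xt[valid]), 1.0)
    warped = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return warped, cnt == 0   # warped frame and hole mask
```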
- Published
- 2022
- Full Text
- View/download PDF
7. Fast Soft Decision Decoding of Linear Block Codes Using Partial Syndrome Search
- Author
-
Changryoul Choi and Jechang Jeong
- Subjects
Computer science ,Reliability (computer networking) ,010401 analytical chemistry ,0202 electrical engineering, electronic engineering, information engineering ,020206 networking & telecommunications ,02 engineering and technology ,01 natural sciences ,Hamming code ,Linear code ,Algorithm ,BCH code ,Decoding methods ,0104 chemical sciences - Abstract
Ordered statistics-based decoding (OSD) is a soft decision decoding algorithm for linear block codes, yielding near maximum likelihood decoding performance. The OSD algorithm first sorts the received symbols in descending order of reliability and partitions the sorted symbols into the most reliable bases (MRB) and least reliable bases (LRB). Owing to the nature of the symbol ordering in the LRB, we presume that the expected number of errors in the leftmost (or most significant) part of the LRB is relatively small compared to that in the other parts of the LRB. Based on this observation, we can omit impossible candidates in advance by using the Hamming weights of partial syndromes. This results in huge computational savings without compromising the decoding performance. Compared with OSD based on probabilistic necessary conditions and probabilistic sufficient conditions [3], [4], incorporation of the proposed algorithm into fast and scalable OSD [7] exhibits speed-up gains of a factor of approximately 405 (at 3.0 dB) for (127,64) BCH codes (maximum order 5), without compromising the decoding performance.
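A toy illustration of the pruning idea only: candidates whose partial syndrome has too large a Hamming weight are discarded before any full cost evaluation. The partial parity-check matrix `H_part`, the candidate words, and the weight limit are hypothetical, and the actual decision rule inside the paper's OSD is more involved than this sketch.

```python
import numpy as np

def partial_syndrome_weight(H_part, word):
    """Hamming weight of the syndrome restricted to a subset of parity checks (GF(2))."""
    return int(np.sum(H_part.dot(word) % 2))

def prune_candidates(H_part, candidates, max_weight):
    """Keep only candidate words whose partial-syndrome weight stays within max_weight."""
    return [c for c in candidates if partial_syndrome_weight(H_part, c) <= max_weight]
```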
- Published
- 2020
- Full Text
- View/download PDF
8. NTIRE 2020 Challenge on Image and Video Deblurring
- Author
-
Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee, Yu Tseng, Yu-Syuan Xu, Cheng-Ming Chiang, Yi-Min Tsai, Stephan Brehm, Sebastian Scherer, Dejia Xu, Yihao Chu, Qingyan Sun, Jiaqin Jiang, Lunhao Duan, Jian Yao, Kuldeep Purpohit, Maitreya Suin, A.N. Rajagopalan, Yuichi Ito, P.S. Hrishikesh, Densen Puthussery, K.A. Akhil, C.V. Jiji, Guisik Kim, P.L. Deepa, Zhiwei Xiong, Jie Huang, Dong Liu, Sangmin Kim, Hyungjoon Nam, Jisu Kim, Jechang Jeong, Shihua Huang, Yuchen Fan, Jiahui Yu, Haichao Yu, Thomas S. Huang, Ya Zhou, Xin Li, Sen Liu, Zhibo Chen, Saikat Dutta, Sourya Dipta Das, Shivam Garg, Daniel Sprague, Bhrij Patel, and Thomas Huck
- Subjects
Deblurring ,Relation (database) ,business.industry ,Computer science ,Motion blur ,Photography ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer vision ,Artificial intelligence ,business ,Image (mathematics) - Abstract
Motion blur is one of the most common degradation artifacts in dynamic scene photography. This paper reviews the NTIRE 2020 Challenge on Image and Video Deblurring. In this challenge, we present the evaluation results from 3 competition tracks as well as the proposed solutions. Track 1 aims to develop single-image deblurring methods focusing on restoration quality. In Track 2, the image deblurring methods are executed on a mobile platform to find a balance between running speed and restoration accuracy. Track 3 targets developing video deblurring methods that exploit the temporal relation between input frames. The three tracks had 163, 135, and 102 registered participants, respectively, and 9, 4, and 7 teams competed in the final testing phase. The winning methods demonstrate the state-of-the-art performance on image and video deblurring tasks.
- Published
- 2020
- Full Text
- View/download PDF
9. C3Net: Demoiréing Network Attentive in Channel, Color and Concatenation
- Author
-
Hyungjoon Nam, Jechang Jeong, Sangmin Kim, and Jisu Kim
- Subjects
Contextual image classification ,Artificial neural network ,Channel (digital image) ,Computer science ,business.industry ,Speech recognition ,Concatenation ,Feature extraction ,Artificial intelligence ,Image denoising ,business ,Image restoration - Published
- 2020
- Full Text
- View/download PDF
10. Joint Learning of Blind Video Denoising and Optical Flow Estimation
- Author
-
Jechang Jeong, Songhyun Yu, Junwoo Park, and Bum Jun Park
- Subjects
Artificial neural network ,Computer science ,business.industry ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Image (mathematics) ,Physics::Fluid Dynamics ,Optical flow estimation ,Computer vision ,Video denoising ,Noise (video) ,Artificial intelligence ,business ,Joint (audio engineering) - Abstract
Many deep-learning-based image/video denoising models have been developed, and recently, several approaches for training a denoising neural network without using clean images have been proposed. However, the Noise2Noise method requires paired noisy data, which are occasionally difficult to obtain, whereas other existing models trained using unpaired noisy data deliver limited performance. Obtaining an accurate optical flow from noisy videos is also a difficult task because conventional optical flow estimation methods are primarily focused on estimating the optical flow using clean videos. This study proposes a new framework to fine-tune video denoising and optical flow estimation networks using unpaired noisy videos. These two networks are jointly trained to realize synergy; an improvement in the denoising performance increases the accuracy of the flow estimation, and an improvement in the flow-estimation performance enhances the quality of the training data for the denoiser. Our experimental results reveal that the proposed approach outperforms the existing training schemes in video denoising and also provides accurate optical flows even when the videos contain a considerable amount of noise.
- Published
- 2020
- Full Text
- View/download PDF
11. NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results
- Author
-
Kyeongha Rho, Qiong Yan, Marcin Mozejko, Jong Hyun Kim, Abdelrahman Abdelhamed, Kyungmin Song, Ioannis Marras, Youliang Yan, Matteo Maggioni, Yunhua Lu, Jiye Liu, Songhyun Yu, Krzysztof Trojanowski, Gregory G. Slabaugh, Pengliang Tang, Wei Liu, Xiaoling Zhang, Han Junyu, Jaayeon Lee, Gang Zhang, Sungho Kim, Yanhong Wu, Yuzhi Zhao, Zhangyu Ye, Tingniao Wang, Wonjin Kim, Yaqi Wu, Bumjun Park, Shusong Xu, Lukasz Treszczotko, Yunchao Zhang, Xiaomu Lu, Jingtuo Liu, Yanwen Fan, Zengli Yang, Yue Cao, Thomas Tanay, Xiyu Yu, Wangmeng Zuo, Tomasz Latkowski, Teng Xi, Sabari Nathan, Chenghua Li, Siliang Tang, Sujin Kim, Magauiya Zhussip, Xiwen Lu, Changyeop Shin, Fengshuo Hu, Yanpeng Cao, Michal Szafraniuk, Jechang Jeong, Jiangxin Yang, Mahmoud Afifi, Baopu Li, Ziyao Zong, Shuangquan Wang, Zhilu Zhang, Bin Liu, Jungwon Lee, Nan Nan, Youngjung Kim, Zhihao Li, Rajat Gupta, Shuailin Lv, Nisarg A. Shah, Hwechul Cho, Radu Timofte, Changyuan Wen, Yanlong Cao, Thomas S. Huang, Azamat Khassenov, Wendong Chen, Myungjoo Kang, Long Bao, Yuchen Fan, Dongwoon Bai, Yuqian Zhou, Jang-Hwan Choi, Pablo Navarrete Michelini, Meng Liu, Yiyun Zhao, Vineet Kumar, Michael S. Brown, Chunxia Lei, Zhihong Pan, Han-Soo Choi, Shuai Liu, Errui Ding, and Priya Kansal
- Subjects
FOS: Computer and information sciences ,Computer science ,business.industry ,Computer Vision and Pattern Recognition (cs.CV) ,sRGB ,Noise reduction ,Image and Video Processing (eess.IV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,Pattern recognition ,02 engineering and technology ,Electrical Engineering and Systems Science - Image and Video Processing ,Color space ,Real image ,Image (mathematics) ,FOS: Electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,RGB color model ,Artificial intelligence ,Focus (optics) ,business ,021101 geological & geomatics engineering - Abstract
This paper reviews the NTIRE 2020 challenge on real image denoising with focus on the newly introduced dataset, the proposed methods and their results. The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising that was based on the SIDD benchmark. This challenge is based on newly collected validation and testing image datasets, and is hence named SIDD+. This challenge has two tracks for quantitatively evaluating image denoising performance in (1) the Bayer-pattern rawRGB and (2) the standard RGB (sRGB) color spaces. Each track had approximately 250 registered participants. A total of 22 teams, proposing 24 methods, competed in the final phase of the challenge. The proposed methods by the participating teams represent the current state-of-the-art performance in image denoising targeting real noisy images. The newly collected SIDD+ datasets are publicly available at: https://bit.ly/siddplus_data.
- Published
- 2020
- Full Text
- View/download PDF
12. Speckle noise reduction technique for SAR images using SRAD and gradient domain guided image filtering
- Author
-
Hyunho Choi and Jechang Jeong
- Subjects
Synthetic aperture radar ,Computer science ,business.industry ,Anisotropic diffusion ,Image quality ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Speckle noise ,Multiplicative noise ,Weighting ,Reduction (complexity) ,Speckle pattern ,Computer Science::Graphics ,Computer vision ,Artificial intelligence ,business - Abstract
In this paper, a novel algorithm is proposed using speckle reducing anisotropic diffusion (SRAD) and gradient domain guided image filtering (GDGIF) to reduce speckle in synthetic aperture radar (SAR) images. SRAD is suitable for reducing multiplicative noise in SAR images because it can directly process log-compressed data. Since GDGIF has edge-aware weighting, it is adaptively applied to SRAD result images to additionally reduce speckle noise. Experimental results demonstrate that the proposed algorithm, compared to existing filtering methods, shows excellent speckle noise reduction performance and a low computational complexity.
- Published
- 2020
- Full Text
- View/download PDF
13. Real time Demosaicking algorithm using derivative difference and curvature for digital camera
- Author
-
Jin Wang and Jechang Jeong
- Subjects
Demosaicing ,business.product_category ,Pixel ,Computer Networks and Communications ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Isophote ,Function (mathematics) ,Derivative ,Curvature ,Hardware and Architecture ,Computer Science::Computer Vision and Pattern Recognition ,Media Technology ,business ,Algorithm ,Software ,Smoothing ,ComputingMethodologies_COMPUTERGRAPHICS ,Sign (mathematics) ,Digital camera - Abstract
Many mobile devices adopt a single image sensor to acquire scene images. We propose an adaptive and effective demosaicking algorithm that uses the derivative difference and curvature to estimate the directional component needed to reconstruct the missing color pixels. We introduce a function to evaluate image complexity, composed of the derivative difference and an isophote smoothing term calculated from the sign of the image curvature.
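A sketch of the classic derivative-difference direction test that this kind of demosaicking builds on, for the missing green sample at a red/blue Bayer site. The authors' full complexity function and the curvature-sign term are not reproduced here; `cfa` is assumed to be a floating-point array to avoid integer overflow, and the function name is illustrative.

```python
import numpy as np

def green_at_rb(cfa, y, x):
    """Interpolate the missing green value at a red/blue Bayer site (y, x)
    by choosing the direction with the smaller derivative difference."""
    g_left, g_right = cfa[y, x - 1], cfa[y, x + 1]
    g_up, g_down = cfa[y - 1, x], cfa[y + 1, x]
    # Derivative differences: green gradient plus a second-order term of the known channel.
    dh = abs(g_left - g_right) + abs(2 * cfa[y, x] - cfa[y, x - 2] - cfa[y, x + 2])
    dv = abs(g_up - g_down) + abs(2 * cfa[y, x] - cfa[y - 2, x] - cfa[y + 2, x])
    if dh < dv:       # smoother horizontally -> interpolate along the row
        return (g_left + g_right) / 2 + (2 * cfa[y, x] - cfa[y, x - 2] - cfa[y, x + 2]) / 4
    if dv < dh:       # smoother vertically -> interpolate along the column
        return (g_up + g_down) / 2 + (2 * cfa[y, x] - cfa[y - 2, x] - cfa[y + 2, x]) / 4
    return (g_left + g_right + g_up + g_down) / 4
```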
- Published
- 2019
- Full Text
- View/download PDF
14. Color Filter Array Demosaicking Using Densely Connected Residual Network
- Author
-
Bum Jun Park and Jechang Jeong
- Subjects
General Computer Science ,Computer science ,color filter array interpolation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,convolutional neural network ,Image processing ,02 engineering and technology ,Residual ,01 natural sciences ,Convolutional neural network ,Image (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Demosaicing ,business.industry ,010401 analytical chemistry ,General Engineering ,deep learning ,Pattern recognition ,Demosaicking ,0104 chemical sciences ,020201 artificial intelligence & image processing ,Color filter array ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,business ,lcsh:TK1-9971 ,Interpolation - Abstract
Deep convolutional neural networks have been used extensively in recent image processing research, exhibiting drastically improved performance. In this study, we apply convolutional neural networks to color filter array demosaicking, which plays an essential role in single-sensor digital cameras. Contrary to conventional convolutional neural network-based demosaicking models, the proposed model does not require the initial interpolation step for mosaicked input images that increases the computational complexity. Using a mosaicked image as input, the proposed model is trained in an end-to-end manner to generate demosaicked output images. Many deep neural networks suffer from the vanishing-gradient problem, which makes them hard to train. To solve this problem, we apply residual learning and a densely connected convolutional neural network. Moreover, we apply block-wise convolutional neural networks to consider local features. Finally, we apply a sub-pixel interpolation layer to generate demosaicked output images more efficiently and accurately. Experimental results show that our proposed model outperforms conventional solutions and state-of-the-art models.
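A minimal PyTorch sketch of a sub-pixel (pixel-shuffle) output layer of the kind the abstract mentions. The 2x factor (e.g., for a Bayer mosaic packed into four half-resolution channels), the channel counts, and the module name are illustrative; this is not the paper's densely connected architecture.

```python
import torch
import torch.nn as nn

class SubPixelTail(nn.Module):
    """Map (B, C, H, W) features to a (B, 3, 2H, 2W) RGB image with a sub-pixel layer."""
    def __init__(self, in_channels=64, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 3 * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

feats = torch.randn(1, 64, 64, 64)   # example feature map from the body of a network
rgb = SubPixelTail()(feats)          # -> torch.Size([1, 3, 128, 128])
```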
- Published
- 2019
- Full Text
- View/download PDF
15. Local Excitation Network for Restoring a JPEG-Compressed Image
- Author
-
Jechang Jeong and Songhyun Yu
- Subjects
General Computer Science ,Computer science ,Image quality ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Convolutional neural network ,02 engineering and technology ,Lossy compression ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Quantization (image processing) ,Image resolution ,JPEG image restoration ,Image restoration ,Transform coding ,Quantization (signal processing) ,generative adversarial network ,General Engineering ,020206 networking & telecommunications ,computer.file_format ,JPEG ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,lcsh:TK1-9971 ,computer ,Algorithm - Abstract
Joint photographic experts group (JPEG) compression is a lossy compression method, and the degradation of image quality worsens at high compression ratios. Therefore, a reconstruction process is required for a visually pleasant image. In this paper, we propose an end-to-end deep learning architecture for restoring JPEG images with high compression ratios. The proposed architecture changes a core principle of the squeeze and excitation network for low-level vision tasks where pixel-level accuracy is important. Instead of extracting global features, our network extracts locally embedded features and fine-tunes each feature value by using depthwise convolution. To reduce the computational complexity and parameters with large receptive fields, we use a combination of the recursive structure and feature map down- and up-scaling processes. We also propose a compact version of the proposed model by decreasing the number of filters and simplifying the network, which has about one-twentieth of the parameters of the baseline model. Experimental results reveal that our network outperforms conventional networks quantitatively, and the restored images are clear with sharp edges and smooth blocking boundaries. Furthermore, the compact model shows higher objective results while maintaining a low number of parameters. In addition, at a high compression ratio, the overall information, including details in the blocks, is lost owing to high quantization errors. We apply a generative adversarial network structure to restore these highly damaged blocks, and the results reveal that the image produced has details similar to those of the ground truth.
- Published
- 2019
- Full Text
- View/download PDF
16. Multi-Color Space Network for Salient Object Detection
- Author
-
Kyungjun Lee and Jechang Jeong
- Subjects
Image Interpretation, Computer-Assisted ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,salient object detection ,multi-color space learning ,fully convolutional network ,atrous spatial pyramid pooling module ,attention module ,Color ,Electrical and Electronic Engineering ,Biochemistry ,Instrumentation ,Atomic and Molecular Physics, and Optics ,Analytical Chemistry - Abstract
The salient object detection (SOD) technology predicts which object will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network. However, owing to the variety of factors that affect visual saliency, securing sufficient features from a single color space is difficult. Therefore, in this paper, we propose a multi-color space network (MCSNet) to detect salient objects using various saliency cues. First, the images were converted to HSV and grayscale color spaces to obtain saliency cues other than those provided by RGB color information. Each saliency cue was fed into two parallel VGG backbone networks to extract features. Contextual information was obtained from the extracted features using atrous spatial pyramid pooling (ASPP). The features obtained from both paths were passed through the attention module, and channel and spatial features were highlighted. Finally, the final saliency map was generated using a step-by-step residual refinement module (RRM). Furthermore, the network was trained with a bidirectional loss to supervise saliency detection results. Experiments on five public benchmark datasets showed that our proposed network achieved superior performance in terms of both subjective results and objective metrics.
- Published
- 2022
- Full Text
- View/download PDF
17. Computer Vision-based Method to Detect Fire Using Color Variation in Temporal Domain
- Author
-
Jechang Jeong, Jiyeon Kim, SungHwan Kim, and Ung Hwang
- Subjects
Variation (linguistics) ,business.industry ,Computer science ,Computer vision ,General Medicine ,Artificial intelligence ,business ,Domain (software engineering) - Published
- 2018
- Full Text
- View/download PDF
18. Despeckling Images Using a Preprocessing Filter and Discrete Wavelet Transform-Based Noise Reduction Techniques
- Author
-
Jechang Jeong and Hyunho Choi
- Subjects
Discrete wavelet transform ,Synthetic aperture radar ,Computer science ,Anisotropic diffusion ,business.industry ,Noise reduction ,010401 analytical chemistry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Speckle noise ,02 engineering and technology ,Filter (signal processing) ,01 natural sciences ,Multiplicative noise ,0104 chemical sciences ,Noise ,Speckle pattern ,Computer Science::Graphics ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Instrumentation - Abstract
Synthetic aperture radar (SAR) images are difficult to analyze due to speckle noise, which is a characteristic of multiplicative noise. Over the last few decades, a number of studies have been performed regarding the removal of speckle noise. However, the existing studies exhibit edge information loss when removing speckle noise. In this paper, we propose an algorithm using speckle reducing anisotropic diffusion (SRAD), soft thresholding, and a guided filter to effectively remove speckle noise from SAR images while preserving edge information. The proposed algorithm first obtains a filtered image by applying an SRAD filter to the noisy image. To further remove the multiplicative noise remaining in the filtered image, a logarithmic transformation is applied to convert it into additive noise. The filtered image is then decomposed into multiresolution images using the discrete wavelet transform (DWT). Soft thresholding is applied to the high-frequency subimages and a guided filter to the low-frequency subimage. Then, an inverse DWT and an exponential transform are applied to obtain the denoised image. The experimental results indicate that the proposed algorithm performs better than conventional filtering methods in terms of both objective and subjective quality.
- Published
- 2018
- Full Text
- View/download PDF
19. Hash rearrangement scheme for HEVC screen content coding
- Author
-
Jechang Jeong and Ilseung Kim
- Subjects
Theoretical computer science ,Generation time ,Computer science ,Computation ,Hash function ,020206 networking & telecommunications ,02 engineering and technology ,Hash table ,Algorithmic efficiency ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Algorithm ,Encoder ,Software ,Random access ,Coding (social sciences) - Abstract
This study presents a hash rearrangement scheme to improve the coding efficiency of high-efficiency video coding for screen content (HEVC-SCC) by sharing hashes of the inter-search with intra block copy (IBC). Of the various methods introduced during the HEVC-SCC development, the IBC search technique can yield tremendous coding gains, but creates a massive computational burden on the encoder side. The authors propose an effective way to generate the IBC hash table that avoids the redundant operations required for hash entry computation. Moreover, the authors propose a hash rearrangement scheme to apply the second hashes used in the inter-search to IBC, together with a corresponding IBC search method, to reduce the computational burden and improve the coding efficiency. The experimental results show that, compared with the HEVC-SCC test model (SCM)-8.0, the proposed algorithm achieves an 80% time reduction for IBC hash generation itself, and can save 9-30% of the hash generation time even when taking into account the proposed second hash generation. It can also reduce the hash-based IBC search time by 14.61%. Furthermore, the proposed algorithm can achieve Bjontegaard delta bit rate savings of -0.66, -0.45 and -0.66% on average for the all intra, low-delay, and random access coding structures, respectively.
- Published
- 2018
- Full Text
- View/download PDF
20. Fast CU size decision algorithm using machine learning for HEVC intra coding
- Author
-
Jechang Jeong and Dokyung Lee
- Subjects
Computational complexity theory ,business.industry ,Computer science ,Brute-force search ,020207 software engineering ,Sobel operator ,02 engineering and technology ,Linear discriminant analysis ,Machine learning ,computer.software_genre ,Algorithmic efficiency ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,computer ,Encoder ,Algorithm ,Software ,Data compression ,Coding (social sciences) - Abstract
High Efficiency Video Coding (HEVC) is a state-of-the-art video compression standard which improves coding efficiency significantly compared with the previous coding standard, H.264/AVC. In the HEVC standard, novel technologies consuming massive computational power are adopted, such as quad-tree-based coding unit (CU) partitioning. Although an HEVC encoder can efficiently compress various video sequences, the computational complexity of an exhaustive search has become a critical problem in HEVC encoder implementation. In this paper, we propose a fast algorithm for the CU partitioning process of the HEVC encoder using machine learning methods. A complexity measure based on the Sobel operator and rate-distortion costs are defined as features for our algorithm. A CU size can be determined early by employing Fisher's linear discriminant analysis and the k-nearest neighbors classifier. The statistical data used by the proposed algorithm are updated in an adaptive online learning phase. The experimental results show that the proposed algorithm can reduce encoding time by approximately 54.0% with a 0.68% Bjontegaard-Delta bit-rate increase.
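A hedged sketch of the decision machinery the abstract names, with made-up training data: a Sobel-based complexity feature plus a rate-distortion-cost feature feed Fisher's linear discriminant analysis and a k-nearest-neighbours classifier, and the early CU decision is taken only when the two classifiers agree. The feature values, labels, and thresholds here are hypothetical, not those learned online by the paper.

```python
import numpy as np
from scipy.ndimage import sobel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def cu_complexity(block):
    """Sobel-based texture complexity of a CU luma block."""
    b = block.astype(np.float64)
    return float(np.mean(np.abs(sobel(b, axis=0)) + np.abs(sobel(b, axis=1))))

# Hypothetical training data: [complexity, RD cost] features, label 1 = split the CU.
X_train = np.array([[5.0, 1200.0], [42.0, 9800.0], [8.0, 1500.0], [55.0, 12000.0]])
y_train = np.array([0, 1, 0, 1])

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

def early_cu_decision(block, rd_cost):
    """Return 0 (keep), 1 (split), or None (fall back to the full RD search)."""
    feat = np.array([[cu_complexity(block), rd_cost]])
    p_lda, p_knn = int(lda.predict(feat)[0]), int(knn.predict(feat)[0])
    return p_lda if p_lda == p_knn else None
```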
- Published
- 2018
- Full Text
- View/download PDF
21. Novel blind interleaver parameters estimation based on Hamming weight distribution of linear codes
- Author
-
Jechang Jeong, Seungwoo Wee, and Changryoul Choi
- Subjects
Interleaving ,Rank (linear algebra) ,Computational complexity theory ,Estimation theory ,Applied Mathematics ,Context (language use) ,Binomial distribution ,symbols.namesake ,Computational Theory and Mathematics ,Gaussian elimination ,Artificial Intelligence ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Hamming weight ,Algorithm ,Computer Science::Information Theory ,Mathematics - Abstract
Interleaving techniques are used to improve the probability of error correction in communication systems. In a non-cooperative context, interleaver parameters must be determined first so that the received data can be decoded into relevant information. This paper proposes a blind interleaver parameter estimation method based on the Hamming weight distribution. Conventional methods based on rank distributions suffer from the high computational complexity of Gaussian elimination. In this study, we exploit the fact that the Hamming weight distributions of linear codes differ from those of random sequences owing to the linear dependence of linear codes. By exploiting this property, the proposed algorithm can estimate the interleaver period without a rank calculation. The χ² test statistic is used to estimate the interleaver period by finding the candidate period whose Hamming weight distribution differs the most from the binomial distribution. The experimental results indicate that the proposed algorithm outperforms conventional methods.
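A minimal sketch of the period search described here: for each candidate period, the bitstream is cut into blocks, the Hamming-weight histogram is compared against the binomial distribution of random bits with a chi-squared statistic, and the period with the largest deviation wins. The minimum block count is illustrative, and a real implementation would normally pool low-probability tail bins before the chi-squared computation.

```python
import numpy as np
from scipy.stats import binom

def estimate_interleaver_period(bits, candidate_periods):
    """Pick the block length whose Hamming-weight histogram deviates most
    (chi-squared statistic) from the binomial law expected for random bits."""
    bits = np.asarray(bits, dtype=int)
    best_p, best_score = None, -1.0
    for p in candidate_periods:
        n_blocks = len(bits) // p
        if n_blocks < 10:                    # illustrative minimum sample size
            continue
        weights = bits[: n_blocks * p].reshape(n_blocks, p).sum(axis=1)
        observed = np.bincount(weights, minlength=p + 1)
        expected = n_blocks * binom.pmf(np.arange(p + 1), p, 0.5)
        score = np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))
        if score > best_score:
            best_p, best_score = p, score
    return best_p
```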
- Published
- 2021
- Full Text
- View/download PDF
22. Wavelet-content-adaptive BP neural network-based deinterlacing algorithm
- Author
-
Jin Wang and Jechang Jeong
- Subjects
Artificial neural network ,Pixel ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wavelet transform ,020206 networking & telecommunications ,02 engineering and technology ,Backpropagation ,Theoretical Computer Science ,Wavelet ,Deinterlacing ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Geometry and Topology ,Algorithm ,Software ,Interpolation - Abstract
In this paper, we introduce an intra-field deinterlacing algorithm based on a wavelet-content-adaptive back propagation (BP) neural network (BP-NN) using pixel classification. During interpolation, different image features, such as smooth regions, edges, and textures, have completely different properties. We use the wavelet transform to divide the images into several pieces with different properties. Each piece then contains similar image features, and each is assigned to one neural network. The BP-NN-based deinterlacing algorithm can reduce blurring by recovering the missing pixels via a learning process. Compared with existing deinterlacing algorithms, the proposed algorithm improves the peak signal-to-noise ratio and visual quality while maintaining high efficiency.
- Published
- 2017
- Full Text
- View/download PDF
23. Video thumbnail extraction for HEVC
- Author
-
Wonjin Lee, Jechang Jeong, and Gwanggil Jeon
- Subjects
Pixel ,Computer science ,business.industry ,020208 electrical & electronic engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Thumbnail ,020206 networking & telecommunications ,02 engineering and technology ,Quality enhancement ,Upsampling ,Low complexity ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,Bitstream ,business ,Software ,Coding (social sciences) - Abstract
This paper proposes a thumbnail quality enhancement algorithm that uses a predefined weight table. Conventional thumbnail extraction algorithms in high efficiency video coding use a simple downsampling method to produce thumbnail images with low complexity, resulting in thumbnail quality deterioration. The proposed algorithm estimates the original average values of each thumbnail pixel using a weighted average value of several pixels, based on intra-mode direction. The proposed method improves the visual quality of thumbnail images while maintaining low complexity.
- Published
- 2017
- Full Text
- View/download PDF
24. Fast intra coding unit decision for high efficiency video coding based on statistical information
- Author
-
Jechang Jeong and Dokyung Lee
- Subjects
Computational complexity theory ,Computer science ,Real-time computing ,020206 networking & telecommunications ,02 engineering and technology ,Decision rule ,Coding tree unit ,Algorithmic efficiency ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Algorithm ,Encoder ,Software ,Context-adaptive binary arithmetic coding ,Coding (social sciences) ,Data compression - Abstract
The latest video compression standard is known as high efficiency video coding (HEVC). It supports high-resolution video sequences and has better coding performance than the previous standard, H.264/AVC. A quad-tree based coding unit (CU) partitioning process is one of the most efficient technologies used in an HEVC encoder. A coding tree unit (typically 64×64) can be split into smaller CUs based on rate-distortion optimization, allowing various types of video content to be adaptively compressed. In addition, intra prediction in the HEVC standard supports 35 prediction modes (planar, DC, and 33 angular modes) to improve coding efficiency. However, the computational complexity becomes a critical problem when implementing an HEVC encoder. Thus, a fast CU size decision algorithm for intra prediction in an HEVC encoder is proposed in this study. We utilize image complexity and adaptive depth prediction for early split-CU decision making. In addition, the Bayesian decision rule and quadratic discriminant analysis are used for early termination of the CU partitioning process. Experimental results show that our proposed algorithm considerably reduces encoding time, by approximately 55.47%, with only a small BD-BR loss (1.01%) compared to the HEVC reference software HM 16.0. Highlights: predicted depth and variance difference are exploited for early-split CU detection; the CU partitioning process can be terminated by the Bayesian decision rule and quadratic discriminant analysis; using an online learning system, the thresholds of the proposed algorithm are periodically updated; the proposed algorithm reduces encoding time by up to 55.47% with a 1.01% BD-BR loss.
- Published
- 2017
- Full Text
- View/download PDF
25. Fast CU Size Decision for HEVC Intra Coding by Using Local Characteristics and RD Costs
- Author
-
Jechang Jeong and Dokyung Lee
- Subjects
Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,Algorithm ,Coding (social sciences) - Published
- 2017
- Full Text
- View/download PDF
26. Deblocking performance analysis of weak filter on versatile video coding
- Author
-
Jechang Jeong and J. Lee
- Subjects
Deblocking filter ,Computer science ,020208 electrical & electronic engineering ,0202 electrical engineering, electronic engineering, information engineering ,02 engineering and technology ,Boundary value problem ,Electrical and Electronic Engineering ,Algorithm ,Coding (social sciences) - Abstract
The deblocking filter of the versatile video coding (VVC) standard is included in the form of an in-loop filter, as in the high efficiency video coding (HEVC) standard. The specifics of the VVC deblocking filter remain largely unchanged from those of the HEVC standard. The presence and type of deblocking filter depend on the quantisation parameters and the conditions at the block boundary, and the calculations for these boundary condition checks introduce considerable complexity. This work attempts to verify the conditions under which the weak filter is applied and its effectiveness, to propose an efficient judgement condition, and to improve the filter. The scope for improving the parameters of the conventional in-loop filter is evaluated, and it is proposed that the compression efficiency of the weak-filter part should be improved.
- Published
- 2020
- Full Text
- View/download PDF
27. AIM 2019 Challenge on RAW to RGB Mapping: Methods and Results
- Author
-
Jechang Jeong, Jie Li, Jun-Pyo Hong, Chang Zhou, Jingjing Xiong, Jiajie Zhang, Rui Huang, Radu Timofte, Kangfu Mei, Muhammad Haris, Kwang-Hyun Uhm, Seo-Won Ji, Seung Wook Kim, Greg Shakhnarovich, Weifeng Ou, Wing Yin Yu, Sung-Jin Cho, Yuzhi Zhao, Andrey Ignatov, Sung-Jea Ko, Songhyun Yu, Haoyu Wu, Norimichi Ukita, Zhang Yujia, Sangmin Kim, Tiantian Zhang, Yubin Yubin, Xiang Shi, Lai-Man Po, Zongbang Liao, Juncheng Li, Bum Jun Park, Pengfei Xian, and Bingxin Hou
- Subjects
Demosaicing ,Computer science ,Structural similarity ,business.industry ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Visualization ,Gamma correction ,Metric (mathematics) ,RGB color model ,Computer vision ,Artificial intelligence ,business ,Focus (optics) ,Image resolution - Abstract
This paper reviews the first AIM challenge on mapping camera RAW to RGB images with the focus on proposed solutions and results. The participating teams were solving a real-world photo enhancement problem, where the goal was to map the original low-quality RAW images from the Huawei P20 device to the same photos captured with the Canon 5D DSLR camera. The considered problem embraced a number of computer vision subtasks, such as image demosaicing, denoising, gamma correction, image resolution and sharpness enhancement, etc. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with solutions' perceptual results measured in a user study. The proposed solutions significantly improved baseline results, defining the state-of-the-art for RAW to RGB image restoration.
- Published
- 2019
- Full Text
- View/download PDF
28. PoSNet: 4x Video Frame Interpolation Using Position-Specific Flow
- Author
-
Jechang Jeong, Songhyun Yu, and Bum Jun Park
- Subjects
Computer science ,business.industry ,Image quality ,Interpolation (computer graphics) ,Feature extraction ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Estimator ,020207 software engineering ,02 engineering and technology ,Optical flow estimation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Motion interpolation ,business ,Interpolation - Abstract
Video frame interpolation has been studied for a long time; however, it is still a difficult low-level vision task. Owing to the improved performance of optical flow estimation, frame-interpolation studies based on optical flow are actively conducted. However, the existing methods are generally tested using high-fps sequences and developed for 2× upscaling or generating multiple frames with a single estimator. This paper proposes a 4× video-interpolation framework that aims to convert 15-fps to 60-fps videos based on a structure comprising flow estimation followed by an enhancement network. We improve the performance by training specialized flow estimators for each direction and frame position. Furthermore, we use the original frames and flow maps as additional inputs for the enhancement network to improve the subjective image quality. Consequently, the proposed network interpolates high-quality frames with a fast runtime and demonstrates its superiority in the AIM 2019 video temporal super-resolution challenge. The associated code is available at https://github.com/SonghyunYu/PoSNet.
- Published
- 2019
- Full Text
- View/download PDF
29. AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results
- Author
-
Lior Aloni, Eyal Naor, Sanghyun Son, Munchurl Kim, Wenbo Bao, George Pisha, Lijie Zhang, Tong Liu, Yunhua Lu, Songhyun Yu, Wenxiu Sun, Seungjun Nah, Myungsub Choi, Guannan Chen, Xiangyu Xu, Wang Shen, Li Chen, Bum Jun Park, Ze Pan, Heewon Kim, Kyoung Mu Lee, Woonsung Park, Bohyung Han, Li Siyao, Zhiyong Gaon, Guangtao Zhai, Jechang Jeong, Radu Timofte, Ran Duan, Ning Xu, and Sangmin Kim
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Frame rate ,Superresolution ,Kernel (image processing) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Motion interpolation ,business ,Interpolation - Abstract
Videos contain various types and strengths of motions that may look unnaturally discontinuous in time when the recorded frame rate is low. This paper reviews the first AIM challenge on video temporal super-resolution (frame interpolation) with a focus on the proposed solutions and results. From low-frame-rate (15 fps) video sequences, the challenge participants are asked to submit higher-frame-rate (60 fps) video sequences by estimating temporally intermediate frames. We employ the REDS_VTSR dataset derived from diverse videos captured in a hand-held camera for training and evaluation purposes. The competition had 62 registered participants, and a total of 8 teams competed in the final testing phase. The challenge winning methods achieve the state-of-the-art in video temporal super-resolution.
- Published
- 2019
- Full Text
- View/download PDF
30. Robust Temporal Super-Resolution for Dynamic Motion Videos
- Author
-
Jechang Jeong, Songhyun Yu, and Bum Jun Park
- Subjects
Source code ,business.industry ,Computer science ,media_common.quotation_subject ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Image processing ,Video processing ,Robustness (computer science) ,Computer vision ,Artificial intelligence ,business ,media_common ,Dynamic motion - Abstract
It is difficult to apply most video temporal super-resolution studies to real-world scenes because they are optimized for a specific range of characteristics. In this paper, we propose a video temporal super-resolution method that is tolerant to motion diversity and noise. Our proposed method improves its robustness by fine-tuning the pre-trained SPyNet that is trained for videos with simple motions and moderate conditions. Moreover, our proposed network learns to accurately synthesize two frames generated by a backward warping function without requiring any additional information, using the architecture of a modified DHDN. This enables our proposed method to efficiently synthesize two warped frames, saving the computational complexity of pre-training and of extracting the additional information. Finally, we apply the self-ensemble method, which is commonly used in studies on image processing but not on video processing. The application of the self-ensemble method enables our network to generate stable output frames with improved quality without any additional training. Our proposed network proved its performance by ranking 5th in the AIM 2019 video temporal super-resolution challenge; the performance gap between our proposed network and the 3rd- and 4th-ranked solutions was very small. The source code and pre-trained models are available at https://github.com/BumjunPark/DVTSR.
- Published
- 2019
- Full Text
- View/download PDF
31. Deep Iterative Down-Up CNN for Image Denoising
- Author
-
Songhyun Yu, Jechang Jeong, and Bum Jun Park
- Subjects
Computer science ,business.industry ,Noise reduction ,Feature extraction ,Pattern recognition ,02 engineering and technology ,Real image ,Convolutional neural network ,030218 nuclear medicine & medical imaging ,Convolution ,03 medical and health sciences ,Noise ,0302 clinical medicine ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,business ,Image resolution - Abstract
Networks using down-scaling and up-scaling of feature maps have been studied extensively in low-level vision research owing to efficient GPU memory usage and their capacity to yield large receptive fields. In this paper, we propose a deep iterative down-up convolutional neural network (DIDN) for image denoising, which repeatedly decreases and increases the resolution of the feature maps. The basic structure of the network is inspired by U-Net which was originally developed for semantic segmentation. We modify the down-scaling and up-scaling layers for image denoising task. Conventional denoising networks are trained to work with a single-level noise, or alternatively use noise information as inputs to address multi-level noise with a single model. Conversely, because the efficient memory usage of our network enables it to handle multiple parameters, it is capable of processing a wide range of noise levels with a single model without requiring noise-information inputs as a work-around. Consequently, our DIDN exhibits state-of-the-art performance using the benchmark dataset and also demonstrates its superiority in the NTIRE 2019 real image denoising challenge.
- Published
- 2019
- Full Text
- View/download PDF
32. Densely Connected Hierarchical Network for Image Denoising
- Author
-
Bum Jun Park, Songhyun Yu, and Jechang Jeong
- Subjects
business.industry ,Computer science ,Noise reduction ,sRGB ,020208 electrical & electronic engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Image processing ,02 engineering and technology ,Real image ,Convolutional neural network ,Convolution ,symbols.namesake ,Additive white Gaussian noise ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,Noise (video) ,business - Abstract
Recently, deep convolutional neural networks have been applied in numerous image processing studies and have exhibited drastically improved performance. In this study, we introduce a densely connected hierarchical image denoising network (DHDN), which exceeds the performance of state-of-the-art image denoising solutions. Our proposed network improves the image denoising performance by applying the hierarchical architecture of the modified U-Net; this makes our network use a larger number of parameters than other methods. In addition, we induce feature reuse and solve the vanishing-gradient problem by applying dense connectivity and residual learning to our convolution blocks and network. Finally, we successfully apply the model ensemble and self-ensemble methods; this enables us to improve the performance of the proposed network. The performance of the proposed network is validated by winning second place in the NTIRE 2019 real image denoising challenge sRGB track and third place in the raw-RGB track. Additional experimental results on additive white Gaussian noise removal also establish that the proposed network outperforms conventional methods, notwithstanding the fact that the proposed network handles a wide range of noise levels with a single set of trained parameters.
- Published
- 2019
- Full Text
- View/download PDF
33. Speckle Noise Removal Technique in SAR Images using SRAD and Weighted Least Squares Filter
- Author
-
Seungwon Yu, Jechang Jeong, and Hyunho Choi
- Subjects
Synthetic aperture radar ,business.industry ,Anisotropic diffusion ,Image quality ,Computer science ,010401 analytical chemistry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Speckle noise ,02 engineering and technology ,Filter (signal processing) ,01 natural sciences ,Least squares ,Multiplicative noise ,0104 chemical sciences ,Speckle pattern ,Noise ,Computer Science::Graphics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business - Abstract
In the process of obtaining synthetic aperture radar (SAR) images, speckle noise appears in the images. Speckle noise degrades image quality and makes the images difficult to interpret. Speckle noise reduction is therefore a crucial step for the various applications of SAR imagery. In this paper, we propose a novel algorithm based on speckle reducing anisotropic diffusion (SRAD) and a weighted least squares (WLS) filter for removing speckle noise while preserving edge information. The SRAD filter is applied as a preprocessing filter. A logarithmic transformation is then employed to convert the multiplicative noise remaining in the SRAD-filtered image into additive noise, which is removed by the WLS filter. A despeckled image is finally obtained by applying an exponential transform. Experimental results demonstrate that the proposed method exhibits better speckle noise reduction and edge preservation performance than conventional filtering techniques.
- Published
- 2019
- Full Text
- View/download PDF
34. Hierarchical motion estimation algorithm using multiple candidates for frame rate up-conversion
- Author
-
Jechang Jeong and Songhyun Yu
- Subjects
Computational complexity theory ,Computer science ,Motion estimation ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Frame rate up conversion ,Motion estimation algorithm ,Resolution (logic) ,Frame rate ,Algorithm ,Motion vector - Abstract
Motion estimation (ME) has the highest computational complexity in motion-compensated frame rate up-conversion (MC-FRUC). For the real-time implementation of FRUC, a fast ME algorithm is required. In this paper, a new hierarchical ME algorithm for MC-FRUC is proposed. It constructs an image pyramid by dividing the frame into several sub-images according to resolution, and performs ME at the top level to reduce complexity while improving accuracy by selecting multiple motion vector candidates. These candidates are refined at the lower levels, and the final motion vector is selected at the bottom level. Thus, the proposed algorithm obtains an average peak signal-to-noise ratio gain of up to 0.85 dB compared to conventional algorithms, with lower computational complexity, and yields interpolated images with better visual quality than other methods.
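A two-level sketch of the idea, assuming the block and its search window lie inside the frame: a coarse full search keeps the k lowest-SAD motion-vector candidates, each candidate is refined in a small window at full resolution, and the best refined vector is returned. The block size, search range, k, and function names are illustrative; the paper uses a deeper pyramid.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def downscale(img):
    """Halve the resolution by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def top_k_mvs(cur, ref, by, bx, bs, search, k=3):
    """Full search around (by, bx); keep the k motion vectors with the lowest SAD."""
    block = cur[by:by + bs, bx:bx + bs]
    scored = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                scored.append((sad(block, ref[y:y + bs, x:x + bs]), (dy, dx)))
    return [mv for _, mv in sorted(scored)[:k]]

def refine(cur, ref, by, bx, bs, cand, radius=1):
    """Search a small window around a scaled-up candidate vector at full resolution."""
    block = cur[by:by + bs, bx:bx + bs]
    best, best_cost = cand, float("inf")
    for dy in range(cand[0] - radius, cand[0] + radius + 1):
        for dx in range(cand[1] - radius, cand[1] + radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best, best_cost

def hierarchical_me(cur, ref, by, bx, bs=16, search=8, k=3):
    """Coarse full search keeps k candidates; each is refined at full resolution."""
    coarse = top_k_mvs(downscale(cur), downscale(ref), by // 2, bx // 2, bs // 2, search // 2, k)
    refined = [refine(cur, ref, by, bx, bs, (2 * dy, 2 * dx)) for dy, dx in coarse]
    return min(refined, key=lambda r: r[1])[0]
```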
- Published
- 2019
- Full Text
- View/download PDF
35. RNN-based bitstream feature extraction method for codec classification
- Author
-
Jechang Jeong and Seungwoo Wee
- Subjects
Recurrent neural network ,business.industry ,Computer science ,Feature extraction ,Codec ,Pattern recognition ,Artificial intelligence ,Bitstream ,business - Published
- 2019
- Full Text
- View/download PDF
36. Bilateral Filtering and Directional Differentiation for Bayer Demosaicking
- Author
-
Jechang Jeong, Gwanggil Jeon, Zhensen Wu, Jiaji Wu, and Jin Wang
- Subjects
Demosaicing ,Pixel ,Physics::Instrumentation and Detectors ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Iterative reconstruction ,Similarity (network science) ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Bilateral filter ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Instrumentation ,ComputingMethodologies_COMPUTERGRAPHICS ,Interpolation - Abstract
In this paper, we introduce an efficient image demosaicking method using a bilateral filter and directional differentiation, considering both the spatial closeness and the similarity between the interpolated pixel and its neighboring pixels. Spatial closeness is treated as spatial locality. We utilize an adaptive weighted average to estimate the missing pixel value, where the adaptive weight is calculated from three components: directional differentiation, the similarity between the pixel and each of its neighbors, and spatial locality. The experimental results show that the proposed method outperforms existing approaches in both objective and subjective performance.
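A small sketch of a bilateral-style adaptive weighted average for a missing green sample, combining spatial, similarity, and directional-gradient terms as the abstract describes. The exact weight formulas, parameters, and the similarity reference used here are assumptions, not the paper's definitions.

```python
import numpy as np

def interp_green(cfa, y, x, sigma_s=1.0, sigma_r=10.0):
    """Estimate the missing green value at an interior R/B site (y, x) of a
    Bayer CFA as a weighted average of its four green neighbours.  Each weight
    mixes spatial locality, intensity similarity, and an inverse directional
    gradient (illustrative stand-ins for the paper's terms)."""
    est, weights = 0.0, 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # N, S, W, E neighbours
        g = float(cfa[y + dy, x + dx])
        grad = abs(float(cfa[y + 2 * dy, x + 2 * dx]) - float(cfa[y, x]))  # directional differentiation
        sim = np.exp(-(g - float(cfa[y, x])) ** 2 / (2 * sigma_r ** 2))    # similarity term
        spa = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))            # spatial locality
        w = spa * sim / (1.0 + grad)
        est += w * g
        weights += w
    return est / weights
```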
- Published
- 2017
- Full Text
- View/download PDF
37. Wiener filter-based wavelet domain denoising
- Author
-
Jechang Jeong, Jiaji Wu, Jin Wang, Gwanggil Jeon, and Zhensen Wu
- Subjects
Computer science ,Noise (signal processing) ,business.industry ,Noise reduction ,Wiener filter ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wiener deconvolution ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Non-local means ,Image (mathematics) ,Human-Computer Interaction ,symbols.namesake ,Wavelet ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Root-raised-cosine filter - Abstract
The wavelet-domain Wiener filter has been widely adopted as an effective image denoising method with low complexity. In this paper, we propose a novel Wiener filter with high-resolution estimation that determines the signal power while preserving edge information. We assume that a noisy image is composed of noise and the original image, which are mutually orthogonal. Based on this assumption, we utilize the local covariance to obtain high-resolution coefficients from the low-resolution coefficients and to estimate the signal variance in the Wiener filter using these high-resolution values. The experimental results show that the proposed algorithm significantly improves objective and subjective performance.
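A minimal sketch of an empirical Wiener shrink in the wavelet domain, using the orthogonality assumption stated in the abstract (local signal variance = local second moment minus noise variance). The paper's high-resolution covariance estimate is replaced here by a plain windowed estimate, and the wavelet, level, and window size are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_wiener(img, sigma_n, wavelet='db4', level=2, win=5):
    """Shrink each detail coefficient by sig_var / (sig_var + sigma_n^2)."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    out = [coeffs[0]]                                   # keep the approximation band
    for details in coeffs[1:]:
        shrunk = []
        for band in details:
            second_moment = uniform_filter(band * band, size=win)
            sig_var = np.maximum(second_moment - sigma_n ** 2, 0.0)
            shrunk.append(band * sig_var / (sig_var + sigma_n ** 2 + 1e-12))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```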
- Published
- 2017
- Full Text
- View/download PDF
38. High Dynamic Range Image Display Combining Weighted Least Squares Filtering with Color Appearance Model
- Author
-
Jechang Jeong, Mei-Xian Piao, Seung-Woo Wee, and Kyung-Jun Lee
- Subjects
Computer science ,business.industry ,Pattern recognition ,Computer vision ,Artificial intelligence ,Tone mapping ,business ,Image display ,High dynamic range ,Active appearance model - Abstract
Recently, high dynamic range (HDR) imaging technology has become a topic of interest in the field of computer graphics. In this paper, we propose a tone-mapping algorithm that processes HDR images based on a weighted least squares (weighted regression) optimization framework. The proposed method combines weighted least squares (WLS) filtering with the iCAM06 model to display HDR images more perceptually on conventional displays while avoiding visual halo artifacts. The proposed algorithm first divides the HDR image into a base layer and a detail layer. The base layer contains the large-scale variations, is obtained using WLS filtering, and incorporates the iCAM06 model. Next, the base layer is adaptively compressed according to the human visual system; during compression, only the contrast of the base layer is reduced, and the detail layer is preserved. Through objective and subjective image quality evaluations, we show that images produced by the proposed algorithm are more similar to the original HDR images than those produced by conventional algorithms.
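A hedged sketch of the base/detail tone-mapping idea summarized above: compress only the base layer and keep the detail layer. A Gaussian filter is used purely as a stand-in for the edge-preserving WLS filter, and the iCAM06 appearance step is omitted; the compression factor is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_base_detail(hdr_lum, compression=0.6, smoother=None):
    """Split log-luminance into base + detail, compress the base, recombine."""
    log_l = np.log(hdr_lum + 1e-6)
    smoother = smoother or (lambda x: gaussian_filter(x, 8.0))
    base = smoother(log_l)            # large-scale layer (WLS filter in the paper)
    detail = log_l - base             # fine detail, preserved untouched
    out = np.exp(compression * base + detail)
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```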
- Published
- 2016
- Full Text
- View/download PDF
39. Four-Direction Residual Interpolation for Demosaicking
- Author
-
Yonghoon Kim and Jechang Jeong
- Subjects
Demosaicing ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Bilinear interpolation ,020206 networking & telecommunications ,Stairstep interpolation ,02 engineering and technology ,Multivariate interpolation ,Nearest-neighbor interpolation ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Image scaling ,Bicubic interpolation ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics ,Interpolation - Abstract
In this paper, we propose a four-direction residual interpolation (FDRI) method for color filter array interpolation. The proposed algorithm exploits a guided filtering process to generate a tentative image, and the residual image is generated from the tentative and original images. We use the FDRI algorithm to estimate the missing pixel values more accurately; the directional estimates are adaptively combined using a joint inverse gradient weight. Based on the experimental results, the proposed method provides superior performance in terms of objective and subjective quality compared with conventional state-of-the-art demosaicking methods.
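A brief sketch of the directional fusion step mentioned in the abstract: four directional estimates are blended with weights inversely proportional to the gradient energy along each direction. The guided-filter residual step that would produce the estimates is omitted, and the weight form is an assumption.

```python
import numpy as np

def combine_directional(estimates, grads, eps=1e-6):
    """Blend per-direction estimates with inverse-gradient weights."""
    estimates = np.asarray(estimates, dtype=np.float64)      # shape (4, H, W)
    weights = 1.0 / (np.asarray(grads, dtype=np.float64) ** 2 + eps)
    return (weights * estimates).sum(axis=0) / weights.sum(axis=0)

# toy usage: the low-gradient (most reliable) direction dominates the result
ests = [np.full((8, 8), v) for v in (10.0, 12.0, 11.0, 9.0)]
grds = [np.full((8, 8), g) for g in (1.0, 4.0, 2.0, 0.5)]
fused = combine_directional(ests, grds)
```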
- Published
- 2016
- Full Text
- View/download PDF
40. GPU-parallel interpolation using the edge-direction based normal vector method for terrain triangular mesh
- Author
-
Jechang Jeong, Gwanggil Jeon, Long Deng, and Jiaji Wu
- Subjects
Mathematical optimization ,Computer science ,020207 software engineering ,Raised-relief map ,Terrain ,02 engineering and technology ,CUDA ,Computer Science::Graphics ,Triangle mesh ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Polygon mesh ,Node (circuits) ,Normal ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems ,Interpolation - Abstract
In the geographic information field, triangular mesh models are often used to describe terrain, where the normal vector to the surface at each node of the triangular mesh plays an important role in reconstruction and display. However, the normal vectors at the nodes of a triangular mesh cannot be given directly; they must be computed from known data. Currently, the most common method of computing the normal vector at a node is to sum the normal vectors of the adjacent triangular facets using various weighting factors. For complex terrain surfaces, such a method is not very effective, and in some cases is not as good as classical weighted-average algorithms. By studying interpolation based on edge direction, combined with a terrain triangular mesh, we propose a GPU-parallel normal vector interpolation method based on edge direction for terrain triangular meshes. Since terrain data are usually large, traditional serial algorithms have difficulty meeting real-time requirements. In this paper, we use CUDA optimization strategies to make full use of the GPU (NVIDIA TESLA K80) to solve this problem effectively. The experimental results show that, compared with traditional weighted-average algorithms, the accuracy of the normal vector at each node increases significantly, and compared with serial CPU-only algorithms, speed is increased by a factor of 646.4 with I/O transfer time taken into account, meeting the real-time requirements.
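A short sketch of the classical baseline the abstract refers to, computing each vertex normal as the (area-weighted) sum of adjacent facet normals. The paper's edge-direction based interpolation and its CUDA parallelisation are not reproduced here.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals for a triangular mesh."""
    v = np.asarray(vertices, dtype=np.float64)      # (N, 3) node coordinates
    f = np.asarray(faces, dtype=np.int64)           # (M, 3) vertex indices per triangle
    # cross product length equals twice the triangle area, so this is area weighting
    fn = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    vn = np.zeros_like(v)
    for k in range(3):                              # scatter-add each facet normal to its vertices
        np.add.at(vn, f[:, k], fn)
    norm = np.linalg.norm(vn, axis=1, keepdims=True)
    return vn / np.maximum(norm, 1e-12)
```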
- Published
- 2016
- Full Text
- View/download PDF
41. Performance Comparison of Weak Filtering in HEVC and VVC
- Author
-
Jung Hyun Lee and Jechang Jeong
- Subjects
HEVC ,Computer Networks and Communications ,Computer science ,Deblocking filter ,lcsh:TK7800-8360 ,deblocking filter ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,Codec ,Electrical and Electronic Engineering ,video signal processing ,lcsh:Electronics ,020208 electrical & electronic engineering ,Filter (signal processing) ,video coding ,in-loop filter ,Computer engineering ,Hardware and Architecture ,Control and Systems Engineering ,Performance comparison ,Signal Processing ,020201 artificial intelligence & image processing ,video compression ,video codecs ,VVC ,Data compression - Abstract
This study describes the need to improve the weak filtering method in the in-loop filter process that is used identically in versatile video coding (VVC) and high efficiency video coding (HEVC). The weak filtering process used by VVC was adopted during H.264/Advanced Video Coding (AVC) standardization (from Draft Four) and has been maintained since. Because the encoding process in a video codec operates on block structural units, deblocking filters are essential. However, as many of the deblocking filters require a complex calculation process, it is necessary to ensure that they have a reasonable effect. This study evaluated the performance of the weak filtering portion of VVC and confirmed that, unlike in HEVC, it does not function effectively. Excluding weak filtering entirely from VVC, i.e., a non-weak filtering method, should be considered in VVC standardization. In the experiments of this study, the non-weak filtering method yields a 0.40 Y-Bjontegaard-Delta bit-rate (BDBR) gain over VVC Test Model (VTM) 6.0.
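For orientation, a hedged and simplified sketch of an HEVC-style weak deblocking step of the kind the study evaluates: a small offset derived from the four samples nearest the block edge is applied to the boundary pair only when it is below a threshold. This is not the normative text; the standards add further Δp1/Δq1 adjustments and gating conditions that are omitted here.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def weak_filter_pair(p1, p0, q0, q1, tc, max_val=255):
    """Simplified weak-filter update of the two samples straddling a block edge."""
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= 10 * tc:          # large step: likely a real edge, leave it alone
        return p0, q0
    delta = clip3(-tc, tc, delta)
    return (clip3(0, max_val, p0 + delta),
            clip3(0, max_val, q0 - delta))
```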
- Published
- 2020
- Full Text
- View/download PDF
42. Despeckling Algorithm for Removing Speckle Noise from Ultrasound Images
- Author
-
Hyunho Choi and Jechang Jeong
- Subjects
Physics and Astronomy (miscellaneous) ,Computer science ,Anisotropic diffusion ,General Mathematics ,Noise reduction ,02 engineering and technology ,weighted guided image filtering ,01 natural sciences ,Multiplicative noise ,010309 optics ,ultrasound imaging ,Speckle pattern ,Wavelet ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,discrete wavelet transform ,speckle noise ,lcsh:Mathematics ,fungi ,food and beverages ,Speckle noise ,Filter (signal processing) ,lcsh:QA1-939 ,Noise ,Computer Science::Graphics ,Chemistry (miscellaneous) ,Computer Science::Computer Vision and Pattern Recognition ,020201 artificial intelligence & image processing ,gradient domain guided image filtering ,Algorithm - Abstract
Ultrasound (US) imaging can be used to examine the human body at any age; however, speckle noise is generated in the process of obtaining a US image. Speckle noise prevents physicians from accurately examining lesions; thus, a speckle noise removal method is an essential technology. To enhance speckle noise elimination, we propose a novel algorithm that uses the characteristics of speckle noise together with filtering methods based on speckle reducing anisotropic diffusion (SRAD) filtering, the discrete wavelet transform (DWT) using symmetry characteristics, weighted guided image filtering (WGIF), and gradient domain guided image filtering (GDGIF). The SRAD filter is exploited as a preprocessing filter because it can be applied directly to a medical US image containing speckle noise without log compression. The wavelet domain has the advantage of suppressing additive noise; therefore, a homomorphic transformation is utilized to convert the multiplicative noise into additive noise. After two-level DWT decomposition, to suppress the residual noise of the SRAD-filtered image, GDGIF and WGIF are applied to the seven high-frequency sub-band images and the one low-frequency sub-band image, respectively. Finally, a noise-free image is obtained through the inverse DWT and an exponential transform. The proposed algorithm exhibits excellent speckle noise elimination and edge preservation compared with conventional denoising methods.
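A minimal sketch of the wavelet part of this pipeline: log transform, two-level DWT, per-sub-band smoothing, inverse DWT, and exponential transform. The SRAD pre-filter and the paper's WGIF/GDGIF sub-band filters are replaced by a plain windowed mean purely for illustration, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def despeckle_us(img, wavelet='haar', level=2, eps=1e-6):
    """Homomorphic wavelet-domain despeckling skeleton."""
    log_img = np.log(img.astype(np.float64) + eps)          # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    out = [uniform_filter(coeffs[0], size=3)]               # low-frequency sub-band
    for details in coeffs[1:]:
        out.append(tuple(uniform_filter(b, size=3) for b in details))  # high-frequency sub-bands
    return np.exp(pywt.waverec2(out, wavelet)) - eps         # exponential transform
```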
- Published
- 2020
- Full Text
- View/download PDF
43. Image Demosaicking Using Densely Connected Convolutional Neural Network
- Author
-
Bum Jun Park and Jechang Jeong
- Subjects
Demosaicing ,Computational complexity theory ,Computer science ,business.industry ,010401 analytical chemistry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Pattern recognition ,02 engineering and technology ,01 natural sciences ,Convolutional neural network ,Field (computer science) ,0104 chemical sciences ,Convolution ,Image (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Interpolation - Abstract
In this paper, we propose an image demosaicking model using a densely connected convolutional neural network. Recently, deep neural networks have shown improved results in the image processing field compared with conventional algorithms. However, they often suffer from the vanishing-gradient problem, which makes models hard to train. To solve this problem, we applied a densely connected convolutional neural network. Moreover, our proposed network does not need any initial interpolation, which reduces computational complexity. Finally, we applied a sub-pixel interpolation layer, which generates the demosaicked output image efficiently and accurately. Experimental results show that our proposed model outperforms conventional methods.
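A hedged sketch of the ingredients named in the abstract: no initial interpolation (the Bayer mosaic is packed into a 4-channel half-resolution tensor), dense feature concatenation, and a sub-pixel (PixelShuffle) output layer. Layer counts and channel widths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DenseDemosaick(nn.Module):
    """Toy densely connected demosaicking network with a sub-pixel output."""
    def __init__(self, growth=16, layers=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = 4
        for _ in range(layers):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth                               # dense connectivity grows the input width
        self.tail = nn.Conv2d(ch, 3 * 4, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)              # sub-pixel upsampling to full resolution

    def forward(self, bayer4):                         # (N, 4, H/2, W/2) packed Bayer input
        feats = bayer4
        for block in self.blocks:
            feats = torch.cat([feats, block(feats)], dim=1)
        return self.shuffle(self.tail(feats))          # (N, 3, H, W) RGB output

net = DenseDemosaick()
rgb = net(torch.rand(1, 4, 64, 64))                    # -> torch.Size([1, 3, 128, 128])
```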
- Published
- 2018
- Full Text
- View/download PDF
44. Multiscale Decomposition Based High Dynamic Range Tone Mapping Method using Guided Image Filter
- Author
-
Ming Gao, Seungwoo Wee, and Jechang Jeong
- Subjects
Scale (ratio) ,Computer science ,business.industry ,Dynamic range ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Filter (signal processing) ,Tone mapping ,Function (mathematics) ,Composite image filter ,Image (mathematics) ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Artificial intelligence ,business ,High dynamic range - Abstract
A guided image filter (GIF) is used for manipulating high dynamic range (HDR) images. In the conventional algorithm, it divides an image into a base layer and a detail layer, and the detail layer is then multiplied by a compression function to enhance the detail information of the image. However, in most cases, an image displays the detail features and edge information of objects at different scales; that is, all of the detail features of the objects cannot be clearly represented at a single scale. Multi-scale decomposition uses more scales to extract the edges and details of the image: the main idea is to filter the image through multi-scale decomposition to obtain a base layer and several detail layers, which are processed separately and synthesized at the output. In this paper, a GIF-based multi-scale decomposition for HDR images is introduced. The experimental results show that the proposed algorithm has better edge-preserving performance than the conventional algorithm.
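A short sketch of the multi-scale decomposition idea: repeated edge-preserving smoothing yields one base layer and a detail layer per scale, which are processed separately before recombination. The paper uses a guided image filter; a Gaussian is used here only as a stand-in smoother, and the scales and gains are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(img, sigmas=(2.0, 8.0, 32.0)):
    """Decompose an image into a base layer plus one detail layer per scale."""
    layers, current = [], img.astype(np.float64)
    for s in sigmas:
        smooth = gaussian_filter(current, s)
        layers.append(current - smooth)      # detail layer at this scale
        current = smooth
    return current, layers                   # base layer, detail layers

def recombine(base, details, base_gain=0.5, detail_gain=1.2):
    """Compress the base and mildly boost the details before summing."""
    return base_gain * base + sum(detail_gain * d for d in details)
```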
- Published
- 2018
- Full Text
- View/download PDF
45. Multi-Exposure Image Fusion Based on Patch using Global and Local Characteristics
- Author
-
Jechang Jeong, Hyunho Choi, and Jihwan Kim
- Subjects
Brightness ,Image fusion ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Filter (signal processing) ,Signal ,Transformation (function) ,0202 electrical engineering, electronic engineering, information engineering ,Contrast (vision) ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Laplace operator ,High dynamic range ,media_common ,Unsharp masking - Abstract
In this paper, we propose an algorithm that improves the weight map, which consists of signal strength, signal structure, and mean intensity terms. The conventional patch-based weight map causes the brightness of the image to be shifted to one side, resulting in loss of image information, unexpected artifacts, and an overall imbalance in image brightness. In this study, we improve the weight map in three ways: first, an order-statistic filter using maximum values; second, an unsharp masking filter using the Laplacian; and third, a linear combination using a gamma transformation. The proposed algorithm prevents the loss of image information by reducing over-saturation, represents dark and bright areas accurately by increasing contrast, and preserves details such as edges. Subjective and objective experimental results confirm that the proposed algorithm performs better than conventional algorithms.
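A hedged sketch of weight-map based multi-exposure fusion in the spirit of the abstract: each exposure gets a per-pixel weight built from a local-maximum (order-statistic) term, a Laplacian sharpness term, and a gamma-shaped brightness term, and the normalized weights drive a weighted average. The paper's exact patch-based signal-strength and structure terms are not reproduced, and all constants are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def fuse_exposures(stack, gamma=2.2, eps=1e-12):
    """Fuse K exposures (values in [0, 1]) with simple per-pixel weights."""
    stack = np.asarray(stack, dtype=np.float64)          # (K, H, W)
    strength = maximum_filter(stack, size=(1, 3, 3))     # order-statistic (max) term
    sharp = np.abs(np.stack([laplace(img) for img in stack]))   # Laplacian sharpness term
    bright = np.exp(-((stack ** (1.0 / gamma)) - 0.5) ** 2 / 0.08)  # gamma-shaped exposedness
    w = strength * sharp * bright + eps
    w /= w.sum(axis=0, keepdims=True)                    # normalise across exposures
    return (w * stack).sum(axis=0)
```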
- Published
- 2018
- Full Text
- View/download PDF
46. Frame Rate Up-Conversion Considering The Direction and Magnitude of Identical Motion Vectors
- Author
-
Jechang Jeong and Jonggeun Park
- Subjects
Physics ,Motion field ,Control theory ,Motion estimation ,Linear motion ,Mathematical analysis ,Magnitude (mathematics) ,Frame rate up conversion ,Motion (physics) ,Quarter-pixel motion ,Motion system - Published
- 2015
- Full Text
- View/download PDF
47. Multidirectional Weighted Interpolation and Refinement Method for Bayer Pattern CFA Demosaicking
- Author
-
Jechang Jeong, Gwanggil Jeon, Liwen He, and Xiangdong Chen
- Subjects
Demosaicing ,Bayer filter ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Bilinear interpolation ,Stairstep interpolation ,Multivariate interpolation ,Nearest-neighbor interpolation ,Media Technology ,Image scaling ,Bicubic interpolation ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
This paper presents a novel multidirectional weighted interpolation algorithm for color filter array interpolation. Our proposed method makes two contributions to demosaicking. First, unlike conventional interpolation methods based on two or four directions, the proposed method exploits correlations among neighboring pixels along eight directions to a greater degree to improve interpolation performance. Second, we propose an efficient postprocessing method that reduces interpolation artifacts based on the color-difference planes. Compared with conventional state-of-the-art demosaicking algorithms, our experimental results show that the proposed algorithm provides superior performance in both objective and subjective image quality. Furthermore, the implementation has moderate computational complexity.
- Published
- 2015
- Full Text
- View/download PDF
48. Piecewise Image Denoising with Multi-scale Block Region Detector based on Quadtree Structure
- Author
-
Jeehyun Lee and Jechang Jeong
- Subjects
Pixel ,business.industry ,Detector ,Pattern recognition ,Total variation denoising ,Non-local means ,Computer Science::Computer Vision and Pattern Recognition ,Piecewise ,Quadtree ,Computer vision ,Artificial intelligence ,Bilateral filter ,business ,Image restoration ,Mathematics - Abstract
This paper presents a piecewise image denoising method with a multi-scale block region detector based on a quadtree structure for effective image restoration. The proposed method uses the multi-scale block region detector (MBRD) to divide all the pixels of a noisy image into three parts according to their regional characteristics: strong-variation regions, weak-variation regions, and flat regions. These regions are classified according to the total pixel variation between multi-scale blocks and are then processed with principal component analysis with local pixel grouping, bilateral filtering, and a structure-preserving image decomposition operator called relative total variation, respectively. The performance of the proposed method is evaluated experimentally: the region detection results generated by the detector are well classified according to the characteristics of each region, and the piecewise denoising provides a positive PSNR gain. In the visual evaluation, details and edges are preserved efficiently in each region; therefore, the proposed method effectively reduces noise and improves denoising performance by restoring each region according to its characteristics.
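A small sketch of a block-wise region detector in the spirit of the abstract: each block is labelled flat, weak-variation, or strong-variation from its pixel variance so that a different denoiser can be applied per class. The paper's multi-scale/quadtree refinement and its exact thresholds are not reproduced; the block size and thresholds here are assumptions.

```python
import numpy as np

def classify_blocks(img, block=8, t_low=25.0, t_high=200.0):
    """Label each block: 0 = flat, 1 = weak variation, 2 = strong variation."""
    h, w = img.shape
    labels = np.zeros((h // block, w // block), dtype=np.int8)
    for by in range(h // block):
        for bx in range(w // block):
            var = img[by * block:(by + 1) * block,
                      bx * block:(bx + 1) * block].var()
            labels[by, bx] = 0 if var < t_low else (1 if var < t_high else 2)
    return labels
```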
- Published
- 2015
- Full Text
- View/download PDF
49. An efficient spatial deblocking of images with DCT compression
- Author
-
Jechang Jeong, Gwanggil Jeon, Zhensen Wu, and Jin Wang
- Subjects
Pixel ,Image quality ,Computer science ,Deblocking filter ,business.industry ,Applied Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Filter (signal processing) ,Ringing artifacts ,Thresholding ,Computational Theory and Mathematics ,Artificial Intelligence ,Computer Science::Computer Vision and Pattern Recognition ,Signal Processing ,Discrete cosine transform ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,business ,Block (data storage) - Abstract
We propose an adaptive spatial postprocessing algorithm for use with block-based discrete cosine transform (BDCT) coded images. The proposed method comprises three procedural steps: a thresholding step, a model classification step, and a deblocking filtering step. First, we apply adaptive thresholding to extract the pixel vector containing the blocking artifacts. This threshold has a strong correlation with image quality and the standard deviation of the pixel vector. Next, we update block types using a simple rule, which influences deblocking performance. We propose that the activity of the pixel vector can be used to categorize the pixel vector model. With the pixel vector model determined, we can then apply a suitable filter to each model according to its local properties; for example, a directional filter can be used to reduce ringing artifacts in edge regions. Over various images and bit-rate conditions, images deblocked by the proposed method exhibit both significant visual quality improvement and PSNR gain, with fairly low computational complexity.
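A hedged sketch of the per-boundary idea: take the pixel vector straddling a block edge, measure its activity, and smooth the two boundary samples only when the step across the edge looks like a coding artifact. The thresholds and the edge-mode directional filters of the paper are not reproduced; the specific rule below is an assumption.

```python
import numpy as np

def deblock_boundary(vec, q_step, max_val=255):
    """Smooth a 1-D pixel vector across a block edge if the region is flat."""
    v = vec.astype(np.int32)                  # e.g. 4 samples on each side of the edge
    mid = len(v) // 2
    step = v[mid] - v[mid - 1]                # discontinuity across the block boundary
    activity = np.abs(np.diff(v)).sum() - abs(step)
    if abs(step) < q_step and activity < q_step:   # flat region with a small blocking step
        v[mid - 1] += step // 4
        v[mid] -= step // 4
    return np.clip(v, 0, max_val)
```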
- Published
- 2015
- Full Text
- View/download PDF
50. Enhanced Binary Block Matching Method for Constrained One-bit Transform based Motion Estimation
- Author
-
Hyungdo Kim and Jechang Jeong
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Binary number ,Pattern recognition ,Quarter-pixel motion ,Sum of absolute differences ,Motion estimation ,Sum of absolute transformed differences ,Lapped transform ,Artificial intelligence ,business ,Mathematics ,Block (data storage) ,Block-matching algorithm - Abstract
In this paper, an enhanced binary block matching method for constrained one-bit transform (C1BT) based motion estimation is proposed. Binary motion estimation exploits the number of non-matched points (NNMP) as the block matching criterion instead of the sum of absolute differences (SAD) for low-complexity motion estimation. Motion estimation using SAD can use smaller blocks for more accurate motion estimation. In this paper, an enhanced binary block matching method using smaller motion estimation blocks is proposed for more accurate binary matching with C1BT. Experimental results show that the proposed algorithm achieves better peak signal-to-noise ratio (PSNR) results than conventional binary transform algorithms. Keywords: Motion estimation, Block matching algorithm, One-bit transform, Constrained one-bit transform, Binary block
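A minimal sketch of binary block matching with the NNMP criterion mentioned in the abstract. The binarisation below uses a plain local-mean one-bit transform; the constrained variant's constraint mask and the paper's smaller-block strategy are not reproduced, and the filter size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame, size=17):
    """Binarise a frame by comparing it with its local mean (a common 1BT construction)."""
    return frame >= uniform_filter(frame.astype(np.float64), size=size)

def nnmp(bit_cur, bit_ref):
    """Number of non-matched points between two binary blocks (XOR count), used instead of SAD."""
    return int(np.count_nonzero(bit_cur ^ bit_ref))

# toy usage: compare a block against a shifted reference
cur = np.random.randint(0, 256, (16, 16))
ref = np.roll(cur, 1, axis=1)
print(nnmp(one_bit_transform(cur), one_bit_transform(ref)))
```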
- Published
- 2015
- Full Text
- View/download PDF