16 results for "An-Dong Gong"
Search Results
2. NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results
- Author
Eduardo Perez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Ales Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marin-Vega, Michael Sloth, Peter Schneider-Kamp, Richard Rottger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan, Ming Sun, Xing Wen, Junlin Li, Jinjing Li, Chenghua Li, Ruipeng Gang, Fangya Li, Chenming Liu, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai, Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Sai Ma, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai, Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer, and Chan Y. Park
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, FOS: Electrical engineering, electronic engineering, information engineering, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
This paper reviews the challenge on constrained high dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2022. This manuscript focuses on the competition set-up, datasets, the proposed methods, and their results. The challenge aims at estimating an HDR image from multiple respective low dynamic range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise. The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores while respecting a low-complexity constraint (i.e. solutions cannot exceed a given number of operations); in Track 2, participants are asked to minimize the complexity of their solutions while meeting a constraint on fidelity scores (i.e. solutions are required to obtain a higher fidelity score than the prescribed baseline). Both tracks use the same data and metrics: fidelity is measured by means of PSNR with respect to a ground-truth HDR image (computed both directly and after a canonical tonemapping operation), while complexity metrics include the number of Multiply-Accumulate (MAC) operations and runtime (in seconds).
- Comment
- CVPR Workshops 2022. 15 pages, 21 figures, 2 tables
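The metrics described above can be sketched in plain Python. This is a minimal illustration, not the challenge's evaluation code: it assumes pixel values normalized to [0, 1] and uses μ-law tonemapping with μ = 5000 as the canonical operator, a common choice in HDR benchmarks that is an assumption here rather than something stated in the abstract.

```python
import math

MU = 5000.0  # mu-law parameter (assumed; common in HDR benchmarks)

def tonemap(x):
    """mu-law tonemapping of a linear HDR value in [0, 1]."""
    return math.log(1.0 + MU * x) / math.log(1.0 + MU)

def psnr(pred, gt, peak=1.0):
    """PSNR (dB) between two equal-length sequences of pixel values."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    return float("inf") if mse == 0.0 else 10.0 * math.log10(peak ** 2 / mse)

def psnr_mu(pred, gt):
    """PSNR computed after tonemapping both images (often called PSNR-mu)."""
    return psnr([tonemap(p) for p in pred], [tonemap(g) for g in gt])
```

Together with a MAC count and a runtime measurement, these two scores cover the fidelity and complexity axes the two tracks trade off.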
- Published
- 2022
3. A Lightweight Network for High Dynamic Range Imaging
- Author
Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, and Dong Gong
- Published
- 2022
4. Semi-supervised Learning via Conditional Rotation Angle Estimation
- Author
Hai-Ming Xu, Lingqiao Liu, and Dong Gong
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Self-supervised learning (SlfSL), which aims at learning feature representations through ingeniously designed pretext tasks without human annotation, has achieved compelling progress in the past few years. Very recently, SlfSL has also been identified as a promising solution for semi-supervised learning (SemSL), since it offers a new paradigm for utilizing unlabeled data. This work further explores this direction by proposing to couple SlfSL with SemSL. Our insight is that the prediction target in SemSL can be modeled as the latent factor in the predictor for the SlfSL target. Marginalizing over the latent factor naturally yields a new formulation which marries the prediction targets of the two learning processes. By implementing this idea through a simple-but-effective SlfSL approach -- rotation angle prediction -- we create a new SemSL approach called Conditional Rotation Angle Estimation (CRAE). Specifically, CRAE adopts a module which predicts the image rotation angle conditioned on the candidate image class. Through experimental evaluation, we show that CRAE achieves superior performance over other existing ways of combining SlfSL and SemSL. To further boost CRAE, we propose two extensions that strengthen the coupling between the SemSL target and the SlfSL target in basic CRAE. We show that this leads to an improved CRAE method which achieves state-of-the-art SemSL performance.
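The marginalization described above can be written out directly: the rotation-angle prediction is the class-conditioned rotation prediction averaged under the class posterior. A toy sketch with plain lists (the function name and list-based representation are illustrative, not the paper's code):

```python
def rotation_posterior(class_probs, rot_given_class):
    """p(rot | x) = sum_c p(rot | x, c) * p(c | x).

    class_probs[c] is p(c | x); rot_given_class[c][r] is p(rot = r | x, c).
    """
    n_rot = len(rot_given_class[0])
    return [
        sum(class_probs[c] * rot_given_class[c][r] for c in range(len(class_probs)))
        for r in range(n_rot)
    ]
```

Training the rotation task through this sum forces the class posterior to carry information, which is how the SlfSL pretext task supervises the SemSL target.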
- Published
- 2021
5. Memory-augmented Dynamic Neural Relational Inference
- Author
Dong Gong, Zhen Zhang, Javen Qinfeng Shi, and Anton Van Den Hengel
- Published
- 2021
6. Robust and Accurate Hybrid Structure-From-Motion
- Author
Ziwei Wei, Rui Li, Dong Gong, Yu Zhu, Jinqiu Sun, and Yanning Zhang
- Subjects
0209 industrial biotechnology, Computer science, GRASP, Bundle adjustment, 02 engineering and technology, Iterative reconstruction, Graph, Connected dominating set, 020901 industrial engineering & automation, 0202 electrical engineering, electronic engineering, information engineering, Graph (abstract data type), 020201 artificial intelligence & image processing, Scene graph, Algorithm
- Abstract
In this paper, we propose a hybrid Structure-from-Motion (SfM) scheme that combines the strengths of both global and local incremental SfM methods to obtain a drift-free and accurate estimation with lower time consumption. More specifically, we construct a robust maximum leaf spanning tree (RMLST) from the initial scene graph and expand it into a robust graph (RG) to grasp the global picture of the camera distribution and scene structure. The views in the robust graph are then solved in a global manner as an initial estimation. After that, the remaining views are estimated with the proposed community-based local incremental approach to guarantee local accuracy and scalability, and bundle adjustment is conducted to optimize the estimation. Experiments show that our method is robust and free from scene drift, like global SfM, while showing much better efficiency than incremental approaches. Moreover, our algorithm achieves higher accuracy than state-of-the-art methods.
- Published
- 2019
7. Attention-Guided Network for Ghost-Free High Dynamic Range Imaging
- Author
Ian Reid, Qinfeng Shi, Chunhua Shen, Dong Gong, Qingsen Yan, Yanning Zhang, and Anton van den Hengel
- Subjects
FOS: Computer and information sciences, business.industry, Computer science, Computer Vision and Pattern Recognition (cs.CV), Deep learning, Computer Science - Computer Vision and Pattern Recognition, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Optical flow, 020207 software engineering, 02 engineering and technology, GeneralLiterature_MISCELLANEOUS, Hallucinating, High-dynamic-range imaging, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Artificial intelligence, Ghosting, business, High dynamic range, Block (data storage)
- Abstract
Ghosting artifacts caused by moving objects or misalignments are a key challenge in high dynamic range (HDR) imaging for dynamic scenes. Previous methods first register the input low dynamic range (LDR) images using optical flow before merging them, a process that is error-prone and causes ghosts in the results. A very recent work tries to bypass optical flow via a deep network with skip-connections, but it still suffers from ghosting artifacts under severe movement. To avoid ghosting at the source, we propose a novel attention-guided end-to-end deep neural network (AHDRNet) to produce high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use attention modules to guide the merging according to the reference image. The attention modules automatically suppress undesired components caused by misalignment and saturation and enhance desirable fine details in the non-reference images. In addition to the attention model, we use dilated residual dense blocks (DRDBs) to make full use of the hierarchical features and increase the receptive field for hallucinating the missing details. The proposed AHDRNet is a non-flow-based method, which also avoids the artifacts generated by optical-flow estimation error. Experiments on different datasets show that the proposed AHDRNet achieves state-of-the-art quantitative and qualitative results.
- Comment
- Accepted to appear at CVPR 2019
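In its simplest scalar form, the attention-guided merging described above gates each non-reference feature by a weight in (0, 1) computed against the reference. The sketch below uses hand-picked weights purely for illustration; the paper's attention module is a learned CNN, so everything here is an assumption made for the example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_weight(ref_feat, nonref_feat, w_ref=1.0, w_non=-1.0, b=0.0):
    # Toy stand-in for the learned attention module: the more a non-reference
    # feature disagrees with the reference, the smaller the gate.
    return sigmoid(w_ref * ref_feat + w_non * nonref_feat + b)

def attend(ref_feats, nonref_feats):
    # Scale each non-reference feature by its gate before merging, which is
    # what suppresses components caused by misalignment or saturation.
    return [f * attention_weight(r, f) for r, f in zip(ref_feats, nonref_feats)]
```

The key design choice this illustrates is that suppression happens before merging, so misaligned content never enters the fused HDR estimate.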
- Published
- 2019
8. Multi-Scale Dense Networks for Deep High Dynamic Range Imaging
- Author
Qingsen Yan, Pingping Zhang, Ian Reid, Dong Gong, Jinqiu Sun, Qinfeng Shi, and Yanning Zhang
- Subjects
Ground truth, Computer science, business.industry, Dynamic range, Deep learning, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 020207 software engineering, 02 engineering and technology, Iterative reconstruction, Convolutional neural network, Image (mathematics), Set (abstract data type), High-dynamic-range imaging, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Artificial intelligence, business, ComputingMethodologies_COMPUTERGRAPHICS
- Abstract
Generating a high dynamic range (HDR) image from a set of sequential exposures is a challenging task for dynamic scenes. The most common approaches align the input images to a reference image before merging them into an HDR image, but artifacts often appear in cases of large scene motion. State-of-the-art methods based on deep learning can address this problem effectively. In this paper, we propose a novel deep convolutional neural network for HDR generation that attempts to produce more vivid images. The key idea of our method is a coarse-to-fine scheme that gradually reconstructs the HDR image with a multi-scale architecture and residual learning. By learning the relative changes between the inputs and the ground truth, our method can produce artifact-free images and restore missing information. Furthermore, we compare against existing methods for HDR reconstruction and show high-quality results from a set of low dynamic range (LDR) images. We evaluate the results in qualitative and quantitative experiments; our method consistently produces better results than existing state-of-the-art approaches in challenging scenes.
- Published
- 2019
9. Self-Paced Kernel Estimation for Robust Blind Image Deblurring
- Author
Anton van den Hengel, Dong Gong, Yanning Zhang, Mingkui Tan, and Qinfeng Shi
- Subjects
Deblurring, Image quality, Computer science, Kernel density estimation, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 02 engineering and technology, 010501 environmental sciences, 01 natural sciences, Convolution, Kernel (linear algebra), symbols.namesake, Robustness (computer science), 0202 electrical engineering, electronic engineering, information engineering, Image restoration, 0105 earth and related environmental sciences, Pixel, business.industry, Pattern recognition, Kernel (image processing), Gaussian noise, Computer Science::Computer Vision and Pattern Recognition, Outlier, symbols, 020201 artificial intelligence & image processing, Artificial intelligence, business
- Abstract
The challenge in blind image deblurring is to remove the effects of blur with limited prior information about the nature of the blur process. Existing methods often assume that the blurred image is produced by linear convolution with additive Gaussian noise. However, even a small number of outliers from this model can significantly degrade the kernel estimate and hence the resulting image quality. Previous methods mainly rely on simple but unreliable heuristics to identify outliers for kernel estimation. Rather than attempting to identify outliers to the model a priori, we instead propose to sequentially identify inliers and gradually incorporate them into the estimation process. The self-paced kernel estimation scheme we propose generalizes existing self-paced learning approaches: we gradually detect and include reliable inlier pixel sets from the blurred image for kernel estimation. Moreover, we automatically activate a subset of significant gradients w.r.t. the reliable inlier pixels, and then update the intermediate sharp image and the kernel accordingly. Experiments on both synthetic data and real-world images with various kinds of outliers demonstrate the effectiveness and robustness of the proposed method compared to state-of-the-art methods.
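The sequential inlier-selection idea above can be sketched as a threshold that starts strict and gradually relaxes, so that only the most reliable pixels drive the early kernel estimates. The geometric schedule and function names below are illustrative assumptions; the paper interleaves this selection with kernel and sharp-image updates:

```python
def select_inliers(residuals, threshold):
    """Indices of pixels whose model residual is below the current threshold."""
    return [i for i, r in enumerate(residuals) if abs(r) < threshold]

def self_paced_schedule(residuals, init_thresh, growth, steps):
    # Toy self-paced loop: start with the most reliable pixels and gradually
    # relax the threshold so more pixels join the estimation over time.
    thresh = init_thresh
    history = []
    for _ in range(steps):
        history.append(select_inliers(residuals, thresh))
        thresh *= growth
    return history
```

In the full method the residuals themselves change at every step, since each new inlier set yields a better kernel and sharper intermediate image.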
- Published
- 2017
10. From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur
- Author
Dong Gong, Qinfeng Shi, Chunhua Shen, Yanning Zhang, Jie Yang, Lingqiao Liu, Ian Reid, and Anton van den Hengel
- Subjects
FOS: Computer and information sciences, Artificial neural network, business.industry, Computer Vision and Pattern Recognition (cs.CV), Deep learning, Motion blur, Computer Science - Computer Vision and Pattern Recognition, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 020207 software engineering, 02 engineering and technology, Kernel (image processing), Motion field, Motion estimation, Prior probability, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Computer vision, Artificial intelligence, business, Image restoration, ComputingMethodologies_COMPUTERGRAPHICS, Mathematics
- Abstract
Removing pixel-wise heterogeneous motion blur is challenging due to the ill-posed nature of the problem. The predominant solution is to estimate the blur kernel by adding a prior, but the extensive literature on the subject indicates the difficulty of identifying a prior which is suitably informative and general. Rather than imposing a prior based on theory, we propose instead to learn one from the data. Learning a prior over the latent image would require modeling all possible image content. The critical observation underpinning our approach, however, is that learning the motion flow instead allows the model to focus on the cause of the blur, irrespective of the image content. This is a much easier learning task, and it also avoids the iterative process through which latent image priors are typically applied. Our approach directly estimates the motion flow from the blurred image through a fully convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow. Our FCN is the first universal end-to-end mapping from the blurred image to the dense motion flow. To train the FCN, we simulate motion flows to generate synthetic blurred-image/motion-flow pairs, thus avoiding the need for human labeling. Extensive experiments on challenging realistic blurred images demonstrate that the proposed method outperforms the state-of-the-art.
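One concrete piece of the training pipeline described above is turning a motion-flow vector into the blur kernel used to synthesize a blurred image. A minimal sketch under our own simplifications (nearest-pixel rasterization and a fixed kernel size; the paper's data generation is more elaborate):

```python
def motion_to_kernel(dx, dy, size=5, samples=100):
    # Rasterize a single motion-flow vector (dx, dy) into a normalized
    # size x size linear blur kernel by sampling along the motion path,
    # centred on the kernel's middle pixel.
    k = [[0.0] * size for _ in range(size)]
    c = size // 2
    for s in range(samples + 1):
        t = s / samples - 0.5  # parameter along the path, centred on the pixel
        x = int(round(c + t * dx))
        y = int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y][x] += 1.0
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]
```

Convolving a sharp patch with such a kernel, for a per-pixel field of (dx, dy) vectors, yields the synthetic blurred-image/motion-flow pairs the FCN trains on.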
- Published
- 2017
11. Blind Image Deconvolution by Automatic Gradient Activation
- Author
Yanning Zhang, Mingkui Tan, Qinfeng Shi, Anton van den Hengel, and Dong Gong
- Subjects
Blind deconvolution, business.industry, Kernel density estimation, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, 020207 software engineering, Pattern recognition, 02 engineering and technology, Real image, Synthetic data, Kernel (image processing), 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Artificial intelligence, Deconvolution, business, Image gradient, Image restoration, Mathematics
- Abstract
Blind image deconvolution is an ill-posed inverse problem that is often addressed through the application of an appropriate prior. Although some priors are informative in general, many images do not strictly conform to them, leading to degraded kernel estimation performance. More critically, real images may be contaminated by non-uniform noise such as saturation and outliers. Methods for removing specific image areas based on priors have been proposed, but they operate either manually or by fixed criteria. We show here that a subset of the image gradients is adequate to estimate the blur kernel robustly, whether or not the gradient image is sparse. We thus introduce a gradient activation method that automatically selects a subset of gradients of the latent image within a cutting-plane-based optimization scheme for kernel estimation. No extra assumption is used in our model, which greatly improves its accuracy and flexibility. More importantly, the proposed method affords great convenience in handling noise and outliers. Experiments on both synthetic data and real-world images demonstrate the effectiveness and robustness of the proposed method in comparison with state-of-the-art methods.
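The core selection step can be illustrated with a toy rule that keeps only the largest-magnitude gradient entries. The paper chooses the active subset inside a cutting-plane optimization rather than by a fixed top-k, so the rule below is an illustrative simplification:

```python
def activate_gradients(grads, k):
    # Keep only the k largest-magnitude gradient entries and zero the rest,
    # a toy stand-in for gradient activation in kernel estimation.
    order = sorted(range(len(grads)), key=lambda i: abs(grads[i]), reverse=True)
    keep = set(order[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grads)]
```

The point of activating a subset is that outlier-corrupted gradients simply never enter the kernel estimate, instead of needing to be detected and down-weighted afterwards.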
- Published
- 2016
12. Joint Motion Deblurring with Blurred/Noisy Image Pair
- Author
Yanning Zhang, Haisen Li, Jinqiu Sun, and Dong Gong
- Subjects
Deblurring, Noise measurement, business.industry, Noise reduction, Motion blur, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Pattern recognition, Computer Science::Graphics, Kernel (image processing), Robustness (computer science), Computer Science::Computer Vision and Pattern Recognition, Image pair, Computer vision, Artificial intelligence, business, Image restoration, ComputingMethodologies_COMPUTERGRAPHICS, Mathematics
- Abstract
Motion-blurred images commonly arise when using a hand-held camera, especially under dim lighting conditions. Since the edge information lost to motion blur is preserved in a noisy short-exposure image, a blurred/noisy image pair captured under different exposure times can help restore a sharp image. In traditional deblurring methods based on a blurred/noisy image pair, the deblurring process runs in series with the denoising process, so the restoration result is sensitive to the denoising result. In this paper, we propose a robust algorithm that obtains the sharp image by fusing the blurred image and the noisy image. By jointly modeling deblurring and denoising, the restoration result can be optimized by estimating the sharp image and the blur kernel alternately, and thanks to the joint model it is not sensitive to the denoising result. Experimental results demonstrate that the proposed method achieves better performance than state-of-the-art single-image denoising methods, single-image deblurring methods, and blurred/noisy pair deblurring methods.
- Published
- 2014
13. Neighbor combination for atmospheric turbulence image reconstruction
- Author
Shaobo Dang, Jinqiu Sun, Dong Gong, and Yanning Zhang
- Subjects
Sequence, business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Sampling (statistics), Pattern recognition, Iterative reconstruction, Image (mathematics), Resampling, Redundancy (engineering), Artificial intelligence, Deconvolution, Combinatory logic, business, Mathematics
- Abstract
In this paper, we propose a novel neighbor combination framework for the reconstruction of image sequences degraded by atmospheric turbulence. To exploit spatial and temporal redundancy, a neighbor-vector sampling strategy in the spatial and temporal domains is applied, relying on a model of the registered sequence. A combinator of neighbor vectors is then developed based on a resampling maximum-likelihood model and a relative approximation. Relying on the neighbor combination and spatially invariant deconvolution, a clear image is reconstructed. Experiments on real data sets demonstrate the effectiveness of this framework.
- Published
- 2013
14. A UML-based Joint Operation Dynamic Model
- Author
Qiang Zhang, Ying Xing, Wei Jia, Li-li Shan, and Wei-dong Gong
- Subjects
Class (computer programming), Emulation, Engineering, business.industry, Frame (networking), Applications of UML, Unified Modeling Language, Action (philosophy), Joint (building), Software engineering, business, Operation model, computer, Simulation, computer.programming_language
- Abstract
This paper expatiates on the important functions of the joint operation model in military simulation, discusses the modeling of military operations based on UML, and puts forward a dynamic frame class together with the corresponding modeling rules, taking the fraise-clearing action in an attack campaign as an illustrative example.
- Published
- 2012
15. Research on improving time-domain resolution of pulse-echo methods by compensation filtering
- Author
Zhiwei Hou, Dong Gong, Qiufeng Li, and Jie Chen
- Subjects
Frequency response, Engineering, Transducer, business.industry, Acoustics, Electronic engineering, Ultrasonic sensor, Time domain, Filter (signal processing), Ringing, business, Digital filter, Compensation (engineering)
- Abstract
In ultrasonic pulse-echo NDT of structures, transducers usually operate at their harmonic frequencies in order to maximize the amplitude response. However, this mode has some unwanted effects: for instance, the generated/detected signals ring for a long time, which reduces the time-domain resolution. Presented here is a digital filtering method to compensate for the unwanted frequency-response characteristics of transducers. The first step is to establish a discrete transfer-function model of the transducer system using time-domain system identification algorithms. The established model is then realized as a digital compensation filter to reduce the ringing in the measured signals. Experimental verification of the proposed method is carried out: after calibrating and modeling the transducers in a water-immersion test, a compensation model is established and applied to thickness measurement of a concrete specimen. The compensation results show that the transducer ringing can be greatly reduced and the reflection from the bottom of the specimen can be extracted from the originally overlapped signals. To further validate the method, its application to B-scan imaging of a concrete element with an embedded anomaly is also given.
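The compensation idea reduces, in the simplest case, to identifying a model of the ringing and running its inverse as a digital filter. Below, a one-pole resonator stands in for the transducer; the paper identifies a full discrete transfer-function model from calibration data, so this is only a minimal illustration with assumed dynamics:

```python
def ringing(x, a):
    # One-pole resonator as a toy transducer model: each input sample excites
    # an exponentially decaying ring with coefficient a (0 < a < 1).
    y, prev = [], 0.0
    for s in x:
        prev = s + a * prev
        y.append(prev)
    return y

def compensate(y, a):
    # Exact inverse of the one-pole model: y[n] - a * y[n-1]. This plays the
    # role of the compensation filter, collapsing the ring back to the pulse.
    return [y[n] - (a * y[n - 1] if n else 0.0) for n in range(len(y))]
```

With the ring removed, closely spaced echoes that previously overlapped in the ringing tail become separable again, which is exactly the resolution gain the abstract reports.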
- Published
- 2010
16. Key agreement with authenticated between trusted nodes based on self-issued certificate in WSN
- Author
Tao Liu, Gan Huang, and Yi-Dong Gong
- Published
- 2014