3,537 results for "Motion blur"
Search Results
2. Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions
- Author
-
Low, Weng Fei, Lee, Gim Hee, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Towards robust visual odometry by motion blur recovery.
- Author
-
Simin Luan, Cong Yang, Xue Qin, Dongfeng Chen, and Wei Sui
- Subjects
VISUAL odometry, VISUAL perception, CAMERA movement, UNITS of measurement, SCARCITY - Abstract
Introduction: Motion blur, primarily caused by rapid camera movements, significantly challenges the robustness of feature point tracking in visual odometry (VO). Methods: This paper introduces a robust and efficient approach for motion blur detection and recovery in blur-prone environments (e.g., with rapid movements and uneven terrains). Notably, the Inertial Measurement Unit (IMU) is utilized for motion blur detection, followed by a blur selection and restoration strategy within the motion frame sequence. This marks a substantial improvement over traditional visual methods, which are typically slow and less effective, falling short of VO's real-time performance demands. To address the scarcity of datasets catering to the image blurring challenge in VO, we also present the BlurVO dataset. This publicly available dataset is richly annotated and encompasses diverse blurred scenes, providing an ideal environment for motion blur evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
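The IMU-based blur detection in entry 3 is described only at a high level, but the geometry behind it is simple: a camera rotating at rate ω during an exposure of t seconds smears image features by roughly f·ω·t pixels, where f is the focal length in pixels. A minimal sketch of such a detector follows; the function name, the 2 px threshold, and the example values are illustrative assumptions, not the paper's actual rule.

```python
def is_blurred(gyro_rate_rad_s: float, exposure_s: float,
               focal_length_px: float, threshold_px: float = 2.0) -> bool:
    """Flag a frame as motion-blurred using only IMU data.

    For small angles, a rotation of (rate * exposure) radians displaces
    image features by about focal_length_px * rate * exposure pixels.
    """
    blur_length_px = focal_length_px * gyro_rate_rad_s * exposure_s
    return blur_length_px > threshold_px

# Example: 0.8 rad/s rotation and a 10 ms exposure with a 600 px focal
# length give a streak of about 4.8 px, which this rule flags as blurred.
print(is_blurred(0.8, 0.010, 600.0))  # True
```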
4. EHNet: Efficient Hybrid Network with Dual Attention for Image Deblurring.
- Author
-
Ho, Quoc-Thien, Duong, Minh-Thien, Lee, Seongsoo, and Hong, Min-Cheol
- Subjects
CONVOLUTIONAL neural networks, TRANSFORMER models, FEATURE extraction, IMAGE processing, IMAGE sensors, DEEP learning - Abstract
The motion of an object or camera platform blurs the acquired image. This degradation is a major cause of poor-quality images from an imaging sensor. Therefore, developing an efficient deep-learning-based image processing method to remove the blur artifact is desirable. Deep learning has recently demonstrated significant efficacy in image deblurring, primarily through convolutional neural networks (CNNs) and Transformers. However, the limited receptive fields of CNNs restrict their ability to capture long-range structural dependencies. In contrast, Transformers excel at modeling these dependencies, but they are computationally expensive for high-resolution inputs and lack the appropriate inductive bias. To overcome these challenges, we propose an Efficient Hybrid Network (EHNet) that employs CNN encoders for local feature extraction and Transformer decoders with a dual-attention module to capture spatial and channel-wise dependencies. This synergy facilitates the acquisition of rich contextual information for high-quality image deblurring. Additionally, we introduce the Simple Feature-Embedding Module (SFEM) to replace the pointwise and depthwise convolutions to generate simplified embedding features in the self-attention mechanism. This innovation substantially reduces computational complexity and memory usage while maintaining overall performance. Finally, through comprehensive experiments, our compact model yields promising quantitative and qualitative results for image deblurring on various benchmark datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Image Motion Blur Removal Algorithm Based on Generative Adversarial Network.
- Author
-
Kim, Jongchol, Kim, Myongchol, Kim, Insong, Han, Gyongwon, Jong, Myonghak, and Ri, Gwuangwon
- Subjects
GENERATIVE adversarial networks, COMPUTER vision, OBJECT recognition (Computer vision), DEEP learning, VISUAL fields, IMAGE reconstruction - Abstract
The restoration of blurred images is a crucial topic in the field of machine vision, with far-reaching implications for enhancing information acquisition quality, improving algorithmic accuracy, and enriching image texture. Efforts to mitigate blur have progressed from statistical approaches to deep learning techniques. In this paper, we propose a Generative Adversarial Network (GAN)-based image restoration method to address the limitations of existing techniques in restoring color and detail in motion-blurred images. To reduce the computational complexity of generative adversarial networks and the vanishing gradient during learning, a U-Net-based generator is used, configured to emphasize the channel and spatial characteristics of the original information through a proposed CSAR (Channel and Spatial Attention Residual) block module rather than a simple concatenation operation. To validate the efficacy of the algorithm, comprehensive comparative experiments were conducted on the GoPro dataset. Experimental results show that the peak signal-to-noise ratio is improved compared with the SRN and MPRNet algorithms, with good image restoration ability. Object detection experiments using YOLOv3 showed that the proposed algorithm can generate deblurred images with higher information quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A novel method for measuring center-axis velocity of unmanned aerial vehicles through synthetic motion blur images
- Author
-
Quanxi Zhan, Yanmin Zhou, Junrui Zhang, Chenyang Sun, Runjie Shen, and Bin He
- Subjects
Hydroelectric power plants, UAV, Motion blur, Axial velocity measurement, Electronic computers. Computer science, QA75.5-76.95, Computer engineering. Computer hardware, TK7885-7895 - Abstract
Abstract Accurate velocity measurement of unmanned aerial vehicles (UAVs) is essential in various applications. Traditional vision-based methods rely heavily on visual features, which are often inadequate in low-light or feature-sparse environments. This study presents a novel approach to measure the axial velocity of UAVs using motion blur images captured by a UAV-mounted monocular camera. We introduce a motion blur model that synthesizes imaging from neighboring frames to enhance motion blur visibility. The synthesized blur frames are transformed into spectrograms using the Fast Fourier Transform (FFT) technique. We then apply a binarization process and the Radon transform to extract light-dark stripe spacing, which represents the motion blur length. This length is used to establish a model correlating motion blur with axial velocity, allowing precise velocity calculation. Field tests in a hydropower station penstock demonstrated an average velocity error of 0.048 m/s compared to ultra-wideband (UWB) measurements. The root-mean-square error was 0.025, with an average computational time of 42.3 ms and CPU load of 17%. These results confirm the stability and accuracy of our velocity estimation algorithm in challenging environments.
- Published
- 2024
- Full Text
- View/download PDF
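Entries 6 and 7 index the same paper. Its pipeline (FFT spectrogram, binarization, Radon transform, stripe spacing) is concrete enough to sketch; the version below is a rough approximation using scikit-image, where the mean-based binarization and the dip-spacing heuristic are simplifications rather than the authors' exact choices.

```python
import numpy as np
from skimage.transform import radon

def estimate_blur_length(gray: np.ndarray) -> float:
    """Estimate motion-blur length from the stripe spacing of the
    log-magnitude spectrum, following the pipeline sketched above."""
    # Linear motion blur imprints a striped (sinc-like) pattern on the spectrum.
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    binary = (spec > spec.mean()).astype(float)    # expose light/dark stripes
    theta = np.arange(180.0)
    sinogram = radon(binary, theta=theta, circle=False)
    col = int(np.argmax(sinogram.var(axis=0)))     # stripe direction
    profile = sinogram[:, col]
    dips = np.where(profile < profile.mean())[0]   # dark-stripe positions
    spacing = float(np.median(np.diff(dips)))
    # Sinc zeros sit N / L pixels apart for an N-pixel image blurred over L.
    return gray.shape[0] / spacing
```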
7. A novel method for measuring center-axis velocity of unmanned aerial vehicles through synthetic motion blur images.
- Author
-
Zhan, Quanxi, Zhou, Yanmin, Zhang, Junrui, Sun, Chenyang, Shen, Runjie, and He, Bin
- Subjects
FAST Fourier transforms, VELOCITY, RADON transforms, MOTION, VELOCITY measurements, HYDROELECTRIC power plants - Abstract
Accurate velocity measurement of unmanned aerial vehicles (UAVs) is essential in various applications. Traditional vision-based methods rely heavily on visual features, which are often inadequate in low-light or feature-sparse environments. This study presents a novel approach to measure the axial velocity of UAVs using motion blur images captured by a UAV-mounted monocular camera. We introduce a motion blur model that synthesizes imaging from neighboring frames to enhance motion blur visibility. The synthesized blur frames are transformed into spectrograms using the Fast Fourier Transform (FFT) technique. We then apply a binarization process and the Radon transform to extract light-dark stripe spacing, which represents the motion blur length. This length is used to establish a model correlating motion blur with axial velocity, allowing precise velocity calculation. Field tests in a hydropower station penstock demonstrated an average velocity error of 0.048 m/s compared to ultra-wideband (UWB) measurements. The root-mean-square error was 0.025, with an average computational time of 42.3 ms and CPU load of 17%. These results confirm the stability and accuracy of our velocity estimation algorithm in challenging environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Computer Vision Techniques Demonstrate Robust Orientation Measurement of the Milky Way Despite Image Motion.
- Author
-
Tao, Yiting, Perera, Asanka, Teague, Samuel, McIntyre, Timothy, Warrant, Eric, and Chahl, Javaan
- Subjects
MILKY Way, COMPUTER vision, TEST methods, PSYCHOLOGICAL resilience, SPECIES - Abstract
Many species rely on celestial cues as a reliable guide for maintaining heading while navigating. In this paper, we propose a method that extracts the Milky Way (MW) shape as an orientation cue in low-light scenarios. We tested the method on both real and synthetic images and demonstrate that it remains accurate and reliable under motion blur such as that caused by rotational vibration and stabilisation artefacts. The technique presented achieves an angular accuracy between a minimum of 0.00° and a maximum of 0.08° for real night sky images, and between a minimum of 0.22° and a maximum of 1.61° for synthetic images. The imaging of the MW is largely unaffected by blur. We speculate that the use of the MW as an orientation cue has evolved because, unlike individual stars, it is resilient to motion blur caused by locomotion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Recurrent Deep Architecture for Enhancing Indoor Camera Localization Using Motion Blur Elimination.
- Author
-
Alam, Muhammad S., Mohamed, Farhan B., Selamat, Ali, and Hossain, AKM B.
- Subjects
RECURRENT neural networks, ROBOT motion, COMPUTER vision, MOBILE robots, FREQUENCY spectra - Abstract
Rapid growth and technological improvements in computer vision have enabled indoor camera localization. Accurate camera localization in an indoor environment is challenging because it involves many complex problems, and motion blur is one of them. Motion blur introduces significant errors, degrades image quality, and affects feature matching, making it challenging to determine the camera pose accurately. Improving camera localization accuracy is still necessary for some robotic applications. In this study, we propose a recurrent neural network (RNN) approach to solve the indoor camera localization problem using motion blur reduction. Motion blur in an image is detected by analyzing its frequency spectrum. A low-frequency component indicates motion blur, and by investigating the direction of these low-frequency components, the location and amount of blur are estimated. Then, Wiener filtering deconvolution removes the blur and recovers a clear copy of the original image. The performance of the proposed approach is evaluated by comparing the original and blurred images using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). After that, the camera pose is estimated from the deblurred images or videos using a recurrent neural architecture. The average camera pose error obtained through our approach is (0.16 m, 5.61°). In two recent studies, Deep Attention and CGAPoseNet, the average pose error is (19 m, 6.25°) and (0.27 m, 9.39°), respectively. The results obtained through the proposed approach improve on these results. As a consequence, applications of indoor camera localization, such as mobile robots and guide robots, will work more accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
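The restoration step in entry 9 is classical Wiener deconvolution, which ships with scikit-image. A minimal sketch follows, assuming the PSF is already known; the hand-built 15-pixel horizontal streak stands in for the spectrum-based estimate the abstract describes.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import color, data, restoration

# Stand-in PSF: a 15-pixel horizontal motion streak.
psf = np.zeros((15, 15))
psf[7, :] = 1.0 / 15.0

# Blur a test image, then invert the blur with Wiener deconvolution
# (the unsupervised variant picks the regularization automatically).
image = color.rgb2gray(data.astronaut())
blurred = fftconvolve(image, psf, mode="same")
deblurred, _ = restoration.unsupervised_wiener(blurred, psf)
```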
10. Spatiotemporal Phase Aperture Coding for Motion Deblurring
- Author
-
Elmalem, Shay, Giryes, Raja, and Liang, Jinyang, editor
- Published
- 2024
- Full Text
- View/download PDF
11. Underwater Image Enhancement Based on the Fusion of PUIENet and NAFNet
- Author
-
Li, Chao, Yang, Bo, Sheng, Bin, editor, Bi, Lei, editor, Kim, Jinman, editor, Magnenat-Thalmann, Nadia, editor, and Thalmann, Daniel, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Development of a Camera Motion Estimation Method Utilizing Motion Blur in Images
- Author
-
Zhao, Yuxin, Ishii, Hirotake, Shimoda, Hiroshi, Stephanidis, Constantine, editor, Antona, Margherita, editor, Ntoa, Stavroula, editor, and Salvendy, Gavriel, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Enhancement of Motion Blurred Crack Images Based on Conditional Generative Adversarial Network
- Author
-
Wang, Wenjun, Su, Chao, and Han, Guohui
- Published
- 2024
- Full Text
- View/download PDF
14. Smartphone video motion deblur order model
- Author
-
Resen Adhab Sallama
- Subjects
smartphone platform, motion blur, gaussian orientation, blur filter, loss function, Optics. Light, QC350-467, Electronic computers. Computer science, QA75.5-76.95 - Abstract
A method is proposed to eliminate slight motion blur in images, implemented in three stages. Blur estimation is achieved using prior information on the distribution of image gradients. A Gaussian Orientation Filter (GOF) is fitted to this prior information to find the regression coefficients. The order model combines different estimated GOF parameters to generate a blur-removal filter. The estimated parameters are fixed and applied to the image to produce a result without boosting noise and unwanted artifacts. The proposed model is optimized by minimizing a loss function. The method applies to outdoor and indoor video acquired by modern smartphones. Experimental results are accurate for the full regression motion-blur model. The example video dataset comprises 23 s of video with a dataset size of 228 MP. Evaluation is based on computation time, the Structural Similarity Index Measure, and the Peak Signal-to-Noise Ratio. Experimental results show that the image-artifact phase consumes less computational time, and the proposed model minimizes the cost function and produces good image quality.
- Published
- 2024
- Full Text
- View/download PDF
15. Fusion Images Techniques for Motion Pixel in A blurred Image.
- Author
-
Mohammed, Nawras Badeaa, Mohamad, Haidar J., and Abbas, Heba Kh.
- Subjects
IMAGE fusion, DIGITAL image processing, CROSS correlation, PIXELS, STANDARD deviations, DIGITAL images, STATISTICAL correlation - Abstract
Fusing digital images is an essential step in digital image processing, as it allows information from two or more images to be integrated into a single image of high quality and clarity. This work fused images resulting from motion blur (left and right) with blur block sizes of 3, 5, 7, 9, and 11. The image resulting from the blur towards the right was combined with the image resulting from the blur towards the left for the same degree of blur, using traditional techniques such as addition and multiplication as well as newly suggested techniques, namely absolute real standard deviation, binary standard deviation, real covariance, and binary covariance. The data were examined by quality assessment methods with a reference (Mutual Information, Correlation Coefficient, Structural Similarity Index Metric, Structural Content, and Normalized Cross Correlation) and without a reference (Blind/Referenceless Image Spatial Quality Evaluator, Natural Image Quality Evaluator, Perception-based Image Quality Evaluator, and Entropy). The best combination methods were binary covariance and binary standard deviation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. P‐39: Simulation of Perceived Motion Blur on 480Hz OLED Monitor.
- Author
-
Yang, ChangMo, Lim, Kyongho, and Park, Tae-Yong
- Subjects
GAMES industry, FORECASTING - Abstract
Refresh rate is an important specification for gaming OLED monitors. With the development of the gaming industry and the graphics processing unit (GPU), the demand for gaming monitors that support high refresh rates is increasing. In this paper, simulation methods for perceived motion blur are proposed to predict the degree of blur according to the refresh rate. Experimental results indicate that the proposed simulation methods are quite effective in predicting the degree of motion blur. In addition, this paper presents the predicted degree of blur as the refresh rate increases up to 480 Hz. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
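The paper's simulation method is not given in the abstract, but the first-order model behind perceived motion blur on sample-and-hold displays is standard: an eye tracking an object that moves v pixels per second across a display holding each frame for 1/f seconds sees a smear of v/f pixels. A quick illustration of why 480 Hz matters:

```python
def perceived_blur_px(scroll_speed_px_s: float, refresh_hz: float) -> float:
    """Blur extent on an ideal sample-and-hold display: the eye tracks
    the moving target while each frame is held for 1/refresh_hz s."""
    return scroll_speed_px_s / refresh_hz

# Doubling the refresh rate halves the blur width.
for hz in (60, 120, 240, 480):
    print(f"{hz:3d} Hz -> {perceived_blur_px(1920.0, hz):5.1f} px")
```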
17. P‐35: Evaluation of the Performance of Gaming Monitors and Visual Fatigue.
- Author
-
Blankenbach, Karlheinz and Bhatti, Faraz
- Subjects
REFLECTANCE - Abstract
We evaluated the performance of three flat, large-sized gaming monitors with 35 subjects regarding: the recognition of Landolt C targets with short visualization times and fast movement; a search task using circles in dark and bright conditions; and visual fatigue. An OLED monitor performed significantly better than the two LCDs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. 58‐4: Late‐News Paper: Dynamic MTF Measurements of Gaming Monitors.
- Author
-
Masaoka, Kenichiro and Bergquist, Johan
- Subjects
TRANSFER functions, ESPORTS, METROLOGY, VELOCITY, CAMERAS - Abstract
Recent gaming monitors claim high refresh rates of ≥240 Hz with response times ≤1 ms. However, these temporal metrics do not clearly reveal their true spatiotemporal resolution, which depends on the velocity at which the images are displayed. This is the first study to demonstrate the line‐based dynamic modulation transfer function (MTF) measurement method for comparing the performances of different gaming monitors. A single line was scrolled on the monitors, and a small region of the screen was captured with a high‐speed camera during one display refresh period. The dynamic MTF results elucidate the dependence of spatiotemporal resolution characteristics on the scroll speed, refresh rate, and response time of the monitors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Real-Time Motion Blur Using Multi-Layer Motion Vectors.
- Author
-
Lee, Donghyun, Kwon, Hyeoksu, and Oh, Kyoungsu
- Subjects
IMAGE processing, MOTION - Abstract
Traditional methods for motion blur, often relying on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, with only the nearest color values sampled for depth testing. The average color sampled at each point becomes that of the motion blur. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measures (SSIM) of 0.8 and 0.92, which represent substantial improvements over the accumulation method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
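The backward search described in entry 19 (and its duplicate, entry 39) can be illustrated with a single-layer CPU version. This is a deliberate simplification of the paper's multi-layer method, written for clarity rather than speed; the sample-acceptance test (a sample counts only when it carries the same motion vector as its origin) follows the abstract.

```python
import numpy as np

def motion_blur_single_layer(color_buf, motion, n_samples=8):
    """Average colors sampled backward along each pixel's motion vector,
    keeping a sample only when it belongs to the same moving surface."""
    h, w, _ = color_buf.shape
    out = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 1))
    for y in range(h):
        for x in range(w):
            mv = motion[y, x]
            for i in range(n_samples):
                t = i / max(n_samples - 1, 1)
                sy, sx = int(round(y - mv[1] * t)), int(round(x - mv[0] * t))
                if (0 <= sy < h and 0 <= sx < w
                        and np.array_equal(motion[sy, sx], mv)):
                    out[y, x] += color_buf[sy, sx]
                    counts[y, x] += 1
    return out / np.maximum(counts, 1)

# Demo: 32x32 frame with uniform rightward motion of 6 px per frame.
frame = np.random.rand(32, 32, 3)
mvs = np.tile(np.array([6.0, 0.0]), (32, 32, 1))
blurred = motion_blur_single_layer(frame, mvs)
```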
20. Improved Transformer-Based Deblurring of Commodity Videos in Dynamic Visual Cabinets.
- Author
-
Huang, Shuangyi, Liang, Qianjie, Xie, Kai, He, Zhengfang, Wen, Chang, He, Jianbiao, and Zhang, Wei
- Subjects
TRANSFORMER models, CONVOLUTIONAL neural networks, FEATURE extraction, VIDEOS, SIGNAL-to-noise ratio, INTERACTIVE videos - Abstract
In dynamic visual cabinets, the motion blur that occurs when consumers take out commodities reduces the accuracy of commodity detection. Although Transformer-based video deblurring networks have recently achieved good results compared with convolutional neural networks in some blurring scenarios, they still struggle with the non-uniform blur that occurs when consumers pick up commodities, such as the difficulty of aligning blurred video frames of small commodities and the underutilization of the effective information between commodity video frames. Therefore, an improved Transformer video deblurring network is proposed. Firstly, a multi-scale Transformer feature extraction method is utilized for non-uniform blurring. Secondly, for the problem of aligning small-commodity blurred video frames, a temporal interactive attention mechanism is designed for video frame alignment. Finally, a feature recurrent fusion mechanism is introduced to supplement the effective information of commodity features. The experimental results show that the proposed method has practical significance in improving the accuracy of commodity detection. Moreover, compared with the recent Transformer deblurring algorithm Video Restoration Transformer, the Peak Signal-to-Noise Ratio of this paper's algorithm is higher by 0.23 dB and 0.81 dB on the Deep Video Deblurring dataset and the Fuzzy Commodity Dataset, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. An Edge-Enhanced Branch for Multi-Frame Motion Deblurring
- Author
-
Sota Moriyama and Koichi Ichige
- Subjects
Motion blur, deblurring, edge enhancement, image restoration, optical flow, SSIM, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Non-uniform deblurring is one of the most important image restoration tasks for providing appropriate information for subsequent applications that require image recognition. Conventional deep learning-based multi-frame deblurring methods collectively handle many types of non-uniform blurring, such as camera shakes and motion blur. However, edge and high-frequency component restoration is still insufficient for severe motion blur. This paper proposes an auxiliary edge-enhanced branch to support motion blur restoration for deep learning-based multi-frame deblurring methods. The background region in an image with little motion generally has more edge information, whereas the moving object region lacks high-frequency components. Thus, we propose a motion orthogonal edge (MOE) feature that extracts only the edge information of moving objects by computing the pixel-wise inner product between the edge information obtained by Sobel filters and the optical flow representing motion in the image. MOEs can emphasize only the edges of moving objects excluding the backgrounds. In this paper, we add an edge-enhanced branch that computes MOEs to a conventional multi-frame deblurring method, the spatio-temporal deformable attention network, and call it ESTDANet. We introduce additional frequency reconstruction loss to restore high-frequency components and compare our proposed ESTDANet with the conventional baseline method in our comparative experiments. Furthermore, we introduce motion-weighted SSIM maps to distinguish the deblurring accuracy in motion regions spatially. The results show that our edge-enhanced branch aids edge restoration in the motion deblurring of conventional methods.
- Published
- 2024
- Full Text
- View/download PDF
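The MOE feature in entry 21 is specified precisely enough to sketch: a pixel-wise inner product between Sobel gradients and optical flow, which suppresses background edges where the flow is near zero. A minimal OpenCV version follows, assuming Farneback flow as a stand-in for whatever flow estimator the paper uses; inputs are two consecutive grayscale uint8 frames.

```python
import cv2
import numpy as np

def motion_orthogonal_edges(prev_gray: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """One reading of the MOE idea: weight Sobel gradients by optical
    flow so that only edges of moving objects survive."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # Pixel-wise inner product of image gradient and flow: static
    # background (near-zero flow) contributes almost nothing.
    moe = np.abs(gx * flow[..., 0] + gy * flow[..., 1])
    return moe / (moe.max() + 1e-8)
```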
22. EHNet: Efficient Hybrid Network with Dual Attention for Image Deblurring
- Author
-
Quoc-Thien Ho, Minh-Thien Duong, Seongsoo Lee, and Min-Cheol Hong
- Subjects
convolution neural networks, dual attention module, hybrid architecture, image deblurring, motion blur, Transformer, Chemical technology, TP1-1185 - Abstract
The motion of an object or camera platform blurs the acquired image. This degradation is a major cause of poor-quality images from an imaging sensor. Therefore, developing an efficient deep-learning-based image processing method to remove the blur artifact is desirable. Deep learning has recently demonstrated significant efficacy in image deblurring, primarily through convolutional neural networks (CNNs) and Transformers. However, the limited receptive fields of CNNs restrict their ability to capture long-range structural dependencies. In contrast, Transformers excel at modeling these dependencies, but they are computationally expensive for high-resolution inputs and lack the appropriate inductive bias. To overcome these challenges, we propose an Efficient Hybrid Network (EHNet) that employs CNN encoders for local feature extraction and Transformer decoders with a dual-attention module to capture spatial and channel-wise dependencies. This synergy facilitates the acquisition of rich contextual information for high-quality image deblurring. Additionally, we introduce the Simple Feature-Embedding Module (SFEM) to replace the pointwise and depthwise convolutions to generate simplified embedding features in the self-attention mechanism. This innovation substantially reduces computational complexity and memory usage while maintaining overall performance. Finally, through comprehensive experiments, our compact model yields promising quantitative and qualitative results for image deblurring on various benchmark datasets.
- Published
- 2024
- Full Text
- View/download PDF
23. Deep learning in motion deblurring: current status, benchmarks and future prospects
- Author
-
Xiang, Yawen, Zhou, Heng, Li, Chengyang, Sun, Fangwei, Li, Zhongbo, and Xie, Yongqiang
- Published
- 2024
- Full Text
- View/download PDF
24. Integrating DeblurGAN and CNN to improve the accuracy of motion blur X-Ray image classification.
- Author
-
Chiu, Ming-Chuan and Wei, Chia-Jung
- Abstract
X-rays are common tools used in clinical diagnoses. During the X-ray process, a patient is required to lie still or to hold a deep breath. However, when dealing with patients who may shake involuntarily or with restless children, the required conditions are not always achievable, and blurred images often occur. If these patients receive repeated X-rays, they are exposed to additional radiation. This research integrates the DeblurGAN model and a convolutional neural network (CNN) to increase the accuracy of classifying clinical X-ray motion-blurred images, eliminating the need for repeated clinical X-rays. Results show that the classification accuracy of the motion-blur group was 87% while that of the deblurred group was 91%, indicating the process not only improves classification accuracy but also restores the deblurred images to 92% of the original-image level. This study verifies the feasibility of applying the DeblurGAN model to medical X-ray image deblurring, helping resolve the challenges presented by a lack of X-ray motion-blur image data and allowing InceptionV3 to accurately identify the problem. This method can be further applied to other motion-blurred medical images (such as MRI and CT) to improve overall clinical diagnosis efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. YUVDR: A residual network for image deblurring in YUV color space.
- Author
-
Zhang, Meng, Wang, Haidong, and Guo, Yina
- Subjects
COLOR space, COMPUTER vision, IMAGE stabilization, DIGITAL video, GRAPHICS processing units - Abstract
Motion blur caused by camera shake and object motion in 3D space has long been a challenge in computer vision. Although RGB images are commonly used as input data for CNN-based image deblurring, their inherent issues of color overlap and high dimensionality can limit performance. To address these problems, we propose YUVDR, a residual network based on the YUV color space, for image deblurring. By using YUV images, we mitigate the issues of color overlap and mutual influence. We introduce novel loss functions and conduct experiments on three datasets, namely GoPro, DVD and NFS, which offer a wide range of image quality levels, scene complexities, and types of motion blur. Our proposed method outperforms state-of-the-art algorithms, yielding a 3-5 dB improvement in the PSNR of test results. In addition, utilizing the YUV color space as the input data greatly reduces the number of training parameters and the model size, by approximately 15 times. This optimization of GPU memory not only improves training efficiency but also reduces testing time in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
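The core preprocessing move in entry 25, feeding the network YUV planes instead of RGB, is a one-line conversion in OpenCV. A minimal sketch (the random array is just a stand-in frame):

```python
import cv2
import numpy as np

# Stand-in for a blurry video frame; any H x W x 3 BGR array works.
bgr = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)  # decorrelate luma from chroma
y, u, v = cv2.split(yuv)
# The Y (luma) plane carries most of the structure a deblurring network
# must restore; slowly varying chroma is one plausible source of the
# parameter and memory savings the abstract reports.
```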
26. Estimating Acceleration from a Single Uniform Linear Motion-Blurred Image using Homomorphic Mapping and Machine Learning.
- Author
-
Alexander Cortés-Osorio, Jimy, Bernardo Gómez-Mendoza, Juan, and Carlos Riaño-Rojas, Juan
- Subjects
MACHINE learning, DEEP learning, SUPPORT vector machines, GAUSSIAN processes, MOTION, REGRESSION trees, COMPUTER vision
- Published
- 2024
- Full Text
- View/download PDF
27. P‐15.32: A Novel Evaluation Method of Organic Light‐Emitting Diode Motion Blur.
- Author
-
Zhang, Yaoren, Zhang, Zhengchuan, Wang, Bo, and Luo, Zhongming
- Subjects
LIGHT emitting diodes, EVALUATION methodology - Abstract
The motion blur of organic light-emitting diodes (OLEDs) consists of brightness and color transitions. This paper presents a new, comprehensive evaluation method that quantifies motion blur more intuitively. By tri-stimulus waveform integration, we obtain the brightness and color evolution trends for every frame after data-signal input. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. 62‐3: The Effect of OLED Device Capacitance on Low Gray Levels Motion Blur.
- Author
-
Zhao, Xuesen, Zhang, Xianping, Zhao, Wei, Xu, Jin, Wang, Hongyu, and Song, Wonjun
- Subjects
ORGANIC light emitting diodes, THIN film transistors, ELECTRIC capacity - Abstract
In this study, we demonstrate the phenomenon of motion blur in mobile phones during application use and analyze the relevant factors that affect motion blur at low gray levels, including the thin-film transistor (TFT), the organic light-emitting diode (OLED), and the electronic code. Our findings indicate that OLED capacitance has a more significant impact on motion blur than the TFT or the electronic code. Furthermore, we discovered that OLED capacitance is inversely proportional to the brightness of the first frame when switching from a black screen to white/red/green/blue screens. By adjusting hole accumulation at the OLED interface and the thickness of the common layer, it is possible to reduce OLED capacitance while improving the objective value and subjective visual performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. 10‐2: Research on low brightness Motion blur in low temperature poly‐silicon display field.
- Author
-
Sun, Guangyuan, Zhao, Xuesen, Shangguan, Xiuning, Ma, Zhili, Huang, Genmao, and Zhu, Xiujian
- Subjects
ORGANIC light emitting diodes, GRAYSCALE model, LOW temperatures, CELL phones, TELEPHONE calls - Abstract
Our research found that there are low-brightness application scenarios in the process of using a mobile phone. The smart interface option of the mobile phone is called "dark scene mode/dark color mode": under this mode, the brightness of the phone is adjusted to the lowest level, the background interface is black, and the displayed font is white or a color with gray scale. When reading a novel or dragging the phone interface, the white text displayed on the screen becomes dimmer and its color changes, which is called "motion blur". In the research process, we compared the low-temperature poly-silicon TFT device with the OLED device made of organic light-emitting material. The results showed that under low brightness, motion blur exhibited different response states when the OLED material or its related process parameters were changed, while the low-temperature poly-silicon TFT device responded only under high-brightness conditions. As brightness decreases, there is only a weak correlation between the TFT device and motion blur. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Linear Blur Direction Estimation Using a Convolutional Neural Network
- Author
-
Nasonov, Andrey, Nasonova, Alexandra, Rousseau, Jean-Jacques, editor, and Kapralos, Bill, editor
- Published
- 2023
- Full Text
- View/download PDF
31. Estimating Acceleration from a Single Uniform Linear Motion-Blurred Image using Homomorphic Mapping and Machine Learning
- Author
-
Jimy Alexander Cortés-Osorio, Juan Bernardo Gómez-Mendoza, and Juan Carlos Riaño-Rojas
- Subjects
acceleration, computer vision, deep learning, machine learning, motion blur, vision-based measurement, Engineering (General). Civil engineering (General), TA1-2040 - Abstract
Context: Vision-based measurement (VBM) systems are becoming popular as an affordable and suitable alternative for scientific and engineering applications. When cameras are used as instruments, motion blur usually emerges as a recurrent and undesirable image degradation that in fact contains kinematic information, which is usually dismissed. Method: This paper introduces an alternative approach to measure relative acceleration from a single invariant, uniformly accelerated, linear motion-blurred image. This is done by using homomorphic mapping to extract the characteristic Point Spread Function (PSF) of the blurred image, together with machine learning regression. A total of 125 uniformly accelerated motion-blurred pictures were taken in a light- and distance-controlled environment, at five different accelerations ranging between 0.64 and 2.4 m/s2. This study evaluated 19 regression variants, including tree ensembles, Gaussian process regression (GPR), linear regression, support vector machines (SVM), and regression trees. Results: The best RMSE result corresponds to GPR (Matern 5/2), with 0.2547 m/s2 and a prediction speed of 530 observations per second (obs/s). Additionally, some novel deep learning methods were evaluated, with the best RMSE value (0.4639 m/s2) obtained for Inception ResNet v2 at a prediction speed of 11 obs/s. Conclusions: The proposed method (homomorphic mapping and machine learning) is a valid alternative for calculating acceleration from invariant motion blur in real-time applications when additive noise is not dominant, even surpassing the deep learning techniques evaluated.
- Published
- 2024
- Full Text
- View/download PDF
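Entry 31 duplicates entry 26 and names its two ingredients: homomorphic mapping of the blurred image's spectrum and GPR with a Matern 5/2 kernel, both available in NumPy and scikit-learn. The feature extraction below is a guess at the spirit of the mapping, not the paper's exact construction, and the training data here are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def homomorphic_features(gray: np.ndarray, n_features: int = 64) -> np.ndarray:
    """Log-magnitude spectrum: uniformly accelerated blur leaves a
    ripple pattern that encodes the PSF."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    row = spec[spec.shape[0] // 2]                # central frequency profile
    idx = np.linspace(0, row.size - 1, n_features).astype(int)
    return row[idx]

# Synthetic illustration; in the paper, X comes from 125 real blurred
# photographs and y from the known accelerations in m/s2.
rng = np.random.default_rng(0)
X = np.stack([homomorphic_features(rng.random((64, 64))) for _ in range(20)])
y = rng.uniform(0.64, 2.4, size=20)
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)  # Matern 5/2
print(gpr.predict(X[:1]))
```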
32. Multiple frequency–spatial network for RGBT tracking in the presence of motion blur.
- Author
-
Fan, Shenghua, Chen, Xi, He, Chu, Yu, Lei, Mao, Zhongjie, and Zheng, Yujin
- Subjects
INFRARED imaging, THERMOGRAPHY, CAMERA movement, INFORMATION networks, CAMERAS - Abstract
RGBT tracking combines visible and thermal infrared images to achieve tracking and faces challenges from the motion blur caused by camera and target movement. In this study, we observe that tracking under motion blur is significantly affected by both frequency and spatial aspects. Blurred targets still exhibit sharp texture details that are represented as high-frequency information, but existing trackers capture low-frequency components while ignoring high-frequency information. To enhance the representation of sharp information in blurred scenes, we introduce multi-frequency and multi-spatial information into the network, called FSBNet. First, we construct a modality-specific unsymmetrical architecture and integrate an adaptive soft-threshold mechanism into a DCT-based multi-frequency channel attention adapter (DFDA), which adaptively integrates rich multi-frequency information. Second, we propose a masked frequency-based translation adapter (MFTA) to refine drifting failure boxes caused by camera motion. Moreover, we find that small targets are more affected by motion blur than larger targets, and we mitigate this issue by designing a cross-scale mutual conversion adapter (CFCA) between the frequency and spatial domains. Extensive experiments on the GTOT, RGBT234 and LasHeR benchmarks demonstrate the promising performance of our method in the presence of motion blur. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. A 3.0 µm Pixels and 1.5 µm Pixels Combined Complementary Metal-Oxide Semiconductor Image Sensor for High Dynamic Range Vision beyond 106 dB †.
- Author
-
Iida, Satoko, Kawamata, Daisuke, Sakano, Yorito, Yamanaka, Takaya, Nabeyoshi, Shohei, Matsuura, Tomohiro, Toshida, Masahiro, Baba, Masahiro, Fujimori, Nobuhiko, Basavalingappa, Adarsh, Han, Sungin, Katayama, Hidetoshi, and Azami, Junichiro
- Subjects
PIXELS, HIGH dynamic range imaging, CMOS image sensors, TRAFFIC signs & signals - Abstract
We propose a new concept image sensor suitable for viewing and sensing applications. We report a CMOS image sensor with a pixel architecture consisting of a 1.5 μm pixel with four-floating-diffusions-shared pixel structures and a 3.0 μm pixel with an in-pixel capacitor. These pixels, four small quadrate pixels and one big square pixel, are also called quadrate-square pixels and are arranged in a staggered-pitch array. The 1.5 μm pixel pitch allows for a resolution high enough to recognize distant road signs. The 3.0 μm pixel with in-pixel capacitance provides two types of signal outputs: a low-noise signal with high conversion efficiency and a highly saturated signal output, resulting in a high dynamic range (HDR). Two types of signals with long exposure times are read out from the vertical pixel, and four types of signals are read out from the horizontal pixel. In addition, two signals with short exposure times are read out again from the square pixel, for a total of eight different signals. This allows two rows to be read out simultaneously while reducing motion blur. The architecture achieves an HDR of 106 dB and LED flicker mitigation (LFM) while remaining free of motion artifacts and motion blur. As a result, moving subjects can be accurately recognized and detected with good color reproducibility in any lighting environment, allowing a single sensor to deliver the performance required for viewing and sensing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Discovery, Quantitative Recurrence, and Inhibition of Motion-Blur Hysteresis Phenomenon in Visual Tracking Displacement Detection.
- Author
-
Shi, Lixiang and Tan, Jianping
- Subjects
IMAGE registration, FIX-point estimation, HYSTERESIS, FLOW charts, ARTIFICIAL satellite tracking, STATISTICAL correlation - Abstract
Motion blur is common in video tracking and detection, and severe motion blur can lead to failure in tracking and detection. In this work, a motion-blur hysteresis phenomenon (MBHP) was discovered, which affects tracking and detection accuracy as well as image annotation. In order to accurately quantify MBHP, this paper proposes a motion-blur dataset construction method based on a motion-blur operator (MBO) generation method and self-similar object images, and designs APSF, an MBO generation method. The optimized sub-pixel estimation method of the point spread function (SPEPSF) is used to demonstrate the accuracy and robustness of the APSF method, showing the maximum error (ME) of APSF to be smaller than that of other methods (reduced by 86% when motion-blur length > 20 and motion-blur angle = 0) and the mean square error (MSE) of APSF to be smaller than that of other methods (reduced by 65.67% when motion-blur angle = 0). A fast image matching method based on a fast correlation response coefficient (FAST-PCC) and improved KCF were used with the motion-blur dataset to quantify MBHP. The results show that MBHP is significant when the motion blur changes, and the error caused by MBHP is close to half of the difference in motion-blur length between two consecutive frames. A general flow chart of visual tracking displacement detection with error compensation for MBHP was designed, and three methods for calculating compensation values were proposed: compensation values based on inter-frame displacement estimation error, SPEPSF, and no-reference image quality assessment (NR-IQA) indicators. Implementation experiments showed that this error can be reduced by more than 96%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
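The abstract does not specify how APSF constructs its motion-blur operators, but the classical linear MBO it is compared against is standard: a normalized one-pixel-wide streak of a given length and angle. A sketch of such a generator (the rotation-based construction is one common choice, not necessarily the paper's):

```python
import cv2
import numpy as np

def linear_motion_psf(length: int, angle_deg: float) -> np.ndarray:
    """Classical linear motion-blur operator: a one-pixel-wide streak of
    the given length, rotated to the given angle and normalized."""
    size = length if length % 2 == 1 else length + 1
    psf = np.zeros((size, size), dtype=np.float32)
    psf[size // 2, :length] = 1.0
    center = ((size - 1) / 2.0, (size - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    psf = cv2.warpAffine(psf, rot, (size, size))
    return psf / psf.sum()

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in frame
blurred = cv2.filter2D(img, -1, linear_motion_psf(21, 30.0))
```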
35. Veracious Interpolated Measure of Angle and Length for Underwater Motion Blurred Images.
- Author
-
Vimal Raj, M. and Sakthivel Murugan, S.
- Subjects
ANGLES, PARAMETER estimation, RADON transforms, WEATHER, BODIES of water, RADON, INTERPOLATION - Abstract
The quality of underwater images is impaired by ambient conditions. One of the most significant recent issues in underwater image quality degradation is the motion blur induced by the imaging device or by the movement of the object. The various parameters of the blurred image must be identified to correct the effect of blurring in post-imaging. Therefore, a spectrum-based parameter estimation method is proposed. Initially, to estimate the point spread function (PSF), the angle and the length are measured from the image spectrum using the Radon transform. Then, for accurate estimation of the PSF, Optimized Polynomial Lagrange Interpolation (OPLI) is proposed. The data were collected and analyzed in various natural and structured water bodies in Chennai without affecting the real environment. For the underwater images collected, the proposed OPLI approach outperforms existing traditional estimation methods such as the cepstral, Hough, and Radon methods. This veracious interpolated measure of angle and length (VIMAL) is then restored using a modified Lucy algorithm, which is evaluated and shows higher performance than existing classical state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
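The restoration step in entry 35 is a modified Lucy (Richardson-Lucy) deconvolution; the unmodified algorithm ships with recent scikit-image releases. A minimal sketch, with a placeholder PSF standing in for the Radon/OPLI estimate the abstract describes:

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, util
from skimage.restoration import richardson_lucy

# Stand-in for an underwater frame; the PSF would come from the
# Radon + OPLI estimation step described in the abstract.
gray = util.img_as_float(data.camera())
psf = np.full((1, 15), 1.0 / 15.0)   # placeholder: 15 px horizontal blur
blurred = fftconvolve(gray, psf, mode="same")
restored = richardson_lucy(blurred, psf, num_iter=30)
```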
36. Non-contact high precision pulse-rate monitoring system for moving subjects in different motion states.
- Author
-
Zhang, Qing, Lin, Xingsen, Zhang, Yuxin, Liu, Qian, and Cai, Fuhong
- Subjects
PHOTOPLETHYSMOGRAPHY, NUMBER systems, MOTION, SCALABILITY, MEDICAL care - Abstract
Remote photoplethysmography (rPPG) enables contact-free monitoring of the pulse rate using a color camera. The fundamental limitation is that motion artifacts and changes in ambient light conditions greatly affect the accuracy of pulse-rate monitoring. We propose the use of a high-speed camera and a motion suppression algorithm with high computational efficiency. This system incorporates a number of major improvements, including reproduction of pulse wave details, high-precision pulse-rate monitoring of moving subjects, and excellent scene scalability. A series of quantization methods were used to evaluate the effect of different frame rates and different algorithms in pulse-rate monitoring of moving subjects. The experimental results show that using 180-fps video and a Plane-Orthogonal-to-Skin (POS) algorithm can produce high-precision pulse-rate monitoring results, with a mean absolute error of less than 5 bpm and a relative accuracy reaching 94.5%. Thus, it has significant potential to improve personal health care and intelligent health monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
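The POS algorithm referenced in entry 36 is published (Wang et al., 2017), so its core can be sketched: temporally normalize the RGB means of the skin region in a sliding window, project onto a plane orthogonal to the skin tone, and overlap-add the windows. The window length and the epsilon guard below are illustrative choices.

```python
import numpy as np

def pos_pulse(rgb_traces: np.ndarray, fps: int = 180,
              win_s: float = 1.6) -> np.ndarray:
    """Plane-Orthogonal-to-Skin (POS) pulse extraction from a (3, N)
    array of per-frame mean skin R, G, B values."""
    n = rgb_traces.shape[1]
    w = int(win_s * fps)
    h = np.zeros(n)
    P = np.array([[0.0, 1.0, -1.0], [-2.0, 1.0, 1.0]])  # POS projection
    for t in range(n - w + 1):
        c = rgb_traces[:, t:t + w]
        cn = c / c.mean(axis=1, keepdims=True)   # temporal normalization
        s = P @ cn
        pulse = s[0] + (s[0].std() / (s[1].std() + 1e-9)) * s[1]
        h[t:t + w] += pulse - pulse.mean()       # overlap-add
    return h
```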
37. Detection of wind turbines rotary motion by birds: A matter of speed and contrast.
- Author
-
Blary, Constance, Bonadonna, Francesco, Dussauze, Elise, Potier, Simon, Besnard, Aurélien, and Duriez, Olivier
- Subjects
WIND turbines, WIND speed, SPEED, TURBINES, COLUMBIDAE, OPERANT conditioning - Abstract
To reduce bird collisions with wind turbines, Automatic Detection Systems have been developed to locate approaching birds and trigger turbines to slow down to 2–3 rotations per minute (rpm). However, it is unknown whether birds can detect this reduced speed and avoid the turbine. We conducted an operant conditioning experiment on domestic doves (Streptopelia roseogrisea) and Harris's hawks (Parabuteo unicinctus) to assess their ability to discriminate between stationary and rotating miniature wind turbines, depending on the rotation speed and, for the doves, the contrast between the white blades and the background. At high contrast, regardless of the speed tested, hawks were able to differentiate between the rotating and stationary turbines, while doves were not able to discriminate the slow-rotating turbine (3 rpm) from the stationary one. The discrimination threshold increased to 8 rpm for the doves when the contrast was reduced. Our results suggest that the residual wind turbine speed of 2–3 rpm may not be detected by all bird species under all environmental conditions. Increasing the contrast between wind turbines and their environment may improve the detection of low-speed rotation by some birds; otherwise, complete turbine shutdown should be recommended. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Computer Vision Techniques Demonstrate Robust Orientation Measurement of the Milky Way Despite Image Motion
- Author
-
Yiting Tao, Asanka Perera, Samuel Teague, Timothy McIntyre, Eric Warrant, and Javaan Chahl
- Subjects
biomimetic, Milky Way, object detection, orientation, motion blur, Technology - Abstract
Many species rely on celestial cues as a reliable guide for maintaining heading while navigating. In this paper, we propose a method that extracts the Milky Way (MW) shape as an orientation cue in low-light scenarios. We tested the method on both real and synthetic images and demonstrate that it remains accurate and reliable under motion blur such as that caused by rotational vibration and stabilisation artefacts. The technique presented achieves an angular accuracy between a minimum of 0.00° and a maximum of 0.08° for real night sky images, and between a minimum of 0.22° and a maximum of 1.61° for synthetic images. The imaging of the MW is largely unaffected by blur. We speculate that the use of the MW as an orientation cue has evolved because, unlike individual stars, it is resilient to motion blur caused by locomotion.
- Published
- 2024
- Full Text
- View/download PDF
39. Real-Time Motion Blur Using Multi-Layer Motion Vectors
- Author
-
Donghyun Lee, Hyeoksu Kwon, and Kyoungsu Oh
- Subjects
real-time rendering, motion blur, image processing, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999 - Abstract
Traditional methods for motion blur, often relying on a single layer, deviate from the correct colors. We propose a multilayer rendering method that closely approximates the motion blur effect. Our approach stores motion vectors for each pixel, divides these vectors into multiple sample points, and performs a backward search from the current pixel. The color at a sample point is sampled if it shares the same motion vector as its origin. This procedure repeats across layers, with only the nearest color values sampled for depth testing. The average color sampled at each point becomes that of the motion blur. Our experimental results indicate that our method significantly reduces the color deviation commonly found in traditional approaches, achieving structural similarity index measures (SSIM) of 0.8 and 0.92, which represent substantial improvements over the accumulation method.
- Published
- 2024
- Full Text
- View/download PDF
40. Detection of wind turbines rotary motion by birds: A matter of speed and contrast
- Author
-
Constance Blary, Francesco Bonadonna, Elise Dussauze, Simon Potier, Aurélien Besnard, and Olivier Duriez
- Subjects
bird vision, collision, contrast, motion blur, rotary motion, speed detection, Ecology, QH540-549.5, General. Including nature conservation, geographical distribution, QH1-199.5 - Abstract
Abstract To reduce bird collisions with wind turbines, Automatic Detection Systems have been developed to locate approaching birds and trigger turbines to slow down to 2–3 rotations per minute (rpm). However, it is unknown whether birds can detect this reduced speed and avoid the turbine. We conducted an operant conditioning experiment on domestic doves (Streptopelia roseogrisea) and Harris's hawks (Parabuteo unicinctus) to assess their ability to discriminate between stationary and rotating miniature wind turbines, depending on the rotation speed and, for the doves, the contrast between the white blades and the background. At high contrast, regardless of the speed tested, hawks were able to differentiate between the rotating and stationary turbines, while doves were not able to discriminate the slow-rotating turbine (3 rpm) from the stationary one. The discrimination threshold increased to 8 rpm for the doves when the contrast was reduced. Our results suggest that the residual wind turbine speed of 2–3 rpm may not be detected by all bird species under all environmental conditions. Increasing the contrast between wind turbines and their environment may improve the detection of low-speed rotation by some birds; otherwise, complete turbine shutdown should be recommended.
- Published
- 2023
- Full Text
- View/download PDF
41. CDMC-Net: Context-Aware Image Deblurring Using a Multi-scale Cascaded Network.
- Author
-
Zhao, Qian, Zhou, Dongming, and Yang, Hao
- Subjects
CONVOLUTIONAL neural networks, PYRAMIDS, FEATURE extraction, COMPUTATIONAL complexity, DEEP learning - Abstract
Image deblurring is a widely researched topic in low-level vision. Over the last few years, many researchers have tried to deblur by stacking multi-scale pyramid structures, which inevitably increases computational complexity. In addition, most existing deblurring methods do not adequately model long-range contextual information, so the structure of blurred objects is not well restored. To address these issues, we propose a novel context-aware multi-scale convolutional neural network (CDMC-Net) for image deblurring. We progressively restore latent sharp images in two stages, and a cross-stage feature aggregation (CSFA) strategy is introduced to enhance the flow of information between the two stages. The key design of CDMC-Net for reducing complexity is the use of a multi-input multi-output encoder-decoder at each stage, which can process multi-scale blurry images in a coarse-to-fine manner. Furthermore, to effectively capture long-range context information in different scenarios, we propose a multi-strip feature extraction module (MSFM) whose strip pooling with different kernel sizes allows the network to aggregate rich global and local contextual information. Extensive experimental results demonstrate that CDMC-Net outperforms state-of-the-art motion deblurring methods on both synthetic benchmark datasets and real blurred images. We also use CDMC-Net as a pre-processing step for object detection to further verify the effectiveness of the proposed deblurring method in downstream vision tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Quantitative Analysis of Blurry Color Image Fusion Techniques using Color Transform.
- Author
-
Mohammed, Nawras Badeaa, Mohamad, Haidar, Abbas, Heba Kh., and Salim, Ali Aqeel
- Subjects
IMAGE color analysis, IMAGE fusion, COLOR space, QUANTITATIVE research, CROSS correlation
- Published
- 2023
- Full Text
- View/download PDF
43. CNN-Based Image Quality Classification Considering Quality Degradation in Bridge Inspection Using an Unmanned Aerial Vehicle
- Author
-
Gi-Hun Gwon, Jin Hwan Lee, In-Ho Kim, and Hyung-Jo Jung
- Subjects
Convolutional neural networks, image quality classification, bridge inspection, unmanned aerial vehicle, motion blur, underexposure, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Key information for the maintenance and diagnosis of structures, including bridges, can be obtained by processing digital images acquired by an unmanned aerial vehicle (UAV). However, low-quality images caused by various problems such as UAV movement, the inspection environment, and camera parameters can lead to inappropriate structural evaluation because they are difficult to process. Therefore, an appropriate image quality assessment method that considers the deterioration of inspection images is required in the structural inspection procedure. In this study, a new image quality assessment (IQA) using a convolutional neural network (CNN) is proposed that considers the various degradation factors that may occur in structure inspection images. The first stage presents a method to obtain consistent quality against the various deterioration factors that may occur in inspection images: adjusting the camera parameters minimizes the degradation of the inspection image, and low- and high-quality images are then distinguished according to the proposed image acquisition method. The second stage is the classification of the inspection dataset by a CNN-based image quality classifier model trained on data labelled by quality. Experimental validation shows that the results of the proposed method are similar to subjective quality classification by the human visual system (HVS), and that inspection images can be classified more accurately and with shorter processing time.
- Published
- 2023
- Full Text
- View/download PDF
44. Towards Interpretable Video Super-Resolution via Alternating Optimization
- Author
-
Cao, Jiezhang, Liang, Jingyun, Zhang, Kai, Wang, Wenguan, Wang, Qin, Zhang, Yulun, Tang, Hao, Van Gool, Luc, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
45. Describing motions in biological tissues : a continuum active model and improving measurements
- Author
-
Bogdan, Michal and Savin, Thierry
- Subjects
610.28, active fluids, tissue invasion, metastasis, fingering instabilities, Brownian motion, measurement errors, motion blur - Abstract
Motions in biological tissues strongly influence their properties and are crucial for their functions. This is true starting from the scale of single molecules, all the way up to the scale of entire tissues. One of the key properties distinguishing motions in living systems from those in dead matter is activity: using chemical energy to generate self-propulsion. Effective theoretical, physics-based models are necessary both to interpret the rich new experimental observations in the field of biological motions, and to properly account for the inherent errors of the experimental methods. In this work we study models related to motion both on the level of tissues and individual molecules. One of our models is driven by the observation that many growing tissues form multicellular protrusions at their edges. It is not fully understood how these are initiated, therefore we propose a minimal continuum physical model to suggest a possible mechanism. We apply our model to a growing circular tumour. We employ our approach to understand how activity affects the tumour’s dynamics and the tendency to form “fingers” at its boundary. This approach rests on just four key biophysical parameters and we can estimate them based on experiments described in the literature. Our modelling of a tumour is experimentally well justified and analytically solvable in many systems. It is, to the best of our knowledge, the first analytical description of tumour interface dynamics incorporating the activity of the tumour bulk. We can explain the propensity of tissues to fingering instabilities, as conditioned by the magnitude of active traction and the growth kinetics. We are also able to derive predictions for the tumour size at the onset of metastasis, and predictions for the number of subsequent invasive fingers. Microscopy-based techniques are essential for observing biological motions at all aforementioned length scales. Brownian particle videotracking is one example of such a technique. In the second part of this thesis, we apply physics-based theory to understand inherent errors and limitations of this method. Using analytic solutions and simulations, we show the effects of errors in particle videotracking on recovering energy landscapes from the distributions of Brownian particles. We point out mechanisms that result in nontrivial systematic biases in the measurements.
- Published
- 2019
- Full Text
- View/download PDF
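The videotracking part of the thesis above concerns how finite camera exposure (motion blur) biases quantities recovered from Brownian trajectories. A self-contained toy simulation, with all parameters chosen for illustration rather than taken from the thesis, shows the basic effect: averaging positions over the exposure window narrows the apparent distribution, so a trap stiffness inferred by equipartition comes out systematically too high.

```python
# Sketch: how camera exposure (motion blur) biases energy landscapes
# recovered from Brownian trajectories. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
k, kBT, gamma = 1.0, 1.0, 1.0            # trap stiffness, thermal energy, drag
dt, n_steps = 1e-3, 200_000
tau = gamma / k                           # relaxation time of the trap

# Overdamped Langevin dynamics (Euler-Maruyama) in a harmonic well U = k x^2 / 2
x = np.empty(n_steps)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(2 * kBT * dt / gamma), n_steps - 1)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - (k / gamma) * x[i] * dt + noise[i]

# A camera frame reports the position averaged over the exposure window,
# which narrows the apparent distribution (motion blur).
exposure = 500                            # window of 0.5 tau worth of steps
blurred = x[: n_steps // exposure * exposure].reshape(-1, exposure).mean(axis=1)

# Equipartition: k_est = kBT / var(x). Blur inflates the apparent stiffness.
print("true k:", k)
print("k from raw samples   :", kBT / x.var())
print("k from blurred frames:", kBT / blurred.var())  # systematically too large
```

The same mechanism distorts any energy landscape recovered from position histograms of blurred frames, which is the class of systematic bias the thesis analyses.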
46. A semantic segmentation scheme for night driving improved by irregular convolution.
- Author
-
Yang Xuantao, Han Junying, and Liu Chenzhong
- Subjects
TRAFFIC safety ,IMAGE segmentation ,MOTOR vehicle driving ,SEMANTICS - Abstract
To address the poor real-time semantic segmentation performance on night road conditions in video images, caused by insufficient light and motion blur, this study proposes a scheme comprising: a fuzzy information complementation strategy based on generative models; a network that fuses the outputs of different intermediate layers to complement spatial semantics; and embedded irregular convolutional attention modules for fine extraction of moving-target boundaries. First, DeblurGAN is used to generate information that restores the semantics lost in the original image; then, the outputs of different intermediate layers are extracted, assigned different weight scaling factors, and fused; finally, the irregular convolutional attention with the best effect is selected. The scheme achieves a Global Accuracy of 89.1% and a Mean IoU of 94.2% on the night driving dataset of this experiment, exceeding the best performance of DeepLabv3 by 1.3% and 7.2%, respectively, and achieves an accuracy of 83.0% on the small-volume label (Moveable). The experimental results demonstrate that the solution can effectively cope with the various problems faced in night driving and enhance the model's perception. It also provides a technical reference for the semantic segmentation of vehicles driving in nighttime environments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
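The fusion step in the entry above (take the outputs of different intermediate layers, assign weight scaling factors, and fuse) can be sketched generically. The weights, channel counts, and bilinear upsampling below are illustrative assumptions, not the study's actual configuration:

```python
# Sketch of fusing intermediate feature maps with weight scaling factors
# (illustrative weights and shapes; not the study's actual configuration).
import torch
import torch.nn.functional as F

def fuse_intermediate(features, weights):
    """features: list of (B, C, Hi, Wi) maps from different network depths;
    weights: one scalar per map. All maps are upsampled to the largest
    spatial size, scaled, and summed."""
    target = features[0].shape[-2:]
    fused = 0
    for f, w in zip(features, weights):
        f = F.interpolate(f, size=target, mode="bilinear", align_corners=False)
        fused = fused + w * f
    return fused

# Three hypothetical intermediate outputs with matching channel counts
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 64, 32, 32),
         torch.randn(1, 64, 16, 16)]
out = fuse_intermediate(feats, weights=[0.5, 0.3, 0.2])
print(out.shape)  # torch.Size([1, 64, 64, 64])
```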
47. Deblurring transformer tracking with conditional cross-attention.
- Author
-
Sun, Fuming, Zhao, Tingting, Zhu, Bing, Jia, Xu, and Wang, Fasheng
- Subjects
- *
OBJECT tracking (Computer vision) , *TRACKING algorithms , *PROBLEM solving - Abstract
In object tracking, motion blur is a common challenge induced by rapid movement of the target object or long exposure time of the camera, and it leads to poor tracking performance. Traditional solutions usually perform image recovery operations before tracking the object; however, most image recovery methods have a high computational cost, which decreases the tracking speed. To solve these problems, we propose a deblurring Transformer-based tracking method that embeds conditional cross-attention. The proposed method integrates three important modules: (1) an image quality assessment (IQA) module to estimate image quality; (2) an image deblurring module based on a lightweight adversarial network to improve image quality; and (3) a tracking module based on a Transformer with conditional cross-attention to enhance object localization. Experimental results on two UAV object tracking benchmarks show that the proposed trackers achieve competitive results compared to several state-of-the-art trackers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
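The conditional structure of the three modules above (deblur a frame only when the IQA module flags it) can be summarised in a few lines. All names and the threshold below are placeholders for the sketch, not the authors' API:

```python
# Sketch of the gating idea in the entry above: estimate image quality first,
# deblur only when quality is poor, then hand the frame to the tracker.
# iqa_score, deblur, tracker and the threshold are illustrative placeholders.
def process_frame(frame, tracker, iqa_score, deblur, quality_threshold=0.5):
    """IQA -> optional deblurring -> tracking, for one video frame."""
    if iqa_score(frame) < quality_threshold:  # frame judged too blurry
        frame = deblur(frame)                 # lightweight restoration step
    return tracker(frame)                     # e.g. a transformer tracker

# Toy demo with stand-in callables:
box = process_frame(
    frame="blurry_frame",
    tracker=lambda f: (10, 20, 64, 64),       # returns a bounding box
    iqa_score=lambda f: 0.3,                  # low score -> triggers deblur
    deblur=lambda f: f,                       # identity stand-in
)
print(box)
```

The design point is that restoration runs only on frames that need it, which is how the method avoids the speed penalty of deblurring every frame.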
48. Real-time automated detection of older adults' hand gestures in home and clinical settings.
- Author
-
Huang, Guan, Tran, Son N., Bai, Quan, and Alty, Jane
- Subjects
- *
OLDER people , *BRAIN degeneration , *COMPUTER vision , *GESTURE , *GERIATRIC care units - Abstract
There is an urgent need, accelerated by the COVID-19 pandemic, for methods that allow clinicians and neuroscientists to remotely evaluate hand movements. This would help detect and monitor degenerative brain disorders that are particularly prevalent in older adults. With the wide accessibility of computer cameras, a vision-based real-time hand gesture detection method would facilitate online assessments in home and clinical settings. However, motion blur is one of the most challenging problems when collecting data of fast-moving hands. The objective of this study was to develop a computer vision-based method that accurately detects older adults' hand gestures using video data collected in real-life settings. We invited adults over 50 years old to complete validated hand movement tests (fast finger tapping and hand opening–closing) at home or in clinic. Data were collected without researcher supervision via a website programme using standard laptop and desktop cameras. We processed and labelled the images, split the data into training, validation and testing sets, and then analysed how well different network structures detected hand gestures. We recruited 1,900 adults (age range 50–90 years) as part of the TAS Test project and developed UTAS7k, a new dataset of 7,071 hand gesture images, split 4:1 into clear and motion-blurred images. Our new network, RGRNet, achieved 0.782 mean average precision (mAP) on clear images, outperforming the state-of-the-art network structure (YOLOv5-P6, mAP 0.776), and mAP 0.771 on blurred images. A new robust real-time automated network that detects static gestures from a single camera, RGRNet, and a new database comprising the largest range of individual hands, UTAS7k, both show strong potential for medical and research applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
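Separating clear from motion-blurred images, as in the 4:1 UTAS7k split above, is often approached with a simple sharpness heuristic. The variance-of-Laplacian test below is one such generic heuristic (not necessarily how UTAS7k was labelled), and the threshold would need tuning per dataset:

```python
# Sketch: a common generic heuristic for flagging motion-blurred frames.
# Low variance of the Laplacian means few sharp edges, i.e. likely blur.
# The threshold is an assumption and must be calibrated on real data.
import cv2

def is_blurry(image_path: str, threshold: float = 100.0) -> bool:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold
```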
49. Quantitative Analysis of Blurry Color Image Fusion Techniques using Color Transform
- Author
-
Nawras Badeaa Mohammed, Haidar Mohamad, Heba Kh. Abbas, and Ali Aqeel Salim
- Subjects
Motion Blur ,Colour Space Transformation ,Fusion Techniques ,Quality Criteria ,Science - Abstract
This work focuses on fusing color images degraded by motion blur (left and right) with a blur block size of 11 pixels. The images were converted from the RGB (Red, Green, Blue) color space to the HSV (Hue, Saturation, Value), L*a*b*, and YCbCr (Luminance, Chrominance) color spaces. Traditional fusion techniques (addition, multiplication) and the proposed technique (absolute real standard deviation) were used for this purpose. The data were examined by quality criteria with reference (Mutual Information, Correlation Coefficient, Structural Content, Normalized Cross Correlation) and without reference (Blind Referenceless Image Spatial Quality Evaluator, Naturalness Image Quality Evaluator, and Perception-based Image Quality Evaluator). The results show that, depending on the criteria, the best fusion method is the proposed absolute real standard deviation.
- Published
- 2023
- Full Text
- View/download PDF
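The pipeline above (convert to an alternative colour space, fuse the left- and right-blurred images per channel, convert back) can be sketched with OpenCV. Simple additive averaging in HSV stands in for the paper's addition, multiplication, and standard-deviation rules, and the function and argument names are illustrative; note that averaging hue, a circular quantity, is itself a simplification:

```python
# Sketch of colour-space fusion of two motion-blurred exposures.
# Additive averaging in HSV is a stand-in for the paper's fusion rules.
import cv2
import numpy as np

def fuse_in_hsv(left_blur: np.ndarray, right_blur: np.ndarray) -> np.ndarray:
    a = cv2.cvtColor(left_blur, cv2.COLOR_BGR2HSV).astype(np.float32)
    b = cv2.cvtColor(right_blur, cv2.COLOR_BGR2HSV).astype(np.float32)
    fused = (a + b) / 2.0                        # additive fusion, per channel
    fused = np.clip(fused, 0, 255).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_HSV2BGR)
```

The same skeleton applies to L*a*b* or YCbCr by swapping the conversion codes, and reference-based criteria such as the correlation coefficient can then be computed between the fused result and a sharp ground-truth image.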
50. SDAN-MD: Supervised dual attention network for multi-stage motion deblurring in frontal-viewing vehicle-camera images
- Author
-
Seong In Jeong, Min Su Jeong, Seon Jong Kang, Kyung Bong Ryu, and Kang Ryoung Park
- Subjects
Semantic segmentation ,Motion blur ,Multi-stage ,Supervised dual attention module ,Perceptual loss ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Motion blur in images usually distorts object information, degrading the performance of semantic segmentation. However, there is no previous research on improving segmentation performance by restoring frontal-viewing vehicle-camera images taken under motion blur. Therefore, this study proposes a supervised dual attention network for multi-stage motion deblurring (SDAN-MD) for this task. In SDAN-MD, a supervised dual attention module (SDAM) is proposed, which adopts supervised spatial and channel attention mechanisms to provide a supervisory signal from the ground truth. In addition to Charbonnier loss and edge loss, we use a perceptual loss based on the Euclidean distance between feature maps obtained from the segmentation network. Experiments were conducted on motion-blurred versions of two open road-scene databases, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset. The results show that the proposed SDAN-MD achieves pixel accuracies of 92.89% and 87.27% in semantic segmentation on these two databases, respectively, outperforming state-of-the-art methods.
- Published
- 2023
- Full Text
- View/download PDF
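Of the three losses named above, the Charbonnier term is the most self-contained. A minimal sketch, with a typical (assumed) epsilon value:

```python
# Sketch of the Charbonnier loss used alongside edge and perceptual losses
# in multi-stage deblurring. The epsilon is a common choice, assumed here.
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    """Smooth L1-like penalty: mean of sqrt((pred - target)^2 + eps^2)."""
    return torch.sqrt((pred - target) ** 2 + eps * eps).mean()
```

Compared with plain L1, the epsilon term keeps the gradient well behaved near zero residual, which is why Charbonnier loss is a frequent choice in restoration networks.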