53 results for "Shen, Chuan"
Search Results
2. Beat-based ECG compression using gain-shape vector quantization
- Author
-
Sun, Chia-Chun and Tai, Shen-Chuan
- Subjects
Algorithms -- Research ,Electrocardiogram -- Research ,Electrocardiography -- Research ,Biomedical engineering -- Research ,Algorithm ,Biological sciences ,Business ,Computers ,Health care industry - Abstract
An electrocardiogram (ECG) data compression scheme is presented using gain-shape vector quantization. The proposed approach utilizes the fact that ECG signals generally show redundancy among adjacent heartbeats and adjacent samples. An ECG signal is QRS-detected and segmented according to the detected fiducial points. The segmented heartbeats are vector quantized, and the residual signals are calculated and encoded using the AREA algorithm. The experimental results show that with the proposed method both the visual quality and the objective quality are excellent even at low bit rates. An average PRD of 5.97% at 127 b/s is obtained for the entire 48 records in the MIT-BIH database. The proposed method also outperforms others for the same test dataset. Index Terms--AREA, ECG compression, vector quantization (VQ).
- Published
- 2005
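The gain-shape split described in entry 2 factors each heartbeat vector into a scalar gain and a unit-norm shape that are quantized separately. A minimal NumPy sketch of that step (the beat segmentation, the AREA residual coder, and the codebooks are assumed to exist elsewhere; all names below are illustrative, not the paper's):

```python
import numpy as np

def gain_shape_encode(beat, gain_codebook, shape_codebook):
    """Quantize one heartbeat vector as a (gain index, shape index) pair.

    beat           : 1-D array, one segmented heartbeat
    gain_codebook  : 1-D array of candidate gain levels
    shape_codebook : 2-D array, each row a unit-norm shape codevector
    """
    gain = np.linalg.norm(beat)                        # scalar energy of the beat
    shape = beat / gain if gain > 0 else beat          # unit-norm shape
    gi = int(np.argmin(np.abs(gain_codebook - gain)))  # nearest gain level
    si = int(np.argmax(shape_codebook @ shape))        # most correlated shape
    return gi, si

def gain_shape_decode(gi, si, gain_codebook, shape_codebook):
    return gain_codebook[gi] * shape_codebook[si]
```

Selecting the shape by maximum correlation is the usual gain-shape criterion when the shape codevectors have unit norm.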
3. Deblocking filter for low bit rate MPEG-4 video
- Author
-
Tai, Shen-Chuan, Chen, Yen-Yu, and Sheu, Shin-Feng
- Subjects
Image coding -- Research ,Video recordings -- Research ,Algorithms -- Research ,Algorithms -- Technology application ,Algorithm ,Technology application ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Increasing the bandwidth or bit rate in real-time video applications to improve the quality of images is typically impossible or too expensive. Postprocessing appears to be the most feasible solution because it does not require any existing standards to be changed. Markedly reducing blocking effects can increase compression ratios for a particular image quality or improve the quality with respect to the specific bit rate of compression. This paper proposes a novel deblocking algorithm based on three filtering modes determined by the activity across block boundaries. By properly considering the masking effect of the human visual system, an adaptive filtering decision is integrated into the deblocking process. With three deblocking modes appropriate for local regions with different characteristics, the perceptual and objective quality are improved without over-smoothing image details or insufficiently reducing the strong blocking effect in flat regions. According to the simulation results, the proposed method outperforms other MPEG-4 deblocking methods with respect to peak signal-to-noise ratio and computational complexity. Index Terms--Blocking effects, human visual system (HVS), MPEG4.
- Published
- 2005
4. A 2-D ECG compression method based on wavelet transform and modified SPIHT
- Author
-
Tai, Shen-Chuan, Sun, Chia-Chun, and Yan, Wen-Chien
- Subjects
Wavelet transforms -- Research ,Electrocardiogram -- Research ,Electrocardiography -- Research ,Algorithms ,Algorithm ,Biological sciences ,Business ,Computers ,Health care industry - Abstract
A two-dimensional (2-D) wavelet-based electrocardiogram (ECG) data compression method is presented which employs a modified set partitioning in hierarchical trees (SPIHT) algorithm. This modified SPIHT algorithm further utilizes the redundancy among the medium- and high-frequency subbands of the wavelet coefficients, and the proposed 2-D approach utilizes the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. An ECG signal is cut and aligned to form a 2-D data array, and then the 2-D wavelet transform and the modified SPIHT are applied. Records selected from the MIT-BIH arrhythmia database are tested. The experimental results show that the proposed method achieves a high compression ratio with relatively low distortion and is effective for various kinds of ECG morphologies. Index Terms--Electrocardiogram (ECG) compression, set partitioning in hierarchical trees (SPIHT), wavelet transform.
- Published
- 2005
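Entry 4 builds its 2-D array by cutting the ECG at fiducial points and stacking aligned beats, then applies a 2-D wavelet transform before the modified SPIHT coder. A rough sketch of those first two steps using PyWavelets (the SPIHT modifications themselves are not shown; the wavelet and decomposition level below are assumptions, not the paper's settings):

```python
import numpy as np
import pywt  # PyWavelets

def beats_to_2d(ecg, fiducials, width):
    """Cut an ECG at detected fiducial points and stack the beats row-wise.

    Beats are truncated or zero-padded to a fixed width so that adjacent rows
    (adjacent beats) line up and expose inter-beat redundancy.
    """
    rows = []
    for start, end in zip(fiducials[:-1], fiducials[1:]):
        beat = np.asarray(ecg[start:end], dtype=np.float64)[:width]
        rows.append(np.pad(beat, (0, width - len(beat))))
    return np.vstack(rows)

def decompose(beat_array, wavelet="bior4.4", level=3):
    """2-D wavelet decomposition of the beat array; a SPIHT-style coder would
    then encode these coefficients."""
    return pywt.wavedec2(beat_array, wavelet, level=level)
```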
5. An efficient full frame algorithm for object-based error concealment in 3D depth-based video
- Author
-
Chien-Shiang Hong, Ya-Chiung Luo, Chuen-Ching Wang, and Shen-Chuan Tai
- Subjects
Computer Networks and Communications ,Computer science ,Frame (networking) ,Object (computer science) ,Residual frame ,Motion vector ,Hardware and Architecture ,Depth map ,Motion estimation ,Media Technology ,Computer vision ,Artificial intelligence ,Algorithm ,Software ,Reference frame ,Block-matching algorithm - Abstract
In real-time video transmission, packet loss degrades the visual quality of the succeeding frames. This paper proposes an efficient full frame algorithm for depth-based 3-D videos. Each frame can be regarded as a combination of objects. True motion estimation (TME) and the depth map are exploited to calculate the motion vector (MV) for each object. In this paper, an object is defined as the pixels with the same MV and similar depth values. The object-based MVs are used to extrapolate each object in the reference frame to reconstruct the damaged frame. To limit computational complexity, only the high-frequency regions need to execute TME. In this paper, we provide a new method to obtain the objects according to temporal and depth information. The simulation results show that our algorithm gives better visual quality and PSNR in most cases with lower complexity.
- Published
- 2015
- Full Text
- View/download PDF
6. A sharp edge-preserving joint color demosaicking and zooming algorithm using integrated gradients and an iterative back-projection technique
- Author
-
Wen-Jan Chen, Wen-Tsung Huang, and Shen-Chuan Tai
- Subjects
Demosaicing ,Color difference ,Basis (linear algebra) ,business.industry ,Applied Mathematics ,Pipeline (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image (mathematics) ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Zoom ,business ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics ,Interpolation - Abstract
Following the advances in single-sensor imaging techniques, interest in producing a zoomed full-color image from Bayer mosaic data has increased. Almost all of the recent approaches identified, with respect to the demosaicking step in the imaging pipeline, have chiefly focused on misguidance problems. However, in regions consisting of sharp edges or fine textures, these approaches are prone to large blurring effects. This paper proposes a new joint solution to overcome the above problems associated with demosaicking and zooming operations. On the basis of an enhanced soft-decision framework, we estimate the edge features by computing integrated gradients. This allows the extraction of gradient information from both the color intensity and color difference domains simultaneously. Then, the edge guidance is incorporated into the various interpolation stages to preserve edge consistency and improve computational efficiency. In addition, an edge-adaptive, iterative, back-projection technique is developed to compensate for image blurring as well as to further suppress color artifacts. Experimental results indicate that the new algorithm produces outstanding objective performance and sharp, visually pleasing color outputs when compared to numerous other single-sensor image zooming solutions.
- Published
- 2014
- Full Text
- View/download PDF
7. Automatic White Balance Algorithm through the Average Equalization and Threshold
- Author
-
Tzu-Wen Liao, Tse-Ming Kuo, Yi-Ying Chang, and Shen-Chuan Tai
- Subjects
General Computer Science ,Computer science ,Equalization (audio) ,Color balance ,Algorithm - Published
- 2013
- Full Text
- View/download PDF
8. Sparse Trinary Circulant Measurement Matrices with Random Spacing in Compressive Imaging
- Author
-
Cheng Hong, Yun Xia, Wei Sui, Cheng Zhang, and Shen Chuan
- Subjects
Combinatorics ,Electrical and Electronic Engineering ,Compressive imaging ,Circulant matrix ,Algorithm ,Mathematics - Published
- 2012
- Full Text
- View/download PDF
9. A design framework for hybrid approaches of image noise estimation and its application to noise reduction
- Author
-
Shen-Chuan Tai and Shih-Ming Yang
- Subjects
Noise measurement ,business.industry ,Salt-and-pepper noise ,Gradient noise ,symbols.namesake ,Noise ,Gaussian noise ,Signal Processing ,Media Technology ,symbols ,Median filter ,Image noise ,Computer vision ,Computer Vision and Pattern Recognition ,Value noise ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics - Abstract
Noise estimation is an important process in digital imaging systems. Many noise reduction algorithms require their parameters to be adjusted based on the noise level. Filter-based approaches to image noise estimation are usually more efficient but have difficulty separating noise from image content. Block-based approaches can provide more accurate results but usually require higher computational complexity. In this work, a design framework for combining the strengths of filter-based and block-based approaches is presented. Different homogeneity analyzers for identifying the homogeneous blocks are discussed and their performances are compared. Then, two well-known filters, the bilateral and the non-local mean, are reviewed and their parameter settings are investigated. A new bilateral filter with edge enhancement is proposed. A modified non-local mean filter with much less complexity is also presented. Compared to the original non-local mean filter, the complexity is dramatically reduced by 75% and yet the image quality is maintained.
- Published
- 2012
- Full Text
- View/download PDF
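The hybrid framework in entry 9 combines block-based homogeneity analysis with filter-based estimation. The following is a deliberately simplified block-based noise estimator in that spirit, not the authors' analyzers: it keeps only the flattest blocks and reads their standard deviation as the noise level (block size and keep ratio are assumptions).

```python
import numpy as np

def estimate_noise_std(img, block=8, keep_ratio=0.1):
    """Rough block-based noise estimate for a grayscale image.

    The flattest blocks are assumed to contain mostly noise, so their
    standard deviations approximate the noise sigma.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    stds = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            stds.append(np.std(img[y:y + block, x:x + block]))
    stds.sort()
    n = max(1, int(len(stds) * keep_ratio))   # keep the most homogeneous blocks
    return float(np.mean(stds[:n]))
```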
10. Phase Retrieval Based on Transport-of-intensity Equation
- Author
-
章权兵 Zhang Quan-bi, 韦穗 Wei Sui, 程鸿 Cheng Hong, and 沈川 Shen Chuan
- Subjects
Plane (geometry) ,Adaptive-additive algorithm ,business.industry ,Computer science ,Computer Science::Information Retrieval ,Phase (waves) ,Atomic and Molecular Physics, and Optics ,symbols.namesake ,Fourier transform ,Cardinal point ,symbols ,Computer vision ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,Phase retrieval ,business ,Algorithm ,Intensity (heat transfer) - Abstract
Phase retrieval based on the transport-of-intensity equation is studied to calculate phase information from intensity measurements. A practical phase retrieval system is designed, including the classic Fourier phase retrieval algorithm and a phase retrieval algorithm based on total variation. Only measurements of the spatial intensity of the optical wave in the focal plane and nearby defocus planes are needed to retrieve the phase by solving a second-order differential equation. Phase retrieval based on the transport-of-intensity equation overcomes the disadvantages of iteration uncertainty and slow convergence found in iterative phase retrieval techniques. The experimental results show that phase retrieval based on the transport-of-intensity equation can quickly and effectively calculate phase information from the intensity measurements, and that the total-variation-based retrieval preserves edge information better than the Fourier algorithm.
- Published
- 2011
- Full Text
- View/download PDF
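For reference, the second-order differential equation mentioned in entry 10 is the transport-of-intensity equation; its standard paraxial form (textbook notation, not taken from the paper) is:

```latex
% I : measured intensity, phi : phase, k = 2*pi/lambda : wavenumber.
% The axial derivative is approximated from the in-focus and defocus planes.
\[
  -k\,\frac{\partial I(x,y,z)}{\partial z}
  = \nabla_{\!\perp}\cdot\bigl[\,I(x,y,z)\,\nabla_{\!\perp}\phi(x,y)\,\bigr],
  \qquad
  \frac{\partial I}{\partial z} \approx \frac{I_{+\Delta z}-I_{-\Delta z}}{2\,\Delta z}.
\]
```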
11. True Motion-Compensated De-Interlacing Algorithm
- Author
-
Shen-Chuan Tai and Ying-Ru Chen
- Subjects
Motion compensation ,Pixel ,business.industry ,Interlacing ,Quarter-pixel motion ,Deinterlacing ,Motion estimation ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Block size ,Mathematics ,Interpolation - Abstract
This paper presents a true motion-compensated de-interlacing (TMCD) algorithm based on a fast true motion estimation scheme. The fast true motion estimation scheme, designated as variable block size true motion estimation for a translated motion model (VBTME), finds block-based true motion vectors within interlaced fields and then uses these true motion vectors to construct the corresponding de-interlaced frames. Although the real motions of a majority of the objects within an interlaced field can be accurately represented by using block-based true motion vectors, the motions of certain objects (e.g., those which disappear suddenly or are occluded by other objects) cannot be adequately compensated by using such vectors, and therefore the corresponding pixels must be further refined by an alternative method. Accordingly, TMCD applies a set of three decision rules and then selects either a new spatial edge-direction-indicated interpolation method or a traditional temporal mean filter to interpolate those pixels which cannot be adequately compensated by using true motion vectors. The experimental results show that, compared with a 4-field adaptive motion-compensated (4F-AMC) de-interlacing scheme and a selective motion-compensated (SMC) de-interlacing scheme on CIF-format video sequences, TMCD achieves average PSNR improvements of 1.37 and 6.36 dB, respectively. Furthermore, compared to the motion estimation schemes used in 4F-AMC and SMC, VBTME reduces the motion estimation time by 96 and 69%, respectively. Finally, it is shown that the visual quality of CCIR601-format videos de-interlaced using TMCD is better than that obtained by using the 4F-AMC de-interlacing scheme.
- Published
- 2009
- Full Text
- View/download PDF
12. A fast inter residual quad-tree construction method in HEVC
- Author
-
Zhi Yu Yang, Bo Jhih Chen, Chia Ying Chang, and Shen-Chuan Tai
- Subjects
Computer science ,Quantization (signal processing) ,Real-time computing ,Macroblock ,Discrete cosine transform ,Multiview Video Coding ,Residual ,Encoder ,Algorithm ,Coding tree unit ,Context-adaptive binary arithmetic coding - Abstract
High Efficiency Video Coding (HEVC) is an ongoing video coding standard. Compared to H.264, it adopts a quad-tree structure and decides the coding configuration by computing the rate-distortion (R-D) cost function recursively. The transform unit (TU) is the basic unit used for the transform and quantization processes in HEVC, and its size ranges from 4×4 to 32×32. In addition, a large TU size of the RQT, e.g., 32×32, is usually chosen when a residual block has little prediction error, especially when homogeneous-area CUs are encoded. Based on these observations, the proposed method uses two stages to skip unnecessary computations on the residual quad-tree (RQT): the first stage decides whether the DCT/Q processes can be omitted, and the second stage terminates the TU split process early. Experimental results show that the proposed method is capable of reducing the inter-prediction RQT decision time by 50% on average while retaining the coding performance of the original HEVC encoder.
- Published
- 2014
- Full Text
- View/download PDF
13. EARLY TERMINATION FOR RESIDUAL QUADTREE DECISION-MAKING IN HEVC
- Author
-
Bo Jhih Chen, Chia Ying Chang, Yung Gi Wu, Yu Yi Liao, and Shen-Chuan Tai
- Subjects
Theoretical computer science ,Computational complexity theory ,Computer science ,Quantization (signal processing) ,Computation ,Residual ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Hardware and Architecture ,Discrete cosine transform ,Quadtree ,Algorithm ,Software ,Coding (social sciences) - Abstract
The progressive high efficiency video coding (HEVC) standard is based on a quadtree (QT) coding structure. The optimal residual quadtree (RQT) is selected for a given intra- and inter-prediction residual block by comparing the rate-distortion (R-D) cost function over all possible transform unit (TU) partitions recursively. However, zero-quantized blocks (ZQBs) are common after the discrete cosine transform (DCT) and quantization (Q) due to the small values of the prediction residuals. Therefore, when a large TU has negligible prediction residuals, the TU can be terminated early at the current depth of the RQT. This study proposes the use of ZQB detection techniques to accelerate RQT decision-making. In HEVC, RQT decision-making comprises TU transform and TU split functions. The proposed method mathematically analyses the DCT and Q processes, deriving two sufficient conditions to reduce the computational complexity of the TU transform and TU split computations. Experimental results demonstrate that the proposed met...
- Published
- 2014
- Full Text
- View/download PDF
14. A technique using Peano scanning for vector quantization with variable codevector dimensions
- Author
-
Shen-Chuan Tai, Wen-Jan Chen, and I-Sheng Kuo
- Subjects
Peano axioms ,General Engineering ,Vector quantization ,Partition (number theory) ,Enhanced Data Rates for GSM Evolution ,Space (mathematics) ,Topology ,Algorithm ,Mathematics ,Variable (mathematics) - Abstract
In this paper, a new direction for vector quantization is proposed. To reduce the blocky effect and edge degradation, we combine the Peano scan and VQ (PSVQ) in our implementation. All training images (512×512) are first reordered into 1-D space by an efficient Peano scan. We classify four different partition segments of 1×8, 1×16, 1×32, and 1×64 dimensions according to the difference between the maximum and the minimum intensities in the 1-D segment. Simulation results show that the performance of the suggested scheme is superior to the normal VQ scheme in terms of PSNR, and both the edge degradation and the blocky effect are reduced.
- Published
- 2000
- Full Text
- View/download PDF
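Entry 14 assigns longer codevectors to flat runs of the Peano-ordered pixel sequence and shorter ones to busy runs, judged by the max-min intensity range. A small sketch of that segment-length decision (the Peano reordering itself is assumed to be given, and the range thresholds below are illustrative, not the paper's values):

```python
def choose_segment_length(samples, start):
    """Pick the codevector dimension (1x64, 1x32, 1x16 or 1x8) for the segment
    starting at `start` of a Peano-ordered 1-D pixel sequence.

    Flat runs (small max-min range) get long vectors, busy runs get short
    ones; the (length, max_range) pairs are illustrative thresholds.
    """
    for length, max_range in ((64, 16), (32, 32), (16, 64)):
        seg = samples[start:start + length]
        if len(seg) == length and max(seg) - min(seg) <= max_range:
            return length
    return 8  # high-activity region: shortest codevector
```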
15. Bit rate reduction techniques for absolute moment block truncation coding
- Author
-
Wen-Jan Chen and Shen-Chuan Tai
- Subjects
Quantization (signal processing) ,Color depth ,General Engineering ,Signal compression ,computer.file_format ,Bit Rate Reduction ,Algorithm ,Block Truncation Coding ,computer ,Harmonic Vector Excitation Coding ,Arithmetic coding ,Mathematics ,Bit field - Abstract
Block truncation coding uses a two-level moment-preserving quantizer that adapts to local properties of the image. It has the features of low computation load and low memory requirements, while its bit rate is only 2.0 bits per pixel. A more efficient algorithm, the absolute moment BTC (AMBTC), has been extensively used in the field of signal compression because of its simple computation and better MSE performance. We propose postprocessing methods to further reduce the entropy of the two outputs of AMBTC: the bit map and the two quantization values (a, b). A block of a 2×4 bit map is packaged into a byte-oriented symbol. The entropy can be reduced from 0.965 bpp to 0.917 bpp on average for our test images. The two subimages of quantization values (a, b) are postprocessed by the Peano scan. This postprocessing can further reduce the differential entropy by about 0.4 bit for a 4×4 block. By applying arithmetic coding, the total bit reduction is about 0.3∼0.4 bpp. The bit rate can reach 1.6∼1.7 bpp with the ...
- Published
- 1999
- Full Text
- View/download PDF
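The AMBTC quantizer that entry 15 post-processes is compact enough to sketch directly: one bit per pixel plus two group means per block. A minimal NumPy version (the bit-map packing, the Peano scan of (a, b), and the arithmetic coding described in the abstract are not included):

```python
import numpy as np

def ambtc_block(block):
    """Absolute moment BTC for one block: a 1-bit map plus two levels (a, b).

    Pixels at or above the block mean are reconstructed with the high-group
    mean b, the rest with the low-group mean a, preserving the first
    absolute moment of the block.
    """
    block = block.astype(np.float64)
    mean = block.mean()
    bitmap = block >= mean
    a = block[~bitmap].mean() if (~bitmap).any() else mean  # low-group mean
    b = block[bitmap].mean() if bitmap.any() else mean      # high-group mean
    return bitmap, a, b

def ambtc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)
```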
16. An efficient BTC image compression technique
- Author
-
Yung-Gi Wu and Shen-Chuan Tai
- Subjects
business.industry ,Computer science ,Vector quantization ,Pattern recognition ,Variable bitrate ,Block Truncation Coding ,Moment (mathematics) ,Bit rate ,Media Technology ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Transform coding ,Data compression ,Image compression - Abstract
This paper presents a moment-preserving and visual-information-dominance technique to achieve low bit rate block truncation coding (BTC). Compared with other existing strategies such as transform coding and vector quantization, conventional BTC compression has the advantage of simple and fast computation. Nevertheless, its compression ratio is limited by its low efficiency. Our proposed technique accomplishes the goal of simple computation with variable bit rate selection through the moment preservation and information extraction algorithm. The proposed technique has the advantage of simple operations and does not require complicated mathematical computations. Thus, the overall computation does not increase the burden compared with ordinary BTC. Simulations are carried out with natural images to evaluate the performance. The generated decoded images have moderate quality at a bit rate of 0.5-1.0 bit/pixel.
- Published
- 1998
- Full Text
- View/download PDF
17. A fast Linde-Buzo-Gray algorithm in image vector quantization
- Author
-
Yih-chuan Lin and Shen-Chuan Tai
- Subjects
Linde–Buzo–Gray algorithm ,Iterative and incremental development ,Pixel ,Computational complexity theory ,Signal Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Vector quantization ,Codebook ,Image processing ,Electrical and Electronic Engineering ,Quantization (image processing) ,Algorithm ,Mathematics - Abstract
This paper presents a novel algorithm for speeding up codebook design in image vector quantization. It exploits the correlation among the pixels in an image block to reduce the computational complexity of calculating the squared Euclidean distortion measures, and uses the similarity between the codevectors in consecutive codebooks during the iterative clustering process to reduce the number of codevectors that need to be checked in one codebook search. Verified test results have shown that the proposed algorithm can provide almost a 98% reduction of the execution time when compared to the conventional Linde-Buzo-Gray (LBG) algorithm.
- Published
- 1998
- Full Text
- View/download PDF
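As background to entry 17, a plain Linde-Buzo-Gray iteration looks like the sketch below; the paper's contribution is to prune the distance computations and the candidate codevectors inside this loop, which this sketch does not reproduce.

```python
import numpy as np

def lbg(training_vectors, codebook_size, iters=20, seed=0):
    """Plain Linde-Buzo-Gray codebook design: nearest-neighbour assignment
    followed by a centroid update, repeated for a fixed number of iterations.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(training_vectors, dtype=np.float64)
    codebook = X[rng.choice(len(X), codebook_size, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every vector to every codevector
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(codebook_size):
            members = X[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)  # move codevector to centroid
    return codebook
```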
18. Single bit-map block truncation coding of color images using a hopfield neural network
- Author
-
Yih-Chuan Lin, Shen-Chuan Tai, and Jung-Feng Lin
- Subjects
Information Systems and Management ,Theoretical computer science ,Artificial neural network ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,computer.file_format ,Block Truncation Coding ,Computer Science Applications ,Theoretical Computer Science ,Primary color ,Artificial Intelligence ,Control and Systems Engineering ,Bitmap ,Cluster analysis ,Encoder ,Algorithm ,computer ,Software ,Mathematics ,Coding (social sciences) ,Color Cell Compression - Abstract
This paper describes a new single bit-map block truncation coding (SBBTC) scheme that works with a Hopfield neural network (HNN) for the coding of color images. An incoming color block is encoded using three block truncation coding (BTC) encoders with a common bit map for the three primary color planes. An HNN is used to generate a good single bit map in the SBBTC by clustering the color vectors in the block into two classes. The block under consideration is then encoded by the generated bit map and the two reconstruction colors, each associated with one of the two classes. In comparison with other existing methods, the proposed color BTC gives the best mean-square error (MSE) performance at the same bit rate.
- Published
- 1997
- Full Text
- View/download PDF
19. Fast full-search block-matching algorithm for motion-compensated video compression
- Author
-
Yih-Chuan Lin and Shen-Chuan Tai
- Subjects
Motion compensation ,Matching (statistics) ,Mean squared error ,business.industry ,Computation ,Frame (networking) ,Pattern recognition ,Reduction (complexity) ,Redundancy (information theory) ,Motion estimation ,Computer vision ,Pattern matching ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Block (data storage) ,Mathematics ,Data compression - Abstract
This paper proposes a fast block-matching algorithm that uses three fast matching error measures, in addition to the conventional mean absolute error (MAE) or mean square error (MSE). An incoming reference block in the current frame is compared to candidate blocks within the search window using multiple matching criteria. The three fast matching error measures are established from integral projections, taking advantage of their good representation of block features and their low complexity in measuring matching errors. Most of the candidate blocks can be rejected by calculating only one or more of the three fast matching error measures. The time-consuming computations of MSE or MAE are performed on only the few candidate blocks that first pass all three fast matching criteria. Simulation results show that a reduction of over 86% in computations is achieved after integrating the three fast matching criteria into the full-search algorithm, while ensuring the optimal accuracy.
- Published
- 1997
- Full Text
- View/download PDF
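Entry 19's idea can be illustrated with a single projection-based test: the SAD of the row (or column) sums is a lower bound on the block SAD, so most candidates can be rejected without a pixel-wise comparison. A simplified sketch using one bound instead of the paper's three criteria (function and parameter names are illustrative):

```python
import numpy as np

def full_search_with_projections(ref_block, frame, y0, x0, search):
    """Full search around (y0, x0) that rejects most candidates cheaply.

    The SAD of the row/column projections never exceeds the pixel-wise SAD,
    so a candidate whose projection error already exceeds the current best
    SAD can be skipped before the expensive full comparison.
    """
    n = ref_block.shape[0]
    ref = ref_block.astype(np.int64)
    ref_rows, ref_cols = ref.sum(axis=1), ref.sum(axis=0)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > frame.shape[0] or x + n > frame.shape[1]:
                continue
            cand = frame[y:y + n, x:x + n].astype(np.int64)
            bound = max(np.abs(cand.sum(axis=1) - ref_rows).sum(),
                        np.abs(cand.sum(axis=0) - ref_cols).sum())
            if best_sad is not None and bound >= best_sad:
                continue                        # cheap rejection
            sad = np.abs(cand - ref).sum()      # full matching error
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```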
20. Low bit rate subband DCT image compression
- Author
-
Shen-Chuan Tai and Yung-Gi Wu
- Subjects
business.industry ,Signal compression ,Image processing ,Image segmentation ,Sub-band coding ,Channel capacity ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Transform coding ,Harmonic Vector Excitation Coding ,Image compression ,Mathematics - Abstract
This paper describes two novel methods to improve the efficiency of subband coding by means of energy compactness in the frequency domain. Our coding scheme applies a variable block size DCT to the subband signals to obtain high coding efficiency. A class-driven segmentation is devised as the variable block partition scheme. In addition, an information-oriented coding strategy without side information is developed. The strategy not only retains the image properties but also reduces the computation overhead. Simulations are carried out on the Lena image, and their results show the effectiveness of the two approaches, especially for low bit rate applications.
- Published
- 1997
- Full Text
- View/download PDF
21. An Adaptive Image Enhancement Algorithm Based on Zone System
- Author
-
Han-Ru Fan, Shen-Chuan Tai, and Chia Ying Chang
- Subjects
Pixel ,Dynamic range ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Edge enhancement ,Sharpening ,Luminance ,Digital image ,Zone System ,Contrast (vision) ,Computer vision ,Artificial intelligence ,business ,Algorithm ,media_common - Abstract
The main objective of image enhancement is to improve the visual quality of digital images that are captured under extremely low or non-uniform lighting conditions. We present an adaptive image enhancement algorithm based on the Zone System. This study reveals hidden image details and increases the contrast of an image with low dynamic range. The algorithm comprises two processes: adaptive luminance enhancement and adaptive contrast enhancement. The adaptive luminance enhancement is a global intensity transform function based on Zone System information. This process not only increases the luminance of darker pixels but also compresses the dynamic range of the image. The adaptive contrast enhancement adjusts the intensity of each pixel based on the discontinuities of the local luminance. It also improves the contrast of local regions and reveals image details clearly. In the experimental results, the proposed algorithm performs well at enhancing contrast, preserving detail, and sharpening object edges, and it compares favorably with other algorithms in both subjective and objective evaluations.
- Published
- 2013
- Full Text
- View/download PDF
22. Speeding Up the Decisions of Quad-Tree Structures and Coding Modes for HEVC Coding Units
- Author
-
Jui Feng Hu, Bo Jhih Chen, Chia Ying Chang, and Shen-Chuan Tai
- Subjects
Low complexity ,Computer science ,Real-time computing ,Compression ratio ,Codec ,Quadtree ,Temporal correlation ,Coding tree unit ,Fast algorithm ,Algorithm ,Coding (social sciences) - Abstract
High Efficiency Video Coding (HEVC) is being developed jointly by ISO/IEC MPEG and the ITU-T Video Coding Experts Group (VCEG) and is expected to be a popular next-generation video codec. HEVC can provide a higher compression ratio than the H.264/AVC standard; however, the coding complexity is dramatically increased as well. In this work, a fast algorithm for coding unit decision is proposed to reduce the burden of the encoding time in HEVC. The proposed algorithm exploits the temporal correlation in the neighboring frames of a video sequence to avoid unnecessary examinations of CU quad-trees. In addition, based on an adaptive threshold, the best prediction mode is determined early to be SKIP mode, reducing the exhaustive evaluations at the prediction stage. The performance of the proposed algorithm is verified through the test model for HEVC, HM 5.0. The experimental results show that the proposed algorithm achieves about 27%, 33%, 20%, and 21% total encoding time reduction on average under the Low-Delay High Efficiency, Low-Delay Low Complexity, Random-Access High Efficiency, and Random-Access Low Complexity configurations respectively, with a negligible degradation of coding performance.
- Published
- 2013
- Full Text
- View/download PDF
23. A Frame Rate Up-Conversion Algorithm for 3-D Video
- Author
-
Chih-Pei Yeh, Yao-Tang Chang, Chien-Shiang Hong, Chuen-Ching Wang, and Shen-Chuan Tai
- Subjects
Motion compensation ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Inter frame ,Residual frame ,Motion vector ,Quarter-pixel motion ,Motion field ,Motion estimation ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Block-matching algorithm - Abstract
In this paper, we present an improved multi-pass true motion estimation algorithm for frame rate up-conversion. In the proposed motion estimation algorithm, the motion vectors of different objects, which are separated using depth information, are refined respectively. Therefore, more accurate pixel-based motion vectors can be obtained, which improves the performance of Frame Rate Up-Conversion (FRUC). For overlapped regions, the motion vector can be chosen from candidates using depth information. Experimental results show that our proposed algorithm effectively enhances the overall quality of the frame rate up-converted video sequence, both subjectively and objectively, even with complex scenes and fast motion.
- Published
- 2012
- Full Text
- View/download PDF
24. Sharpness Enhancement Algorithm through Edge Information
- Author
-
Shen-Chuan Tai, Zih-Siou Chen, Yi-Ying Chang, Tzu-Wen Liao, and Juo-Chen Chen
- Subjects
business.product_category ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Pattern recognition ,Image (mathematics) ,Image texture ,Histogram ,Computer vision ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,Noise (video) ,business ,Algorithm ,Image based ,Digital camera ,Mathematics - Abstract
The goal of the proposed algorithm is to enhance images so that they look like digital camera photos, or better. Some enhancement filters process too aggressively, which makes the image look unnatural. This paper proposes an algorithm that clearly represents detailed textures and sharpens the overall image based on a perceptual approach. The sharpened images are then merged using the edge information of the original image. The experimental results show that the detailed textures can be easily observed. After applying our algorithm, the whole image looks not only sharper but also natural.
- Published
- 2012
- Full Text
- View/download PDF
25. Local contrast enhancement algorithm for high contrast scene images
- Author
-
Li-Wei Chen, Yi-Ying Chang, and Shen-Chuan Tai
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Tone mapping ,Field (computer science) ,Computer graphics ,Histogram ,Key (cryptography) ,Computer vision ,Adaptive histogram equalization ,Artificial intelligence ,Graphics ,business ,Algorithm ,Histogram equalization - Abstract
The ultimate goal of realistic graphics is the creation of images that provoke the same responses that a viewer would have to a real scene. In the field of computer graphics, one related key problem is the issue of displaying radiant intensities in a meaningful way. This paper proposes an efficient local contrast enhancement algorithm based on histogram equalization and histogram stretching. The algorithm also incorporates tone mapping to handle high-contrast scene images. Experimental results have shown that it produces low dynamic range images with perceptually good quality and strong detail preservation. These strong experimental results warrant further exploration of the potential of this algorithm.
- Published
- 2011
- Full Text
- View/download PDF
26. A fast and reliable algorithm for video noise estimation based on spatio-temporal Sobel gradients
- Author
-
Shen-Chuan Tai and Shih-Ming Yang
- Subjects
Gradient noise ,symbols.namesake ,Additive white Gaussian noise ,Gaussian noise ,symbols ,Wavelet transform ,Sobel operator ,Noise (video) ,White noise ,Variance (accounting) ,Algorithm ,Mathematics - Abstract
A fast and reliable spatio-temporal algorithm for estimating additive white Gaussian noise (AWGN) in video sequences is proposed. The input video is divided into right cuboids. Estimations are made on three independent domains (spatial, temporal-horizontal, and temporal-vertical). Inside each domain, homogeneous blocks are first identified based on Sobel gradients with an adaptive and self-determined threshold. The selected blocks are then filtered by a Laplacian operator. The average of the filtering convolutions provides the estimated noise variance for each domain. The arithmetic average of these three estimated variances is computed to be the final estimated noise variance. Experimental results show that the proposed algorithm achieves better performance and maintains low complexity for a variety of video sequences over a large range of noise variances.
- Published
- 2011
- Full Text
- View/download PDF
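A simplified, spatial-only sketch of the estimator style in entry 26: Sobel gradients pick homogeneous blocks, and a Laplacian-based estimate gives sigma per block. The paper additionally estimates along two temporal directions and averages the three results; the block size and keep ratio below are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import convolve, sobel

# 3x3 Laplacian-style kernel commonly used for noise estimation
LAPLACIAN = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=np.float64)

def estimate_sigma_spatial(frame, block=16, keep_ratio=0.2):
    """Estimate the AWGN standard deviation of one frame (spatial domain only).

    Blocks with the lowest Sobel gradient energy are taken as homogeneous;
    sigma is read from the mean absolute Laplacian response of those blocks.
    """
    f = frame.astype(np.float64)
    grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    lap = np.abs(convolve(f, LAPLACIAN))
    scores, sigmas = [], []
    h, w = f.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append(grad[y:y + block, x:x + block].mean())
            sigmas.append(np.sqrt(np.pi / 2.0) *
                          lap[y:y + block, x:x + block].mean() / 6.0)
    order = np.argsort(scores)                 # flattest blocks first
    n = max(1, int(len(order) * keep_ratio))
    return float(np.mean(np.asarray(sigmas)[order[:n]]))
```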
27. An Improved Detection Method for Zero Quantized Blocks on H.264/AVC
- Author
-
Shen-Chuan Tai and Bo-Jhih Chen
- Subjects
Discrete mathematics ,Energy conservation ,Quantization (physics) ,Computation ,False detection ,Discrete cosine transform ,Statistical analysis ,Detection rate ,Algorithm ,H 264 avc ,Mathematics - Abstract
An improved detection method for identifying zero quantized blocks (ZQBs) is proposed. The additional computational cost is reduced because 4×4 ZQBs are detected prior to the forward transform and quantization processes; we report a new criterion based on statistical analysis that takes the energy conservation theorem into account. Experiments are also carried out to validate the present method. The results indicate that the present method has a better detection rate with negligible PSNR degradation and a reasonable error and/or false detection rate compared with prevalent methods. In particular, computation savings are obtained as well.
- Published
- 2010
- Full Text
- View/download PDF
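Entries 13 and 27 both rest on the same early-skip pattern: test the residual cheaply and, if a sufficient condition holds, declare the quantized block all-zero without running the transform. A generic sketch of that wrapper (the `threshold` function is a placeholder; the papers derive their own energy-based criteria):

```python
import numpy as np

def encode_residual_block(residual, qp, transform_and_quantize, threshold):
    """Generic zero-quantized-block early skip (sketch only).

    `threshold(qp)` stands in for a sufficient condition, such as the
    energy-based criterion derived in the paper: when the residual is small
    enough, the transform/quantization result is known to be all zeros, so
    both steps are skipped.
    """
    if np.abs(residual).sum() < threshold(qp):    # cheap pre-test on the SAD
        return np.zeros_like(residual)            # declared a ZQB, DCT/Q skipped
    return transform_and_quantize(residual, qp)   # normal encoding path
```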
28. ECG data compression by corner detection
- Author
-
Shen-Chuan Tai
- Subjects
Mean squared error ,Computer science ,Speech recognition ,Biomedical Engineering ,Corner detection ,Signal Processing, Computer-Assisted ,Linear interpolation ,Signal ,Computer Science Applications ,Electrocardiography ,Sampling (signal processing) ,Code (cryptography) ,Humans ,Detection theory ,Algorithm ,Algorithms ,Data compression - Abstract
An ECG sampled at a rate of 360, 500 samples s⁻¹ or more produces a large amount of redundant data that are difficult to store and transmit. A process is therefore required to represent the signals with clinically acceptable fidelity and with the fewest code bits possible. In the paper, a real-time ECG data compression algorithm, CORNER, is presented. CORNER is an efficient algorithm which locates significant samples and at the same time encodes the linear segments between them using linear interpolation. The samples selected include, but are not limited to, the samples that are significantly displaced from the encoded signal, such that the allowed maximum error is limited to a constant epsilon which is specified by the user. The way in which CORNER computes the displacement of a sample from the encoded signal guarantees that the high-activity regions are more accurately coded. The results are compared with those of the well-known data compression algorithm AZTEC, which is also a real-time algorithm. It is found that, under the same bit rate, a considerable improvement of the signal-to-noise ratio (SNR) and root mean square error (RMSerr) can be achieved by employing the proposed CORNER algorithm. An average SNR (RMSerr) of 27.0 dB (5.668) can be achieved even at an average bit rate of 0.79 bit sample⁻¹ by employing CORNER, whereas the average SNR (RMSerr) achieved by AZTEC under the same bit rate is 16.60 dB (19.368).
- Published
- 1992
- Full Text
- View/download PDF
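A much-simplified cousin of the CORNER idea in entry 28: keep only "significant" samples and represent everything between them by straight lines, subject to a hard error bound eps. CORNER's actual displacement rule and real-time encoding are not reproduced here; this is just the generic greedy version.

```python
import numpy as np

def piecewise_linear_compress(signal, eps):
    """Greedy piecewise-linear coder with a hard error bound.

    The current segment is extended until some sample deviates from the
    straight line between the endpoints by more than eps; decoding is plain
    linear interpolation between the retained samples.
    """
    signal = np.asarray(signal, dtype=np.float64)
    kept = [0]                                    # indices of retained samples
    start = 0
    for end in range(2, len(signal)):
        xs = np.arange(start, end + 1)
        line = np.interp(xs, [start, end], [signal[start], signal[end]])
        if np.max(np.abs(signal[start:end + 1] - line)) > eps:
            kept.append(end - 1)                  # last endpoint that still fit
            start = end - 1
    kept.append(len(signal) - 1)
    return kept
```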
29. Six-band sub-band coder on ECG waveforms
- Author
-
Shen-Chuan Tai
- Subjects
Data processing ,Signal processing ,Computer science ,Frequency band ,Speech recognition ,Biomedical Engineering ,Signal Processing, Computer-Assisted ,Huffman coding ,Radio spectrum ,Electronics, Medical ,Computer Science Applications ,Electrocardiography ,symbols.namesake ,symbols ,Humans ,Waveform ,Algorithm ,Data compression ,Coding (social sciences) - Abstract
An ECG sampled at a rate of 500 samples s⁻¹ or more produces a large amount of redundant data that are difficult to store and transmit. A process is therefore required to represent the signals with clinically acceptable fidelity and with the fewest code bits possible. In the paper, an efficient sub-band coding method for encoding ECG waveforms is presented. Although sub-band coding has been successfully applied to speech signals, this is the first time that the technique has been applied to the encoding of ECG waveforms. A frequency band decomposition of an ECG waveform is carried out by means of quadrature mirror filters (QMF), which split the ECG spectrum into six bands of unequal width. In the lower frequency bands, which contain most of the ECG spectrum energy, a larger number of bits per sample is used, whereas in the upper frequency bands, which contain noise-like signals, fewer bits per sample and the run-length coding method are used. The simulation results are presented in terms of bit rates and the quality of the reconstructed waveforms. The results show that a reproduction with an average signal-to-noise ratio (SNR) of 29.97 dB can be achieved even at an average bit rate of 0.81 bits per sample.
- Published
- 1992
- Full Text
- View/download PDF
30. A transformational approach to synthesizing combinational circuits
- Author
-
M.W. Du, Shen-Chuan Tai, and Richard C. T. Lee
- Subjects
Combinational logic ,Adder ,Function (mathematics) ,Computer Graphics and Computer-Aided Design ,Transformation (music) ,Tree (data structure) ,Permutation ,Logic synthesis ,Free variables and bound variables ,Electrical and Electronic Engineering ,Arithmetic ,Algorithm ,Software ,Mathematics - Abstract
VAR, a transformational approach for obtaining multilevel logic synthesis results, is described. Suppressed variable permutation and complementation (SVPC) transformations, which are powerful and can be economically realized, are introduced. Each SVPC transformation can be viewed as an identity mapping on the n-cube, except on an (n-r)-subcube (defined by r fixed coordinates), where it behaves like a variable permutation and complementation (VPC) transformation on n-r variables (the free variables). VAR is based on transforming the input functions to predefined goal functions by SVPC transformations. A transformation tree is obtained, and the transformations on the tree are collapsed and further simplified to obtain an economical circuit. This approach is illustrated by considering the sum function of the full adder.
- Published
- 1991
- Full Text
- View/download PDF
31. Automatic intensity-pair distribution for image contrast enhancement
- Author
-
Nai-Ching Wang and Shen-Chuan Tai
- Subjects
Pixel ,business.industry ,Contrast (statistics) ,Image processing ,Noise ,Histogram ,Human visual system model ,Effective method ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Selection (genetic algorithm) ,Mathematics - Abstract
Intensity-pair distribution was recently proposed to enhance the contrast of an image. Several parameters are provided in that algorithm for users to control the enhancement. With a proper combination of parameters, the algorithm provides satisfying contrast enhancement without noise amplification, an unnatural look, or other common drawbacks seen in previous work, but no effective method of parameter selection is given. In this paper, we propose an effective criterion based on the human visual system to decide on a proper combination of parameters. With an appropriate combination of parameters, we can take full advantage of that algorithm. Parameter selection can easily be automated by iteratively applying the proposed criterion. We present experimental results comparing contrast enhancement with proper and improper combinations of parameters.
- Published
- 2008
- Full Text
- View/download PDF
32. A Two-Stage Contrast Enhancement Algorithm for Digital Images
- Author
-
Yi-Ying Chang, Yen-Cheng Lu, Nai-Ching Wang, and Shen-Chuan Tai
- Subjects
Artifact (error) ,Logarithm ,Computer science ,Just-noticeable difference ,media_common.quotation_subject ,Logarithmic growth ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Contrast (vision) ,Image processing ,Algorithm design ,Tone mapping ,Algorithm ,media_common - Abstract
Studies of the contrast sensitivity of the human eye show that logarithmic curves obey the Weber-Fechner law of just-noticeable-difference response in human perception. In this paper, we propose a local contrast enhancement algorithm with logarithm-based curves for both high dynamic range and low dynamic range images, and the algorithm can adaptively change the curvature with local information. We also define two parameters to decide the level of contrast enhancement in the tone mapping procedure. For the halo artifact that local operators suffer from, a two-stage procedure is designed to solve the problem. We present experimental results to show the performance of our algorithm compared with other existing methods.
- Published
- 2008
- Full Text
- View/download PDF
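The Weber-Fechner-style curve referred to in entry 32 is, in its simplest global form, a normalized logarithm of luminance. A tiny sketch of such a curve (the paper's method is local and adapts the curvature to local information; the `strength` knob here is purely illustrative):

```python
import numpy as np

def log_tone_curve(luminance, strength=1.0):
    """Global logarithmic tone curve: maps luminance in [0, 1] back to [0, 1].

    Larger `strength` compresses highlights more aggressively, mimicking the
    Weber-Fechner response to which the abstract refers.
    """
    L = np.clip(np.asarray(luminance, dtype=np.float64), 0.0, 1.0)
    k = max(strength, 1e-6)
    return np.log1p(k * L) / np.log1p(k)
```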
33. A Novel Interactively Recurrent Self-Evolving Fuzzy CMAC and Its Classification Applications
- Author
-
Jyun Guo Wang, Shen-Chuan Tai, and Cheng-Jian Lin
- Subjects
Computer science ,business.industry ,Fuzzy logic ,Computer Science Applications ,Theoretical Computer Science ,Internal feedback ,Cerebellar model articulation controller ,Fuzzy cmac ,Hypercube ,Artificial intelligence ,business ,Gradient descent ,Structure learning ,Classifier (UML) ,Algorithm ,Software - Abstract
In this paper, an Interactively Recurrent Self-evolving Fuzzy Cerebellar Model Articulation Controller (IRSFCMAC) model is developed for solving classification problems. The proposed IRSFCMAC classifier consists of internal feedback and external loops, which are generated by the hypercube cell firing strength fed back to itself and to other hypercube cells. The learning process of the IRSFCMAC starts with an empty hypercube base, and then all hypercube cells are generated and learned online via structure and parameter learning, respectively. The structure learning algorithm is based on the degree measure to determine the number of hypercube cells. The parameter learning algorithm, based on the gradient descent method, adjusts the shapes of the membership functions and the corresponding fuzzy weights of the IRSFCMAC. Finally, the proposed IRSFCMAC model is tested on four benchmark classification problems. Experimental results show that the proposed IRSFCMAC model has superior performance to the traditional FCMAC and other models.
- Published
- 2015
- Full Text
- View/download PDF
34. Designing better adaptive sampling algorithms for ECG Holter systems
- Author
-
C.W. Chang, Shen-Chuan Tai, and C.F. Chen
- Subjects
Adaptive filter ,Signal processing ,Adaptive sampling ,Adaptive algorithm ,Sampling (signal processing) ,Computer science ,Electrocardiography, Ambulatory ,Biomedical Engineering ,Sampling (statistics) ,Signal Processing, Computer-Assisted ,Algorithm ,Gas compressor ,Algorithms - Abstract
Let Ψ be any adaptive sampling algorithm that can run in real time on a tapeless multichannel electrocardiogram (ECG) Holter system. Simple methods which can significantly improve Ψ's fidelity are described and their results are compared in this paper. It is shown that by adding some simple tests to Ψ, the signals reconstructed by Ψ can be improved by as much as 5.45 dB. It is also shown that, under the same data rate, a good data compressor with slowly sampled input ECG is preferable to a bad data compressor with highly sampled input ECG.
- Published
- 1997
- Full Text
- View/download PDF
35. A motion and edge adaptive deinterlacing algorithm
- Author
-
F.J. Chang, Shen-Chuan Tai, and C.S. Yu
- Subjects
Motion compensation ,Motion analysis ,Pixel ,Computer science ,Orientation (computer vision) ,business.industry ,Feature extraction ,Video quality ,Edge detection ,Deinterlacing ,Distortion ,Motion estimation ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Interpolation - Abstract
A motion-adaptive and edge-based deinterlacing algorithm is proposed in this paper. Pixels in a field are divided into two regions: a static region and a moving region. Inter-field interpolation is performed on the static region; intra-field edge-based deinterlacing methods are used on the moving region. To estimate the correct interpolation orientation, edge detection is performed on the moving-region pixels. The moving region is further classified into four areas that are subjected to different appropriate interpolation methods with quarter-pel accuracy. Experimental results show that the proposed technique can produce high quality video, and the computational complexity is acceptable for consumer electronics applications.
- Published
- 2005
- Full Text
- View/download PDF
36. DCT-based image compression using wavelet-based algorithm with efficient deblocking filter
- Author
-
Wen-Chien Yen and Shen-Chuan Tai
- Subjects
Discrete wavelet transform ,Computer science ,Trellis quantization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,computer.file_format ,JPEG ,JPEG 2000 ,Discrete cosine transform ,Quantization (image processing) ,computer ,Algorithm ,Image compression ,Data compression - Abstract
Discrete cosine transform (DCT) is widely used in many practical image/video compression systems because of its compression performance and computational efficiency. This work adopts the DCT and modifies the SPIHT algorithm, which was initially designed for encoding discrete wavelet transform (DWT) coefficients, to make it suitable for encoding DCT coefficients. The algorithm rearranges the DCT coefficients to concentrate signal energy and proposes combination and dictator operations to eliminate the correlation within the same-level subband when encoding DCT-based images. The coding complexity of the proposed algorithm for DCT coefficients is close to that of JPEG, but the performance is higher than that of JPEG2000. Furthermore, the proposed algorithm appends a deblocking function at low bit rates in order to improve the perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of both PSNR and perceptual results over JPEG2000 at the same bit rate.
- Published
- 2005
- Full Text
- View/download PDF
37. Embedded medical image compression using DCT based subband decomposition and modified SPIHT data organization
- Author
-
Shen-Chuan Tai and Yen-Yu Chen
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,computer.file_format ,Set partitioning in hierarchical trees ,Redundancy (information theory) ,Wavelet ,Frequency domain ,JPEG 2000 ,Discrete cosine transform ,Computer vision ,Entropy encoding ,Artificial intelligence ,business ,computer ,Algorithm ,Image compression - Abstract
In the paper, an 8×8 DCT approach is adopted to perform subband decomposition, followed by modified SPIHT data organization and entropy coding. The translation function has the ability to retain the detail characteristics of an image. By means of a simple transformation to gather the DCT spectrum data of the same frequency domain, the translation function exploits the characteristics of all individual blocks in a global framework. In this scheme, insignificant DCT coefficients that correspond to the same spatial location in the high-frequency subbands can be used to reduce the redundancy through a combined function proposed in association with the modified SPIHT. Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition and improved the quality of the reconstructed medical image in terms of both PSNR and perceptual results over JPEG2000 and the original SPIHT at the same bit rate.
- Published
- 2004
- Full Text
- View/download PDF
38. VQ bit rate reduction technique by transform compression
- Author
-
Yung-Gi Wu and Shen-Chuan Tai
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Vector quantization ,Codebook ,computer.file_format ,Iterative reconstruction ,Bit rate ,Discrete cosine transform ,Computer vision ,Artificial intelligence ,Bit Rate Reduction ,business ,Image retrieval ,computer ,Algorithm ,Transform coding - Abstract
A technique to reduce the overhead of vector quantization (VQ) is developed. This method exploits the high correlation among neighboring blocks. By rearranging the codebook after the training process, the indices of neighboring vectors show less variety, and compression gains are attained. Owing to the energy compaction advantage of the discrete cosine transform (DCT), the index buffer formed after the encoding phase is fed to the DCT to reduce the bit rate further. Statistical illustrations are presented to demonstrate the efficient performance of the proposed method. Experiments are carried out on natural gray images. Simulation results show that our method decreases the bit rate by more than 50% at the same reconstructed image quality compared to standard VQ coding.
- Published
- 2002
- Full Text
- View/download PDF
39. An extensive Markov system for ECG exact coding
- Author
-
Shen-Chuan Tai
- Subjects
Computer science ,Speech recognition ,Biomedical Engineering ,Markov systems ,Markov process ,Huffman coding ,Markov model ,Electrocardiography ,Entropy (classical thermodynamics) ,symbols.namesake ,Humans ,Entropy (information theory) ,Entropy (energy dispersal) ,Entropy (arrow of time) ,Mathematics ,Entropy (statistical thermodynamics) ,business.industry ,Variable-order Markov model ,Signal Processing, Computer-Assisted ,Pattern recognition ,Markov Chains ,symbols ,Correlation method ,Artificial intelligence ,Ecg signal ,business ,Algorithm ,Entropy (order and disorder) - Abstract
In this paper, an extensive Markov process, which considers both the coding redundancy and the intersample redundancy, is presented to measure the entropy value of an ECG signal more accurately. It utilizes the intersample correlations by predicting the incoming n samples based on the previous m samples which constitute an extensive Markov process state. Theories of the extensive Markov process and conventional n repeated applications of m-th order Markov process are studied first in this paper. After that, they are realized for ECG exact coding. Results show that a better performance can be achieved by our system. The average code length for the extensive Markov system on the second difference signals was 2.512 b/sample, while the average Huffman code length for the second difference signals was 3.326 b/sample.
- Published
- 2002
- Full Text
- View/download PDF
40. Artifact-free superresolution algorithm with natural texture preservation
- Author
-
Tse Ming Kuo, Shen-Chuan Tai, and Hong Je Chen
- Subjects
Artifact (error) ,Texture (cosmology) ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Fractal analysis ,Atomic and Molecular Physics, and Optics ,Edge detection ,Computer Science Applications ,Fractal ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Image resolution ,Digital filter - Abstract
Superresolution (SR) algorithms have recently become a hot research topic. The main purpose of image upscaling is to obtain high-resolution images from low-resolution ones, and these upscaled images should look as if they had been taken with a camera whose resolution matches that of the upscaled images, or at least present natural textures. In general, some SR algorithms preserve clear edges but blur the textures, while others preserve detailed textures but cause some obvious artifacts along edges. The proposed SR algorithm presents the detailed textures while refining the strong edges and avoiding obvious artifacts. The goal is achieved by using orthogonal fractal as the preliminary upscaling method in conjunction with proper postprocessing in which directional enhancement is adopted. In fact, the postprocessing part of the proposed SR algorithm can effectively reduce most jagged artifacts caused by SR algorithms. The simulation results show that the proposed SR algorithm performs well in both objective and subjective measurements. Moreover, most detailed textures are properly enhanced and most jagged artifacts caused by SR algorithms can also be effectively reduced.
- Published
- 2014
- Full Text
- View/download PDF
41. Very low bit rate DCT coding by spectral similarity analysis
- Author
-
Shen-Chuan Tai and Yung Gi Wu
- Subjects
Signal processing ,Frequency band ,Image quality ,Discrete cosine transform ,computer.file_format ,JPEG ,computer ,Algorithm ,Transform coding ,Similitude ,Mathematics ,Image compression - Abstract
Conventional transform coding schemes such as JPEG process the spectrum signal in a block-by-block manner because of its simple manipulation; nevertheless, this does not consider the similarity between different spectra. The proposed method devises a translation function, which reorganizes the individual spectrum data to generate global spectra according to their frequency bands. Among those different bands a high degree of similarity exists. Our algorithm analyzes the similarity of those different spectrum bands to reduce the bit rate for transmission or storage. Simulations are carried out on many different natural images to demonstrate that the proposed method can improve performance when compared with other existing transform coding schemes, especially at very low bit rates (below 0.25 bpp).
- Published
- 1998
- Full Text
- View/download PDF
42. On designing efficient superresolution algorithms by regression models
- Author
-
Shen-Chuan Tai and Tse Ming Kuo
- Subjects
Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Regression analysis ,Image segmentation ,Resolution (logic) ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,Visualization ,Image (mathematics) ,Wavelet ,Electrical and Electronic Engineering ,Image resolution ,Algorithm ,Interpolation - Abstract
A good superresolution (SR) algorithm obtains high-resolution (HR) images from the corresponding low-resolution (LR) ones and, moreover, makes the former look like they had been acquired with a sensor having the expected resolution, or at least as "natural" as possible. In general, fast SR algorithms usually produce more artifacts in the enlarged image, while the well-performing ones usually have high complexity and take much more computing time. For this purpose, four efficient SR algorithms based on regression models are proposed. In the proposed SR algorithms, the difference between a natural HR image and an HR image obtained by fast interpolation is taken as the lost detail and is assumed to be composed of several differently oriented details. Using the self-similarity between the input LR image and its corresponding HR image, a regression model is established from the input LR image to decide the proper weights of these oriented details, which are then used to reconstruct the lost detail of the natural HR image. As shown in the experimental results, the proposed SR algorithms not only perform well in both objective criteria and visual quality but also take less computing time than some well-performing algorithms.
- Published
- 2013
- Full Text
- View/download PDF
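Here is a simplified sketch of the self-similarity/regression idea: downscale the LR image, interpolate it back, take the difference as the "lost detail", regress it onto a few oriented high-pass responses, and reuse the learned weights at the target scale. The oriented kernels, the use of ordinary least squares, and bicubic zoom are illustrative assumptions, not the four algorithms of the paper.

```python
import numpy as np
from scipy import ndimage

# Four simple oriented high-pass kernels (horizontal, vertical, two diagonals).
KERNELS = [
    np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float),
    np.array([[0, -1, 0], [0, 2, 0], [0, -1, 0]], float),
    np.array([[-1, 0, 0], [0, 2, 0], [0, 0, -1]], float),
    np.array([[0, 0, -1], [0, 2, 0], [-1, 0, 0]], float),
]

def oriented_responses(img):
    return [ndimage.convolve(img, k, mode="reflect") for k in KERNELS]

def regression_sr(lr, scale=2):
    lr = lr.astype(np.float64)
    # Self-similar training pair: the LR image vs. its own downscaled, re-upscaled copy.
    small = ndimage.zoom(lr, 1.0 / scale, order=3)
    coarse = ndimage.zoom(small, scale, order=3)[: lr.shape[0], : lr.shape[1]]
    detail = lr - coarse                                   # "lost detail" target
    feats = np.stack([f.ravel() for f in oriented_responses(coarse)], axis=1)
    w, *_ = np.linalg.lstsq(feats, detail.ravel(), rcond=None)

    # Apply the learned weights at the target scale.
    hr0 = ndimage.zoom(lr, scale, order=3)
    feats_hr = np.stack([f.ravel() for f in oriented_responses(hr0)], axis=1)
    return hr0 + (feats_hr @ w).reshape(hr0.shape)

if __name__ == "__main__":
    lr = np.random.rand(64, 64) * 255
    print(regression_sr(lr, scale=2).shape)                # (128, 128)
```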
43. Phase Retrieval Based on Matrix Completion
- Author
-
程鸿 Cheng Hong, 沈川 Shen Chuan, 韦穗 Wei Sui, 张芬 Zhang Fen, and 张成 Zhang Cheng
- Subjects
Matrix completion ,Computer science ,Phase retrieval ,Algorithm ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials - Published
- 2013
- Full Text
- View/download PDF
44. Improving the performance of electrocardiogram sub-band coder by extensive Markov system
- Author
-
Shen-Chuan Tai
- Subjects
Signal processing ,Markov chain ,Speech recognition ,Biomedical Engineering ,Markov systems ,Markov process ,Signal Processing, Computer-Assisted ,Models, Biological ,Markov Chains ,Computer Science Applications ,Sub-band coding ,Search engine ,symbols.namesake ,Electrocardiography ,Compression ratio ,symbols ,Humans ,Algorithm ,Coding (social sciences) ,Mathematics - Abstract
The paper reports on a continuation of previous work on the digital coding of 500 samples/s ECG at 240 bits/s. The focus is on six-band sub-band coding (SBC) with extensive Markov systems as post-processors for each sub-band signal. An extensive Markov process, which considers both the coding redundancy and the intersample redundancy, exploits these redundancies by predicting the incoming n samples from the previous m samples, which together constitute an extensive Markov process state. Both the previous m samples and the incoming n samples are treated as extension codes of the quantisation levels. Simulation results show that a reproduction with an average signal-to-noise ratio (SNR) of 27.21 dB, or a peak SNR of 58.6 dB, was achieved at an average bit rate of 0.48 bit per sample, corresponding to an average compression ratio of 25, whereas under the same signal fidelity the previously proposed SBC system required an average of 0.714 bit per sample. (A toy sketch of the state-based prediction follows this record.)
- Published
- 1995
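The following is a toy sketch of predicting the next n quantised samples from the previous m samples (the "state"), by counting how often each continuation follows each state. Sending only a hit/miss flag plus corrections would exploit both coding and intersample redundancy; the table sizes (m, n) and the fallback behaviour here are illustrative assumptions, not the paper's coder.

```python
from collections import defaultdict, Counter

def build_predictor(levels, m=2, n=1):
    # Count, for each m-sample state, how often every n-sample continuation occurs.
    table = defaultdict(Counter)
    for i in range(len(levels) - m - n + 1):
        state = tuple(levels[i:i + m])
        nxt = tuple(levels[i + m:i + m + n])
        table[state][nxt] += 1
    # Keep only the most frequent continuation per observed state.
    return {s: c.most_common(1)[0][0] for s, c in table.items()}

def prediction_hit_rate(levels, predictor, m=2, n=1):
    hits = total = 0
    for i in range(len(levels) - m - n + 1):
        state = tuple(levels[i:i + m])
        actual = tuple(levels[i + m:i + m + n])
        if state in predictor:
            total += 1
            hits += predictor[state] == actual
    return hits / total if total else 0.0

if __name__ == "__main__":
    # Placeholder quantised sub-band signal with a few repeating quantiser levels.
    sig = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1]
    pred = build_predictor(sig, m=2, n=1)
    print(prediction_hit_rate(sig, pred, m=2, n=1))
```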
45. Efficient codebook search algorithm for vector quantization
- Author
-
Shen-Chuan Tai and Chih chiang Lai
- Subjects
Linde–Buzo–Gray algorithm ,Binary search algorithm ,Learning vector quantization ,Theoretical computer science ,Search algorithm ,Quantization (signal processing) ,Codebook ,Vector quantization ,Best-first search ,Algorithm ,Mathematics - Abstract
In this paper, we present an efficient codebook search algorithm for a VQ-based system. The proposed fast search algorithm exploits the energy-compaction property of the transform domain and the geometrical relations between the input vector and the codevectors to eliminate codevectors that cannot be the closest codeword to the input vector. It does not need to examine every entry in the codebook of a vector quantization encoder, yet it achieves performance equivalent to a full search. In comparison with other existing fast algorithms, the proposed algorithm requires the fewest multiplications and the smallest total number of distortion measurements. (A full-search-equivalent elimination sketch follows this record.)
- Published
- 1995
- Full Text
- View/download PDF
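Below is a minimal full-search-equivalent sketch. Because the orthonormal DCT preserves squared distances and concentrates most energy in the first few coefficients, a running partial distance quickly exceeds the current best and lets us reject a codevector early. Partial-distortion elimination is one common strategy of this kind; the paper's exact geometric criterion is not reproduced here.

```python
import numpy as np
from scipy.fft import dct

def nearest_codeword(x, codebook_dct):
    xt = dct(x, norm="ortho")                  # transform the input vector once
    best_idx, best_dist = -1, np.inf
    for i, c in enumerate(codebook_dct):
        d = 0.0
        for k in range(xt.size):               # accumulate coefficient by coefficient
            d += (xt[k] - c[k]) ** 2
            if d >= best_dist:                 # partial distance already too large
                break
        else:
            best_idx, best_dist = i, d
    return best_idx, best_dist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(256, 16))
    codebook_dct = dct(codebook, norm="ortho", axis=1)   # precompute transforms
    x = rng.normal(size=16)
    idx, dist = nearest_codeword(x, codebook_dct)
    # Sanity check against an exhaustive search in the signal domain.
    full = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
    print(idx, full, np.isclose(dist, ((codebook[full] - x) ** 2).sum()))
```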
46. Adaptive selection model for detecting zero-quantized discrete cosine transform coefficients in video coding
- Author
-
Bo Jhih Chen, Shen-Chuan Tai, and Yung Gi Wu
- Subjects
Modified discrete cosine transform ,business.industry ,Computer science ,Quantization (signal processing) ,Trellis quantization ,General Engineering ,Atomic and Molecular Physics, and Optics ,Quantization (physics) ,Computer Science::Multimedia ,Discrete cosine transform ,Computer vision ,Artificial intelligence ,business ,Encoder ,Algorithm ,Coding (social sciences) - Abstract
An adaptive selection approach for advanced video coding is proposed that reduces the number of unnecessary discrete cosine transform (DCT), quantization (Q), inverse quantization (IQ), and inverse DCT (IDCT) computations. Because some zero-quantized DCT (ZQDCT) coefficients of the prediction residual block are difficult to identify, an adaptive selection approach is used to detect them. The presented algorithm detects ZQDCT coefficients according to their frequency positions before the DCT is performed, so redundant DCT, Q, IQ, and IDCT procedures can be skipped. To realize the proposed algorithm, three sufficient conditions are derived, yielding eight prediction modes that select among different types of DCT, Q, IQ, and IDCT implementations. Experimental results indicate that the proposed algorithm reduces the number of DCT, Q, IQ, and IDCT computations compared with related methods while retaining the coding performance of the original encoder. (A simplified all-zero-block test follows this record.)
- Published
- 2012
- Full Text
- View/download PDF
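A simplified all-zero-block test in the spirit of ZQDCT detection is sketched below: for an orthonormal 8x8 DCT, every 2-D basis value has magnitude at most 1/4, so each coefficient satisfies |C(u,v)| <= SAD/4; with uniform round-to-nearest quantization of step q, SAD < 2q therefore guarantees that the whole block quantizes to zero and the DCT/Q/IQ/IDCT chain can be skipped. The paper's finer, frequency-position-dependent conditions and eight modes are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

def encode_residual_block(block, qstep):
    sad = np.abs(block).sum()
    if sad < 2.0 * qstep:                        # sufficient all-zero condition
        return np.zeros_like(block), True        # DCT/Q skipped entirely
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    qcoeffs = np.round(coeffs / qstep)
    return qcoeffs, False

if __name__ == "__main__":
    flat = np.zeros((8, 8)); flat[3, 3] = 5.0                     # tiny residual
    rng = np.random.default_rng(1)
    busy = rng.integers(-40, 41, size=(8, 8)).astype(np.float64)  # large residual
    for blk in (flat, busy):
        q, skipped = encode_residual_block(blk, qstep=20.0)
        print(skipped, int(np.count_nonzero(q)))
```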
47. AZTDIS--a two-phase real-time ECG data compressor
- Author
-
Shen-Chuan Tai
- Subjects
Sample (material) ,Biophysics ,Value (computer science) ,Signal Processing, Computer-Assisted ,Linear interpolation ,Signal ,Displacement (vector) ,Electrocardiography ,Sampling (signal processing) ,Computer Systems ,Code (cryptography) ,Linear Models ,Algorithm ,Algorithms ,Data compression ,Mathematics - Abstract
An ECG sampled at a rate of 360 samples/s or more produces a large amount of redundant data that are difficult to store and transmit; a process is therefore needed to represent the signals with clinically acceptable fidelity using as few code bits as possible. In this paper, a real-time ECG data-compression algorithm, AZTDIS, is presented. AZTDIS is an efficient algorithm that locates significant samples and at the same time encodes the linear segments between them by linear interpolation. The significant samples include, but are not limited to, samples whose displacement from the encoded signal is significant, so that the maximal allowed error is bounded by a user-specified constant ε. The way AZTDIS computes the displacement of a sample from the encoded signal guarantees that high-activity regions are coded more accurately. The results from AZTDIS are compared with those from the well-known real-time data-compression algorithm AZTEC. Under the same bit rate, a considerable improvement in root-mean-square error (RMSerr) is achieved by the proposed AZTDIS algorithm: an average RMSerr of 9.715 is obtained even at an average bit rate of 0.543 bits per sample. When the maximal allowed error of AZTDIS is tuned so that its bit rate is similar to that of AZTEC, the average RMSerr achieved by AZTDIS is 5.554, whereas that achieved by AZTEC at the same bit rate is 19.368. (A small piecewise-linear approximation sketch follows this record.)
- Published
- 1993
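The following is a minimal greedy sketch of significant-sample selection with an error bound: the current linear segment is extended as long as every intermediate sample stays within ε of the straight line joining the segment's endpoints. This captures the general flavour of interpolation-based ECG compression with a maximal error constraint; it is not the AZTDIS displacement test itself.

```python
import numpy as np

def select_significant_samples(x, eps):
    x = np.asarray(x, dtype=np.float64)
    keep = [0]                                   # indices of significant samples
    start, end = 0, 2
    while end < len(x):
        # Line from x[start] to x[end]; check every sample on the span.
        t = np.arange(start, end + 1)
        line = x[start] + (x[end] - x[start]) * (t - start) / (end - start)
        if np.max(np.abs(x[start:end + 1] - line)) <= eps:
            end += 1                             # still within tolerance: extend
        else:
            keep.append(end - 1)                 # delimit at the previous sample
            start = end - 1
            end = start + 2
    if keep[-1] != len(x) - 1:
        keep.append(len(x) - 1)
    return keep

def reconstruct(x, keep):
    x = np.asarray(x, dtype=np.float64)
    return np.interp(np.arange(len(x)), keep, x[keep])

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 360)
    ecg_like = np.sin(t) + 0.3 * np.sin(7 * t)   # placeholder waveform
    idx = select_significant_samples(ecg_like, eps=0.05)
    rec = reconstruct(ecg_like, idx)
    print(len(idx), float(np.sqrt(np.mean((ecg_like - rec) ** 2))))
```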
48. SLOPE--a real-time ECG data compressor
- Author
-
Shen-Chuan Tai
- Subjects
Data processing ,Biomedical Engineering ,Huffman coding ,Sample (graphics) ,Computer Science Applications ,symbols.namesake ,Computer Systems ,symbols ,Electrocardiography, Ambulatory ,Humans ,Gas compressor ,Real-time operating system ,Algorithm ,Communication channel ,Mathematics ,Integer (computer science) ,Data compression - Abstract
An ECG sampled at a rate of 250 samples/s or more produces a large amount of redundant data that are difficult to store and transmit. In the paper, a real-time ECG data compressor, SLOPE, is presented. SLOPE treats a group of adjacent samples as a vector; this vector is extended if the next sample falls within a fan spanned by the vector and a threshold angle, and otherwise it is delimited as a linear segment. In this way, SLOPE repeatedly delimits linear segments of different lengths and slopes, and Huffman codes for the parameters describing each linear segment are transmitted. SLOPEa, a slightly modified version of SLOPE, is used to compress ambulatory ECG data. All operations used by SLOPE and SLOPEa are simple integer operations, so both are real-time compressors. Experimental results show that SLOPE obtains an average of 192 bits per channel per second (bpcs) for each ECG signal, and SLOPEa obtains an average of 148 bpcs. (A fan-test sketch follows this record.)
- Published
- 1991
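Below is a small sketch of a fan test of this general kind: each segment's direction is fixed by its first accepted sample, and later samples extend the segment only while the vector to them stays within a threshold angle of that direction. Floating-point angles are used here for clarity, whereas the paper stresses integer-only operations, and the segment parameters are simply listed rather than Huffman-coded; this is not SLOPE itself.

```python
import math

def fan_segments(samples, theta_deg=2.0):
    theta = math.radians(theta_deg)
    segments = []                          # (start_index, length, end_value) triples
    start, ref_angle = 0, None
    for i in range(1, len(samples)):
        angle = math.atan2(samples[i] - samples[start], i - start)
        if ref_angle is None or abs(angle - ref_angle) <= theta:
            if ref_angle is None:
                ref_angle = angle          # first sample of the segment fixes the fan
        else:
            segments.append((start, i - 1 - start, samples[i - 1]))
            start = i - 1                  # new segment begins at the previous sample
            ref_angle = math.atan2(samples[i] - samples[start], i - start)
    segments.append((start, len(samples) - 1 - start, samples[-1]))
    return segments

if __name__ == "__main__":
    ramp_then_spike = list(range(20)) + [40, 60, 80] + [80] * 10
    for seg in fan_segments(ramp_then_spike, theta_deg=2.0):
        print(seg)
```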
49. Fast motion estimation algorithm using motion adaptive search
- Author
-
Fu Kai Huang, Chong Shou Yu, and Shen-Chuan Tai
- Subjects
Motion compensation ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Statistical model ,Atomic and Molecular Physics, and Optics ,Motion (physics) ,Quarter-pixel motion ,Distortion ,Motion estimation ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Data compression ,Block-matching algorithm - Abstract
Most existing video compression standards use block-matching motion estimation to exploit the temporal correlation between frames. To reduce the costly computation of exhaustively searching all possible motion displacements, researchers have developed many fast algorithms, several of which take advantage of the motion correlation between adjacent macroblocks. Building in particular on the idea of second-order motion correlation between macroblocks, we developed a new algorithm that applies a set of adaptive search patterns matched to a statistical model of the motion. In addition, adaptive early-termination rules are used to avoid unnecessary computation. Simulation results show that the proposed algorithm outperforms most other existing algorithms in both speed and visual quality. (A generic pattern-search sketch follows this record.)
- Published
- 2008
- Full Text
- View/download PDF
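The following is a generic fast block-matching sketch: a diamond-style pattern search with a simple SAD early-termination threshold. It illustrates the kind of pattern search and early exit the paper builds on; the adaptive, correlation-driven patterns and termination rules of the proposed algorithm are not reproduced here, and the threshold value is an arbitrary assumption.

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, y, x, by, bx, n):
    # Sum of absolute differences; candidates outside the frame are rejected.
    if by < 0 or bx < 0 or by + n > ref.shape[0] or bx + n > ref.shape[1]:
        return np.inf
    return np.abs(cur[y:y + n, x:x + n] - ref[by:by + n, bx:bx + n]).sum()

def diamond_search(cur, ref, y, x, n=16, early_stop=128.0, max_iter=32):
    cy, cx = y, x                                      # search centre in the reference
    for _ in range(max_iter):
        costs = [sad(cur, ref, y, x, cy + dy, cx + dx, n) for dy, dx in LDSP]
        best = int(np.argmin(costs))
        cy, cx = cy + LDSP[best][0], cx + LDSP[best][1]
        if costs[best] <= early_stop:                  # good enough: terminate early
            return cy - y, cx - x
        if best == 0:                                  # centre was best: final refinement
            costs = [sad(cur, ref, y, x, cy + dy, cx + dx, n) for dy, dx in SDSP]
            best = int(np.argmin(costs))
            return cy + SDSP[best][0] - y, cx + SDSP[best][1] - x
    return cy - y, cx - x                              # motion vector (dy, dx)

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = 128 + 60 * np.sin(yy / 6.0) * np.cos(xx / 7.0)   # smooth synthetic frame
    cur = np.roll(ref, shift=(3, -2), axis=(0, 1))          # shift frame down 3, left 2
    print(diamond_search(cur, ref, 16, 16))                 # expect a vector near (-3, 2)
```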
50. Compressing discrete cosine transform coefficients by modified set partitioning in hierarchical trees
- Author
-
Shen-Chuan Tai, Yen-Yu Chen, and Wen Chien Yan
- Subjects
Theoretical computer science ,Modified discrete cosine transform ,Trellis quantization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wavelet transform ,Data_CODINGANDINFORMATIONTHEORY ,computer.file_format ,JPEG ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,Set partitioning in hierarchical trees ,Discrete cosine transform ,Electrical and Electronic Engineering ,Quantization (image processing) ,computer ,Algorithm ,Data compression ,Mathematics - Abstract
The discrete cosine transform (DCT) is widely used in many practical image/video compression systems because of its compression performance and computational efficiency. We adapt the modified set partitioning in hierarchical trees (SPIHT) algorithm, originally designed for encoding discrete wavelet transform (DWT) coefficients, so that it is suitable for encoding DCT coefficients. The algorithm rearranges the DCT coefficients to concentrate signal energy and introduces a combination-and-dictator scheme to eliminate the correlation within same-level subbands when encoding DCT-based images. To further save bits, subbands with significant coefficients are classified into seven types. The coding complexity of the proposed algorithm for DCT coefficients is close to that of JPEG, while its performance exceeds that of JPEG2000. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of peak SNR (PSNR) over SPIHT and JPEG2000 at the same bit rate.
- Published
- 2005
- Full Text
- View/download PDF