28 results on '"Mohammad Farhang Daemi"'
Search Results
2. Global description of edge patterns using moments
- Author
-
Mohammad K. Ibrahim, Mohammad Farhang Daemi, and Harish Kumar Sardana
- Subjects
business.industry ,3D single-object recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Cognitive neuroscience of visual object recognition ,Image processing ,Script recognition ,Computer Science::Sound ,Artificial Intelligence ,Computer Science::Computer Vision and Pattern Recognition ,Bounded function ,Signal Processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Pattern matching ,Invariant (mathematics) ,business ,Software ,Character recognition ,Mathematics - Abstract
Shape recognition has traditionally been accomplished using well-bounded segmented regions or their closed contours. In this way, the internal edge details of the objects are ignored. There are cases, such as in character recognition and in object recognition, where patterns are not closed contours. In this paper, such patterns are defined as edge patterns and Edge Standard Moments (ESM) are developed which are invariant to location, scale and rotation. Results of pattern matching for character recognition and polyhedral object recognition are presented.
- Published
- 1994
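The abstract does not give the ESM normalization, so the following is only a stand-in sketch of the underlying idea: moment invariants computed directly on a set of edge-pixel coordinates. Centering gives translation invariance, dividing by the point count and mean squared radius gives scale invariance for discrete point sets, and the familiar second- and third-order combinations are rotation invariant.

```python
import numpy as np

def invariants(pts):
    """Translation-, scale-, and rotation-invariant moments of an
    (N, 2) array of edge-point coordinates. Illustrative normalization,
    not the paper's ESM formulation."""
    N = float(len(pts))
    c = pts - pts.mean(axis=0)                      # translation invariance
    mu = lambda p, q: (c[:, 0] ** p * c[:, 1] ** q).sum()
    r2 = (mu(2, 0) + mu(0, 2)) / N                  # mean squared radius
    nu = lambda p, q: mu(p, q) / (N * r2 ** ((p + q) / 2))  # scale invariance
    # Hu-style rotation-invariant combinations of order 2 and 3
    phi2 = (nu(2, 0) - nu(0, 2)) ** 2 + 4 * nu(1, 1) ** 2
    phi3 = (nu(3, 0) - 3 * nu(1, 2)) ** 2 + (3 * nu(2, 1) - nu(0, 3)) ** 2
    return phi2, phi3
```

Because the edge points need not form a closed contour, the same quantities apply equally to open, crossing, or fragmented edge patterns, which is the point the abstract makes against contour-only descriptors.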
3. Classification of coal images by a multi-scale segmentation technique
- Author
-
R. E. Marston, N. J. Miles, Mohammad Farhang Daemi, B. P. Atkin, and Jamshid Dehmeshki
- Subjects
Pixel ,Contextual image classification ,business.industry ,Computer science ,Maceral ,Statistical model ,Pattern recognition ,Image segmentation ,Sample (graphics) ,Computer vision ,Segmentation ,Artificial intelligence ,business ,Image resolution - Abstract
This paper describes the development of an automated and efficient technique for classifying the different major maceral groups within polished coal blocks. Coal utilisation processes can be significantly affected by the distribution of macerals in the feed coal. Classical manual maceral analysis requires a highly skilled operator, and the time to perform an analysis can depend on the complexity of the sample and on the work load of the operator. Also, if different operators are employed, lower levels of reproducibility may result. The aim of segmentation is to partition the images into different types of macerals. A multi-scale approach to segmentation is defined in which the result of each process at a given resolution is used to adjust the other process at the next resolution. This approach combines a suitable statistical model for the distribution of pixel values within each maceral group and a transition distribution from coarse to fine scale, based on a son-father relationship defined between the nodes in adjacent levels. This transition function is based on the idea that neighboring pixels are similar to one another, which holds mainly because of the high resolution of the images under study: the pixel size is significantly smaller than the size of most of the regions of interest.
- Published
- 2002
4. Novel quad-tree image coding technique using edge-oriented classification
- Author
-
Farhad Keissarian and Mohammad Farhang Daemi
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Vector quantization ,Pattern recognition ,computer.file_format ,Block Truncation Coding ,JPEG ,Histogram ,Quadtree ,Artificial intelligence ,business ,Block size ,computer ,Mathematics ,Image compression ,Color Cell Compression - Abstract
A new image compression approach is proposed in which a variable block size technique is adopted, using quadtree decomposition, for coding images at low bit rates. In the proposed approach, low-activity regions, which usually occupy large areas in an image, are coded with a larger block size and the block mean is used to represent each pixel in the block. A novel classification scheme, which operates on the distribution of the block residuals, is employed to determine whether the processed block is a low-detail or a high-detail block. To preserve edge integrity, a new edge-based coding technique is used to code high-activity regions. In this method, the orientation of the edge pattern within a high-activity block is computed as an aid to the classification. A novel edge-oriented classifier, operating on the histogram of the pixels' orientations, is also proposed for edge classification. Each edge block is represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly and exploits the efficiency of the proposed technique. Experiments have been conducted to compare with the variance-based quadtree technique, vector quantization-based variable block size algorithms, and the standard JPEG. Results show higher PSNR at competitive reconstruction quality.
- Published
- 2002
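The variable-block-size idea above can be sketched with a plain quadtree split. This is a minimal stand-in, not the paper's method: it splits on the block's standard deviation rather than on the residual-distribution classifier the abstract describes, and represents each accepted leaf by its mean, as the low-activity case does.

```python
import numpy as np

def quadtree(img, x, y, size, thresh, min_size, leaves):
    """Recursively split a square block (power-of-two side assumed);
    accept it as a leaf when it is small or low-activity."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.std() <= thresh:
        leaves.append((x, y, size, float(block.mean())))
        return
    h = size // 2
    for dy in (0, h):
        for dx in (0, h):
            quadtree(img, x + dx, y + dy, h, thresh, min_size, leaves)

def reconstruct(leaves, shape):
    # paint each leaf's mean back over its block
    out = np.zeros(shape)
    for x, y, size, mean in leaves:
        out[y:y + size, x:x + size] = mean
    return out
```

In a real coder the high-detail leaves would be handed to the edge-based coder instead of being flattened to their mean.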
5. Block pattern coding of HVS-based wavelets for image compression
- Author
-
Farhad Keissarian and Mohammad Farhang Daemi
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wavelet transform ,Image processing ,Pattern recognition ,Data_CODINGANDINFORMATIONTHEORY ,computer.file_format ,Block Truncation Coding ,JPEG ,Wavelet ,Human visual system model ,Computer vision ,Artificial intelligence ,business ,computer ,Color Cell Compression ,Mathematics ,Image compression - Abstract
In this paper, we present a wavelet-based image compression technique which incorporates some of the human visual system (HVS) characteristics in the wavelet decomposition and bit allocation of subband images. The wavelet coefficients are coded using a new technique, referred to as the Block Pattern Coding algorithm. The proposed technique employs a set of local geometric patterns which preserve the underlying edge geometries in the high-frequency signals at very low coding rates. Critical to the success of our approach is the frequent utilization of a special block pattern, a uniform pattern of constant intensity, to reproduce image blocks of near-constant intensity. A performance comparison with JPEG and HVS-based wavelets using VQ is presented for both moderate and heavy compression.
- Published
- 2001
6. Image compression using a novel edge-based coding algorithm
- Author
-
Mohammad Farhang Daemi and Farhad Keissarian
- Subjects
Block code ,Computer science ,Bit rate ,Redundancy (engineering) ,Image processing ,Block Truncation Coding ,Algorithm ,Edge detection ,Color Cell Compression ,Image compression ,Data compression - Abstract
In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images at bit rates competitive with other block-based coding techniques have been obtained.
- Published
- 2001
7. Image representation scheme using histogram-based classifier and its application to image coding
- Author
-
Farhad Keissarian and Mohammad Farhang Daemi
- Subjects
Image coding ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Template ,Computer Science::Computer Vision and Pattern Recognition ,Histogram ,Color depth ,Computer vision ,Artificial intelligence ,business ,Classifier (UML) ,Image histogram ,Mathematics ,Image compression ,Shape analysis (digital geometry) - Abstract
In this paper, a new image representation scheme using a set of block templates is introduced first. Its application to image coding is presented afterwards. In the proposed representation scheme, a set of block templates is constructed to represent three basic types of image patterns: uniform, edge, and irregular. A novel classifier, designed on the basis of the histogram shape analysis of image blocks, is employed to classify the blocks according to their level of visual activity. Each block template is then represented by a set of parameters associated with the pattern appearing inside the block. Image representation using these templates requires considerably fewer bits than the original pixel-wise description and yet characterizes perceptually significant features more effectively. The coding system approximates each image block by one of the block templates and further quantizes the template parameters. Satisfactory coded images have been obtained at bit rates between 0.3 and 0.4 bits per pixel (bpp).
- Published
- 2000
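The three-way split into uniform, edge, and irregular blocks can be illustrated with a small histogram-shape test. The thresholds and the bimodality measure below are illustrative assumptions, not the paper's classifier: a flat block has low spread; an edge block has a strongly two-mode histogram, so the best two-way split captures nearly all of the variance; everything else is treated as irregular.

```python
import numpy as np

def classify_block(block, flat_thresh=5.0, bimodal_ratio=0.9):
    """Label a block 'uniform', 'edge', or 'irregular' from its
    grey-level distribution (hypothetical thresholds)."""
    v = block.ravel().astype(float)
    if v.std() < flat_thresh:
        return "uniform"
    # Try every split level; for a two-mode histogram the best split's
    # between-class variance approaches the total variance.
    best = 0.0
    for t in np.unique(v)[:-1]:
        lo, hi = v[v <= t], v[v > t]
        between = len(lo) * len(hi) / len(v) ** 2 * (lo.mean() - hi.mean()) ** 2
        best = max(best, between)
    return "edge" if best / v.var() >= bimodal_ratio else "irregular"
```

In the coding scheme each label would then select a block template, with only the template parameters transmitted.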
8. Pyramid image coder using block-template-matching algorithm
- Author
-
Mohammad Farhang Daemi and Farhad Keissarian
- Subjects
Coding algorithm ,Computational complexity theory ,Template matching ,Histogram ,Algorithmic efficiency ,Color depth ,Compression ratio ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Algorithm ,Classifier (UML) ,Mathematics - Abstract
In this paper, a new image coding technique is introduced first; its inclusion in a pyramidal representation is presented afterwards. In the proposed stand-alone coding algorithm, referred to as Block Template Matching, an image is block coded according to the type of the individual blocks. A novel classifier, designed on the basis of the histogram analysis of blocks, is employed to classify the image blocks according to their level of visual activity. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly and exploits the efficiency of the proposed technique. The coding efficiency of the proposed technique, along with the low computational complexity and simple parallel implementation of the pyramid approach, allows for a high compression ratio as well as good image quality. Satisfactory coded images have been obtained at bit rates in the range of 0.30 to 0.35 bits per pixel.
- Published
- 2000
9. Color image analysis of contaminants and bacteria transport in porous media
- Author
-
Mehdi Rashidi, Eric Dickenson, Mohammad Farhang Daemi, Jamshid Dehmeshki, and Larry Cole
- Subjects
Optics ,Planar ,business.industry ,law ,Color image ,Microscopy ,Image processing ,business ,Laser ,Porosity ,Porous medium ,Refractive index ,law.invention - Abstract
The transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging (FMI) technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar sheet of laser light, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized in this way, and the concentration and velocity distributions of both contaminant and bacteria are evaluated simultaneously as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. The results are extremely valuable for in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.
- Published
- 1997
10. Stochastic approach to texture analysis using probabilistic neural networks and Markov random fields
- Author
-
Fraser N. Hatfield, Jamshid Dehmeshki, Mohammad Farhang Daemi, and Mehdi Rashidi
- Subjects
Random field ,Artificial neural network ,Markov chain ,Computer science ,business.industry ,Stochastic process ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Markov model ,Machine learning ,computer.software_genre ,Hybrid algorithm ,Probabilistic neural network ,Computer Science::Computer Vision and Pattern Recognition ,Artificial intelligence ,Stochastic neural network ,business ,computer - Abstract
Images are statistical in nature due to random changes and noise; therefore it is sometimes an advantage to treat image functions as realizations of a stochastic process. An advantage of stochastic random field models is that they need only a few parameters to describe a region or texture. In this paper, textural images are modeled as realizations of Markov random fields, such as binomial and autoregressive Markov random fields. The parameters of each model are estimated and treated as features of the textural images. The extracted features are fed into either a probabilistic neural network (PNN) or a deterministic back-propagation neural network for the purpose of classification and differentiation between various textural images. The PNN and the learning algorithm are discussed in this paper in detail. To train the back-propagation neural network, a hybrid training algorithm is proposed. This hybrid algorithm takes advantage of both simulated annealing and deterministic learning algorithms: the former is more reliable, since it is more likely to locate a global minimum, but it is slow; the latter is fast but less reliable, as it can converge to a local minimum. There are many practical uses for the proposed textural analysis tool, such as remote sensing, mineralogical analysis, and medical image processing. In this paper, successful applications of the present stochastic model to synthetic texture images and to MRI tongue and brain images are described.
- Published
- 1997
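A probabilistic neural network in the usual (Specht) sense is essentially a Parzen-window classifier: one Gaussian kernel per training vector, averaged per class, with the new sample assigned to the class of highest density. A minimal sketch follows; in the paper the feature vectors would be the estimated MRF parameters, while here they are just generic 2-D points, and the kernel width is an assumed value.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Assign x to the class whose averaged Gaussian kernel density
    (Parzen estimate) at x is largest."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)          # squared distances to exemplars
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```

Unlike the back-propagation network in the abstract, this classifier needs no iterative training at all, which is one reason the PNN is attractive when the feature extraction step is the expensive part.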
11. Rotational information in shape description
- Author
-
Ahmed Ghali and Mohammad Farhang Daemi
- Subjects
Contextual image classification ,business.industry ,Machine vision ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Information theory ,Heat kernel signature ,Active shape model ,Computer vision ,Artificial intelligence ,Invariant (mathematics) ,business ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Shape analysis (digital geometry) ,Mathematics - Abstract
Using information theory, this investigation proposes a novel technique for shape description which is invariant to translation and rotation, and in most cases also to scale. This new numeric shape descriptor is based on a measure of the rotational information content of an image. In this paper we first review some popular metric shape description features. These features are then used to analyze the feasibility of using rotational information for shape description, and by means of a comparative study we show how the rotational information is related to well-known metric shape descriptors such as area, circularity and elongation. Finally, the results obtained are discussed and analyzed, and conclusions drawn in terms of the suitability of the technique for shape description in image recognition problems.
- Published
- 1996
12. Identification of quality of coal using an automated image analysis system
- Author
-
B. P. Atkin, N. J. Miles, Mohammad Farhang Daemi, and Jamshid Dehmeshki
- Subjects
Pixel ,business.industry ,Computer science ,Maceral ,Scale-space segmentation ,Pattern recognition ,Statistical model ,Image segmentation ,Computer vision ,Segmentation ,Coal ,Artificial intelligence ,Scale (map) ,business - Abstract
This paper is concerned with the development of an automated and efficient system for the quality control of coal. This is achieved by distinguishing between the different major maceral groups present in polished coal blocks when viewed under a microscope. Coal utilization processes can be significantly affected by the distribution of macerals in the feed coal. Manual petrographic analysis of coal requires a highly skilled operator, and the results obtained can have a high degree of subjectivity. One way of overcoming these problems is to employ automated image analysis. The system described here consists of two stages: segmentation and interpretation. In the segmentation stage, the aim is to partition the images into different types of macerals. We have implemented a multi-scale segmentation technique in which the result of each process at a given resolution is used to adjust the other process at the next resolution. This approach combines a suitable statistical model for the distribution of pixel values within each maceral group and a transition distribution from coarse to fine scale, based on a son-father relationship defined between the nodes in adjacent levels. At each level, segmentation is performed by maximizing the a posteriori probability (MAP), which is achieved by a relaxation algorithm similar to Besag's work. There are two major reasons for carrying out the segmentation estimation over a hierarchy of resolutions: to speed up the estimation process, and to incorporate the large-scale characteristics of each pixel. The speed can be further improved by restricting the operation to the pixels which are marked as mixed at each resolution, by which the number of pixels to be considered is significantly reduced. In the interpretation stage, the coal macerals are identified according to the measurement information on the segmented regions and domain knowledge. The paper describes the knowledge base used in this application in some detail.
The system has been particularly successful in correctly classifying difficult cases, such as liptinite, vitrinite, semi-fusinite and pyrite.
- Published
- 1996
13. Adaptive segmentation of remotely sensed images based on stochastic models
- Author
-
R. E. Marston, Mohammad Farhang Daemi, Jamshid Dehmeshki, Paul M. Mather, and Zhen Shou Sun
- Subjects
Pixel ,Computer science ,business.industry ,Stochastic modelling ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Estimator ,Pattern recognition ,Statistical model ,Random walker algorithm ,Computer Science::Computer Vision and Pattern Recognition ,Expectation–maximization algorithm ,Segmentation ,Artificial intelligence ,business ,Cluster analysis - Abstract
This paper discusses the application of stochastic labeling to remotely sensed images. A cooperative, iterative approach to segmentation and model parameter estimation is defined which is a stochastic variant of the expectation maximization (EM) algorithm, adapted to our model. Classical statistical modeling forces each pixel to be associated with exactly one class. This assumption may not be realistic, particularly in the case of satellite data. Our approach allows the possibility of mixed pixels. The labeling used in this technique involves two parts: a hard component, which describes pure pixels, and a soft component, which describes mixed pixels. The technique is illustrated by the classification of a SPOT HRV image. Because of the high resolution of these images, the pixel size is significantly smaller than the size of most of the different regions of interest, so adjacent pixels are likely to have similar labels. In our stochastic expectation maximization (SEM) method, the idea that neighboring pixels are similar to one another is expressed by using a Gibbs distribution for the prior distribution of regions (labels). This paper also presents a statistical model for the distribution of pixel values within each region. The initial parameters of the model can be estimated using K-means clustering or ISODATA in the case of unsupervised segmentation. These parameters are then modified in each iteration of SEM. In the case of supervised segmentation, the initial parameters can be obtained from a classifier training data set and then re-estimated in the SEM method. The reason for this re-estimation is that a set of classification parameters obtained from a classifier training data set may not produce satisfactory results on images which were not used to train the classifier. Our study shows that this SEM method provides reliable model parameter estimates as well as a segmentation of the image. © (1995) COPYRIGHT SPIE--The International Society for Optical Engineering.
Downloading of the abstract is permitted for personal use only.
- Published
- 1995
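Stripped of the Gibbs spatial prior and the hard/soft label split, the core estimation loop for a two-class Gaussian pixel model is ordinary EM. The sketch below is that simplified core only; the initialization at the data extremes stands in for the K-means/ISODATA initialization the abstract describes.

```python
import numpy as np

def em_gmm(x, iters=50):
    """EM for a 1-D two-component Gaussian mixture over pixel values.
    Returns the fitted means and a hard label per pixel."""
    # Initialize the two components at the data extremes (the abstract
    # would use K-means or ISODATA here).
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each class for each pixel value
        p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return mu, r.argmax(axis=1)
```

The SEM variant in the paper would additionally draw the labels stochastically and weight each pixel's responsibilities by the Gibbs prior over its neighbors.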
14. Determination of major maceral groups in coal by automated image analysis procedures
- Author
-
B. P. Atkin, Mohammad Farhang Daemi, N. J. Miles, R. E. Marston, and Jamshid Dehmeshki
- Subjects
Pixel ,Carbonization ,business.industry ,Computer science ,Maceral ,Pattern recognition ,Image processing ,Image segmentation ,engineering.material ,Coal liquefaction ,Liptinite ,engineering ,Coal ,Artificial intelligence ,Pyrite ,business - Abstract
This paper describes the development of an automated and efficient system for classifying the different major maceral groups within polished coal blocks. Coal utilization processes can be significantly affected by the distribution of macerals in the feed coal. In carbonization, for example, maceral group analysis is an important parameter in determining the correct coal blend to produce the required coking properties. In coal liquefaction, liptinites and vitrinites convert more easily to give useful products than inertinites. Microscopic images of coal are inherently difficult to interpret by conventional image processing techniques, since certain macerals show similar visual characteristics. It is particularly difficult to distinguish between the liptinite maceral and the supporting setting resin. This requires the use of high-level image processing as well as fluorescence microscopy in conjunction with normal white-light microscopy. This paper is concerned with the two main stages of the work, namely segmentation and interpretation. In the segmentation stage, a cooperative, iterative approach to segmentation and model parameter estimation is defined which is a stochastic variant of the Expectation Maximization algorithm. Because of the high resolution of the images under study, the pixel size is significantly smaller than the size of most of the different regions of interest. Consequently, adjacent pixels are likely to have similar labels. In our Stochastic Expectation Maximization method, the idea that neighboring pixels are similar to one another is expressed by using a Gibbs distribution for the prior distribution of regions (labels). We also present a suitable statistical model for the distribution of pixel values within each region. In the interpretation stage, the coal macerals are identified according to the measurement information on the segmented regions and domain knowledge.
Studies show that the system is able to distinguish coal macerals, especially fusinite from pyrite and liptinite from minerals, which previous attempts have been unable to resolve.
- Published
- 1995
15. Automatic MRI compression and segmentation using a stochastic model
- Author
-
Orlean I. B. Cole, R. Coxon, R. E. Marston, Jamshid Dehmeshki, and Mohammad Farhang Daemi
- Subjects
Fuzzy clustering ,Pixel ,business.industry ,Stochastic modelling ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-space segmentation ,Pattern recognition ,Image segmentation ,Segmentation ,Artificial intelligence ,business ,Cluster analysis ,Data compression - Abstract
Visualization of large multidimensional magnetic resonance images (MRI) can be augmented by reducing the noise and redundancies in the data. We present details of an automatic data compression and region segmentation technique applied to medical MRI data sampled over a wide range of inversion recovery times (TI). The example images were brain slices, each one sampled with 15 different TI values, varying from 10 ms to 10 s. Visually, details emerged as TI increased, but some features faded at higher values. A principal component analysis reduced the data by over two thirds without noticeable loss of detail. Conventional image clustering and segmentation techniques fail to produce satisfactory results on MR images. Among the stochastic methods, independent Gaussian random field (IGRF) models were found to be suitable models when region classes have differing grey-level means. We developed an automatic image segmentation technique, based on the stochastic nature of the images, that operated in two stages. First, IGRF model parameters were estimated using a modified fuzzy clustering method. Second, image segmentation was formulated as a statistical inference problem. Using a maximum likelihood function, we estimated the class status of each pixel from the IGRF model parameters. The paper elaborates on this approach and presents practical results.
- Published
- 1995
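The "reduced the data by over two thirds" step is a standard principal component projection. A sketch via the SVD, assuming the data are arranged as one row per pixel and one column per TI sample (so keeping 4 or 5 of the 15 components would give the reduction the abstract reports):

```python
import numpy as np

def pca_reduce(data, n_components):
    """Project (pixels, channels) data onto its leading principal
    components; return the projection, a reconstruction, and the
    fraction of variance retained."""
    mean = data.mean(axis=0)
    centered = data - mean
    # reduced SVD: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ Vt[:n_components].T
    restored = reduced @ Vt[:n_components] + mean
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return reduced, restored, explained
```

The retained components then feed the IGRF-based segmentation in place of the raw 15-channel pixel vectors.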
16. Image coding using visual patterns
- Author
-
Mohammad K. Ibrahim, Farhad Keissarian, and Mohammad Farhang Daemi
- Subjects
Standard test image ,business.industry ,Binary image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Pattern recognition ,Automatic image annotation ,Image texture ,Digital image processing ,Computer vision ,Artificial intelligence ,business ,Image restoration ,Image compression ,Mathematics - Abstract
An image coding scheme using a set of image visual patterns is introduced. These patterns are constructed to represent two basic types of image patterns (uniform and oriented) over small blocks of an image. The coding system characterizes an image by its local features, and further approximates each image block by a block pattern. Algorithms for pattern classification, computation of pattern parameters, and image reconstruction from these parameters are presented, and these provide the necessary tools for applying the proposed coding method to various images. Satisfactory coded images have been obtained, and compression ratios of the order of 15 to 1 have been achieved.
- Published
- 1995
17. An adaptive estimation and segmentation technique for determination of major maceral groups in coal
- Author
-
B. P. Atkin, N. J. Miles, Jamshid Dehmeshki, and Mohammad Farhang Daemi
- Subjects
Highly skilled ,business.industry ,Computer science ,Maceral ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Coal ,Segmentation ,business ,Process engineering ,Data science ,Image based - Abstract
This paper describes the development of an automated image-based system for the classification of macerals in polished coal blocks. Coal petrology, and especially the estimation of the maceral content of a coal, has traditionally been considered to be a highly skilled and time-consuming operation. However, the recent upsurge in interest in this subject, driven by environmental legislation related to the utilisation of coal, has necessitated the development of a reliable automated system for maceral analysis. Manual maceral analysis is time consuming and its accuracy is largely dependent upon the skill of the operator. The major drawbacks to manual maceral analysis are related to time and operator fatigue, which can develop after the analysis of only one or two polished blocks. The reproducibility of the results from manual maceral analysis is also dependent upon the experience of the operator.
- Published
- 1995
18. Pattern recognition based on information theory principles
- Author
-
R. L. Beurle, Ahmed Ghali, K. A. Al-Khateeb, and Mohammad Farhang Daemi
- Subjects
Sketch recognition ,Intelligent character recognition ,business.industry ,Computer science ,3D single-object recognition ,Pattern recognition ,Optical character recognition ,Information theory ,Machine learning ,computer.software_genre ,Pattern recognition (psychology) ,Feature (machine learning) ,Artificial intelligence ,business ,computer ,Signature recognition - Abstract
One of the main problems faced in the development of pattern recognition algorithms is the assessment of their performance. This paper describes the development of a novel technique for assessing the information content of 2-D patterns encountered in practical pattern recognition problems. The technique is demonstrated by its application to multi-font typed character recognition. In this work we first developed an information model applicable to any pattern, together with its elaboration to measure recognition performance, and secondly used this model to derive parameters such as the resolution required to distinguish between the patterns. This has resulted in a powerful method for assessing the performance of any pattern recognition system.
- Published
- 1994
19. Adaptive coding of images based on the visual activity level
- Author
-
Mohammad K. Ibrahim, Farhad Keissarian, and Mohammad Farhang Daemi
- Subjects
Pixel ,Adaptive coding ,business.industry ,Hit rate ,Pattern recognition ,Artificial intelligence ,Quantization (image processing) ,business ,Coding (social sciences) ,Mathematics ,Context-adaptive variable-length coding ,Image compression ,Data compression - Abstract
In this paper, a DCT-based coding technique for the adaptive coding of images according to the level of visual activity is presented. Adaptation is based on adaptive quantisation and adaptive bit selection. In the proposed system, we initially partition the image into a large number of sub-blocks of 4x4 pixels. A novel image analysis may then be performed prior to the coding in order to decide what is the most significant information to encode. Classification according to the activity level within the blocks is based on the local statistics and is used for adaptive bit selection, whereas optimum quantisers having Gaussian density are used to achieve adaptive quantisation. Satisfactory performance is demonstrated in terms of a direct comparison of the original and the reconstructed images.
- Published
- 1994
20. Recognition of line patterns using moments
- Author
-
Harish Kumar Sardana, Mohammad Farhang Daemi, and Mohammad K. Ibrahim
- Subjects
business.industry ,Quantization (signal processing) ,Cognitive neuroscience of visual object recognition ,Image segmentation ,Edge detection ,Line segment ,Digital image processing ,Binary data ,Computer vision ,Artificial intelligence ,Invariant (mathematics) ,business ,Algorithm ,Mathematics - Abstract
The most reliable features that can be readily extracted from intensity images are line segments, both straight and curved, and most applications rely on the recognition of such line patterns. Fourier descriptors, which are widely used, require the patterns to be closed and binary. Other techniques, based on chain codes or vectorization, have quantization errors and therefore need additional preprocessing. Furthermore, almost all description methods for line patterns inherently involve an element of 'tracing', and their generalization to grey-scale or multi-colored patterns is limited. A novel global shape description technique based on edge segments is used for the recognition of line patterns. This approach extends boundary-based representation to generalized edge patterns that may have segments which are straight, curved, crossing or open. A novel representation of Edge Moments (EM) is used for shape description, with a novel normalization. Invariant features may be formed by using standard (invariant) moments. This has led to the development of Edge Standard Moments (ESM). The power of the method is demonstrated by the recognition of 3-D polyhedral objects.
- Published
- 1993
21. New automatic threshold selection algorithm for edge detection
- Author
-
Mohammad Farhang Daemi, Mohammad K. Ibrahim, and Amar Aggoun
- Subjects
Theoretical computer science ,Computer science ,business.industry ,Machine vision ,Computation ,Computer data storage ,Image processing ,Image segmentation ,business ,Algorithm ,Selection algorithm ,Edge detection ,Image compression - Abstract
In this paper, a novel approach is proposed for selecting the threshold of an edge strength map from its local histogram. This threshold selection technique is based on finding a threshold for each small block of the edge map, chosen using an iterative procedure; the effect of the choice of block size is discussed. In this paper, the edge strength map is quantized to reduce both the computation of the iterative threshold selection algorithm and the memory requirement. It is shown that quantization of the edge map improves the performance of the local iterative threshold selection algorithm. Typical examples of the tests carried out are presented.
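The abstract does not give the exact update rule, but a per-block iterative threshold of this family can be sketched with the classical Ridler-Calvard (isodata) iteration, in which the threshold converges to the midpoint of the means of the two classes it separates. The tolerance and the demonstration data are assumptions of the sketch.

```python
import numpy as np

def iterative_threshold(values, tol=0.5):
    """Isodata-style iteration: repeatedly set the threshold to the midpoint
    of the means of the two classes it currently separates."""
    t = values.mean()
    while True:
        low, high = values[values <= t], values[values > t]
        if len(low) == 0 or len(high) == 0:
            return t                      # degenerate block: keep current value
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Edge strengths from one block: weak responses (noise) and strong responses (edges).
edge_strengths = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
t = iterative_threshold(edge_strengths)
```

Quantising the edge map to a few levels before this step shrinks the histogram the iteration works on, which is the source of the computation and memory savings the abstract mentions.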
- Published
- 1993
22. Image coding and image activity measurement
- Author
-
Mohammad K. Ibrahim, Farhad Keissarian, and Mohammad Farhang Daemi
- Subjects
Image coding ,Pixel ,Image quality ,Computer science ,Machine vision ,business.industry ,Image processing ,Computer vision ,Artificial intelligence ,business ,ENCODE ,Image compression ,Coding (social sciences) - Abstract
In this paper, a novel image analysis technique is proposed, which may be performed prior to coding in order to decide what is the most significant information to encode. In the proposed system, the image to be coded is first partitioned into a large number of sub-blocks of N*N pixels. The blocks can then be sorted into two major classes according to the level of visual activity present. The classification is based on analyzing the local histogram within each sub-block. In this paper, we initially analyze the image blocks to separate uniform blocks from those that can be classified as non-uniform. Adjacent uniform blocks with the same statistics are merged to form larger blocks, which can then be coded by their mean values. It is also shown that the non-uniform blocks may be further classified into three categories with different levels of activity.
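A minimal sketch of the uniform/non-uniform split might look like the following, using the spread of grey levels within each block as a stand-in for the local-histogram analysis. The block size, the uniformity threshold, and the merging step being omitted are all assumptions of the sketch.

```python
import numpy as np

def classify_blocks(image, n=4, uniform_range=8):
    """Split `image` into n*n blocks and label each 'uniform' when the spread
    of its grey levels (max - min) is small, else 'non-uniform'."""
    h, w = image.shape
    labels = {}
    for i in range(0, h, n):
        for j in range(0, w, n):
            block = image[i:i + n, j:j + n]
            spread = int(block.max()) - int(block.min())
            labels[(i, j)] = "uniform" if spread <= uniform_range else "non-uniform"
    return labels

# Left half flat (codable by its mean alone), right half with varying grey levels.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = np.arange(32, dtype=np.uint8).reshape(8, 4) * 7
labels = classify_blocks(img)
```

Uniform blocks need only their mean to be transmitted, so the more of the image this first pass can absorb, the fewer bits are left for the activity-coded remainder.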
- Published
- 1993
23. Automatic image database generation from CAD for 3D object recognition
- Author
-
Mohammad K. Ibrahim, Mohammad Farhang Daemi, and Harish Kumar Sardana
- Subjects
business.industry ,Computer science ,CAD ,Solid modeling ,computer.file_format ,Modular design ,3D modeling ,computer.software_genre ,Data conversion ,Rendering (computer graphics) ,File server ,Computer Aided Design ,Computer vision ,Artificial intelligence ,business ,computer - Abstract
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Owing to the various advantages of using CAD, it is becoming increasingly practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for the automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, the advantages and limitations of using CAD data, and conclusions on using such a scheme is also presented.
- Published
- 1993
24. Weighted Hough transform
- Author
-
Mohammad K. Ibrahim, Mohammad Farhang Daemi, and E. C. L. Ngau
- Subjects
Pixel ,Transform theory ,business.industry ,Scale-invariant feature transform ,Pattern recognition ,Image processing ,Thresholding ,Edge detection ,Image (mathematics) ,Hough transform ,law.invention ,law ,Artificial intelligence ,business ,Mathematics - Abstract
A new method is proposed here, termed the weighted Hough transform (WHT). The advantage of the WHT is that it can be applied to the differential image directly, without the need for thresholding: in the WHT, the contribution of each pixel to the parameter domain is weighted according to its value. It is well known that the performance of the conventional Hough transform depends on the threshold value used; the new method is therefore a generalization of the Hough transform that overcomes this problem.
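For lines in (rho, theta) form, the weighting idea amounts to replacing the usual 0/1 vote with the pixel's gradient magnitude. A small sketch under that reading (the angular resolution and rounding scheme are assumptions):

```python
import numpy as np

def weighted_hough_lines(gradient, n_theta=180):
    """Weighted Hough transform for lines: each pixel votes with its gradient
    magnitude instead of a thresholded 0/1, so no edge threshold is needed."""
    h, w = gradient.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta))
    ys, xs = np.nonzero(gradient)          # zero-weight pixels cannot contribute
    for y, x in zip(ys, xs):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += gradient[y, x]
    return acc, thetas

# A horizontal run of strong gradients should peak near theta = 90 degrees.
grad = np.zeros((32, 32)); grad[10, :] = 5.0
acc, thetas = weighted_hough_lines(grad)
peak_rho, peak_theta = np.unravel_index(np.argmax(acc), acc.shape)
```

Strong edges dominate the accumulator naturally, while weak noise responses contribute only their small magnitudes, which is what removes the sensitivity to a hand-picked threshold.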
- Published
- 1992
25. Novel approach for assessment of translation, rotation, and overall information content of patterns
- Author
-
R. L. Beurle, Mohammad Farhang Daemi, and Mohammad K. Ibrahim
- Subjects
Sensor array ,Computer science ,Binary image ,Component (UML) ,Pattern recognition (psychology) ,Binary data ,Data mining ,Information theory ,computer.software_genre ,Translation (geometry) ,Rotation (mathematics) ,computer - Abstract
A preliminary investigation confirmed the possibility of assessing the translation and rotation information content of simple binary images viewed at the output of a sensor array. In this paper, following a brief summary of the essence of the techniques used, we show how the translation and rotation components of the information are related to the overall information associated with a particular pattern. The overall information may be regarded as the information associated with a particular input pattern which may occupy any of the possible orientations and positions with equal probability. Simple rectangular patterns are used to illustrate the results, which are discussed in detail, but the technique is applicable to any shape.
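One simplified reading of the information associated with equally likely placements is the logarithm of the number of distinguishable placements. The toy sketch below counts translations only; rotation handling and the sensor-array model of the paper are omitted, and the function name is illustrative.

```python
import numpy as np
from math import log2

def translation_information(pattern, grid):
    """Bits needed to specify the position of `pattern` on a sensor array of
    shape `grid`, assuming every placement is distinct and equally likely."""
    positions = (grid[0] - pattern.shape[0] + 1) * (grid[1] - pattern.shape[1] + 1)
    return log2(positions)

# A 4x4 rectangle on a 16x16 array has 13 * 13 = 169 possible positions.
bits = translation_information(np.ones((4, 4)), (16, 16))
```

In the papers' setting the count is reduced when different placements produce indistinguishable sensor outputs, which is why symmetric patterns carry less rotation information.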
- Published
- 1992
26. Modified Laplacian enhancement of low-resolution digital images
- Author
-
David C. J. Naylor and Mohammad Farhang Daemi
- Subjects
Laplace transform ,Computer science ,business.industry ,Digital imaging ,Image processing ,Filter (signal processing) ,Digital image ,Computer Science::Computer Vision and Pattern Recognition ,Distortion ,Digital image processing ,Computer vision ,Artificial intelligence ,business ,Laplace operator ,Digital filter ,Image resolution - Abstract
This paper describes improved methods for enhancing low-resolution images, with the aim of extracting the desired image from a noisy background while preserving its surface features. A novel modified Laplacian filter (Laplace-8) was used. It is based on the classical Laplacian filter (Laplace-4) and operates in a similar fashion, using the Laplace coefficient to determine the level of enhancement. Laplace-4 has a major shortcoming in that it only enhances gradients, so the gray levels of a smooth image surface are left unchanged, resulting in severe image surface distortion. The improved filter (Laplace-8) aims to alleviate this distortion by enhancing the whole surface of the image as well as producing good contour enhancement, even if the image surface is totally smooth. Two controlling techniques were used to compare the two filters, namely limited gray level (limited to 0-255) and unlimited gray level. Results were based on the correlations of the original and Laplace-filtered images, and on the statistics of their contours and surfaces. The images produced by Laplace-8 filtering were shown to be superior to those produced by Laplace-4, showing little image distortion in the case of unlimited gray-level enhancement. Laplace-8 is also an effective contour extractor, producing higher contour gray levels for a given enhancement (Laplace) coefficient. The paper describes the performance of the Laplace-8 method in detail with the aid of examples.
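The exact Laplace-8 formulation is not given in the abstract. As a rough illustration only, the sketch below contrasts the standard 4-neighbour and 8-neighbour Laplacian sharpening kernels: both leave a flat surface unchanged (the paper's Laplace-8 additionally enhances smooth surfaces, which these standard kernels do not), but the 8-neighbour kernel responds more strongly at edges.

```python
import numpy as np

LAPLACE4 = np.array([[ 0, -1,  0],
                     [-1,  5, -1],
                     [ 0, -1,  0]], dtype=float)   # identity + 4-neighbour Laplacian
LAPLACE8 = np.array([[-1, -1, -1],
                     [-1,  9, -1],
                     [-1, -1, -1]], dtype=float)   # identity + 8-neighbour Laplacian

def convolve2d_valid(image, kernel):
    """Plain 'valid'-mode 2-D convolution (no padding), enough for a demo."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# Both kernels sum to 1, so a perfectly flat surface passes through unchanged;
# at a step edge the 8-neighbour kernel gives the larger response.
flat = np.full((6, 6), 50.0)
step = np.zeros((6, 6)); step[:, 3:] = 100.0
```

The "limited gray level" control in the paper corresponds to clipping the filtered output to 0-255, which is omitted here.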
- Published
- 1991
27. Assessment of the information content of patterns: an algorithm
- Author
-
Mohammad Farhang Daemi and R. L. Beurle
- Subjects
Upload ,Workstation ,SIMPLE (military communications protocol) ,Reduced instruction set computing ,law ,Computer science ,Optical engineering ,Mainframe computer ,Volume (computing) ,Translation (geometry) ,Algorithm ,law.invention - Abstract
A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose, which confirms the results of the preliminary investigation. Use of the algorithm facilitates a much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for the analysis of more realistic patterns. Owing to the volume of calculation involved, extensive computing facilities were necessary: the major part of the work was carried out on an ICL 3900 series mainframe computer as well as other powerful workstations, such as a RISC-architecture MIPS machine.
- Published
- 1991
28. Edge-moment-based three-dimensional object recognition
- Author
-
Mohammad K. Ibrahim, Harish Kumar Sardana, and Mohammad Farhang Daemi
- Subjects
Normalization (statistics) ,business.industry ,Computer science ,General Engineering ,Cognitive neuroscience of visual object recognition ,Image processing ,Image segmentation ,Solid modeling ,Atomic and Molecular Physics, and Optics ,Edge detection ,Polyhedron ,Velocity Moments ,Computer vision ,Artificial intelligence ,Invariant (mathematics) ,business ,Algorithm - Abstract
A novel global shape description technique based on edge segments is applied for recognition of 3-D objects. This approach extends the boundary-based representation to generalized edge patterns that may have segments that are straight, curved, crossing, or open. A novel representation of edge moments is used for shape description, with a novel normalization. The invariant features may be formed by using standard (invariant) moments. This has led to the development of edge standard moments. The power of the method is demonstrated for the recognition of 3-D polyhedral objects.
- Published
- 1994