139 results for "images processing"
Search Results
2. A Comparison of YOLO Networks for Ship Detection and Classification from Optical Remote-Sensing Images
- Author
-
Trung, Ha Duyen, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Abraham, Ajith, editor, Hong, Tzung-Pei, editor, Kotecha, Ketan, editor, Ma, Kun, editor, Manghirmalani Mishra, Pooja, editor, and Gandhi, Niketa, editor
- Published
- 2023
- Full Text
- View/download PDF
3. An Enhanced Approach for Image Edge Detection Using Histogram Equalization (BBHE) and Bacterial Foraging Optimization (BFO)
- Author
-
Parveen Kumar, Tanvi Jindal, and Balwinder Raj
- Subjects
edge detection ,bacterial foraging optimization ,bbhe ,images processing ,graphics ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 ,Telecommunication ,TK5101-6720 - Abstract
Edge detection is a common task in image processing; it is the main task to perform because it gives clear information about the image content, and it is a powerful tool in image processing systems and computer vision. Previous research has relied on moving-window approaches and genetic algorithms. In this research paper a new technique, Bacterial Foraging Optimization (BFO), is applied; it is inspired by the social foraging behaviour of Escherichia coli (E. coli). BFO has been used by researchers to solve real-world optimization problems arising in different areas of engineering and application domains because of its efficiency. Brightness-preserving bi-histogram equalization (BBHE) is another technique, used here for edge enhancement. BFO is applied to low-level characteristics of the images to find the edge pixels of natural images, and the F-measure, recall (r) and precision (p) values are calculated and compared with the previous technique. The enhancement technique, BBHE, is carried out to improve the information content of the pictures.
- Published
- 2022
- Full Text
- View/download PDF
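The entry above names brightness-preserving bi-histogram equalization (BBHE) as its enhancement step. Below is a minimal NumPy sketch of BBHE (split the histogram at the mean intensity and equalize each half into its own range), offered as an illustration only; it is not the authors' code, and the BFO edge-detection stage is not reproduced.

```python
import numpy as np

def bbhe(img: np.ndarray) -> np.ndarray:
    """Brightness-preserving bi-histogram equalization: equalize the sub-histograms
    below and above the mean intensity separately, each onto its own range."""
    img = img.astype(np.uint8)
    mean = int(img.mean())

    def equalize(sub_values, lo, hi):
        # Histogram restricted to [lo, hi], then the usual CDF mapping back onto [lo, hi].
        hist = np.bincount(sub_values, minlength=256)[lo:hi + 1].astype(np.float64)
        cdf = hist.cumsum()
        if cdf[-1] == 0:
            return np.arange(lo, hi + 1)
        cdf /= cdf[-1]
        return np.round(lo + cdf * (hi - lo)).astype(np.uint8)

    flat = img.ravel()
    lower_map = equalize(flat[flat <= mean], 0, mean)
    if mean < 255:
        upper_map = equalize(flat[flat > mean], mean + 1, 255)
    else:
        upper_map = np.array([], dtype=np.uint8)

    out = np.empty_like(flat)
    low = flat <= mean
    out[low] = lower_map[flat[low]]                    # values 0..mean use the lower table
    out[~low] = upper_map[flat[~low] - (mean + 1)]     # values mean+1..255 use the upper table
    return out.reshape(img.shape)
```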
4. Computerized Tomography Images Processing Using Artificial Intelligence Techniques
- Author
-
Chuquín, Shirley, Cuenca, Erick, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Botto-Tobar, Miguel, editor, Cruz, Henry, editor, Díaz Cadena, Angela, editor, and Durakovic, Benjamin, editor
- Published
- 2022
- Full Text
- View/download PDF
5. An attempt at the possibility of using multi-output paraunitary filters for image processing for astrophotography trackers.
- Author
-
Poczekajło, Paweł and Suszyński, Robert
- Subjects
IMAGE processing ,ASTRONOMICAL photography ,DIGITAL signal processing ,SIGNAL processing ,POSSIBILITY ,IMAGE stabilization ,TELESCOPES - Abstract
The article deals with the issue of stabilizing the frame position of the shot in long-exposure astrophotography. The authors describe this issue from the point of view of guiding a telescope as it tracks the sky, and also in terms of the possibility of supporting this process with digital signal processing using paraunitary filters. The procedure for obtaining a 2D paraunitary system from an initial (preset) filter is presented. An attempt is made to verify what data can be obtained from the individual outputs of paraunitary systems. Several example filters are shown and their operation is tested on a selected image of a real astronomical object from the telescope's guide camera. Based on the output images obtained (from three paraunitary systems), the possibility of their further use in practice is evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Deep learning based techniques for flame identification in optical engines.
- Author
-
Henrique Rufino, Caio, Moraes Coraça, Eduardo, Teixeira Lacava, Pedro, and Ferreira, Janito Vaqueiro
- Abstract
The mandatory migration from fossil to renewable energy sources requires the characterization of new alternative fuels. One important step in fuel characterization is testing in optical engines, which allows the morphological characterization of flames. This analysis requires post-processing of images by segmentation. In many cases an automatic threshold falls short because the flames may present regions of variable luminosity as well as reflections from the valves and cylinder liner. Consequently, time-consuming manual image processing is required, and an automatic procedure would therefore be welcome. The use of deep learning techniques for image segmentation, which have shown excellent results in several applications, is a promising alternative for this task. In this study, two different models were trained to identify flames in images obtained from an optical engine operating under various conditions. The dataset used to train the models was generated from tests with several types of fuels and combustion modes. The effects of image resolution and the generalization capabilities across different fuels and combustion modes were investigated. After analyzing the results, the use of deep learning methods to identify and characterize flames was validated as a means of improving processing time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Automated Computer Vision System for Urine Color Detection.
- Author
-
Abdulwahed, Ban Shamil, Al-Naji, Ali, Al-Rayahi, Izzat, Yahya, Ammar, and Perera, Asanka G.
- Subjects
WATER quality ,FACIAL paralysis ,ARTIFICIAL intelligence ,BIT rate ,CIPHERS - Abstract
Urine color analysis is one of the most helpful indicators of health status, and any change in urine color might be a symptom of a serious disease or dehydration, or might be caused by drugs. To provide better assistance with urine color detection, an automatic urine color identification system based on computer vision has been developed. The proposed system uses a web camera to capture an image in real time, analyzes it, classifies the color of urine using the random forest (RF) algorithm, and shows the result via a graphical user interface (GUI). In addition, the proposed system can send the results to the mobile phone of the patient or care provider using an Arduino microcontroller and a GSM module, including a voice message relating the urine color to pathological conditions. The results showed that the proposed system has high accuracy (approximately 97%) in detecting urine color under different lighting conditions, with low cost, short processing time, and easy implementation. Compared with current methods, the proposed system has the highest accuracy and lowest error rate. This methodology can pave the way for further case studies in medical applications, particularly in diagnosis and patient health monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
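As a rough illustration of the classification stage described in the entry above, the following sketch trains a random forest on simple colour features. The mean-RGB features, class labels and synthetic sample data are assumptions made for demonstration only, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def mean_rgb_feature(rgb_image: np.ndarray) -> np.ndarray:
    """Mean R, G, B of an HxWx3 uint8 image, scaled to [0, 1]."""
    return rgb_image.reshape(-1, 3).mean(axis=0) / 255.0

# Synthetic stand-in data: 200 fake sample crops drawn around 4 illustrative colour classes.
rng = np.random.default_rng(0)
class_centres = np.array([[250, 245, 200],   # pale yellow
                          [240, 220, 120],   # yellow
                          [210, 170, 60],    # dark yellow / amber
                          [160, 90, 40]])    # brown
labels = rng.integers(0, 4, size=200)
images = np.clip(class_centres[labels][:, None, None, :] +
                 rng.normal(0, 15, size=(200, 32, 32, 3)), 0, 255).astype(np.uint8)

X = np.array([mean_rgb_feature(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```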
8. Automated Computer Vision System for Urine Color Detection
- Author
-
Ban Shamil Abdulwahed, Ali Al-Naji, Izzat Al-Rayahi, Ammar Yahya, and Asanka G. Perera
- Subjects
Urine Color Detection ,Images Processing ,Machine Learning ,Random Forest ,Graphical User Interface ,Technology ,Science - Abstract
Urine color analysis is one of the most helpful indicators of health status, and any change in urine color might be a symptom of a serious disease or dehydration, or might be caused by drugs. To provide better assistance with urine color detection, an automatic urine color identification system based on computer vision has been developed. The proposed system uses a web camera to capture an image in real time, analyzes it, classifies the color of urine using the random forest (RF) algorithm, and shows the result via a graphical user interface (GUI). In addition, the proposed system can send the results to the mobile phone of the patient or care provider using an Arduino microcontroller and a GSM module, including a voice message relating the urine color to pathological conditions. The results showed that the proposed system has high accuracy (approximately 97%) in detecting urine color under different lighting conditions, with low cost, short processing time, and easy implementation. Compared with current methods, the proposed system has the highest accuracy and lowest error rate. This methodology can pave the way for further case studies in medical applications, particularly in diagnosis and patient health monitoring.
- Published
- 2023
- Full Text
- View/download PDF
9. System for the analysis of human balance based on accelerometers and support vector machines
- Author
-
V.C. Pinheiro, J.C. do Carmo, F.A. de O. Nascimento, and C.J. Miosso
- Subjects
Postural balance ,Balance signals processing ,Support vector machines ,Images processing ,Accelerometers ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Disturbances in balance control lead to movement impairment and severe discomfort, dizziness and vertigo, and may also lead to serious accidents. It is important to monitor the level of balance in order to determine the risk of a fall and to evaluate progress during treatment. Some solutions exist, but they are generally restricted to indoor environments. We propose and evaluate a system, based on accelerometers and support vector machines, that indicates the user's postural balance variation and can be used in indoor and outdoor environments. For the training phase of the system, we used accelerometer signals acquired from a single subject under monitored conditions of balance and intentional imbalance, and used the scores provided by the SWAY® software as reference target values. Based on these targets, we trained a support vector machine to classify the signal into n levels of balance and later evaluated the performance using cross-validation by random resampling. We also developed a support vector machine approach for estimating the center of pressure, using the results from a force platform as reference targets. For validation, we performed experiments with a subject performing predetermined movements. Further experiments were then executed so that the different centers of pressure could be computed by our system and compared to the results from the force platform. We also performed tests with a dummy and a John Doe doll in order to observe the system's behavior in the presence of a sudden drop or a loss of balance. The results show that the system can classify the acquired signals into two to seven levels of balance with significant accuracy, and was also able to infer the centroid of each center-of-pressure region with an error lower than 0.9 cm. The tests performed with the dolls show that the system is able to distinguish between a sudden drop and a recovery after losing one's balance. The results suggest that the system can be used to detect variations in balance and, therefore, to indicate the risk of a fall even in outdoor environments.
- Published
- 2023
- Full Text
- View/download PDF
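A minimal sketch of the classification stage described in the entry above: a support vector machine mapping accelerometer features to discrete balance levels. The feature set (per-window mean, standard deviation and RMS per axis) and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(acc: np.ndarray) -> np.ndarray:
    """acc: (n_samples, 3) accelerometer window -> mean, std and RMS per axis."""
    return np.concatenate([acc.mean(axis=0),
                           acc.std(axis=0),
                           np.sqrt((acc ** 2).mean(axis=0))])

rng = np.random.default_rng(1)
n_windows, n_levels = 300, 4
levels = rng.integers(0, n_levels, size=n_windows)
# Synthetic windows: a higher balance level is simulated as a larger sway amplitude.
windows = [rng.normal(0, 0.05 * (lvl + 1), size=(100, 3)) for lvl in levels]

X = np.array([window_features(w) for w in windows])
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(svm, X, levels, cv=5).mean())
```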
10. An Enhanced Approach for Image Edge Detection Using Histogram Equalization (BBHE) and Bacterial Foraging Optimization (BFO).
- Author
-
Kumar, Parveen, Jindal, Tanvi, and Raj, Balwinder
- Subjects
EDGE detection (Image processing) ,HISTOGRAMS ,ESCHERICHIA coli ,IMPLEMENTS, utensils, etc. ,MATHEMATICAL statistics - Abstract
Edge detection is a common task in image processing; it is the main task to perform because it gives clear information about the image content, and it is a powerful tool in image processing systems and computer vision. Previous research has relied on moving-window approaches and genetic algorithms. In this research paper a new technique, Bacterial Foraging Optimization (BFO), is applied; it is inspired by the social foraging behaviour of Escherichia coli (E. coli). BFO has been used by researchers to solve real-world optimization problems arising in different areas of engineering and application domains because of its efficiency. Brightness-preserving bi-histogram equalization (BBHE) is another technique, used here for edge enhancement. BFO is applied to low-level characteristics of the images to find the edge pixels of natural images, and the F-measure, recall (r) and precision (p) values are calculated and compared with the previous technique. The enhancement technique, BBHE, is carried out to improve the information content of the pictures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Time-efficient low-resolution RGB aerial imaging for precision mapping of weed types in site-specific herbicide application.
- Author
-
Panduangnat, Lalita, Posom, Jetsada, Saikaew, Kanda, Phuphaphud, Arthit, Wongpichet, Seree, Chinapas, Adulwit, Sukpancharoen, Somboon, and Saengprachatanarug, Khwantri
- Subjects
COLOR space ,AGRICULTURAL drones ,HERBICIDE application ,IMAGE analysis ,WEEDS - Abstract
An efficient method for weed detection and the precise generation of spraying maps is crucial to optimizing weed management strategies and minimizing the costs associated with herbicide use. This study presents a method that leverages low-resolution UAV images to accurately detect and map weeds in sugarcane fields. The proposed approach integrates RGB images, vegetation indices and the HSV colour space, enhancing segmentation through histogram equalization (HE) and object-based image analysis (OBIA). The models developed in this study demonstrate exceptional performance in weed detection, with the most suitable dataset, achieving a detection capability of 95.78% for Broad-leaved Weeds (BLW) and the highest accuracy of 94.45% for Narrow-leaved Weeds (NLW). When considering multiple targets such as BLW, NLW, soil and sugarcane, the models exhibit a detection accuracy of 89.69%. Furthermore, the precision spraying maps generated by the coverage model method (CM) demonstrate remarkable accuracy, reaching 97.50% for weed control using agricultural drones. This method offers an efficient, cost-effective, and timely solution for precise weed detection, leading to improved weed control outcomes by enabling the selection of appropriate chemical substances tailored to each weed species. It reduces repetitive spraying costs and minimises chemical usage through spot spraying. • Developed a method to use low-res UAV imaging for precise weed mapping in sugarcane fields. • Applied histogram equalization to enhance OBIA's accuracy in identifying similar-looking weeds. • Optimized weed type detection using RGB, vegetation indices, and HSV in a machine learning. • Achieved 97.50% accuracy in creating weed spraying maps. • The proposed method offers cost-effectiveness, precision, and rapid implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
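A hedged sketch of the colour-feature side of the pipeline described in the entry above: converting an RGB aerial tile to HSV and computing the Excess Green (ExG) vegetation index, then thresholding ExG to separate vegetation from soil. The specific index and threshold are common choices used here only for illustration; the paper's full histogram-equalization and OBIA workflow is not reproduced, and the file name is hypothetical.

```python
import numpy as np
import cv2

def exg_index(rgb: np.ndarray) -> np.ndarray:
    """Excess Green index 2g - r - b on chromaticity-normalised channels."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2, keepdims=True) + 1e-9
    chrom = rgb / s
    r, g, b = chrom[..., 0], chrom[..., 1], chrom[..., 2]
    return 2 * g - r - b

tile = cv2.imread("uav_tile.png")                 # hypothetical input tile (BGR)
rgb = cv2.cvtColor(tile, cv2.COLOR_BGR2RGB)
hsv = cv2.cvtColor(tile, cv2.COLOR_BGR2HSV)       # HSV channels available for further rules

exg = exg_index(rgb)
vegetation_mask = (exg > 0.1).astype(np.uint8) * 255   # illustrative threshold
cv2.imwrite("vegetation_mask.png", vegetation_mask)
```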
12. Improving Visual Contrast Between Fat and Muscle Tissues in B-Mode Images Using CBE: A Simulation Study
- Author
-
Pastrana-Chalco, Mario, Pereira, Wagner C. A., Teixeira, Cesar A., Magjarevic, Ratko, Series Editor, Ładyżyński, Piotr, Associate Editor, Ibrahim, Fatimah, Associate Editor, Lackovic, Igor, Associate Editor, Rock, Emilio Sacristan, Associate Editor, Henriques, Jorge, editor, Neves, Nuno, editor, and de Carvalho, Paulo, editor
- Published
- 2020
- Full Text
- View/download PDF
13. Enhancement of Additive Manufacturing Processes for Thin-Walled Part Production Using Gas Metal Arc Welding (GMAW) with Wavelet Transform
- Author
-
Foorginejad, Abofazi, Khatibi, Siamak, Torshizi, Hojjat, Emam, Sayyed Mohammad, and Afshari, Hossein
- Abstract
Additive manufacturing encompasses technologies that produce three-dimensional computer-aided design (CAD) models through a layer-by-layer production process. Compared to traditional manufacturing methods, additive manufacturing technologies offer significant advantages in producing intricate components with minimal energy consumption, reduced raw material waste, and shortened production timelines. AM methods based on shielded gas welding have recently piqued the interest of researchers due to their high efficiency and cost-effectiveness in manufacturing critical components. However, one of the most formidable challenges in additive manufacturing methods based on shielded gas welding lies in the irregularity of weld bead height at different points, compromising the precision of components produced using these techniques. In this current research, we aimed to achieve uniform weld heights along the welding path by considering the most influential parameters on weld bead geometry and conducting experimental tests. Input parameters of the process, including nozzle angle, welding speed, wire speed, and voltage, were considered. Simultaneously, image processing and wavelet transform were employed to assess the uniformity of weld bead height. These parameters were applied to produce intricate parts after identifying optimal parameters that yielded the smoothest weld lines. According to the results, the appropriate bead for manufacturing the part was extracted. The results show that the smoothest bead line is achieved in 27 V as the highest level of voltage, at a 90° nozzle position and the maximum wire feed rate. Parts manufactured using this method across different layers exhibited no distortions, and the repeatability of production substantiated the high reliability of this approach for component manufacturing. © 2024 by the authors.
- Published
- 2024
- Full Text
- View/download PDF
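An illustrative sketch of using a discrete wavelet transform to judge how smooth a weld-bead height profile is, in the spirit of the entry above. The profile here is synthetic, and the "unevenness score" (energy of the detail coefficients) is an assumption for demonstration, not the authors' exact criterion.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
x = np.linspace(0, 200, 1000)                     # position along the welding path, mm
height = 2.0 + 0.05 * np.sin(0.3 * x) + rng.normal(0, 0.02, x.size)  # bead height, mm

coeffs = pywt.wavedec(height, "db4", level=3)     # [approximation, detail3, detail2, detail1]
detail_energy = sum(float(np.sum(c ** 2)) for c in coeffs[1:])
print("unevenness score (detail-coefficient energy):", detail_energy)
```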
14. Development and evaluation of a vision driven sensor for estimating fuel feeding rates in combustion and gasification processes
- Author
-
Ögren, Yngve, Sepman, Alexey, Fooladgar, Ehsan, Weiland, Fredrik, and Wiinikka, Henrik
- Abstract
A machine vision driven sensor for estimating the instantaneous feeding rate of pelletized fuels was developed and tested experimentally in combustion and gasification processes. The feeding rate was determined from images of the pellets sliding on a transfer chute into the reactor. From the images the apparent area and velocity of the pellets were extracted. Area was determined by a segmentation model created using a machine learning framework and velocities by image registration of two subsequent images. The measured weight of the pelletized fuel passed through the feeding system was in good agreement with the weight estimated by the sensor. The observed variations in the fuel feeding correlated with the variations in the gaseous species concentrations measured in the reactor core and in the exhaust. Since the developed sensor measures the ingoing fuel feeding rate prior to the reactor, its signal could therefore help improve process control.
- Published
- 2024
- Full Text
- View/download PDF
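A minimal sketch of the velocity-estimation step described in the entry above: registering two consecutive frames of pellets on the chute by phase correlation and converting the pixel shift to a speed. The frame file names, frame interval and pixel size are illustrative assumptions.

```python
import numpy as np
import cv2

frame_a = cv2.imread("pellets_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical consecutive frames
frame_b = cv2.imread("pellets_t1.png", cv2.IMREAD_GRAYSCALE)

# cv2.phaseCorrelate expects single-channel floating-point images of equal size.
(shift_x, shift_y), response = cv2.phaseCorrelate(np.float32(frame_a),
                                                  np.float32(frame_b))

dt = 1.0 / 30.0            # assumed frame interval, s
mm_per_pixel = 0.5         # assumed spatial calibration
speed = np.hypot(shift_x, shift_y) * mm_per_pixel / dt
print(f"estimated pellet speed: {speed:.1f} mm/s (correlation response {response:.2f})")
```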
15. Shock filter coupled with a high-order PDE for additive noise removal and image quality enhancement
- Author
-
Simo Thierry, Welba Colince, Ntsama Eloundou Pascal, and Noura Alexendre
- Subjects
PDE ,Shock-filter ,Deblurring ,Images processing ,A fourth-order diffusion equation ,Computer engineering. Computer hardware ,TK7885-7895 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
We are interested in restoring blurry images degraded by additive noise. We present a new filter for image enhancement based on the combination of a shock filter with a fourth-order diffusion equation. The algorithm that we propose removes not only the blur but also the noise. The diffusion weight and the enhancement strength of the model are controlled by a single parameter. After experiments on some test images and on real images of degraded old documents, the results obtained are satisfactory compared with certain models in the literature.
- Published
- 2021
- Full Text
- View/download PDF
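A compact sketch of the classical shock filter referred to in the entry above (the sharpening half of the proposed model): iterating I_t = -sign(ΔI)·|∇I| with finite differences. The coupled fourth-order diffusion term from the paper is not reproduced here; the step size and iteration count are illustrative.

```python
import numpy as np

def shock_filter(img: np.ndarray, n_iter: int = 30, dt: float = 0.1) -> np.ndarray:
    """Osher-Rudin-style shock filter: sharpens edges by moving intensities toward them."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0   # central differences
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        lap = (np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0) +
               np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1) - 4.0 * u)
        grad_mag = np.hypot(ux, uy)
        u -= dt * np.sign(lap) * grad_mag      # shock term: -sign(Laplacian) * |gradient|
    return u
```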
16. A Comparison Study for the Effect of Applying Image Filters on Image's Statistical Distribution.
- Author
-
Ibrahim, Mohammed Fadhil
- Subjects
DISTRIBUTION (Probability theory) ,PROBABILITY density function ,IMAGE processing ,PARAMETER estimation ,LAPLACIAN matrices - Abstract
Image filters have attracted attention in the last few years due to their importance in image processing and its applications. The effect of applying image filters to image elements depends on the values of the image parameters that result from any processing task. By applying image filters, we can extend image processing methods to achieve higher productivity. In this paper, we compare the effect of applying five image filters (Laplacian, Differentiation, LOG, Sharpening, and Gaussian) on the statistical distribution. Our method has been applied to a number of textural images (water texture, wool texture, and wood texture); the images were divided into three groups according to texture type. The results prove that some of the image filters (Differentiation, LOG, Sharpening) affect the statistical distribution of image elements, while others (Laplacian, Gaussian) do not affect the parameter distribution. We evaluate our method by calculating the MSE value. The method opens the door to extending this technique to other image processing tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
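A hedged sketch of the kind of comparison described in the entry above: apply several standard filters to a texture image and measure how far the filtered pixel values move from the original via MSE. The kernel parameters and the synthetic stand-in texture are illustrative; the paper's exact settings are not reproduced.

```python
import numpy as np
from scipy import ndimage

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

rng = np.random.default_rng(3)
texture = rng.integers(0, 256, size=(128, 128)).astype(np.float64)  # stand-in texture image

filtered = {
    "Gaussian":        ndimage.gaussian_filter(texture, sigma=1.0),
    "Laplacian":       ndimage.laplace(texture),
    "LOG":             ndimage.gaussian_laplace(texture, sigma=1.0),
    "Differentiation": ndimage.sobel(texture, axis=0),
    "Sharpening":      texture + (texture - ndimage.gaussian_filter(texture, sigma=1.0)),  # unsharp mask
}

for name, out in filtered.items():
    print(f"{name:15s} MSE vs original: {mse(texture, out):10.2f}")
```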
17. A Lightweight Object Detection Method for Lingwu Long Jujube Images Based on an Improved SSD (基于改进 SSD 的灵武长枣图像轻量化目标检测方法).
- Author
-
王昱潭 and 薛君蕊
- Subjects
- *
OBJECT recognition (Computer vision) , *DATA augmentation , *JUJUBE (Plant) , *COMPUTER storage devices , *MULTISPECTRAL imaging , *XML (Extensible Markup Language) - Abstract
The complex working environment of picking robots limits picking speed and equipment memory resources in the intelligent harvesting of Lingwu long jujubes. It is therefore necessary to meet the requirements of a lighter network structure and higher detection accuracy, particularly for the visual recognition system. A pre-trained model is currently loaded in almost all object detection work because of its high initialization performance and convergence speed. However, two challenges remain: 1) the network structure cannot be changed given the limited memory resources of the device; 2) there may be great differences between the ImageNet dataset and the dataset to be trained, leading to a poor training effect. Taking the SSD model as the basic framework, this research proposes a lightweight object detection method for images of Lingwu long jujubes that achieves excellent performance without loading a pre-trained model. Firstly, data augmentation was performed on the 1 000 collected images to obtain 5 000 images; the operations included random cropping, random vertical or horizontal flipping, and random adjustment of brightness, contrast, and saturation. The Lingwu long jujube dataset was then established, comprising 3 500 training images and 1 500 test images with resolutions of 3 016×4 032, 4 068×3 456, and 2 448×3 264; the smartphones used for image acquisition were a HUAWEI TRT-AL00A, a Vivo Y79A, and a Xiaomi 2014501. The images were uniformly scaled to a resolution of 300×300 to meet the input size requirements of SSD object detection. The dataset follows the PASCAL VOC format: labelling software was used to annotate the images, and the annotations were stored in the label folder in XML format. Secondly, the improved DenseNet incorporated Convolutional Block Attention Modules and two dense blocks with convolution groups of 6 and 8. Taking the improved DenseNet as the backbone network, the improved SSD model was obtained by combining it with a multi-level fusion structure, in which the first three additional layers of the SSD model were replaced with the Inception module. Without loading a pre-trained model, the improved SSD model achieved a mAP of 96.60%, a detection speed of 28.05 frames/s, and a parameter count of 1.99×10⁶; the mAP was 2.02 and 0.05 percentage points higher than that of the SSD model and the SSD model with pre-training, respectively, and the parameter count was 11.14×10⁶ lower than that of the SSD model, fully meeting the requirements of a lightweight network without loading a pre-trained model. This finding can provide strong visual technical support for the intelligent harvesting of Lingwu long jujubes, as well as for medical and multispectral image detection tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
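A sketch of the data-augmentation recipe listed in the entry above (random crop, random horizontal/vertical flip, random brightness/contrast/saturation), expressed with torchvision transforms. The 300×300 output size follows the abstract; the crop behaviour and jitter magnitudes are illustrative assumptions.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(300),                       # random cropping, rescaled to 300x300
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])

# augmented = augment(pil_image)   # pil_image: a PIL.Image of a long-jujube photograph
```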
18. Identify and segment microalgae in complex backgrounds with improved YOLO.
- Author
-
Yang, Hao, Lang, Kaiqi, and Wang, Xiaoping
- Abstract
Microalgae are widely distributed in the ocean, and some species are prone to causing harmful algal blooms that threaten the marine ecological environment. At present, microscopy is the most common method for microalgae analysis, and the combination of computer vision and microscopy is the mainstream trend in algae morphology classification. However, most methods only focus on the species and quantities of algae, without obtaining contour information that can further analyze their survival status and biomass. Therefore, this article proposes a convolutional neural network (AlgaeSeg-YOLO), which can recognize the species, quantities, and contours of algae. Firstly, a dataset of microalgae microscopic images in microfluidic chips was constructed, which includes a total of 2799 annotated images of 6 harmful microalgae, including 3916 instances. Secondly, the AlgaeSeg-YOLO was constructed with stronger feature fusion and pixel-level spatial information modeling capabilities based on YOLOv8n-seg. Finally, compared to other common methods, the experimental results show that the mAP (mean Average Precision) of AlgaeSeg-YOLO reaches 95.61 %, which is 1.64 %, 1.76 %, 5.28 %, 3.34 %, and 5.39 % higher than YOLOv8n-seg, YOLOv5n-seg, Mask R-CNN, Cascade Mask R-CNN, and SOLOv2, respectively, achieving real-time and accurate segmentation of microalgae in complex backgrounds. Meanwhile, the parameters and computation remain relatively low. This work helps to achieve fully automated analysis of microalgae, reduce labor costs, and provide technical support for long-term monitoring of the ecological environment and further research. • Identification and segmentation of microalgae using deep learning • Microalgae instance segmentation algorithm based on microscopic images • Rapid identification and segmentation of microalgae in complex background • Feature fusion network optimization of microalgae instance segmentation model [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Positioning of the Cutting Tool of a CNC Type Milling Machine by Means of Digital Image Processing
- Author
-
Londoño Lopera, Juan Camilo, Goez Mora, Jhon Edison, Rico Mesa, Edgar Mario, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Serrano C., Jairo E., editor, and Martínez-Santos, Juan Carlos, editor
- Published
- 2018
- Full Text
- View/download PDF
20. Automatic Visual Classification of Parking Lot Spaces: A Comparison Between BoF and CNN Approaches
- Author
-
Goez Mora, Jhon Edison, Londoño Lopera, Juan Camilo, Patiño Cortes, Diego Alberto, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Ghosh, Ashish, Series Editor, Figueroa-García, Juan Carlos, editor, López-Santana, Eduyn Ramiro, editor, and Rodriguez-Molano, José Ignacio, editor
- Published
- 2018
- Full Text
- View/download PDF
21. Application of artificial intelligence in ophthalmology
- Author
-
Xue-Li Du, Wen-Bo Li, and Bo-Jie Hu
- Subjects
1561 ,artificial intelligence ,deep learning ,machine learning ,images processing ,ophthalmology ,Ophthalmology ,RE1-994 - Abstract
Artificial intelligence is a general term for accomplishing a task mainly by computer, with minimal human participation, and it is widely associated with the invention of robots. With the development of this new technology, artificial intelligence has become one of the most influential information technology revolutions. We searched English-language studies related to ophthalmology published in the PubMed and Springer databases. The application of artificial intelligence in ophthalmology mainly concentrates on diseases with a high incidence, such as diabetic retinopathy, age-related macular degeneration, glaucoma, retinopathy of prematurity, age-related or congenital cataract, and, in a few studies, retinal vein occlusion. From these studies we conclude that the sensitivity and accuracy of detection ranged from 75% to 91.7% for proliferative diabetic retinopathy, from 75% to 94.7% for non-proliferative diabetic retinopathy, from 75% to 100% for age-related macular degeneration, over 95% for retinopathy of prematurity, over 97% for retinal vein occlusion (reported in just one study), from 63.7% to 93.1% for glaucoma, and more than 70% similarity against clinical grading for cataract.
- Published
- 2018
- Full Text
- View/download PDF
22. Digital processing of ultrasound images on dilated blood vessels from diabetic patients.
- Author
-
Cordova-Fraga, Teodoro, García, Daniel, Murillo-Ortiz, Blanca, García, Marysol, Gomez, Christian, Amador-Medina, Fabian, Guzman-Cabrera, Rafael, Pinto, David, Singh, Vivek, and Perez, Fernando
- Subjects
- *
PEOPLE with diabetes , *DIGITAL image processing , *BLOOD vessels , *PERIPHERAL vascular diseases , *CHRONIC kidney failure , *DUPLEX ultrasonography - Abstract
Introduction: Peripheral arterial disease (PAD) is a fairly common degenerative vascular condition in diabetic patients that leads to inadequate blood flow (BF). The disease is mainly due to atherosclerosis, which causes chronic narrowing of the arteries and can precipitate acute thrombotic events. In patients with diabetes, atherosclerosis is the main reason for reduced life expectancy, while diabetic nephropathy and retinopathy are the largest contributors to end-stage renal disease and blindness, respectively. Objective: To assess the dilatation of blood vessels in diabetic patients versus healthy volunteers using digital image processing. Materials and Methods: Ultrasound images of blood vessel dilation in the lower extremities of diabetic patients were processed digitally, and the results were compared with ultrasound images of healthy subjects. Results: The digital image processing suggests that there is a significant difference between the images of the diabetic group and those of the healthy volunteers (the control group). Discussion: The digital image processing performed on the Matlab platform is an adequate procedure for analyzing blood vessel dilation in ultrasound images taken from the lower extremities of diabetic patients. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. How effective are current population-based metaheuristic algorithms for variance-based multi-level image thresholding?
- Author
-
Mousavirad, S. J., Schaefer, G., Zhou, H., and Helali Moghadam, Mahshid
- Abstract
Multi-level image thresholding is a common approach to image segmentation where an image is divided into several regions based on its histogram. Otsu's method is the most popular method for this purpose, and is based on seeking for threshold values that maximise the between-class variance. This requires an exhaustive search to find the optimal set of threshold values, making image thresholding a time-consuming process. This is especially the case with increasing numbers of thresholds since, due to the curse of dimensionality, the search space enlarges exponentially with the number of thresholds. Population-based metaheuristic algorithms are efficient and effective problem-independent methods to tackle hard optimisation problems. Over the years, a variety of such algorithms, often based on bio-inspired paradigms, have been proposed. In this paper, we formulate multi-level image thresholding as an optimisation problem and perform an extensive evaluation of 23 population-based metaheuristics, including both state-of-the-art and recently introduced algorithms, for this purpose. We benchmark the algorithms on a set of commonly used images and based on various measures, including objective function value, peak signal-to-noise ratio, feature similarity index, and structural similarity index. In addition, we carry out a stability analysis as well as a statistical analysis to judge if there are significant differences between algorithms. Our experimental results indicate that recently introduced algorithms do not necessarily achieve acceptable performance in multi-level image thresholding, while some established algorithms are demonstrated to work better.
- Published
- 2023
- Full Text
- View/download PDF
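A small sketch of the objective being optimised in the study above: Otsu's between-class variance for a set of threshold values, evaluated from an image histogram. A metaheuristic would maximise this function over threshold vectors; here it is simply evaluated for one candidate, by way of illustration, on synthetic data.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: 256-bin grey-level histogram; thresholds: cut points strictly inside (0, 255)."""
    p = hist.astype(np.float64) / hist.sum()
    levels = np.arange(256)
    global_mean = float((p * levels).sum())

    edges = [0] + sorted(thresholds) + [256]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                                   # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w        # class mean
            variance += w * (mu - global_mean) ** 2
    return variance

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(256, 256))
hist = np.bincount(image.ravel(), minlength=256)
print(between_class_variance(hist, thresholds=[85, 170]))    # one candidate threshold vector
```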
24. A Principal Component Analysis Methodology of Oil Spill Detection and Monitoring Using Satellite Remote Sensing Sensors
- Author
-
Arslan, N., Majidi Nezhad, Meysam, Heydari, A., Astiaso Garcia, Davide, and Sylaios, G.
- Abstract
Monitoring, assessing, and measuring oil spills is essential for protecting the marine environment and for oil spill clean-up efforts. One of the most recent oil spills happened near Port Fourchon, Louisiana, caused by Hurricane Ida (Category 4), which had a wind speed of 240 km/h. Earth Observation (EO) Satellite Remote Sensing (SRS) images can effectively highlight oil spills in marine areas as a fast, no-cost technique. However, clouds and the sea-surface spectral signature complicate the interpretation of oil spill areas in optical images. In this study, Principal Component Analysis (PCA) has been applied to Landsat-8 and Sentinel-2 SRS images to improve the information obtained from the optical sensor bands. The PCA produces outputs that are uncorrelated with the original bands, making it easier to distinguish oil spills from clouds and seawater due to the spectral diversity between oil, clouds, and the seawater surface. An additional step was then applied to highlight the oil spill area using PCAs with different band combinations. Furthermore, Sentinel-1 (SAR), Sentinel-2 (optical), and Landsat-8 (optical) SRS images were analyzed with cross-sections to suppress the "look-alike" effect in marine oil spill areas. Finally, mean and high-pass filters were applied to Land Surface Temperature (LST) SRS images estimated from the Landsat thermal band. The results show that the seawater value is about −17.5 dB while the oil spill area shows values between −22.5 dB and −25 dB, and that the contrast of the oil spill in Landsat-8 thermal band 10 can be brought out with 3 × 3 and 5 × 5 kernel high-pass filters and a 3 × 3 mean filter. The results demonstrate that the SRS images should be used together to improve oil spill detection results.
- Published
- 2023
- Full Text
- View/download PDF
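A hedged sketch of applying PCA across the spectral bands of a satellite scene, the core step described in the entry above, using scikit-learn. The band stack here is synthetic; file handling and the specific band combinations used in the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
bands = rng.random((6, 512, 512))                # 6 spectral bands, each H x W

n_bands, h, w = bands.shape
pixels = bands.reshape(n_bands, -1).T            # (H*W, n_bands): one sample per pixel

pca = PCA(n_components=3)
components = pca.fit_transform(pixels)           # uncorrelated component values per pixel
pc_images = components.T.reshape(3, h, w)        # back to image form: PC1, PC2, PC3

print("explained variance ratios:", pca.explained_variance_ratio_)
```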
25. Designing an Application that Helps People with Visual Impairments to Distinguish and Classify Libyan Traded Securities
- Author
-
Khalifa, Mohamed K. S.
- Subjects
An application to identify the Libyan securities traded ,Deep Learning ,Images Processing ,Matlab Language ,Blind and Visually Impaired - Abstract
The aim of this research project is to help people with visual challenges (blind and visually impaired) in Libya, so we designed and developed an application for the automatic identification of traded Libyan securities through a mobile phone camera. This work was implemented and tested for banknote classification using methods that combine the "Matlab" language environment and "MobileNet", a relatively new deep learning (DL) image processing architecture. A currency classification system was implemented to identify banknotes through a sequence of image processing steps. The proposed application process for classifying and distinguishing the Libyan money denominations consists of six stages: photo capture, pre-processing of colors and the currency image, detection of the currency image edges, image segmentation, extraction of distinctive currency characteristics, and currency value recognition. To prove the effectiveness of the application algorithm, we evaluated how well our proposed model performs using a new, classified, and balanced raw dataset of banknotes (new, average, and old) consisting of approximately 2 500 images captured under different lighting conditions, which were employed for comparison and for training and testing experiments to classify the traded and approved Libyan securities of each of the five denominations (1, 5, 10, 20 and 50 LYD). In addition to applying a supervised learning approach, the experimental tests demonstrated the effectiveness of the proposed application algorithm in distinguishing and classifying the traded Libyan securities using image processing; it showed good performance, took little time, and produced an overall accuracy rate of more than 99% in the testing and verification process. Moreover, to confirm the efficiency and effectiveness of our model, we evaluated the application with 7 users from the visually impaired community. According to this evaluation and verification, our application has proven to be very successful and highly effective, and it works well in any lighting environment or with heterogeneous methods.
- Published
- 2023
- Full Text
- View/download PDF
26. Monitoring Information System of Aedes Aegypti Reproduction
- Author
-
Morais, H. S., Santos, O. S., Rocha, M. A., Almeida, T. C. S., Brasil, L. M., Amvame-Nze, G. D., Miosso, C. J., Costa, M. V. C., Pizo, G. A., MAGJAREVIC, Ratko, Editor-in-chief, Ladyzynsk, Piotr, Series editor, Ibrahim, Fatimah, Series editor, Lacković, Igor, Series editor, Rock, Emilio Sacristan, Series editor, and Jaffray, David A., editor
- Published
- 2015
- Full Text
- View/download PDF
27. Images encryption algorithm based on the quaternion multiplication and the XOR operation.
- Author
-
Boussif, Mohamed, Aloui, Noureddine, and Cherif, Adnene
- Subjects
QUATERNIONS ,IMAGE encryption ,MULTIPLICATION ,ALGORITHMS ,IMAGE processing ,RUNNING speed - Abstract
In this paper, we propose an image encryption algorithm based on quaternion multiplication and the XOR operation. The proposed algorithm processes the image in blocks of 32 quaternions, i.e. blocks of 32 × 4. For each quaternion, the algorithm XORs the i-th image quaternion with the i-th key quaternion; the key quaternion is changed in each block using quaternion multiplication. The randomness of the proposed algorithm is evaluated with the Diehard package. Several other analyses are provided to evaluate the performance of this encryption scheme; they demonstrate that the proposed algorithm provides a high security level and a fast running speed, and can compete with other recently proposed image encryption schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
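An illustrative sketch of the two primitives named in the entry above: the Hamilton (quaternion) product and an XOR of image data grouped as quaternions of four bytes, with the key evolved by quaternion multiplication between blocks. The exact block layout, key schedule and parameters of the published cipher are not reproduced; this is only a toy demonstration of the operations, not a secure cipher.

```python
import numpy as np

def quaternion_multiply(q: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Hamilton product of quaternions q = (w, x, y, z) and r, reduced mod 256."""
    w1, x1, y1, z1 = q.astype(np.int64)
    w2, x2, y2, z2 = r.astype(np.int64)
    out = np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                    w1*x2 + x1*w2 + y1*z2 - z1*y2,
                    w1*y2 - x1*z2 + y1*w2 + z1*x2,
                    w1*z2 + x1*y2 - y1*x2 + z1*w2])
    return (out % 256).astype(np.uint8)

def toy_encrypt(data: np.ndarray, key: np.ndarray, multiplier: np.ndarray) -> np.ndarray:
    """XOR each 4-byte quaternion of `data` with the current key quaternion; update the
    key by quaternion multiplication after every block of 32 quaternions."""
    quads = data.reshape(-1, 4).copy()
    k = key.copy()
    for block_start in range(0, len(quads), 32):
        quads[block_start:block_start + 32] ^= k          # XOR with the current key quaternion
        k = quaternion_multiply(k, multiplier)            # evolve the key for the next block
    return quads.reshape(data.shape)

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # total size divisible by 4
key = np.array([17, 53, 91, 200], np.uint8)
mult = np.array([3, 7, 11, 13], np.uint8)

cipher = toy_encrypt(image, key, mult)
plain = toy_encrypt(cipher, key, mult)        # same key stream, XOR is involutive
assert np.array_equal(plain, image)
```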
28. CREATION OF HIERARCHICAL STRUCTURES OF FORMAL DESCRIPTIONS OF MODELS OF THE NANOIMAGES BASED ON GROUPS OF THE SIGNS RELATING TO VARIOUS MORPHOLOGICAL AND LARGESCALE LEVELS OF REPRESENTATION.
- Author
-
Zhiznyakov, A. L., Privezentsev, D. G., and Zakharov, A. A.
- Subjects
- *
HIERARCHICAL Bayes model , *NANOSTRUCTURES , *MULTISCALE modeling , *IMAGE analysis , *SYSTEMS design - Abstract
The sequence of images obtained when studying a nanoobject (induced cluster-type nanostructures) at different scales can be considered the result of the mutual influence of heredity and variability factors acting on some of its initial characteristics. These properties of heredity and variability of features make it possible to build hierarchical structures of formal descriptions, i.e., models of nanoimages based on groups of features relating to different morphological and scale levels of representation. New methods of image processing and analysis built on these theoretical provisions and approaches should allow fuller extraction of information through the use of a multi-level system of features. Thus, for the most complete analysis of nanostructures it is necessary to use images obtained at different observation scales. In this case, the same feature may manifest itself to varying degrees in several images of a multiscale sequence, changing continuously between them. As a result of such changes it may either disappear or change so much that it appears alongside the original as another, independent feature. Different features may also influence each other, which results in images within the multiscale sequence acquiring new characteristic properties and features. The analysis of these hierarchical structures, i.e., image models based on attributes relating to different morphological and scale levels of representation, leads to a flexible multi-level "top-down" processing system whose use is driven not only by the requirement to improve accuracy, but also by the need to fundamentally change the perception strategy for nanoobjects as opposed to macroobjects. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. DEVELOPMENT OF A HIERARCHICAL REPRESENTATION OF A DIGITAL IMAGE BASED ON A FUZZY FRACTAL MODEL.
- Author
-
Zhiznyakov, A. L., Privezentsev, D. G., Pugin, E. V., and Belyakova, A. S.
- Subjects
DIGITAL image processing ,FRACTAL analysis ,FUZZY logic ,FUZZY sets ,PROJECT managers - Abstract
The aim of this work is to improve the quality of digital image processing in vision systems by developing new features based on fractal theory combined with fuzzy logic and fuzzy set theory. Machine vision systems are now actively used in many fields of science and technology. The most important part of any such system is the software that processes the acquired information. However, the overwhelming majority of digital image processing algorithms rely on a crisp procedure for extracting useful information, which often prevents vision systems from solving non-trivial problems. We therefore propose to combine the mathematical theory of fuzzy sets and fuzzy logic with proven fractal methods of digital image processing. Developing a system of new features requires a new digital image model. We propose to modify the fractal model developed by the project leader by using a fuzzy distance as the measure of similarity between image regions. This expands the hierarchy of representations of the source image, thereby increasing the amount of useful information about the original image. The system of fractal features developed by the project leader is modified by using the membership function as the main metric, which allows fuzzy logic to be used when forming feature values. The proposed model and feature system, based on fuzzy measures and membership functions, will allow new image processing algorithms to be developed that differ from existing ones in their ability to use fuzzy inferences and results. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Unsupervised detection of vineyards by 3D point-cloud UAV photogrammetry for precision agriculture.
- Author
-
Comba, Lorenzo, Biglia, Alessandro, Ricauda Aimonino, Davide, and Gay, Paolo
- Subjects
- *
VINEYARDS , *POINT cloud , *PHOTOGRAMMETRY , *DRONE aircraft , *PRECISION farming - Abstract
Graphical abstract Highlights: • Unsupervised vineyards detection from 3D point-cloud map. • Vineyards features evaluation and mapping from 3D point-cloud map. • Unmanned aerial vehicles (UAV) imagery for vineyards 3D point-cloud modelling. Abstract An effective management of precision viticulture processes relies on robust crop monitoring procedures and, in the near future, to autonomous machine for automatic site-specific crop managing. In this context, the exact detection of vineyards from 3D point-cloud maps, generated from unmanned aerial vehicles (UAV) multispectral imagery, will play a crucial role, e.g. both for achieve enhanced remotely sensed data and to manage path and operation of unmanned vehicles. In this paper, an innovative unsupervised algorithm for vineyard detection and vine-rows features evaluation, based on 3D point-cloud maps processing, is presented. The main results are the automatic detection of the vineyards and the local evaluation of vine rows orientation and of inter-rows spacing. The overall point-cloud processing algorithm can be divided into three mains steps: (1) precise local terrain surface and height evaluation of each point of the cloud, (2) point-cloud scouting and scoring procedure on the basis of a new vineyard likelihood measure, and, finally, (3) detection of vineyard areas and local features evaluation. The algorithm was found to be efficient and robust: reliable results were obtained even in the presence of dense inter-row grassing, many missing plants and steep terrain slopes. Performances of the algorithm were evaluated on vineyard maps at different phenological phase and growth stages. The effectiveness of the developed algorithm does not rely on the presence of rectilinear vine rows, being also able to detect vineyards with curvilinear vine row layouts. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. Geological and topographical analysis of Anshar Sulcus, Ganymede: Implications for grooved terrain formation.
- Author
-
Ianiri, Mafalda, Mitri, Giuseppe, Sulcanese, Davide, Chiarolanza, Gianluca, and Cioria, Camilla
- Published
- 2023
- Full Text
- View/download PDF
32. Analysis of single image super resolution models
- Author
-
Mertali Koprulu, M. Taner Eskil, Işık Üniversitesi, Mühendislik ve Doğa Bilimleri Fakültesi, Bilgisayar Mühendisliği Bölümü, Işık University, Faculty of Engineering and Natural Sciences, Department of Computer Engineering, Köprülü, Mertali, and Eskil, Mustafa Taner
- Subjects
Comprehensive analysis ,Image super resolutions ,Generative adversarial networks ,Super-resolution models ,Image processing technique ,Convolutional neural network ,Deep learning ,Images processing ,Optical resolving power ,Image analysis ,Benchmarking ,Image processing ,Image enhancement ,Single image super resolution ,Single images ,Convolutional neural networks ,Current modeling ,Learning approach - Abstract
Image Super-Resolution (SR) is a set of image processing techniques that improve the resolution of images and videos. Deep learning approaches have brought remarkable improvements to image super-resolution in recent years. This article aims to provide a comprehensive analysis of recent advances in the models used for image super-resolution. The study also covers other essential topics related to current models, such as publicly accessible benchmark datasets and performance evaluation measures. Finally, the study concludes the analysis by highlighting several weaknesses of existing base models, such as their feeding strategy, and finds that the Blind Feeding training technique has led several models to achieve state-of-the-art results.
- Published
- 2022
33. Design, Development and Implementation of a Web Service for Annotating High-Resolution Images: Integration of Deep Learning Models for Automatic Region Segmentation (Diseño, desarrollo e implementación de un servicio web para la anotación de imágenes de alta resolución: Integración de modelos de Deep Learning para la segmentación automática de regiones)
- Author
-
Pulgarín Ospina, Cristian Camilo
- Subjects
Serveis Web ,Aprendizaje profundo ,Digital Pathology ,Desarrollo web ,Patología digital ,Explotació de dades ,Patologia Digital ,Procesamiento de imagen ,Servicios Web ,Explotación de datos ,Big data ,Data Explotation ,Deep Learning ,Processament d’imatge ,Web Services ,Crowdsourcing ,Images Processing ,Máster Universitario en Gestión de la Información-Màster universitari en Gestió de la Informació ,LENGUAJES Y SISTEMAS INFORMATICOS ,Aprenentatge profund - Abstract
Today's society is highly digitalized; thanks to this, it is possible to generate immense amounts of data which, if analyzed, provide a great deal of information that can be used to make significant advances in many fields. One of these fields is health, as shown by the digital revolution in clinical methods such as the histological study of tumor images. The paradigm shift in the traditional pathology workflow has led to the standardization of digital Whole Slide Images to the detriment of microscopic samples, mainly because of the benefits they offer, one of which is the use of software that allows images to be visualized at pixel level with an optimum level of quality. The CVBlab group provides a tool for visualizing WSI images that also offers functionality for annotating them. This application aims to make annotation easier for pathologists in order to generate robust databases with which to train Deep Learning models. The problem is that this system has difficulty providing an optimal service for the volume of cases to be evaluated, and new functionalities required by certain projects are not yet implemented; one of them is the integration of Deep Learning models into the application itself so that they can be evaluated with crowdsourcing methodologies. In this context, a thorough analysis of the infrastructure and code has been carried out in order to identify improvements and define a strategy for making them, and novel extensions have been designed and implemented to integrate Deep Learning models and to perform crowdsourcing tasks.
- Published
- 2022
34. A novel image Denoising approach using super resolution densely connected convolutional networks
- Author
-
Mürsel Ozan İncetaş, Murat Uçar, Emine Uçar, Utku Köse, İşletme ve Yönetim Bilimleri Fakültesi -- Yönetim Bilişim Sistemleri Bölümü, Uçar, Murat, Uçar, Emine, and ALKÜ, Meslek Yüksekokulları, Akseki Meslek Yüksekokulu, Bilgisayar Teknolojileri Bölümü
- Subjects
Sparse Representation ,Image distortions ,Computer Networks and Communications ,Superresolution ,Diffusion ,Engineering ,Statistical tests ,Noise intensities ,Distortion effects ,Media Technology ,Densely connected convolutional networks ,Deep learning ,Images processing ,Electrical Engineering, Electronics & Computer Science - Security, Encryption & Encoding - Image Fusion ,Convolution ,Dictionaries ,Hardware and Architecture ,Image denoising ,Computer Science ,Convolutional neural networks ,Densely connected convolutional network ,Noisy image ,Cnn ,Software ,Convolutional networks ,Denoising approach - Abstract
Image distortion effects, called noise, may occur due to various reasons such as image acquisition, transfer, and duplication. Image denoising is a preliminary step for many studies in the field of image processing. The vast majority of techniques in the literature require parameters that the user must determine according to the noise intensity. Due to the user requirement, the developed techniques become almost impossible to use by another computer system. Therefore, the Densely Connected Convolutional Networks structure-based model is proposed to remove noise from gray-level images with different noise levels in this study. With the developed approach, the obligation of the user to enter any parameters has been eliminated. For the training of the proposed method, 2200 noisy images with 11 different levels derived from the BSDS300 Train dataset (original 200 images) were used, and the success of the method was evaluated with 1100 noisy images derived from the BSDS300 Test dataset (original 100 images). The images used to evaluate the success of the proposed method were compared to both the traditional and state-of-the-art techniques. It was observed that the average SSIM / PSNR values obtained with the proposed method for the whole test dataset were 0.9236 / 33.94 at low noise level (sigma(2) = 0.001) and 0.7156 / 26.39 at high noise level (sigma(2) = 0.020). The results show that the proposed method is a very effective and efficient noise filter for image denoising.
- Published
- 2022
35. Study of the Application of Deep Convolutional Neural Networks (CNNs) in Processing Sensor Data and Biomedical Images
- Author
-
Weijun Hu, Yan Zhang, and Lijie Li
- Subjects
convolutional neural network ,images processing ,multi-sensor ,diabetic retinopathy ,Chemical technology ,TP1-1185 - Abstract
The fast progress in research and development of multifunctional, distributed sensor networks has brought challenges in processing data from a large number of sensors. Using deep learning methods such as convolutional neural networks (CNN), it is possible to build smarter systems to forecasting future situations as well as precisely classify large amounts of data from sensors. Multi-sensor data from atmospheric pollutants measurements that involves five criteria, with the underlying analytic model unknown, need to be categorized, so do the Diabetic Retinopathy (DR) fundus images dataset. In this work, we created automatic classifiers based on a deep convolutional neural network (CNN) with two models, a simpler feedforward model with dual modules and an Inception Resnet v2 model, and various structural tweaks for classifying the data from the two tasks. For segregating multi-sensor data, we trained a deep CNN-based classifier on an image dataset extracted from the data by a novel image generating method. We created two deepened and one reductive feedforward network for DR phase classification. The validation accuracies and visualization results show that increasing deep CNN structure depth or kernels number in convolutional layers will not indefinitely improve the classification quality and that a more sophisticated model does not necessarily achieve higher performance when training datasets are quantitatively limited, while increasing training image resolution can induce higher classification accuracies for trained CNNs. The methodology aims at providing support for devising classification networks powering intelligent sensors.
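As a minimal sketch of one of the two model families mentioned above, the snippet below builds an Inception-ResNet-v2 backbone with a small classification head in Keras; the input size, the five-class assumption (DR phases), and the training setup are illustrative assumptions, not the paper's configuration.

```python
# Transfer-learning sketch: Inception-ResNet-v2 backbone + small softmax head.
import tensorflow as tf

NUM_CLASSES = 5  # assumed: five diabetic-retinopathy phases

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # train only the head at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```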
- Published
- 2019
- Full Text
- View/download PDF
36. A CNN based architecture for forgery detection in administrative documents
- Author
-
Maamouli, Khadidja, Benhamza, Hiba, Djeffal, Abdelhamid, Cheddad, Abbas, Maamouli, Khadidja, Benhamza, Hiba, Djeffal, Abdelhamid, and Cheddad, Abbas
- Abstract
The use of digital documents is becoming widespread in daily administrative and economic transactions. At the same time, document forgery has become a crime that costs states and companies billions. Several researchers have tried to develop techniques that automatically detect forged documents using machine learning and image processing. Building on the immense success of deep learning applications, we employ, in this work, a convolutional neural network architecture trained on a gathered dataset of forged and authentic administrative documents. The results obtained on our dataset of 493 documents reached 73.95% accuracy and 97.3% recall, surpassing the efficiency of the baseline machine learning methods. © 2022 IEEE.
- Published
- 2022
- Full Text
- View/download PDF
37. Face Detection in a Video File Based on Matching Face Template
- Author
-
Maha Hasso and Shahad Hasso
- Subjects
detection human face ,images processing ,Mathematics ,QA1-939 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The aim of this research is to apply an algorithm that finds candidate regions in order to detect the largest possible number of faces appearing in a video, relying on color techniques to segment skin, reject non-skin regions, and reduce false face detections. The algorithm detects and marks human faces in the video after splitting it into a set of color images, and comprises two image processing steps. The first builds a skin color model that outlines the color distribution, separates skin from non-skin areas within the image, and then identifies the skin regions. The second step performs template matching. The results showed an accuracy of nearly 85% in discriminating faces in the video; detected faces are enclosed in geometric shapes and stored in a new video. The algorithm is programmed in MATLAB 7.10.0 (2010), which offers many efficient image processing functions.
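The two steps described in the abstract (skin-color segmentation, then template matching) can be illustrated with OpenCV instead of MATLAB; the YCrCb threshold values, the match-score threshold, and the file names below are assumptions for illustration, not the paper's values.

```python
# Illustrative sketch: (1) skin-color mask in YCrCb, (2) template matching
# against a face template restricted to the skin-masked image.
import cv2
import numpy as np

def detect_faces(frame_bgr, face_template_gray, score_thresh=0.7):
    # Step 1: skin segmentation with fixed Cr/Cb bounds (illustrative values).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin_mask)

    # Step 2: template matching on the skin-masked image.
    result = cv2.matchTemplate(gray, face_template_gray, cv2.TM_CCOEFF_NORMED)
    h, w = face_template_gray.shape
    return [(x, y, w, h) for y, x in zip(*np.where(result >= score_thresh))]

# frame = cv2.imread("frame.png"); template = cv2.imread("face.png", 0)
# for (x, y, w, h) in detect_faces(frame, template):
#     cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```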
- Published
- 2013
- Full Text
- View/download PDF
38. A CNN based architecture for forgery detection in administrative documents
- Author
-
Khadidja Maamouli, Hiba Benhamza, Abdelhamid Djeffal, and Abbas Cheddad
- Subjects
Learning systems ,CNN-based architecture ,Forgery detection ,Computer Sciences ,Network architecture ,Deep learning ,Convolutional neural network ,Images processing ,Digital Documents ,Neural network architecture ,Economic transactions ,Datavetenskap (datalogi) ,Image processing ,Forgery detections ,Convolutional neural networks ,Machine-learning - Abstract
The use of digital documents is becoming widespread in daily administrative and economic transactions. At the same time, document forgery has become a crime that costs states and companies billions. Several researchers have tried to develop techniques that automatically detect forged documents using machine learning and image processing. Building on the immense success of deep learning applications, we employ, in this work, a convolutional neural network architecture trained on a gathered dataset of forged and authentic administrative documents. The results obtained on our dataset of 493 documents reached 73.95% accuracy and 97.3% recall, surpassing the efficiency of the baseline machine learning methods. © 2022 IEEE.
- Published
- 2022
39. An experimental investigation of dam-break induced flood waves for different density fluids
- Author
-
Hatice Ozmen-Cagatay, Selahattin Kocaman, Evren Turhan, Mühendislik ve Doğa Bilimleri Fakültesi -- İnşaat Mühendisliği Bölümü, and Kocaman, Selahattin
- Subjects
Numerical-Simulation ,RANS ,Dam-break experiments ,Computational fluid dynamics ,Oceanography ,Physics::Fluid Dynamics ,Viscosity ,Engineering ,Digital image processing ,Shallow-Water Equation ,Volume of fluid method ,Dam-break experiment ,Initial-Stage ,Computer software ,Newtonian liquids ,Different density fluid ,Flood waves ,Engineering & Materials Science - Modelling & Simulation - Euler Equations ,Mechanics ,Dam Break ,Level set ,Navier Stokes equations ,Different density fluids ,Surface ,Impact ,Sunflower oil ,Reynolds-averaged Navier–Stokes equations ,CFD ,Geology ,Simulation ,Experimental investigations ,Environmental Engineering ,Numerical models ,Flow (psychology) ,Ocean Engineering ,Surface profiles ,Image processing ,2-phase flows ,Newtonian fluid ,Dam-break flow ,Computer simulation ,High speed cameras ,Volume ,Images processing ,Reynolds - Averaged Navier-Stokes ,Floods ,Tanks (containers) ,Free surfaces ,Free surface ,Reservoirs (water) ,Dams ,Different densities - Abstract
The present study aims to investigate the effect of various fluids on dam-break flow propagation in a rectangular and horizontal channel under dry bed conditions. Laboratory experiments were carried out to produce dam-break flood waves in a tank by the sudden release of a movable gate that divided the tank into a reservoir and a downstream channel. In these experiments, three different fluids were used as Newtonian fluids in the reservoir: normal water, sunflower oil, and salt water. A digital image processing technique was adopted for the experimental characterization of the dam-break waves. Instantaneous free surface profiles of the dam-break flow were captured by a high-speed camera. Free-surface profiles for different times and time evolution of the flow depths at four selected locations were determined. The types of fluids had an effect on the results due to their specific characteristics such as density and viscosity. Furthermore, numerical simulation of the problem was performed by Reynolds-averaged Navier-Stokes (RANS) and Volume of Fluid (VOF) based software Flow-3D. When the experimental data were compared with the numerical simulation results, there was good agreement for the elapsed time and selected measuring locations.
- Published
- 2022
40. Organization of protected filtering of images in clouds
- Author
-
Mirataei, Alireza, Rusanova, Olga, Tribynska, Karolina, and Markovskyi, Oleksandr
- Subjects
secure clouds computing ,arithmetic mean filtration ,images processing ,homomorphic encryption ,004.052.42 - Abstract
The article proposes an approach to using cloud technologies to accelerate the filtering of image streams while ensuring their protection during processing on remote computer systems. Homomorphic encryption of images during their remote filtering is proposed to be carried out by shuffling rows of pixel matrices. This provides a high level of protection against attempts to illegally restore images on computer systems that filter them. The developed approach makes it possible to speed up the performance of this important image processing operation by 1-2 orders of magnitude.
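The core idea, that a secret permutation of the rows hides the image from the remote server while an arithmetic mean filter still produces the correct result, can be shown with a toy example. This is a simplified sketch of the general principle (a row-wise mean filter, which commutes with row shuffling), not the article's exact protocol.

```python
# Toy illustration: shuffle rows with a secret permutation, let the "cloud"
# apply a row-wise arithmetic-mean filter, then undo the shuffle locally.
# The recovered result equals filtering the original image directly.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(256, 256)).astype(float)

perm = rng.permutation(image.shape[0])        # secret key: row permutation
shuffled = image[perm]                        # what the remote server sees

filtered_remote = uniform_filter1d(shuffled, size=5, axis=1)  # server-side filter

inverse = np.argsort(perm)                    # undo the shuffle locally
recovered = filtered_remote[inverse]

expected = uniform_filter1d(image, size=5, axis=1)
assert np.allclose(recovered, expected)       # same as filtering without protection
```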
- Published
- 2022
41. Automatic recognition of fundamental tissues on histology images of the human cardiovascular system.
- Author
-
Mazo, Claudia, Trujillo, Maria, Alegre, Enrique, and Salazar, Liliana
- Subjects
- *
CARDIOVASCULAR system , *CARDIOVASCULAR disease treatment , *K-means clustering , *CELL nuclei , *EPITHELIUM - Abstract
Cardiovascular disease is the leading cause of death worldwide. Therefore, techniques for improving diagnosis and treatment in this field have become key areas for research. In particular, approaches for tissue image processing may support the education system and medical practice. In this paper, an approach to automatic recognition and classification of fundamental tissues using morphological information is presented. Taking a 40× or 10× histological image as input, three clusters are created with the k-means algorithm using a structural tensor and the red and the green channels. Loose connective tissue, light regions and cell nuclei are recognised on 40× images. Then, the cell nuclei's features – shape and spatial projection – and light regions are used to recognise and classify epithelial cells and tissue into flat, cubic and cylindrical. In a similar way, light regions, loose connective and muscle tissues are recognised on 10× images. Finally, the tissue's function and composition are used to refine muscle tissue recognition. Experimental validation is then carried out by histologists following expert criteria, along with manually annotated images that are used as a ground-truth. The results revealed that the proposed approach classified the fundamental tissues in a similar way to the conventional method employed by histologists. For epithelial tissues, the proposed automatic recognition approach achieves a sensitivity of 0.79 for cubic, 0.85 for cylindrical and 0.91 for flat. Furthermore, the experts gave our method an average score of 4.85 out of 5 in the recognition of loose connective tissue and 4.82 out of 5 for muscle tissue recognition. [ABSTRACT FROM AUTHOR]
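The first stage described above (three k-means clusters over structure-tensor and red/green-channel features) can be sketched with scikit-image and scikit-learn; the sample image, the sigma value, and the mapping of clusters to tissue classes are assumptions, and the later shape-based refinement is not reproduced.

```python
# Sketch of the clustering stage: per-pixel structure-tensor + R/G features,
# grouped into three clusters with k-means.
import numpy as np
from skimage import data, img_as_float
from skimage.color import rgb2gray
from skimage.feature import structure_tensor
from sklearn.cluster import KMeans

rgb = img_as_float(data.astronaut())          # stand-in for a histology image
Arr, Arc, Acc = structure_tensor(rgb2gray(rgb), sigma=1.0)

features = np.stack([Arr.ravel(), Arc.ravel(), Acc.ravel(),
                     rgb[..., 0].ravel(),     # red channel
                     rgb[..., 1].ravel()],    # green channel
                    axis=1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
clusters = labels.reshape(rgb.shape[:2])      # e.g. nuclei / light regions / loose connective
```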
- Published
- 2016
- Full Text
- View/download PDF
42. Revisión sobre la detección del color rojo en imágenes digitales independiente de su luminosidad y tonalidad.
- Author
-
Páramo Fonseca, Jorge
- Abstract
This paper presents a bibliographic review of the different topics involved in detecting the red color in digital images. These topics are: how color is managed in digital images, the color modes, what digital images are and how they relate to their formats, what digital image processing is, the alterations that appear in digital images and their correction, as well as color segmentation and color determination. [ABSTRACT FROM AUTHOR]
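A common way to make red detection independent of brightness, in the spirit of the techniques surveyed above, is to threshold on hue in HSV space and handle the wrap-around of red at both ends of the hue range; the saturation and value lower bounds below are illustrative assumptions.

```python
# Sketch: brightness-tolerant red mask in HSV (OpenCV hue range is 0-179).
import cv2
import numpy as np

def red_mask(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))     # hue near 0
    upper = cv2.inRange(hsv, (170, 70, 50), (179, 255, 255))  # hue near 180
    return cv2.bitwise_or(lower, upper)

# mask = red_mask(cv2.imread("photo.jpg"))
# red_pixel_count = cv2.countNonZero(mask)
```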
- Published
- 2016
- Full Text
- View/download PDF
43. Electronic spreadsheet to acquire the reflectance from the TM and ETM+ Landsat images
- Author
-
Antonio R. Formaggio, José C. N. Epiphanio, Alfredo J. B. Luiz, and Salete Gürtler
- Subjects
atmospheric correction ,satellite images ,images processing ,digital number ,remote sensing. ,Geography. Anthropology. Recreation ,Cartography ,GA101-1776 - Abstract
The reflectance of agricultural crops and other terrestrial surface "targets" is an intrinsic parameter of these targets, so in many situations it must be used instead of the "gray level" values found in the satellite images. In order to obtain reflectance values, it is necessary to eliminate the atmospheric interference and to make a set of calculations that uses sensor parameters and information regarding the original image. The automation of this procedure has the advantage of speeding up the process and reducing the possibility of errors during the calculations. The objective of this paper is to present an electronic spreadsheet that simplifies and automates the transformation of the digital numbers of TM/Landsat-5 and ETM+/Landsat-7 images into reflectance. The method employed for atmospheric correction was dark object subtraction (DOS). The electronic spreadsheet described here is freely available to users and can be downloaded at the following website: http://www.dsr.inpe.br/Calculo_Reflectancia.xls.
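The calculation chain the spreadsheet automates can be sketched as DN to radiance to top-of-atmosphere reflectance with a simplified dark-object subtraction; the calibration constants (LMIN/LMAX, ESUN) and the dark-object DN below are placeholders, since the real values depend on the sensor, band and acquisition date.

```python
# Simplified DN -> radiance -> TOA reflectance sketch with DOS.
import numpy as np

def dn_to_reflectance(dn, lmin, lmax, esun, sun_elev_deg, d_au, dark_dn=0):
    # Linear calibration from 8-bit digital number to radiance.
    radiance = lmin + (lmax - lmin) * (dn.astype(float) / 255.0)
    # Simplified DOS: subtract the signal of the darkest pixels, taken as
    # an estimate of the atmospheric path contribution.
    dark_radiance = lmin + (lmax - lmin) * (dark_dn / 255.0)
    radiance = np.clip(radiance - dark_radiance, 0.0, None)
    # Top-of-atmosphere reflectance.
    sun_zenith = np.radians(90.0 - sun_elev_deg)
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(sun_zenith))

dn = np.random.randint(0, 256, size=(100, 100))            # fake TM band
rho = dn_to_reflectance(dn, lmin=-1.5, lmax=193.0, esun=1957.0,
                        sun_elev_deg=55.0, d_au=1.0, dark_dn=8)
```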
- Published
- 2005
44. Text and Image: From Book History to “The Book is History”
- Author
-
Gelfand, Julia
- Published
- 2007
- Full Text
- View/download PDF
45. A machine learning-based image processing approach for robotic assembly system
- Author
-
Wang, Xi Vincent, Soriano Pinter, Jaume, Liu, Zhihao, Wang, Lihui, Wang, Xi Vincent, Soriano Pinter, Jaume, Liu, Zhihao, and Wang, Lihui
- Abstract
Due to the boost of machine learning research in recent years, advanced technologies bring new possibilities to robotic assembly systems. Machine learning-based image processing methods show promising potential to tackle the challenges in the assembly process, e.g. object recognition, locating and trajectory planning. Accurate and robust methodologies are needed to guarantee the performance of the assembly tasks. In this research, a machine learning-based image processing method is proposed for the robotic assembly system. It is capable of detecting and locating assembly components based on low-cost image inputs, and of manipulating the industrial robot automatically. A geometry library is also developed as an optional hybrid method towards accurate recognition results when needed. The proposed approach is validated and evaluated via case studies. QC 20220914
- Published
- 2021
- Full Text
- View/download PDF
46. Quantificação semi-automatica da perfusão miocardica em imagens de ecocardiografia com contraste
- Author
-
Lopes, Marden Leonardi, Costa, Eduardo Tavares, 1956, Gutierrez, Marco Antônio, Mathias Junior, Wilson, Bassani, José Wilson Magalhães, Pereira, Wagner Coelho de Albuquerque, Furuie, Sergio Shiguemi, Caldas, Marcia Azevedo, Button, Vera Lúcia da Silveira Nantes, Universidade Estadual de Campinas. Faculdade de Engenharia Elétrica e de Computação, Programa de Pós-Graduação em Engenharia Elétrica, and UNIVERSIDADE ESTADUAL DE CAMPINAS
- Subjects
Ultrassom na medicina ,Ultrassom ,Echocardiography ,Ultrasound ,Processamento de imagens ,Images processing ,Ultrasound in medicine ,Ecocardiografia - Abstract
Orientadores: Eduardo Tavares Costa, Marco Antonio Gutierrez, Wilson Mathias Jr Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação Resumo: Embora alguns equipamentos atuais de imagem por ultra-som. ofereçam ferramentas específicas para estudos de Ecocardíografi a por Contraste de Microbolhas (ECM) e apesar do potencial comprovado da técnica para a análise quantitativa da perfusão miocárdica, seu uso se restringe praticamente à interpretação qualitativa (visual) das imagens clínicas. Este fato é normalmente atribuído à inexistência de métodos de quantificação rápidos, e ao mesmo tempo robustos, para utilização direta na rotina clínica. Os métodos propostos na literatura e alguns softwares, disponibilizados recentemente no mercado requerem, quantificações offlim, principalmente devido à falta ou ineficiência das ferramentas para alinhamento das imagens da seqüência de ECM e para colocação de regiões de interesse (ROls). O objetivo desta tese foi o desenvolvimento de um método rápido e de fácil utilização para quantificação setni-automática da perfusão miocárdica por ECM, cora ênfase na automatização do alinhamento das imagens e da colocação de ROls. Para o alinhamento (translação e rotação) foram desenvolvidos dois algoritmos baseados em Templaíe Matching, técnicas de busca rápida e correlação. A colocação de ROls é feita de forma automática e padronizada a partir de um contorno da parede miocárdica desenhado pelo usuário. Foi implementado um programa para quantificação em ECM com base no método desenvolvido e este protótipo foi testado com 30 seqüências de ECM (570 imagens). Testes quantitativos demostraram precisão média no processo de alinhamento de 1 pixel (para translação) e 1 grau (para rotação), com exatidão aproximada de ± 1 pixel e de± i grau. Testes qualitativos indicaram colocação ótima das ROls em cerca de 67% das seqüências analisadas. De forma gerai, os resultados de quantificação foram equivalentes aos de um processo com alinhamento automático e ajuste manual de ojfsets remanescentes, ou mesmo aos de um processo com alinhamento de quadros totalmente manual. A variabilidade intra-observador verificada foi pequena e estatisticamente insignificante. O tempo de processamento do protótipo baseado no método desenvolvido foi aproximadamente 50% menor que o cie um processo de quantificação equivalente com ajuste manual dos quadros pré-alinhados Abstract: Although some current commercial ultrasound machines incorporate tools for Myocardial Contrast Echocardiography (MCE) and the technique has a great potential for quantitative analysis of myocardial perfusion, its use is pratically restricted to qualitative (visual) interpretation of clinical data. This is due to the lack of fast and robust quantification systems to be used in the clinical practice. Quantification methods found in the literature and some commercial softwares now available demand extra time for offline quantification, mainly due to the lack or inefficiency of images alignment and regions of interest (ROIs) placement. The objective of this thesis was the development of a fast, easy-to-use semi-automatic method for perfusion quantification in MCE, emphasizing the automatization of images alignment and of the placement of regions of interest. To align images (translation and rotation) we have developed two algorithms based on template matching, fast search algorithms and correlation. 
ROIs placement over the myocardium wall is automatic and standardized and starts with the user drawing the myocardium borders. A software tool for MCE quantification based on the developed method has been implemented, and this prototype was tested with thirty MCE sequences (570 images). Quantitative tests have shown a mean precision of 1 pixel (translation) and 1 degree (rotation) in the alignment process, and an accuracy of around ± 1 pixel and ± 1 degree. Qualitative tests have shown optimal placement of ROIs over the myocardium in about 67% of the tested sequences. In general, the quantification results have shown that the method's performance is similar to a quantification process with automatic alignment and manual adjustment of remaining shifts (translation and rotation), or to a process with fully manual alignment of frames. Intra-observer variability was small and statistically insignificant. The computational time of the prototype based on the developed method was around 50% less than that of a similar quantification process with manual adjustment of pre-aligned frames Doutorado Engenharia Biomédica Doutor em Engenharia Elétrica
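The translation part of the template-matching alignment described above can be sketched with OpenCV: a central patch of the reference frame is located in the next frame by normalized cross-correlation, and the offset of the best match gives the shift to undo. Rotation handling, fast search, and ROI placement are not reproduced, and the patch margin is an illustrative assumption.

```python
# Sketch of translation estimation between grayscale frames via template matching.
import cv2
import numpy as np

def estimate_shift(reference, frame, margin=40):
    h, w = reference.shape
    template = reference[margin:h - margin, margin:w - margin]
    result = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc[0] - margin, max_loc[1] - margin   # dx, dy

def align(reference, frame):
    dx, dy = estimate_shift(reference, frame)
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])         # undo the estimated shift
    return cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]))
```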
- Published
- 2021
- Full Text
- View/download PDF
47. Avaliação do teor de clorofila em mudas de cana-de-açúcar por meio de imagens espectrais
- Author
-
Mesa, Nelson Felipe Oliveros, 1992, Teruel Mederos, Barbara Janet, 1966, Tinini, Rodolpho César dos Reis, 1987, Matsura, Edson Eiji, Wetterich, Caio Bruno, Universidade Estadual de Campinas. Faculdade de Engenharia Agrícola, Programa de Pós-Graduação em Engenharia Agrícola, and UNIVERSIDADE ESTADUAL DE CAMPINAS
- Subjects
Reflectância ,Quimiometria ,Mudas ,Seedlings ,Processamento de imagens ,Fluorescência ,Cana-de-açúcar ,Reflectance ,Images processing ,Chemometrics ,Sugarcane ,Fluorescence - Abstract
Orientadores: Barbara Janet Teruel Mederos, Rodolpho Cesar dos Reis Tinini Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola Resumo: O teor de clorofila é um parâmetro amplamente utilizado para o diagnóstico do estado nutritivo da cana-de-açúcar, pois tem correlação com a concentração de nitrogênio que por sua vez é indicativo do rendimento da cultura. Consequentemente, a relação já demonstrada do teor de clorofila com as propriedades óticas das folhas permite a sua estimação a partir de imagens. O presente trabalho tem como objetivo desenvolver um modelo de predição do teor de clorofila em mudas de cana-de-açúcar, baseado no processamento de imagens digitais e espectrais, na região visível do espectro eletromagnético. Para alcançar o objetivo foram utilizadas técnicas de modelagem multivariada correlacionando a resposta espectral da muda com o teor de clorofila. O experimento foi executado em duas etapas com o intuito de, 1) identificar o comportamento espectral das mudas de-cana-de-açúcar sob diferentes fontes de excitação; e 2) correlacionar a resposta espectral da folha com o teor de clorofila usando métodos quimiométricos para desenvolver o melhor modelo preditivo do teor de clorofila aplicável para o monitoramento das mudas de cana-de-açúcar. Obtiveram-se correlações fortes e significativas entre a informação tricromática das imagens e o teor de clorofila. A partir da resposta espectral em absorbância obteve-se um modelo de predição multivariado com métricas satisfatórias de ajuste e erro com o método analítico, quando comparado com o medidor portátil de clorofila SPAD Abstract: The chlorophyll content is a parameter widely used for the sugarcane nutritive state diagnosis, because of being correlated to the nitrogen content which by his side is a yield indicative of the crop. Consequently, the relationship, already demonstrated, between chlorophyll content and leaf optical properties allow its estimation through imagery. The present work has as objective to develop a chlorophyll content predictive model for sugarcane seedlings, based on digital and spectral images processing, in the visible spectra. To achieve the objective, multivariate modelling techniques were implemented, correlating the seedling spectral response with the chlorophyll content. The experiment was executed in two stages aiming to, 1) identify the spectral behavior of the sugarcane seedling under several sources of excitation, and 2) correlate the leaf spectral response with the chlorophyll content using chemometrical methods to develop the best chlorophyll predictive model applicable to the sugarcane seedlings monitoring. Strong and significant correlations were obtained between the images trichromatic information and the chlorophyll content. Through the absorbance spectral response was obtained a multivariate predictive model with better fitting whit the analytical method, when compared to the SPAD chlorophyll meter, for the chlorophyll content measurement Mestrado Água e Solo Mestre em Engenharia Agrícola CNPQ 132405/2016-4
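The chemometric modelling step described above, a multivariate regression from spectral features to measured chlorophyll content, can be sketched with partial least squares, a common chemometric choice; the synthetic data, the number of features, and the number of PLS components below are placeholders, not the thesis' data or model.

```python
# PLS regression sketch: spectral features -> chlorophyll content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))                    # 120 samples x 30 spectral features
y = X[:, :3] @ np.array([0.8, -0.4, 0.3]) + rng.normal(scale=0.1, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R2 on held-out samples:", r2_score(y_te, model.predict(X_te).ravel()))
```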
- Published
- 2021
- Full Text
- View/download PDF
48. Estudo de modelos estatisticos utilizados na caracterização de tecidos por ultra-som
- Author
-
Vivas, Gustavo de Castro, Costa, Eduardo Tavares, 1956, Dantas, Ricardo Grossi, 1974, Bassani, José Wilson Magalhães, Pereira, Wagner Coelho de Albuquerque, Universidade Estadual de Campinas. Faculdade de Engenharia Elétrica e de Computação, Programa de Pós-Graduação em Engenharia Elétrica, and UNIVERSIDADE ESTADUAL DE CAMPINAS
- Subjects
Ultrassom na medicina ,Ultrassom ,Speckle ,Tissue characterization ,Probability density functions ,Ultrasound ,Processamento de imagens ,Distribuição (Probabilidades) ,Images processing - Abstract
Orientadores: Eduardo Tavares Costa, Ricardo Grossi Dantas Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação Resumo: O diagnóstico médico por ultra-som vem sendo amplamente difundido, tornando-se referência em muitos exames clínicos, destacando-se as imagens em modo-B, capazes de representar a anatomia de tecidos e órgãos de forma não-invasiva, em tempo real e sem a utilização de radiação ionizante. Entretanto, o speckle, artefato inerente aos sistemas que utilizam fontes coerentes como nos sistemas de ultra-som, degrada a qualidade das imagens, podendo reduzir bastante a capacidade de detecção de lesões pelo médico. A caracterização de tecidos por ultra-som visa extrair informações de relevância clínica sobre as reais características da estrutura biológica sob investigação e que não podem ser facilmente percebidas por inspeção visual. Neste trabalho foi realizado um estudo comparativo entre os principais modelos de distribuição estatística encontrados na literatura e adotados na caracterização de tecidos por ultra-som. Foram utilizadas funções densidade de probabilidade que melhor representassem o padrão de brilho existente em uma dada região de uma imagem. Os resultados indicaram a versatilidade da distribuição Composta (K-Nakagami) em modelar diferentes condições de espalhamento existentes nos tecidos, mostrando-se uma forte candidata para a caracterização de tecidos por ultra-som. Entretanto, usando o conceito de espalhadores equivalentes, pôde ser mostrado que a abordagem estatística utilizada não fornece parâmetros quantitativos conclusivos sobre a estrutura investigada, mas uma contribuição conjunta de vários fatores, entre eles a densidade e a distribuição de amplitudes dos espalhadores acústicos Abstract: Ultrasound medical diagnosis has been widely used and has become a reference in many clinical examinations, especially B-mode imaging, capable of representing tissue and organ anatomy without ionizing radiation in a non-invasive way and in real-time. However, speckle, an inherent artifact of systems that use coherent sources like ultrasound systems, degrades image quality, leading to subjective and possibly misleading diagnostics. Ultrasonic tissue characterization aims to extract clinical relevant information of the biological structure characteristics under investigation and that cannot be easily achieved by visual inspection. In this dissertation it was carried out a comparative study of the most popular models of statistics distributions found in literature and commonly adopted in ultrasonic tissue characterization. It has been used probability density functions that better represented the brightness pattern of a given region of an ultrasound image. The results indicated the versatility of the Compound distribution (K-Nakagami) in modeling different scattering conditions of tissues, revealing itself a good model for use in ultrasonic tissue characterization. However, using the concept of equivalent scatterers, it could be shown that the statistics approach does not supply conclusive quantitative parameters of the structure under investigation, being a joint contribution of many factors such as density and amplitude distribution of the acoustic scatterers Mestrado Engenharia Biomédica Mestre em Engenharia Elétrica
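The statistical-modelling idea described above, fitting probability density functions to the brightness pattern of an image region, can be illustrated by fitting a Nakagami distribution to envelope amplitudes with SciPy; the compound (K-Nakagami) model from the thesis is not reproduced, and the data below are synthetic.

```python
# Fit a Nakagami PDF to synthetic speckle amplitudes and inspect the shape parameter m.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Fully developed speckle: Rayleigh-distributed envelope (Nakagami with m = 1).
amplitudes = stats.rayleigh.rvs(scale=1.0, size=5000, random_state=rng)

m, loc, scale = stats.nakagami.fit(amplitudes, floc=0)   # fix location at zero
print(f"estimated Nakagami shape m = {m:.2f}")            # expected near 1.0
```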
- Published
- 2021
- Full Text
- View/download PDF
49. Selección óptima de parámetros para algoritmos de detección de obstáculos con visión monocular.
- Author
-
Delgado Morales, Jorge S., Viera López, Gustavo, Rodríguez Gómez, Raúl J., and Serrano Muñoz, Antonio
- Subjects
- *
COMPUTER vision , *METAHEURISTIC algorithms , *MATHEMATICAL optimization , *ARTIFICIAL intelligence , *IMAGE processing , *ROBOTICS , *MOBILE robots - Abstract
One of the most important tasks in the field of mobile robotics is obstacle detection. Computer vision, especially monocular vision, has often been used to solve this task, due to the inherent complexity of stereo vision systems and the growing body of research that uses a single camera to detect obstacles. Computer vision and image processing algorithms for obstacle detection require multiple parameters that need to be adjusted to work efficiently according to the characteristics of the robot and the conditions of the environment in which it operates. In the present work, a method for the optimal selection of the parameters of this kind of algorithm for a given environment is proposed. To achieve that, the obstacle detection problem was modeled as an optimization problem. In addition, two such algorithms based on monocular vision are explained; they are used to validate the proposed method. Results obtained with different metaheuristics for the solution of the stated problem are included. Finally, the results obtained from applying these techniques in different environments are compared. [ABSTRACT FROM AUTHOR]
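The formulation described above, treating the detector's parameters as a vector and searching the parameter space with a metaheuristic, can be sketched with SciPy's differential evolution; the parameter names, their bounds, and the fitness function below are placeholders (a toy quadratic standing in for a real detection-error score), not the paper's algorithms.

```python
# Sketch: tune detector parameters by minimizing a detection-error score.
import numpy as np
from scipy.optimize import differential_evolution

def detection_error(params, calibration_set):
    """Placeholder fitness: lower is better (e.g. 1 - F1 over a calibration set)."""
    threshold, min_area, blur_sigma = params
    # A real implementation would run the monocular obstacle detector here.
    return (threshold - 0.4) ** 2 + (min_area - 150) ** 2 / 1e4 + (blur_sigma - 2) ** 2

bounds = [(0.0, 1.0),      # edge / segmentation threshold
          (10.0, 500.0),   # minimum obstacle area in pixels
          (0.5, 5.0)]      # pre-smoothing sigma

result = differential_evolution(detection_error, bounds, args=(None,),
                                seed=0, maxiter=50)
print("best parameters:", result.x)
```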
- Published
- 2016
50. Application of image processing to assess emulsion stability and emulsification properties of Arabic gum.
- Author
-
Hosseini, Abdullah, Jafari, Seid Mahdi, Mirzaei, Habibollah, Asghari, Ali, and Akhavan, Sahar
- Subjects
- *
IMAGE processing , *EMULSIONS , *GINGIVA , *RESPONSE surfaces (Statistics) , *VISCOSITY , *STABILIZING agents - Abstract
This paper focuses on the development of an effective methodology to determine the optimum levels of independent variables that maximize the stability of O/W emulsions containing Arabic gum as a natural emulsifier and stabilizer. Response surface methodology (RSM) was employed to determine the effect of Arabic gum content (2%, 5%, and 8% (w/w)), homogenization time (5, 12.5, and 20 min) and storage temperature (4, 22, and 40 °C). Image processing was used to determine emulsion stability based on responses including creaming index, centrifugal stability, viscosity, color parameters, and the D₃₂ and D₄₃ indices. For each response, a second-order polynomial model with high coefficient of determination (R²) values ranging from 0.95 to 0.989 was developed using multiple linear regression analysis. The optimization results showed that the overall optimum region with the highest stability was found at the combined levels of 5.81% (w/w) Arabic gum content, 5 min homogenization time, and 22 °C storage temperature. [ABSTRACT FROM AUTHOR]
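The response-surface step described above, a second-order polynomial in the three factors fitted by multiple linear regression and judged by R², can be sketched with scikit-learn; the synthetic measurements below are placeholders, not the paper's data.

```python
# Quadratic response-surface fit: (gum %, time min, temp °C) -> stability response.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.uniform([2, 5, 4], [8, 20, 40], size=(30, 3))   # factor levels
y = 80 - (X[:, 0] - 5.8) ** 2 - 0.1 * (X[:, 2] - 22) ** 2 + rng.normal(0, 1, 30)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print("R² of the second-order model:", model.score(X, y))
```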
- Published
- 2015
- Full Text
- View/download PDF