28 results for '"Mark Whitty"'
Search Results
2. 3DBunch: A Novel iOS-Smartphone Application to Evaluate the Number of Grape Berries per Bunch Using Image Analysis Techniques
- Author
- Xiangdong Zeng, Mark Whitty, and Scarlett Liu
- Subjects
yield estimation ,General Computer Science ,Computer science ,02 engineering and technology ,Smartphone application ,App store ,Image (mathematics) ,bunch analysis ,image analysis ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Computer vision ,Berry counting ,Single image ,Pixel ,business.industry ,iOS application ,General Engineering ,Process (computing) ,Sampling (statistics) ,021001 nanoscience & nanotechnology ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,0210 nano-technology ,business ,lcsh:TK1-9971 - Abstract
Evaluating the number of berries per bunch is a vital step in grape yield estimation in viticulture, but traditional manual measurement is labour intensive. This paper therefore develops a novel smartphone application for counting berries automatically from a single image. The application, called 3DBunch, acquires images from the camera or the album on a smartphone and then estimates the number of berries from a reconstructed 3D bunch model, using the proposed image analysis techniques embedded in the developed iOS app. It also visualises statistics of the reconstructed bunch, including the distribution of detected berry sizes in pixels and the total number of berries, and can record sampling-related information such as the person who conducted the sampling, the location of samples, the dates of sampling, the variety, the farm and the vineyard. The application was evaluated both on a simulator running on a commercial computer and on an iPad mini 4. Analysing 291 bunch images from two varieties, the app achieved an accuracy of 91% for berry counting per bunch. Additionally, the computation time required to process 100 images on the iPad mini 4 was measured, averaging 7.51 seconds per image. Requiring only a smartphone and a small backing board for capturing a photo of a single bunch, 3DBunch provides an efficient way for farmers to count berries in vivo, and it is available in the iOS App Store.
- Published
- 2020
- Full Text
- View/download PDF
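As a loose illustration of the kind of 2D berry detection underlying entry 2 above (the published 3DBunch app reconstructs a 3D bunch model, which is not reproduced here), the following Python sketch counts roughly circular berry candidates in a single bunch image using OpenCV's Hough circle transform. The image path and radius parameters are hypothetical.

```python
# Minimal sketch (not the 3DBunch pipeline): count roughly circular berry
# candidates in a single bunch image with OpenCV's Hough circle transform.
import cv2
import numpy as np

def count_berry_candidates(image_path, min_radius=8, max_radius=40):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=2 * min_radius,
        param1=100, param2=30, minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return 0
    return len(np.round(circles[0]).astype(int))  # one circle per visible berry

if __name__ == "__main__":
    print("berries visible:", count_berry_candidates("bunch.jpg"))  # hypothetical path
```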
3. Low-Cost Filter Selection from Spectrometer Data for Multispectral Imaging Applications
- Author
- Mark Whitty, Julie Tang, and Paul R. Petrie
- Subjects
0209 industrial biotechnology ,Spectrometer ,Computer science ,business.industry ,020208 electrical & electronic engineering ,Multispectral image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Hyperspectral imaging ,02 engineering and technology ,Filter (signal processing) ,Object (computer science) ,020901 industrial engineering & automation ,Transmission (telecommunications) ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,Computer vision ,Artificial intelligence ,business ,Selection (genetic algorithm) - Abstract
Imaging beyond RGB bands has the ability to provide solutions for non-destructive detection of objects or scene properties. Methods often used in research include hyperspectral cameras and spectrometers; the former are expensive and the latter can only provide point measurements. Multispectral imaging can provide low-cost solutions in instances where RGB does not suffice; however, the evaluation of filters used in multispectral sensors, and the use of off-the-shelf filters for user-specific applications, have not been explicitly analysed. This paper proposes a novel method for using spectrometer data by first limiting the search space to existing off-the-shelf filters, modelling those filters, and then applying the desired model for classification or regression to more appropriately model multispectral imaging performance. The results indicate that point measurements produced by spectrometers may not correspond with the response achieved when using off-the-shelf filters, particularly where the spectral profile of the measured object varies significantly within the transmission regions of the filter. The method presented focuses on the practical estimation of filter performance when selecting filters.
- Published
- 2019
- Full Text
- View/download PDF
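The core operation implied by entry 3 above, predicting the band response an off-the-shelf filter would give from a spectrometer measurement, can be sketched as a transmission-weighted integral over wavelength. The wavelength grid, spectrum and filter curve below are hypothetical placeholders, not data from the paper.

```python
# Sketch: simulate an off-the-shelf filter's band response from a spectrometer
# measurement by weighting the spectrum with the filter transmission curve and
# integrating over wavelength. All data here are hypothetical.
import numpy as np

def band_response(wavelengths_nm, spectrum, transmission):
    """Transmission-weighted mean of the spectrum over the filter passband."""
    transmission = np.asarray(transmission, dtype=float)
    signal = np.trapz(transmission * spectrum, wavelengths_nm)
    return signal / np.trapz(transmission, wavelengths_nm)

wl = np.arange(400.0, 1000.0, 1.0)                        # wavelength grid, nm
spectrum = np.exp(-((wl - 680.0) / 60.0) ** 2)            # made-up reflectance spectrum
transmission = np.exp(-0.5 * ((wl - 700.0) / 25.0) ** 2)  # made-up 700 nm bandpass filter
print("simulated filter band response:", band_response(wl, spectrum, transmission))
```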
4. Accelerating Automated Stomata Analysis Through Simplified Sample Collection and Imaging Techniques
- Author
- Vihaan Kaura, Harsh Patel, Luke Millstead, Florence Tomasetig, Hiranya Jayakody, Mark Whitty, and Paul R. Petrie
- Subjects
0106 biological sciences ,Scanner ,Microscope ,Computer science ,Microscope slide ,Plant Science ,lcsh:Plant culture ,01 natural sciences ,Convolutional neural network ,law.invention ,stomata pore measurement ,03 medical and health sciences ,law ,Digital image processing ,stomata sample collection ,Computer vision ,lcsh:SB1-1110 ,stomata analysis pipeline ,microscope imagery ,030304 developmental biology ,Graphical user interface ,Original Research ,0303 health sciences ,business.industry ,fungi ,high-throughput analysis ,Sample collection ,Artificial intelligence ,business ,Focus (optics) ,010606 plant biology & botany - Abstract
Digital image processing is commonly used in plant health and growth analysis, aiming to improve research efficiency and repeatability. One focus is analysing the morphology of stomata, with the aim of better understanding the regulation of gas exchange, its link to photosynthesis and water use, and how these are influenced by climatic conditions. Despite the key role played by these cells, their microscopic analysis is largely manual, requiring intricate sample collection, laborious microscope operation and the manual use of a graphical user interface to identify and measure stomata. This research proposes a simple, end-to-end solution which enables automatic analysis of stomata by introducing key changes to imaging techniques, stomata detection and stomatal pore area calculation. An optimal procedure was developed for sample collection and imaging by investigating the suitability of an automatic microscope slide scanner for imaging nail polish imprints. The slide scanner allows the rapid collection of high-quality images from entire samples with minimal manual effort. A convolutional neural network was used to automatically detect stomata in the input image, achieving average precision, recall and F-score values of 0.79, 0.85, and 0.82 across four plant species. A novel binary segmentation and stomatal cross-section analysis method was developed to estimate the pore boundary and calculate the associated area. The pore estimation algorithm correctly identifies stomatal pores 73.72% of the time. Ultimately, this research presents a fast and simplified method of stomatal assay generation requiring minimal human intervention, enhancing the speed of acquiring plant health information.
- Published
- 2020
5. A Generalised Approach for High-throughput Instance Segmentation of Stomata in Microscope Images
- Author
- Hugo J. de Boer, Paul R. Petrie, Mark Whitty, and Hiranya Jayakody
- Subjects
0106 biological sciences ,0301 basic medicine ,Microscope ,Computer science ,Image processing ,Plant Science ,lcsh:Plant culture ,01 natural sciences ,law.invention ,03 medical and health sciences ,law ,Machine learning ,Genetics ,Segmentation ,lcsh:SB1-1110 ,Pyramid (image processing) ,Automatic stomata detection ,Throughput (business) ,lcsh:QH301-705.5 ,High-throughput analysis ,business.industry ,Methodology ,Pattern recognition ,Mask R-CNN ,030104 developmental biology ,lcsh:Biology (General) ,Feature (computer vision) ,Instance segmentation ,Sample collection ,Artificial intelligence ,Scale (map) ,business ,Microscope imagery ,010606 plant biology & botany ,Biotechnology - Abstract
Background: Stomata analysis using microscope imagery provides important insight into plant physiology, health and the surrounding environmental conditions. Plant scientists are now able to conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, thereby limiting general applicability. In addition, existing methods only generate bounding boxes around detected stomata, which require users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm which can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique and magnification level.
Results: The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour space biases arising from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries. The feature pyramid network embedded in the Mask R-CNN is utilised to identify stomata at different scales. Finally, a statistical filter is applied to the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show the generalisability of the solution. Results indicated that the proposed approach can detect stomata with a precision, recall, and F-score of 95.10%, 83.34%, and 88.61%, respectively. A separate test, comparing estimated stomata boundary values with manually measured data, showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach.
Conclusions: The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis whilst eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be directly deployed in Google Colab or any other Tensorflow environment.
- Published
- 2020
- Full Text
- View/download PDF
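Entry 5 above mentions a statistical filter applied to the Mask R-CNN output to suppress false positives. A minimal sketch of that idea is shown below, with an assumed detection format (a confidence score plus a boolean mask per detection) and assumed thresholds; it is not the paper's exact filter.

```python
# Sketch of a statistical filter over Mask R-CNN detections: drop low-confidence
# detections, then drop detections whose mask area is an outlier relative to the
# other stomata in the same image. Detection format and thresholds are assumptions.
import numpy as np

def filter_detections(detections, min_score=0.5, z_max=2.5):
    kept = [d for d in detections if d["score"] >= min_score]
    if len(kept) < 3:
        return kept
    areas = np.array([d["mask"].sum() for d in kept], dtype=float)
    mu, sigma = areas.mean(), areas.std()
    if sigma == 0:
        return kept
    # keep only detections whose mask area is within z_max standard deviations
    return [d for d, a in zip(kept, areas) if abs(a - mu) / sigma <= z_max]
```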
6. A robust automated flower estimation system for grape vines
- Author
- Julie Tang, Scarlett Liu, Mark Whitty, Paul R. Petrie, Bolai Xin, Xuesong Li, and Hongkun Wu
- Subjects
0106 biological sciences ,Estimation ,Calibration (statistics) ,Machine vision ,business.industry ,Linear model ,Soil Science ,Pattern recognition ,Image processing ,04 agricultural and veterinary sciences ,01 natural sciences ,Global model ,040501 horticulture ,Control and Systems Engineering ,Robustness (computer science) ,Range (statistics) ,Artificial intelligence ,0405 other agricultural sciences ,business ,Agronomy and Crop Science ,010606 plant biology & botany ,Food Science ,Mathematics - Abstract
Automated flower counting systems have recently been developed to process images of grapevine inflorescences, assisting in the critical tasks of determining potential yields early in the season and measuring fruit-set ratios without arduous manual counting. In this paper, we introduce a robust flower estimation system comprising an improved flower candidate detection algorithm, flower classification and, finally, flower estimation using calibration models. These elements of the system have been tested in eight aspects across 533 images with associated manual counts to determine the overall accuracy and how it is affected by experimental conditions. The proposed algorithm for flower candidate detection and classification is superior to all existing methods in terms of accuracy and robustness when compared with images where visible flowers are manually identified. For flower estimation, an accuracy of 84.3% against actual manual counts was achieved both in vivo and ex vivo, and this was found to be robust across the 12 datasets used for validation. A single-variable linear model trained on 13 images outperformed other estimation models and had a suitable balance between accuracy and manual counting effort. Although accurate flower counting depends on the stage of inflorescence development, we found that once inflorescences reach approximately E-L stage 16 this dependency decreases and the same estimation model can be used within a range of about two E-L stages. A global model can be developed across multiple cultivars if they have inflorescences of a similar size and structure.
- Published
- 2018
- Full Text
- View/download PDF
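The single-variable linear calibration model mentioned in entry 6 above can be sketched as a straight-line fit from visible (detected) flower counts to manual counts on a handful of calibration images; the counts below are invented for illustration, not data from the paper.

```python
# Sketch: fit a single-variable linear calibration model mapping visible
# (detected) flower counts to manual counts, then apply it to new detections.
# The counts below are invented.
import numpy as np

visible = np.array([112, 160, 143, 98, 201, 175, 130, 155, 120, 188, 140, 167, 109])
manual = np.array([260, 395, 350, 230, 500, 430, 310, 380, 285, 465, 340, 410, 255])

slope, intercept = np.polyfit(visible, manual, deg=1)  # manual ~ slope * visible + intercept

def estimate_total_flowers(detected_count):
    return slope * detected_count + intercept

print("estimated total flowers:", round(estimate_total_flowers(150)))
```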
7. Smartphone tools for measuring vine water status
- Author
- M.A. Skewes, Mark Whitty, Paul R. Petrie, and Scarlett Liu
- Subjects
Vine ,Measure (data warehouse) ,Stomatal conductance ,Thermal infrared ,Computer science ,Orientation (computer vision) ,business.industry ,Real-time computing ,04 agricultural and veterinary sciences ,Horticulture ,01 natural sciences ,010309 optics ,Software ,Phone ,0103 physical sciences ,3d camera ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,business - Abstract
Smartphones have several advantages over specialist systems for extending crop monitoring technology to small- to medium-scale horticultural operators, including ubiquity, price, user familiarity and the ease of implementing updates. They also contain sufficient computing power that analysis and support software can run on the phone itself. This work evaluated a range of smartphone-based tools for measuring vine water status, leading to the development of the most promising tool into a smartphone application that can be easily used by vineyard managers. Potential systems evaluated included: 1) an infrared camera that is integrated into or connected directly to the smartphone and uses established techniques for the analysis of thermal imagery to assess water status; 2) a portable near-infrared spectrophotometer that interfaces with the phone and measures reflectance across wavelengths for the calculation of water status indices; 3) a 3D camera that is integrated into or connected to the phone via WiFi and can use image analysis to assess the shape or orientation of the leaves; 4) a microscope attached to the smartphone camera, or used as a separate portable unit, that can measure stomatal number and aperture and then calculate stomatal conductance. A trial site with a range of irrigation deficit treatments applied to 'Chardonnay' and 'Cabernet Sauvignon' grapevines was established in the Riverland of South Australia. Water status measurements from the smartphone-based sensors described above were benchmarked against conventional methods including midday stem water potential and stomatal conductance. The thermal infrared camera system was selected as the most accurate and robust option for development into an app, which will be released to selected viticulturists for beta testing in 2017.
- Published
- 2018
- Full Text
- View/download PDF
8. Microscope image based fully automated stomata detection and pore measurement method for grapevines
- Author
- Paul R. Petrie, Mark Whitty, Hiranya Jayakody, and Scarlett Liu
- Subjects
0106 biological sciences ,0301 basic medicine ,Microscope ,Computer science ,Image processing ,Plant Science ,lcsh:Plant culture ,01 natural sciences ,Skeletonization ,law.invention ,03 medical and health sciences ,law ,Machine learning ,Genetics ,Segmentation ,Computer vision ,lcsh:SB1-1110 ,Automatic stomata detection ,lcsh:QH301-705.5 ,Stomata ,business.industry ,Research ,Template matching ,Process (computing) ,Image segmentation ,Object detection ,Grapevines ,030104 developmental biology ,Cascade object detection ,lcsh:Biology (General) ,Stomatal morphology ,Artificial intelligence ,business ,010606 plant biology & botany ,Biotechnology - Abstract
Background: Stomatal behaviour in grapevines has been identified as a good indicator of the water stress level and overall health of the plant. Microscope images are often used to analyse stomatal behaviour in plants. However, most current approaches involve manual measurement of stomatal features. The main aim of this research is to develop a fully automated stomata detection and pore measurement method for grapevines, taking microscope images as the input. The proposed approach, which employs machine learning and image processing techniques, can outperform available manual and semi-automatic methods used to identify and estimate stomatal morphological features.
Results: First, a cascade object detection learning algorithm is developed to correctly identify multiple stomata in a large microscopic image. Once the regions of interest containing stomata are identified and extracted, a combination of image processing techniques is applied to estimate the pore dimensions of the stomata. The stomata detection approach was compared with an existing fully automated template matching technique and a semi-automatic maximum stable extremal regions approach, with the proposed method clearly surpassing the performance of the existing techniques with a precision of 91.68% and an F1-score of 0.85. Next, the morphological features of the detected stomata were measured. Contrary to existing approaches, the proposed image segmentation and skeletonization method allows the pore dimensions to be estimated even in cases where the stomatal pore boundary is only partially visible in the microscope image. A test conducted using 1267 images of stomata showed that the segmentation and skeletonization approach was able to correctly identify the stoma opening 86.27% of the time. Further comparisons with manually traced stoma openings indicated that the proposed method is able to estimate stomatal morphological features with accuracies of 89.03% for area, 94.06% for major axis length, 93.31% for minor axis length and 99.43% for eccentricity.
Conclusions: The proposed fully automated solution for stomata detection and measurement produces results far superior to existing automatic and semi-automatic methods. This method not only produces a low number of false positives in the stomata detection stage, but can also accurately estimate the pore dimensions of partially visible stomata. In addition, it can process thousands of stomata in minutes, eliminating the need for researchers to measure stomata manually and thereby accelerating the process of analysing plant health.
- Published
- 2017
- Full Text
- View/download PDF
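The morphological features reported in entry 8 above (area, major and minor axis length, eccentricity) are standard region properties. The sketch below measures them from a binary pore mask with scikit-image, using a synthetic ellipse in place of a real segmented pore; it stands in for, and is not, the paper's segmentation and skeletonization pipeline.

```python
# Sketch: measure pore area, major/minor axis length and eccentricity from a
# binary mask with scikit-image. The mask is a synthetic ellipse, not a real
# segmented stomatal pore.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(100, 100, 30, 60)  # synthetic pore-like region
mask[rr, cc] = 1

pore = regionprops(label(mask))[0]
print("area:", pore.area)
print("major axis length:", pore.major_axis_length)
print("minor axis length:", pore.minor_axis_length)
print("eccentricity:", pore.eccentricity)
```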
9. A computer vision system for early stage grape yield estimation based on shoot detection
- Author
- Mark Whitty, Scarlett Liu, Julie Tang, Steve Cossell, and Gregory Dunn
- Subjects
Engineering ,Machine vision ,business.industry ,010401 analytical chemistry ,Feature extraction ,Data classification ,Forestry ,Feature selection ,Image processing ,02 engineering and technology ,Horticulture ,01 natural sciences ,0104 chemical sciences ,Computer Science Applications ,Yield (wine) ,0202 electrical engineering, electronic engineering, information engineering ,Unsupervised learning ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Scale (map) ,business ,Agronomy and Crop Science - Abstract
Highlights: A vision system for automated yield estimation and variation mapping is proposed. The proposed method produces an F1-score of 0.90 on average over four experimental blocks. The developed shoot detection does not require manual labeling to build a classifier. The developed system requires only low-cost, off-the-shelf image collection equipment. The best E-L stage for imaging shoots for yield estimation is around E-L stage 9.
Counting grapevine shoots early in the growing season is critical for adjusting management practices but is challenging to automate due to a range of environmental factors. This paper proposes a completely automatic system for grapevine yield estimation, comprising robust shoot detection and yield estimation based on shoot counts produced from videos. Experiments were conducted on four vine blocks across two cultivars and trellis systems over two seasons. A novel shoot detection framework is presented, including image processing, feature extraction, unsupervised feature selection and unsupervised learning as a final classification step. A procedure for converting shoot counts from videos to yield estimates is then introduced. The shoot detection framework accuracy was calculated to be 86.83%, with an F1-score of 0.90 across the four experimental blocks, and was shown to be robust in a range of lighting conditions in a commercial vineyard. The absolute predicted yield estimation error of the system when applied to four blocks over two consecutive years ranged from 1.18% to 36.02% when the videos were filmed around E-L stage 9. The developed system has an advantage over traditional PCD mapping techniques in that yield variation maps can be obtained earlier in the season, thereby allowing farmers to adjust their management practices for improved outputs. The unsupervised feature selection algorithm combined with unsupervised learning removes the requirement for any prior training or labeling, greatly enhancing the applicability of the overall framework and allowing full automation of shoot mapping on a large scale in vineyards.
- Published
- 2017
- Full Text
- View/download PDF
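Entry 9 above replaces supervised training with unsupervised feature selection and learning. A toy sketch of the label-free classification step is given below: candidate regions are clustered by their features with k-means and the greener cluster is taken as shoots. The feature matrix and the greenness heuristic are assumptions for illustration only.

```python
# Toy sketch of label-free shoot classification: cluster candidate regions by
# their extracted features with k-means, then pick the cluster with higher mean
# greenness as the shoot class. Features here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: [mean greenness, area, elongation] per candidate region (synthetic)
features = np.vstack([
    rng.normal([0.8, 500.0, 3.0], [0.05, 80.0, 0.5], size=(50, 3)),  # shoot-like
    rng.normal([0.3, 200.0, 1.2], [0.05, 60.0, 0.3], size=(50, 3)),  # background-like
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
cluster_greenness = [features[kmeans.labels_ == k][:, 0].mean() for k in (0, 1)]
shoot_cluster = int(np.argmax(cluster_greenness))
print("candidates classified as shoots:", int((kmeans.labels_ == shoot_cluster).sum()))
```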
10. Stratification of fertility potential according to cervical mucus symptoms: achieving pregnancy in fertile and infertile couples
- Author
- Marie Marshell, Marian Corkill, Mark Whitty, Joseph V. Turner, and Adrian Thomas
- Subjects
Infertility ,Adult ,medicine.medical_specialty ,media_common.quotation_subject ,030209 endocrinology & metabolism ,Fertility ,Stratification (vegetation) ,03 medical and health sciences ,0302 clinical medicine ,Pregnancy ,medicine ,Humans ,Ovulation ,Menstrual cycle ,media_common ,030219 obstetrics & reproductive medicine ,business.industry ,Obstetrics ,Australia ,Obstetrics and Gynecology ,General Medicine ,medicine.disease ,Cervical mucus ,Reproductive Medicine ,Fertilization ,Cervix Mucus ,Female ,Reproduction ,business - Abstract
Women wishing to conceive are largely unaware of fertility symptoms at the time of ovulation. This study investigated the effectiveness of fertility awareness, particularly the fertile mucus pattern, in achieving pregnancy in the context of infertility. The 384 eligible participants were taken from consecutive women desiring pregnancy who attended 17 Australian Billings Ovulation Method
- Published
- 2019
11. DeepPhenology: Estimation of apple flower phenology distributions based on deep learning
- Author
- Xu Wang, Mark Whitty, and Julie Tang
- Subjects
0106 biological sciences ,Thinning ,Contextual image classification ,Phenology ,business.industry ,Deep learning ,Forestry ,Pattern recognition ,04 agricultural and veterinary sciences ,Horticulture ,01 natural sciences ,Object detection ,Computer Science Applications ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,RGB color model ,Artificial intelligence ,Divergence (statistics) ,business ,Agronomy and Crop Science ,010606 plant biology & botany ,Mathematics ,Block (data storage) - Abstract
Estimation of phenology distribution in horticultural crops is very important, as it governs the timing of chemical thinning in order to produce good-quality fruit. This paper presents a novel phenology distribution estimation method for apple flowers, named DeepPhenology, based on CNNs using RGB images, which is able to efficiently map the flower distribution at the image, row and block level. The image classification model VGG-16 was trained directly with relative phenology distributions calculated from manual counts of flowers in the field and the acquired imagery. The proposed method removes the need to label images, which overcomes the difficulty of distinguishing overlapping flower clusters or identifying hidden flower clusters when using 2D imagery. DeepPhenology was tested on both daytime and night-time images captured using an RGB camera mounted on a ground vehicle, for both Gala and Pink Lady varieties in an Australian orchard. An average Kullback-Leibler (KL) divergence value of 0.23 over all validation sets and an average KL value of 0.27 over all test sets was achieved. Further evaluation compared the proposed model with YOLOv5, showing that it outperforms this state-of-the-art object detection model for this task. By combining relative phenology distributions from single images into a row-level or block-level distribution, we are able to give farmers a precise, high-level overview of block performance to form the basis for decisions on chemical thinning applications.
- Published
- 2021
- Full Text
- View/download PDF
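Entry 11 above evaluates predicted phenology distributions with Kullback-Leibler divergence. The sketch below shows that computation for two discrete distributions over flowering stages; the stage proportions are made up for illustration.

```python
# Sketch: Kullback-Leibler divergence between a predicted phenology
# distribution and the distribution from manual field counts.
# The example proportions are made up.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) for two discrete distributions over the same phenology stages."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

field_counts = [0.05, 0.20, 0.40, 0.25, 0.10]  # proportion of flowers per stage
predicted = [0.08, 0.18, 0.37, 0.27, 0.10]
print("KL divergence:", round(kl_divergence(field_counts, predicted), 4))
```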
12. Embedding metadata in images at time of capture using physical Quick Response (QR) codes
- Author
- Mark Whitty and G.N. Hill
- Subjects
0106 biological sciences ,Correctness ,Computer science ,Library and Information Sciences ,Management Science and Operations Research ,computer.software_genre ,010603 evolutionary biology ,01 natural sciences ,Field (computer science) ,Metadata management ,Media Technology ,Code (cryptography) ,business.industry ,05 social sciences ,Usability ,computer.file_format ,Pipeline (software) ,Computer Science Applications ,Metadata ,Data mining ,Image file formats ,0509 other social sciences ,050904 information & library sciences ,business ,computer ,Information Systems - Abstract
Maintaining metadata records for scientific imaging is challenging because the link between the metadata and the image is labour intensive to create and can easily be broken. We propose a method for using QR codes in images of samples to embed the metadata in an open and robust manner, so that it can be readily extracted on demand. A streamlined process for metadata management is introduced through a novel pipeline for generating QR codes, displaying them in images, reading the QR codes in the images and extracting the metadata for later action, such as renaming the image file. This method was simulated using a range of image types and QR code parameters to identify the limits of various parameter combinations, providing practical insight into code design and usability. The pipeline was also tested with hundreds of images in both laboratory and field situations and proved to be extremely efficient and robust. This method offers potential for anyone taking images of samples who needs to guarantee the existence and correctness of metadata without relying on an external association mechanism.
- Published
- 2021
- Full Text
- View/download PDF
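A minimal version of the pipeline in entry 12 above can be sketched in two steps: generate a QR code carrying the sample metadata so it can be placed in the photo, then decode it from the captured image and rename the file accordingly. The sketch uses the third-party qrcode package and OpenCV's QR detector; the metadata fields and file paths are hypothetical, and this is not the authors' implementation.

```python
# Sketch: (1) encode sample metadata as a QR code to photograph alongside the
# sample, (2) decode the QR code from a captured image and rename the file from
# the embedded metadata. Field names and paths are hypothetical.
import json
import os

import cv2
import qrcode

def make_metadata_qr(metadata, out_path):
    qrcode.make(json.dumps(metadata)).save(out_path)

def rename_from_embedded_metadata(image_path):
    img = cv2.imread(image_path)
    if img is None:
        return image_path
    payload, _, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if not payload:
        return image_path  # no readable QR code in this image
    meta = json.loads(payload)
    new_path = os.path.join(os.path.dirname(image_path),
                            f"{meta['sample_id']}_{meta['date']}.jpg")
    os.rename(image_path, new_path)
    return new_path

make_metadata_qr({"sample_id": "vine_A12", "date": "2020-11-03"}, "label.png")
```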
13. Spatial Map Generation from Low Cost Ground Vehicle Mounted Monocular Camera
- Author
- Scarlett Liu, Julie Tang, Mark Whitty, and Stephen Cossell
- Subjects
0106 biological sciences ,Pixel ,business.industry ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,04 agricultural and veterinary sciences ,01 natural sciences ,Vineyard ,Geography ,Aerial photography ,Control and Systems Engineering ,Position (vector) ,040103 agronomy & agriculture ,Global Positioning System ,0401 agriculture, forestry, and fisheries ,Computer vision ,Artificial intelligence ,Stage (hydrology) ,business ,010606 plant biology & botany ,Block (data storage) ,Remote sensing - Abstract
This paper presents a method for generating a spatial map of a particular plant or environmental property of a vineyard block based on low-cost camera technology and existing vineyard vehicles. Such properties range from leaf area and per-vine bunch count to bare-wire detection. The paper provides a low-cost, ground-vehicle-based solution that does not rely on live GPS position recording. Rather, the relative estimated motion between video frames is used to localise each sensor reading within the bounds of each row. Row-end locations are derived from post-processed GPS recordings of the block perimeter together with an aerial photograph. As a token example, the paper uses the proportion of leaf-coloured pixels in a video frame to measure the relative growth of vines during the shoot stage.
- Published
- 2016
- Full Text
- View/download PDF
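The per-frame measurement used in entry 13 above, the proportion of leaf-coloured pixels, can be sketched with a simple HSV threshold in OpenCV; the green hue bounds and the video path below are rough assumptions rather than the paper's calibration.

```python
# Sketch: proportion of leaf-coloured (green) pixels per video frame via an
# HSV threshold. Hue bounds and the video path are assumptions.
import cv2
import numpy as np

def leaf_pixel_fraction(frame_bgr, lower=(35, 40, 40), upper=(85, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    return float(np.count_nonzero(mask)) / mask.size

cap = cv2.VideoCapture("row_video.mp4")  # hypothetical vineyard row video
fractions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fractions.append(leaf_pixel_fraction(frame))
cap.release()
print("mean leaf-pixel fraction along the row:",
      float(np.mean(fractions)) if fractions else 0.0)
```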
14. Side-view apple flower mapping using edge-based fully convolutional networks for variable rate chemical thinning
- Author
- Julie Tang, Mark Whitty, and Xu Wang
- Subjects
0106 biological sciences ,Pixel ,Thinning ,business.industry ,Machine vision ,Computer science ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Forestry ,Context (language use) ,Pattern recognition ,04 agricultural and veterinary sciences ,Horticulture ,01 natural sciences ,Thresholding ,Computer Science Applications ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,Segmentation ,Artificial intelligence ,F1 score ,business ,Agronomy and Crop Science ,010606 plant biology & botany - Abstract
Apple trees commonly require the removal of excess flowers by thinning to produce high-quality fruit. Machine vision has recently been applied to detect flower density as the first step in this process. Existing work relying on colour thresholding is sensitive to imaging conditions, and the most recently published deep learning work in this context has proven exceptionally slow to process. This paper presents an apple flower segmentation method at the pixel level based on a Fully Convolutional Network (FCN), together with a process for generating a map that can be used by a variable rate chemical sprayer. Despite the challenging conditions of an uncontrolled environment, our apple flower detector achieved a pixel-level F1 score of up to 85.6%, a relatively high accuracy for pixel-level segmentation. The method has been tested on both daytime and night-time datasets, which strongly validates the ability of the apple flower detector to work under different conditions. The resulting detections are georeferenced and merged into a density map in the format required by a variable rate chemical sprayer. This flower density mapping system will benefit farmers by visualising the whole crop and extracting useful information to support their decision making for chemical thinning.
- Published
- 2020
- Full Text
- View/download PDF
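The pixel-level F1 score reported in entry 14 above is computed from pixel-wise true positives, false positives and false negatives between a predicted mask and a ground-truth mask; the sketch below uses random placeholder masks rather than real segmentation output.

```python
# Sketch: pixel-level precision, recall and F1 between a predicted binary
# flower mask and a ground-truth mask. The masks here are random placeholders.
import numpy as np

def pixel_f1(pred_mask, gt_mask):
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

rng = np.random.default_rng(1)
gt = rng.random((480, 640)) > 0.9
pred = np.logical_or(gt, rng.random((480, 640)) > 0.98)
print("pixel-level F1:", round(pixel_f1(pred, gt), 3))
```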
15. Automatic grape bunch detection in vineyards with an SVM classifier
- Author
- Mark Whitty and Scarlett Liu
- Subjects
Scale (ratio) ,Logic ,Computer science ,business.industry ,Applied Mathematics ,Image processing ,Field of view ,Support vector machine ,Svm classifier ,Bunches ,Yield (wine) ,Precision viticulture ,Computer vision ,Artificial intelligence ,business - Abstract
Precise yield estimation in vineyards using image processing techniques has only been demonstrated conceptually on a small scale. Expanding this scale requires significant computational power where, by necessity, only small parts of the images of vines contain useful features. This paper introduces an image processing algorithm that combines colour and texture information with a support vector machine to accelerate fruit detection by isolating and counting bunches in images. Experiments carried out on two varieties of red grapes (Shiraz and Cabernet Sauvignon) demonstrate an accuracy of 88.0% and a recall of 91.6%. This method is also shown to remove the restrictions on field of view and background which plagued existing methods, and is a first step towards precise and reliable yield estimation on a large scale.
- Published
- 2015
- Full Text
- View/download PDF
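The classification step in entry 15 above, an SVM over combined colour and texture features of candidate regions, can be sketched with scikit-learn; the feature vectors and labels below are synthetic stand-ins for real extracted descriptors.

```python
# Sketch: train an SVM to separate bunch regions from background using
# colour-and-texture feature vectors. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
bunch = rng.normal([0.6, 0.2, 0.7], 0.1, size=(200, 3))       # e.g. hue, saturation, texture energy
background = rng.normal([0.3, 0.5, 0.3], 0.1, size=(200, 3))
X = np.vstack([bunch, background])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```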
16. Metric-based detection of robot kidnapping with an SVM classifier
- Author
- Mark Whitty and Dylan Campbell
- Subjects
Computer science ,business.industry ,General Mathematics ,Point cloud ,Pattern recognition ,Mobile robot ,Machine learning ,computer.software_genre ,Computer Science Applications ,Support vector machine ,Discriminative model ,Control and Systems Engineering ,Robot ,Artificial intelligence ,business ,computer ,Pose ,Classifier (UML) ,Software - Abstract
Kidnapping occurs when a robot is unaware that it has not correctly ascertained its position, potentially causing severe map deformation and reducing the robot's functionality. This paper presents metric-based techniques for real-time kidnap detection, utilising either linear or SVM classifiers to identify all kidnapping events during the autonomous operation of a mobile robot. In contrast, existing techniques either solve specific cases of kidnapping, such as elevator motion, without addressing the general case, or remove dependence on local pose estimation entirely, an inefficient and computationally expensive approach. Three metrics that measured the quality of a pose estimate were evaluated, and a joint classifier was constructed by combining the most discriminative quality metric with a fourth metric that measured the discrepancy between two independent pose estimates. A multi-class Support Vector Machine classifier was also trained using all four metrics and produced better classification results than the simpler joint classifier, at the cost of requiring a larger training dataset. While metrics specific to 3D point clouds were used, the approach can be generalised to other forms of data, including visual, provided that two independent ways of estimating pose are available.
Highlights: Two methods for kidnap detection using local pose estimation techniques are proposed. At least two independent ways of estimating relative pose are required. Metrics assessing the quality of a pose estimate are developed and evaluated. For applications with limited training data, a joint classifier performs well. If a large training dataset is available, an SVM classifier is more accurate.
- Published
- 2015
- Full Text
- View/download PDF
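One of the metrics in entry 16 above measures the discrepancy between two independent pose estimates. A minimal sketch of that idea is given below for planar (x, y, heading) poses with arbitrary thresholds; the actual paper combines several such metrics in linear and SVM classifiers rather than using a single threshold.

```python
# Sketch: flag a possible kidnap event when two independent relative-pose
# estimates (e.g. odometry vs. scan matching) disagree by more than a
# threshold. Pose format (x, y, heading in radians) and thresholds are assumptions.
import numpy as np

def pose_discrepancy(pose_a, pose_b):
    """Return (translation error in metres, absolute heading error in radians)."""
    dx, dy = pose_a[0] - pose_b[0], pose_a[1] - pose_b[1]
    dtheta = np.arctan2(np.sin(pose_a[2] - pose_b[2]), np.cos(pose_a[2] - pose_b[2]))
    return float(np.hypot(dx, dy)), abs(float(dtheta))

def possibly_kidnapped(pose_a, pose_b, max_trans=0.5, max_rot=np.deg2rad(10)):
    t_err, r_err = pose_discrepancy(pose_a, pose_b)
    return t_err > max_trans or r_err > max_rot

print(possibly_kidnapped((1.00, 0.10, 0.05), (1.90, 0.15, 0.04)))  # True: estimates disagree
```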
17. A Fast Method to Measure Stomatal Aperture by MSER on Smart Mobile Phone
- Author
- Mark Whitty, Paul R. Petrie, Julie Tang, and Scarlett Liu
- Subjects
Computer science ,business.industry ,Aperture ,Mobile phone ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Astrophysics::Instrumentation and Methods for Astrophysics ,Measure (physics) ,Image processing ,Computer vision ,Artificial intelligence ,Stomatal aperture ,business ,Edge detection - Abstract
A fast image processing method is proposed for detecting stomata and measuring stomatal aperture size in individual images. The accuracy of aperture measurements is 97%. A prototype mobile application is developed to assist field measurements.
- Published
- 2016
- Full Text
- View/download PDF
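The MSER step named in entry 17 above can be sketched with OpenCV's built-in detector: stable regions in the greyscale microscope image are taken as aperture candidates and their pixel areas reported. The parameters and image path are assumptions, not the paper's settings.

```python
# Sketch: detect stable regions (candidate stomatal apertures) with OpenCV's
# MSER and report the pixel area of each. Parameters and path are assumptions.
import cv2

def aperture_candidate_areas(image_path, min_area=60, max_area=5000):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    mser.setMinArea(min_area)
    mser.setMaxArea(max_area)
    regions, _ = mser.detectRegions(gray)
    return [len(region) for region in regions]  # pixel count per stable region

print(aperture_candidate_areas("stomata_microscope.jpg"))  # hypothetical image
```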
18. Internet-based operation of autonomous robots: The role of data replication, compression, bandwidth allocation and visualization
- Author
- Jayantha Katupitiya, Jose Guivant, Stephen Cossell, and Mark Whitty
- Subjects
Bandwidth management ,Dynamic bandwidth allocation ,business.industry ,Computer science ,Real-time computing ,Mobile robot ,Computer Science Applications ,Bandwidth allocation ,Control and Systems Engineering ,Teleoperation ,Bandwidth (computing) ,The Internet ,business ,Data compression - Abstract
The problem of remotely controlling a mobile robot through the Internet, with its associated bandwidth constraints, is addressed in this paper. Our solution combines a novel communication and processing module with a unique sensor layout and a flexible control architecture to achieve a range of capabilities from traditional teleoperation to point-and-click autonomy. Careful management of the available bandwidth enables a demonstration of these capabilities between nodes separated by 20,000 km while also providing real-time three-dimensional (3D) models of the environment through the Internet. A spatially oriented compression algorithm, integral to efficient bandwidth management, is also presented. Experiments establish the effectiveness of the extended situational awareness in improving the efficiency and accuracy of driving a mobile robot through a cluttered environment, compared with existing 2D map or video streaming methods. © 2012 Wiley Periodicals, Inc. (See www.youtube.com/UNSWMechatronics and www.robotics.unsw.edu.au for more information.)
- Published
- 2012
- Full Text
- View/download PDF
19. The National Lung Screening Trial: Overview and Study Design
- Author
-
Natalie Cunningham, Michael Khalili, John Waltz, Ralph Weiben, Deb Gurtner, Linda DeAlmeida, Sanjay Gupta, Sharon Maxfield, Crissy Kibic, Kathleen DeWitt, David DeMets, Walter Allen Bowman, Robert Epstein, Mia Burkhard, Stephen J. Swensen, Hattie Cromwell, Kianoush Rezai, Steadman Sankey, Lisa Scott Wasson, Rita Musanti, Tamim Malbari, Joy Ferola, Qimei He, Patty Trapnell, Melba Francis, Sam Quattlebaum, Joanice Thompson, Ana Birofka, Robin Griggs, Elizabeth Johnson, Margaret R. Spitz, Nicole Richardson, Yuting Liang, Lawrence G. Hutchins, Mirjana Tecmire, Lila Camara, James J. Navin, Eileen Frost, Diane Romano, Carrie Petkus, Eric J. Berns, Pei Jan P Lin, Steve D. Uttecht, Marian Acerra, Lawrence R. Ragard, Leo P. Lawler, Christopher M. Rogers, Alan Lee Goodwin, L. Ellen Martinusen, Melissa Ford, Michael T. Fisher, Beverly Powell, Cindy Lin, Jamie Downs, Brent Fodera, Bonita Wohlers, Michael Brangan, Peggy Bradley, Todd B. Burt, Susan Allen, Shiva Borgheian, Mingying Zeng, Thomas Riley, Danielle Gherardini, Steven Shiff, Olivia Campa, Wahied Gendi, Fang F. Xu, Ivana K. Kazda, Anne Chung, Briar Doi, Helen Price, Maria Vlachou, Alan Morgan, Simone Vuong, Pierre P. Massion, Darcy Watson, Debbie William, Esther Nakano, Karen Broski, David Creed, Melanie Bvorak, Lakisha Hawkins, Gladys Hino, Raymond Dauphinais, Michele Sallas, Helene Shiratori, Venus Brown, Denise Brooks, Heather Porter, Ilana F. Gareen, Tracy Lee, Melissa Cates, Kyle Turner, Tiffanie Hammond, Margaret Paquette, Lorraine Kerchum, Barbara Lewis, Douglas J. Reding, Thomas E. Hartman, Cathy Longden, Melissa Laron, Reza Abaya, Beborah Robertson, J W Semenkovich, Christine Holland, Hugh McGinley, Chani Montalbo, Karen Zubena, Vanessa Ralda, Adam C. Stein, Jennifer Ott, Lawrence M. Kotner, Jing Lee, Arnold Ssali, Michael Young, Quinn A. DeMordaunt, Linda V. White, Steve Dubinett, Pearl Chan, Roxana Phillips, Mallory Kolich, Brent B. Nelson, Phi Do, Jill Spivak, Angele LaFleur, Kesha Smith, Elayne Weslowsky, Patricia Nieters, Maurice LeBlanc, Satinder Singh, Lonna Matthews, Quentin McMullen, Karen Lappe, Sharon Longacre, Cindy Cobb, Jane A. Zehner, Michael Teepe, Pamela M. Marcus, Kathleen Bow, Wendy Francis, Mary Gemmel, Robert S. Fontana, Linda Jurjans, Barbara Ginther, Jonathan B. Clapp, Monica Richel, Scott F. Pickering, Brenda Edwards, Kendrick Looney, Randy Marshall, Roni Atkins, Danielle Wicks, Julie Peterson, Dcanna Cape, Albert J. Cook, Jerry Brekke, Louisa Turner, Larry Stoller, Mark B. Salerno, Bavid E. Midthun, Mark Delano, Minnetta Belyea, Deborah Greene, Jonathan Goldin, Terry Lewis, Virginia Fischer, Andrea Chapman, Shari Jordan, Deb Warren, Demetria Johnson, Rekha Khatri, Lisa Sirianni, Guillermo Geisse, Michael A. Fuchs, Kanya Kumbalasiri, Jeremy J. Erasmus, Vicki Shambaugh, Denise Boyles, Sarah Hallsky, Anna Nanovski, Jill Heinz, Mollie King, Kay Vydareny, Olga Soukhanova, Patricia Rueweler, Perry G. Pernicano, Regina Rendas-Baum, Phyllis Pirotte, Russell Harris, Neil Argyle, Miyoung Kim, June Krebsbach, Audrey Gallego, Sheila Wein, Mukesh F. Karwat, Karla Myra-Bloom, Pamela Byrnes, Mitchell D. Schnall, Hector Ahumada, Eric Sanchez, Donna DesMarais, Julie Maderitz, Cindy Lavergne, Lori Kirchoff, Patricia C. Sanders, Elizabeth Thielke, Michael Sullivan, Jennifer Gaegler, Janet Manual, Jennifer R. Heinz, Ray Zisumbo, Diane C. Strollo, Candace Mueller, Irene Mahon, Brenda Delfosse, Carolyn M. Johnson, William E. Grizzle, Merideth Stanley, Sylvan Green, Pamela Harvey, Lindsay Richardson, Brenda K. 
Brewer, Philip Costello, Deanna Zapolski, John Worrell, Jeffrey G. Schragin, David S. Alberts, Edward L. Korn, Tamara Owens, Hank Brastater, Kay Mathiesen-Viergutz, Mark Broschinsky, Paul W. Spirn, Grace Isaacs, John S. Waltz, Mitch Goodsitt, Christi Newton-Foster, Sharlene Snowden, Barbara Voight, Gail Bizer, Kathy McDonough, William Huynh, Eduard Van Stam, Robert A. Carlson, Mike Florzyk, Paula M. Jacobs, Joan Fuller, Mauren Grunenwald, Ann Bangerter, Jacksonville, Adriane Andersen, Tess Thompson, Kenneth Nowers, Stephanie Helwi, Martin J. Edelman, Emmanuel Omoba, Rubenia Flores, Kevin T. White, Patrick W. Wolfe, Michael Milacek, Sharon Gard, Brandon B. Bigby, Cynthia H. McCollough, Andrew Burnside, Sheryl L. Ogden, Maisha Pollard, Thomas K. Pilgram, Sydney Laster, Claudia J. Kasales, Bruce W. Turnbull, Cheri Haselhuhn, Laura N. Myers, Jean Jacobsen, Melissa Love, Gavin D. Watt, Cheryl Love, Gerald F. Abbott, Susanne Kozakowski, Jerry L. Montague, Cynthia Hill, Neil F. O'Donnell, Anna Sear, Thomas M. Beck, Jean Wegner, Chrispina Wray, Edward M. Brown, Louise Ledbetter, Karen Bellware, Julie Moody, Noel Bahr, Matthew T. Freedman, Thomas Hensley, John E. Madewell, Leanne Hadfield, David R. Maffitt, Lisa Cottrell, John J. Warner, Deborah Graham, Krystal Arnold, Alejandra Reyes, Kristin Lieberman, Derek Omori, Donna Garland, Mike Burek, Mel Johnson, Judith Harkins, Martha Fronheiser, M. Y. M. Chen, Dawn Simmons, Kathleen Voight, Aaron O. Bungum, Marianne Rice, Lakeshia Murray, Tami Krpata, Donna Sammons, Leslie Kmetty, Catherine Duda, Carissa Krzeczkowski, Anne Nguyen, Richard H. Lane, Cynthia Mack, Loren C. Macey, Eddy Wicklander, Kelly McDaniel, Sue Zahradka, Hassan Bourija, Cristina Farkas, Jincy George, Renae Kiffmeyer, Wendell Christie, Catherine Engartner, John Crump, Mimi Kim, Carol Steinberg, Reginald F. Munden, Deb Kirby, Jo Ann Stetz, Barbara O'Brien, Sally Tenorio, Laura Multerer, Carlotta McCalister-Cross, Jessica Silva-Gietzen, Tamara Saunders, Harvey Glazer, Cam Vashel, Maria Oh, Rodkise Estell, Steven M. Moore, Tara Riley, Grant Izmirlian, D. Claire Anderson, James Burner, Steven Peace, Phil Hoffman, Angela Del Pino, Brian Irons, Carlos Jamis-Dow, John K. Lawlor, Edward F. Patz, Jay Afiat, Amber Barrow, Bawn M. Beno, Melissa S. Fritz, Lynn Coppage, Scott J. Sheltra, Tim Swan, Jerry Bergen, Charlie Fenton, Eric Deaton, Marilyn J. Siegel, Korinna Vigeant, Kerry Engber, Sarah Merrill, Buddy Williams, Kimberly Stryker, Bradley S. Snyder, Christina Romo, Andrea Hugill, Michael J. O'Shea, Linda White, Gail Fellows, Yasmeen Hafeez, Joe Woodside, Shauna Dave Scholl, Philip C. Prorok, Sharon Carmen, Kelly Hatton, Steven V. Marx, Sooah Kim, Robert Kobistek, Dawn Thomas, Lea Momongan, Chris Steward, Kari Bohman, Holly Bradford, Bradley S. Sabloff, Phillip Peterson, William C. Black, Lisa Pineda, James G. Ravenel, Karen Taylor, Beverly Trombley, Mona N. Fouad, Amber McDonald, Lauren J. Ramsay, Lisa Harmon, Jeffrey Geiger, David L. Spizarny, Jeffrey S. Klein, Xizeng Wu, Heather Tumberlinson, Joy Espiritu, Gina Varner, Dawn Fuehrer, Eric A. Hoffman, Sheila Moesinger, Nina Wadhwa, Steve King, Patricia Lavernick, Paola Spicker, Timothy R. Church, Cheryl Whistle, Sheila Greenup, Patricia Fantuz, Stephanie Levi, Peter Balkin, Mary E. Johnson, Johanna Ziegler, Susan Hoffman, Kathy L. Clingan, Craig Kuhlka, Maria Marchese, Lawrence F Cohen, Cylen Javidan-Nejad, Wilbur A. Franklin, Kevin J. Leonard, Tim A. 
Parritt, Jade Quijano, Kathleen Poler, Jennifer Rosenbaum, Xiuli Zhang, Christine Brown, Terri David-Schlegel, Susan M. Peterson, James R. Jett, Kenneth W. Clark, Edward P. Gelmann, Arthur Migo, Patricia Fox, Lori Hamm, Janie McMahon, Darlene Guillette, Robert C. Young, Patty Beckmann, Jerome Jones, Nikki Jablonsky, Roberta Yoffie, Heather L. Bradley, Darlene Higgins, Francine L. Jacobson, Christine B. Berg, Mark Bramwitt, Constantine N. Petrochko, Karen Stokes, Jennifer Rowe, Kathy McKeeta-Frobeck, Brenda Sleasman, Courtney Bell, Dave Tripp, Saundra S. Buys, Susan Walsh, Jo Rean D. Sicks, Richard G. Barr, Kirk Midkiff, Tom Caldwell, Elisabeth A. Grady, Subbarao Inampudi, Marilyn Calulot, Paul A. Kvale, Alice DuChateau, Kathy Berreth, Ruth Holdener, Katie Kuenhold, Thomas E. Warfel, David P. Naidich, Mandie Leming, Fraser Wilton, Leanne Franceshelli, Kathleen McMurtrie, Elaine Bowman, Donald F. Bittner, Helen Kaemmerer, Merri Mullennix, Adelheid Lowery, Andrew Karellas, Jenny Hirschy, Kate Naughton, Ashley B. Long, Kristin M. Gerndt, Kathleen Young, Richard M. Schwartzstein, Wendy Smith, Joseph Aisner, Shane Ball, Kathleen Krach, Cathy Mueller, Virginia May, Christopher Blue, Marsha Lawrence, Ronald S. Kuzo, Colleen McGuire, Alisha Moore, Sara Cantrell, Christie Leary, Pamela Allen, Maryann Trotta, Clifford Caughman, Peggy J. Gocala, Brian Mullen, Janan Alkilidar, Maryann Duggan, Lin Mueller, Alesis Nieves, Fenghai Duan, Frederick Olson, Edwin G. Williams, Jo Ann Hall Sky, Grant Izmirilian, Peggy Joyce, Judy Preston, Cristine Juul, Julianne Falcone, Bruce Neilson, Fla Lisa Beagle, Beth Evans, Jamie Mood, Janet Bishop, Jean Tsukamoto, Vivien Gardner, Gillian Devereux, Minesh Patel, Sally Fraki, Celia Stolin, Ami Lyn Taplin, Stephenie Johnson, Saeed Matinkhah, Jenna Bradford, Sanjeev Bhalla, Charles Jackson, Julie Haglage, Darlene R. Fleming, Allie M. Bell, Paul A. Bunn, Gail Orvis, Andrew J. Bierhals, Julie Ngo, Belores K. Prudoehl, Elaine N. Daniel, Peggy Olson, Paul F. Pinsky, Glenna M. Fehrmann, Aras Acemgil, Andrea Hamilton-Foss, Leeta Grayson, Smita Patel, Scott Emerson, Carl J. Zylak, James R. Maxwell, Jennifer Fleischer, Suzanne Smith, Jacqueline R. Sheeran, Alan Williams, Scott Gaerte, John Fletcher, Sonya Clark, Nancy Gankiewicz, Stuart S. Sagel, Jason Spaulding, Nancy E. Hanson, Nicole Fields, Richard D. Nawfel, Dinakar Gopalakrishnan, Margaret Oechsli, Susan Wenmoth, Isabelle Forter, Elizabeth Morrell, Jessica Rider, Letitia Clark, Michael Woo, Cynthia A. Brown, Camille Mueller, Mark T. Dransfield, Lois M. Roberts, Anne Randall, Eduard J. Gamito, Carrie O'Brien, Carolyn Palazzolo, Julie Schach, Robert Falk, Melissa Hudson, Jennifer Garcia Livingston, Cynthia L. Andrist, Tammy Fox, Elliott Drake, Tanya Zeiger, Renee Metz, Kevin Thomas, Neha Kumar, Elizabeth Couch, Beborah Bay, Mei Hsiu Chen, Jason Bronfman, Philip Dennis, Deb Engelhard, Pamela McBride, Daniel Kimball, Amy Haas, Pamela M. Mazuerk, Marlea Osterhout, Venetia Cooke, Tina Taylor, Amy St.Claire, Joe Hughes, Becky McElsain, Beverly Brittain, Michele Adkinson, Paige Beck, Martha Maineiro, Paula R. Beerman, Jackie Seivert, Mary M. Pollock, Donald Corle, Tina Herron, Marcella Petruzzi, Natalie F. Scully, Kenneth A. Coleman, Jennifer Yang, Debra Loria, Wendy Moss, Alan Brisendine, Cheryl M. Lewis, Dalphany Blalock, Lonni Schultz, Douglas Bashford, Nora Szabo, David Shea, Amanda Devore, Karen Schleip, Judy Netzer, Barry Clot, Gerald M. Mulligan, Nancy E. Krieger Black, David Schultz, Jim Pool, Craig E. 
Leymaster, Kathryn Rabanal, Kay Bohn, Tara Berg, Marisol Furlong, Stacey Mitchell, Donna Biracree, Laura Jones, Cassie Olson, Robin Stewart, Jeremy Pierce, Marilyn Bruger, Valene Kennedy, Stephanie Davis, Colin O'Donnell, Glenn A. Tung, Shannon Wright, William Lake, Sharon Jones, Vincent Girardi, Brad Benjamin, Veenu Harjani, Drew A. Torigian, Kevin Edelman, Sue Frederickson, Paul E. Smart, Michelle Wann Haynes, D S Gierada, Glenn Fletcher, Rosalie Ronan, Patricia Ann Street, Eleace Eldridge-Smith, Lynly Wilcox, Cindy Lewis-Burke, La Tonja Davis, Rachel Black Thomas, Dawn Shone, Evangeline Griesemer, Tim Budd, Lindsey Dymond, Marlene Semansky, Amy Rueth, Constantine Gatsonis, Kay H. Vydareny, Usha Singh, Amy Lita Evangelista, Angelica C. Barrett, Bethany Pitino, Shirley Wachholz, Angela M. Williams, Sandra Fiarman, Karen Luttrop, David Chellini, Michael Bradley, Helen Fink, Aaron Zirbes, Roger Inatomi, Joon K. Lee, Heather Bishop Blake, Lisa Woodard, Craig Hritz, Sarah Neff, Aine Marie Kelly, Deborah Harbison, Baigalmaa Yondonsambuu, Amy Lloyd, Christine Gjertson, Erin Cunningham, Angelee Mean, June Morfit, Ping Hu, William Thomas, Jazman Brooke, Paul Marcus, Jeremy Gorelick, Erin Lange, William Stanford, Denise R. Aberle, Lena Glick, Annabelle Lee, Ian Malcomb, Deanna L. Miller, Mary Mesnard, Jacqueline Jackson, Jhenny Hernandez, Desiree E. Morgan, Howard I. Jolies, Jacquie Marietta, Teresa Lanning, Debra Rempinski, Amanda C. Davis, Karen Mathews Batton, Mahadevappa Mahesh, Erik Wilson, Deana Nelson, Sharan L. Campleman, William Manor, Julie Sears, Howard Mann, E. David Crawford, Carl Krinopol, Greg Gambill, Margo Cousins, Rex C. Yung, Sangeeta Tekchandani, Thomas Vahey, Ann D. McGinnis, Kimberly Nolan, Kaylene Crawford, Kelli P. Rockwell, Dana Roeshe, Fred W. Prior, Kari Ranae Kramer, Heidi Nordstrom, Frank Stahan, Shawn Sams, Cherie Baiton, Joy Tani, Thomas J. Watson, Angela Cosas, Diane Kowalik, Pritha Dalal, Ann Jolly, Jeanine Wade, Laura Bailey, Julie Varner, Glen K. Nyborg, Christopher Toyn, David Gemmel, Susanna N. Dyer, Laurie Amendolare, Mary Ellen Frebes, Judy Ho, Adele Perryman, John Keller, D. Sullivan, George Mahoney, Scott Cupp, Linda L. Welch, Peter Greenwald, Robert Sole, Marcello Grigolo, Caroline Chiles, Patricia Sheridan, Deborah M. Chewar, Vijayasri Narayanaswami, Susan Blackwell, Suzanne B. Lenz, Alphonso Dial, Melvin Tockman, Carolyn Hill, John Stubblefield, Catherine E. Smith, Judith Lobaugh, Rosa M. Medina, Jackie Meier, Nandita Bhattacharjee, Robert Tokarz, Lisa Clement, Nancy Caird, Cindy Masiejczyk, Patricia Shwarts, Laura Springhetti, Sandra Schornak-Curtis, Edwin F. Donnelly, Patricia Tesch, Laurie Rathmell, Pamela K. Woodard, Edward A. Sausville, David R. Pickens, Kylee Hansen, Paulette Williams, Barbara Ferris, Rachel L. McCall, Nicole M. Carmichael, Dawn Whistler, Ramachandra Chanapatna, Glynis Marsh, Mary Wiseman, Tony DeAngelis, L. Heather, Vicki Prayer, Robin Laura, Priscilla Bland, Gregory W. Gladish, Amy Garrett, Kelly McNulty, Daniel J. Pluta, Mylene T. Truong, Serelda Young, Crista Cimis, Gordon Jacob Sen, Rhonda Rosario, Anthony B. Miller, Edward Hunt, Juanita Helms, Jill K. Bronson, Jeff Yates, Ginette D. Turgeon, Bo Lu, Nancy Fredericks, Pam Senn, Ryan Pena, Hakan Sahin, Mary Lynn Steele, Jill E. Cordes, Noel Maddy, R. Adam DeBaugh, Hope Hooks, Zipporah Lewis, Robert L. 
Berger, Shani Harris, Natalie Gray, Jennifer Kasecamp, Elizabeth King, Jacinta Mattingly, Hrudaya Nath, Kathy Torrence, Christine Cole Johnson, Sara Mc Clellan, Kalin Albertsen, Kim Sprenger, Ryan Norton, Jody Wietharn Kristopher, Linda Warren, Byung Choi, Casey O'Quinn, Mark K. Haron, Chris J. Jennings, Karen Robinson, Joan Molton, Dorothy Hastings, Robert I. Garver, Christopher J. Cangelosi, Jeannette Lynch, Peter Ohan, Angela Campbell, Dawn Mead, Miriam Galbraith, Divine Hartwell, Natalya Portnov, Gene L. Colice, Andetta R. Hunsaker, Analisa Somoza, Todd Risa, Daniel C. Sullivan, Karthikeyan Meganathan, Tammy DeCoste, Peter Zamora, Richard M. Fagerstrom, Iiana Gareen, Phyllis J. Walters, Barbara L. Carter, Alem Mulugeta, Rob Bowman, Kavita Garg, Andrea Franco, Mary Adams Zafar Awan, Edward Reed Smith, Rachel Phillips, Michelle Aganon-Acheta, Fred R. Hirsch, Peter Jenkins, Pamela Taybus, Joy Knowles, Karen M. Horton, Cheryl Spoutz-Ryan, Sarah Landes, William G. Hocking, Laura B. Schroeder, Erini Makariou, Jered Sieren, Kaylene Evans, Erin Nekervis, Brenda Polding, Tonda Robinson, Joel L. Weissfeld, Terry J. Sackett, Michael F. McNitt-Gray, Leslie Dobson, Raymond Weatherby, Randell Kruger, Revathy B. Iyer, Mary Krisk, Anthony Levering, Susan Collins, Alison Schmidt, William M. Hanson, Patricia Schuler, Karen Glanz, Morgan Ford, Beatrice Trotman-Bickenson, Richard Guzman, Paul Koppel, Judith K. Amorosa, Meredith Slear, Dayna Love, Carol Vaughn, Kellyn Adams, Celeste Monje, Garry Morrison, Sherri Mesquita, Paul Cronin, Tony Blake, Constance Elbon-Copp, Robert A. Clark, Felix Mestas, Erich Allman, Armen Markarian, Cheryl Souza, Karen O’Toole, Elliot K. Fishman, Karen Augustine, Jane Hill, Bonnie Kwit, Ralph Drosten, Susan Foley, Stacy E. Smith, Angie Bailey, Jennifer Bishop Kaufmann, Shelly Meese, Phillip M. Boiselle, Howard Morrow, Thomas D. Hinke, Barry Edelstein, Erin Schuler, William C. Bailey, Donna Letizia, David S. Gierada, Frederick J. Larke, Robin Haverman, Sarah Baum, Sally Hurst, Richard L. Morin, Ben Dickstein, William Russell, J. Anthony Seibert, Sophia Sabina, Mary Alyce Riley, Michael A. Taylor, Katherine BeAngelis, Robert A. Hawkins, Fernando R. Gutierrez, Amie Welch, Heather Lancor, George Armah, James Blaine, Eric Henricks, Joel Dunnington, Carole Walker, Laura Motley, Melody Kolich, Bruce J. Hillman, David W. Sturges, Mindy Lofthouse, Amy Warren, Michael Black, Mark Kolich, Lisa A. Holloway, Shannon M. Pretzel, Susan Shannon, Yassminda Harts, Dallas Sorrel, Lance A. Yokochi, Diana Wisler, Arthur Sandy, Roberta Clune, Shirley Terrian, Shalonda Manning, Bradley Willcox, Thomas J. Payne, James L. Tatum, Dale Brawner, Sandy Morales, Rodolfo C. Morice, Amy Vieth, Emily Jewitt, Chelsea O'Carroll, Theresa C. McLoud, John E. Langenfeld, Chris H. Cagnon, Lisa B. Hinshaw, Gena Kucera, Helena R. Richter, Drew Torigian, June McSwain, Courtney Eysmans, Vinis Salazar, David Spizarny, Mary Kelly-Truran, Mark Whitty, Henry Albano, Connie L. Sathre, William R. Geiser, Barnett S. Kramer, Marianna Gustitis, Gordon C. Jones, Neil E. Caporaso, Timothy Welsh, Roger Tischner, Ana Maria Mendez, Dominick A. Antico, Cathy L. Bornhorst, Carla Chadwell, Stephanie Pawlak, Kelli M. West, Joe V. Selby, Randall Kruger, Jodi Hildestad, Elaine Freesmeier, Nicole Rivas, Andrew Goodman, Naima Vera-Gonzalez, Stuart Lutzker, Eric M. Hart, Melanie Yeh, Shane Sorrell, Deb Multerer, Sharon Jacoby, Debbie Gembala, Elizabeth Fleming, Myrle Johnson, Michael J. Flynn, Frank Tabrah, Martin L. 
Schwartz, Deanna Mandley, Brad Siga, Guillermo Marquez, Jeffrey Koford, Victoria Jenkins, Janice Pitts, Constantine A. Gatsonis, Natalie Baptiste, Edith M. Marom, Gina Sammons, Anne Burrough, Martha Ramirez, Jack Cahill, Carl Jaffe, Linda Heinrichs, Aura Cole, Paul Rust, Alon Coppens, Gregg Hamm, Lisa Conklin, Kathleen A. Robbins, Carleaner Williams, Gwen Chalom, Winston Sterling, Colleen Hudak, Lea Matous, Ella A. Kazerooni, Denise Kriescher, David A. Lynch, Liz Bolan, Jacob Wolf, Jonathan G. Goldin, Roberta Quinn, L. A. Schneider, Kathleen A. Murray, Erica Sturgeon, Jennifer Avrin, Michelle T. Biringer, Mark Hinson, Cynthia Reiners, Brian Chin, Amy Brunst, Ann M. Lambrecht, Katherine Lohmann, Jennifer Bacon, Ulander Giles, Diane Shepherd, William T. Corey, Timothy Cosgrove, Lana C. Walters, Nancy Kadish, Hilary C. Nosker, Christine D. Berg, Thomas Payne, Jackie Becker, Kanistha Sookpisal, Lyn Seguin, Todd R. Hazelton, Roy Adaniya, James Fisher, Annmarie Walsh, Shirleen Hyun, Laura Stark, Kenneth Hansen, Carolyn Nelson, Martin Tammemagi, Mary A. Wolfsberger, Barry H. Gross, Valentina Ortico, Marge Watry, Jeff Childs, Gabe Herron, Loretta Thorpe, Lisa Damon, Evanthia Papadopoulos, Denise Moline, Voula E. Christopoulos, John D. Minna, Tony Jones, Mitchell Machtay, Michael Plunkett, Melissa Laughren, Luis Zagarra, Adam Leming, Eda Ordonez, Chris Howell, Marissa Peters, Wendy Mosiman, Joanne Gerber, Alfonso Lorenzo, Barbara L. McComb, Laura Hill, Gale Christensen, Hanna Comer, Carmen Guzman, Kathy Taylor, Misty Oviatt, Malcolm King, Lily Stone, Rex Welsh, Bernadette Pennetta, Cristina Raver, Jan E. Hyder, Stephanie Clabo, Peggy Lau, Jacqueline Fearon, Patricia Pangburn, Pamela Dow, William K. Evans, Victor De Caravalho, Mike Wirth, Brooke Johnson, Meridith Blevins, Lisa H. Gren, Sharon L. Kurjan, James P. Evans, Kirk E. Smith, Donna King, John A. Worrell, Mindy S. Geisser, Philip F. Judy, Richard Barr, Sue Misko, Stanley R. Phillips, Jillian Nickel, Christine M. McKey, Joe Austin, Donna Hartfeil, Laura Young, Shovonna White, Alexis K. Potemkin, Anthony Boulos, Tawny Martin, Karen Kofka, Heather McLaughlin, Matthew K. Siemionko, Melissa Houston, Angela Lee Rowley, Adys Fernandez, Murray Backer, Jagdish Singh, Mary Weston, Nancy Payte, Charles Apgar, John K. Gohagan, Jeff Fairbanks, Wylie Burke, David Chi, Michael Nahill, Kevin DeMarco, Karen Patella, Beverly Rozanok, Carol M. Moser, Nicole Matetic Mac, Karen Boyle, Dinah Lorenzo, Elanor Adkins, Phyllis Olsson, Amanda M. Adams, Sujaya Rao, K.E. Jones, Polly Kay, D. Lynn Werner, John B. Weaver, Sally Anne Kopesec, Jennifer Frye, Victoria Chun, Cathy Francow, Cheri Whiton, Jo Ann Nevilles, Andrew Bodd, Barbara Galen, Sabrina Chen, Cindy Cyphert, Stephen M. Moore, Petra J. Lewis, Shanna Nichols, Mareie Walters, Thea Palmer Zimmerman, Warren B. Gefter, Peter Dubbs, Ann Reinert, Holly Washburn, Renee MacDonald, Boleyn R. Andrist, Dianalyn M. Evans, Marvin Flores, Tricia Adrales-Bentz, Claudine Isaacs, Regina C. MacDougall, Greg M. Silverman, Nichoie Cadez, Lynne Bradford, Rochelle Williams, Angela M. McLaughlin, Ellen Sandberg, Cheryl Crozier, Robert Mayer, Richard P. Remitz, Sheron Bube, Leroy Riley, Vish Iyer, Sophie Breer, Stephen Baylin, Anna Boyle, Shannon Williams, Kristen Keating, Martin M. Oken, Gerald L. Andriole, Bruce E. Hubler, Eric T. Goodman, David Engelhart, Bonna Au, Brianne Whittaker, Tricia Hoffa, Eng Brown, Tammy Wolfsohn, Denise L. Foster, Barry H. Cohen, Linda Galocy, Matthew T. 
Bee, Jacqueline Matuza, Leslie Henry, Katherine Meagher, Mona Fouad, Beth McLellan, Troy Cook, John Sheflin, Lilian Villaruz, Marcella Moore, Brandy Mack-Pipkin, Vanessa Graves, Ryan Weyls, William T. Herbick, Geoffrey McLennan, Lynn Hoese, Janise Webb, Terrie Kitchner, Michele Lee, Robert T. Greenlee, Charles C. Matthews, Nicole Spiese, Jeffrey Heffernon, Dianna D. Cody, Patricia Blair, Kathy Garrett, Michael A. Sullivan, and Loretta Granger
- Subjects
Oncology ,medicine.medical_specialty ,business.industry ,Mortality rate ,medicine.disease ,law.invention ,Quality-adjusted life year ,Randomized controlled trial ,law ,Internal medicine ,medicine ,Radiology, Nuclear Medicine and imaging ,National Lung Screening Trial ,Radiology ,Overdiagnosis ,business ,Lung cancer ,Lung cancer screening ,Mass screening - Abstract
The National Lung Screening Trial (NLST) is a randomized multicenter study comparing low-dose helical computed tomography (CT) with chest radiography in the screening of older current and former heavy smokers for early detection of lung cancer, which is the leading cause of cancer-related death in the United States. Five-year survival rates approach 70% with surgical resection of stage IA disease; however, more than 75% of individuals have incurable locally advanced or metastatic disease, the latter having a 5-year survival of less than 5%. It is plausible that treatment should be more effective and the likelihood of death decreased if asymptomatic lung cancer is detected through screening early enough in its preclinical phase. For these reasons, there is intense interest and intuitive appeal in lung cancer screening with low-dose CT. The use of survival as the determinant of screening effectiveness is, however, confounded by the well-described biases of lead time, length, and overdiagnosis. Despite previous attempts, no test has been shown to reduce lung cancer mortality, an endpoint that circumvents screening biases and provides a definitive measure of benefit when assessed in a randomized controlled trial that enables comparison of mortality rates between screened individuals and a control group that does not undergo the screening intervention of interest. The NLST is such a trial. The rationale for and design of the NLST are presented.
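To make the lead-time bias argument concrete, the following worked example (with invented dates, purely for illustration) shows why earlier detection lengthens measured survival even when the date of death, and hence mortality, is unchanged:

```python
# Hypothetical illustration of lead-time bias (all dates are invented).
from datetime import date

symptomatic_diagnosis = date(2004, 1, 1)   # diagnosis without screening
screen_detection = date(2002, 1, 1)        # earlier diagnosis via screening
death = date(2005, 1, 1)                   # unchanged by earlier detection

survival_unscreened = (death - symptomatic_diagnosis).days / 365.25
survival_screened = (death - screen_detection).days / 365.25

print(f"Measured survival without screening: {survival_unscreened:.1f} years")
print(f"Measured survival with screening:    {survival_screened:.1f} years")
# Survival appears to triple, yet the person dies on the same day either way,
# so lung cancer mortality in a randomised comparison would be unaffected.
```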
- Published
- 2011
- Full Text
- View/download PDF
20. Design and Development of Micro Aerial Vehicles and Their Cooperative Systems for Target Search and Tracking
- Author
-
Jayantha Katupitiya, Tomonari Furukawa, Makoto Kumon, Mark Whitty, and Lin Chi Mak
- Subjects
Engineering ,Unmanned ground vehicle ,business.industry ,Real-time computing ,Aerospace Engineering ,Ground vehicles ,Tracking (particle physics) ,Base station ,Waypoint ,Microcontroller ,Mechanical stability ,Range (aeronautics) ,business ,Simulation - Abstract
This paper presents Micro Aerial Vehicles (MAVs) and their cooperative systems, including Unmanned Ground Vehicles (UGVs) and a Base Station (BS), which were primarily designed for the 1st US-Asian Demonstration and Assessment on Micro-Aerial and Unmanned Ground Vehicle Technology (MAV08). The MAVs are of coaxial design, which provides mechanical stability both outdoors and indoors while obeying a 30 cm size constraint. Their carbon fibre frames reduce weight, allowing microcontrollers and various sensors to be mounted on-board for tele-operated and waypoint control. The UGVs are similarly equipped to perform their own search and tracking missions, and also to support the MAVs by relaying data between the MAVs and the BS when they are out of direct range. The BS monitors the vehicles and their environment and navigates them autonomously, or with humans in the loop, through the developed GUI. The flight capability of the MAV was demonstrated through continuous hovering, and the efficacy of the overall system was validated by autonomously controlling two UGVs in a cooperative search.
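The relaying behaviour described above can be sketched as a simple range check: telemetry goes directly to the BS when the MAV is within radio range, and otherwise through any UGV that can reach both ends. The range figure and function names below are illustrative assumptions, not details taken from the paper.

```python
import math

COMM_RANGE_M = 100.0  # assumed radio range; not specified in the paper


def dist(a, b):
    """Euclidean distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def route_telemetry(mav_pos, base_pos, ugv_positions):
    """Return 'direct' or the index of a UGV that can relay MAV data to the base."""
    if dist(mav_pos, base_pos) <= COMM_RANGE_M:
        return "direct"
    for i, ugv_pos in enumerate(ugv_positions):
        if dist(mav_pos, ugv_pos) <= COMM_RANGE_M and dist(ugv_pos, base_pos) <= COMM_RANGE_M:
            return i  # relay through this UGV
    return None  # no link available; the MAV should hold position or return


# Example: the MAV is 150 m from the base, but UGV 0 sits halfway between them.
print(route_telemetry((150.0, 0.0), (0.0, 0.0), [(75.0, 0.0), (0.0, 120.0)]))
```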
- Published
- 2009
- Full Text
- View/download PDF
21. A localisation system for an indoor rotary‐wing MAV using blade mounted LEDs
- Author
-
Lin Chi Mak, Tomonari Furukawa, and Mark Whitty
- Subjects
Engineering ,Blade (geometry) ,business.industry ,Tracking (particle physics) ,Ellipse ,Industrial and Manufacturing Engineering ,law.invention ,Rotary wing ,Base station ,law ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Active vision ,business ,Pose ,Light-emitting diode - Abstract
Purpose: The purpose of this paper is to present a localisation system for an indoor rotary-wing micro aerial vehicle (MAV) that uses three on-board LEDs and a base station-mounted active vision unit. Design/methodology/approach: A pair of blade-mounted cyan LEDs and a tail-mounted red LED are used as on-board landmarks. A base station tracks the landmarks and estimates the pose of the MAV in real time by analysing images taken with an active vision unit. In each image, the ellipse formed by the cyan LEDs is used for 5 degree-of-freedom (DoF) pose estimation, with yaw estimation from the red LED providing the sixth DoF. Findings: A localisation error of about 1-3.5 per cent at various ranges, roll angles and angular speeds below 45°/s, relative to a base station at a known location, indicates that the MAV can be accurately localised at 9-12 Hz in an indoor environment. Research limitations/implications: Line-of-sight between the base station and the MAV is necessary, and accuracy of the yaw estimate is limited at long distances. Additional yaw sensors and dynamic zoom are among future work. Practical implications: Provided an unmanned ground vehicle (UGV) serves as the base station and is equipped with its own localisation sensor, the developed system encourages the use of autonomous indoor rotary-wing MAVs in various robotics applications, such as urban search and rescue. Originality/value: The most significant contribution of this paper is the innovative LED configuration allowing full 6 DoF pose estimation using three LEDs, one camera and no fixed infrastructure. The active vision unit enables a wide range of observable flight, as the ellipse generated by the cyan LEDs is recognisable from almost any direction.
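A heavily simplified sketch of the measurement step is given below: an ellipse is fitted to the circle traced by the blade-mounted LEDs, its axis ratio gives the tilt of the rotor disc, and the bearing of the tail LED from the ellipse centre gives a raw yaw observation. Full 5 DoF recovery additionally needs the camera intrinsics and the known rotor-disc radius; the OpenCV-based fitting shown here is an assumed front end, not the authors' implementation.

```python
import math
import numpy as np
import cv2


def observe_pose(cyan_points_px, red_point_px):
    """Fit an ellipse to pixel positions sampled from the blade-mounted cyan LEDs
    and derive a tilt angle plus a raw yaw observation from the red tail LED."""
    pts = np.asarray(cyan_points_px, dtype=np.float32)
    (cx, cy), (d1, d2), _ = cv2.fitEllipse(pts)
    major, minor = max(d1, d2), min(d1, d2)

    # A circle viewed at an angle projects to an ellipse; the minor/major axis
    # ratio therefore encodes the tilt of the rotor disc relative to the camera.
    tilt_rad = math.acos(float(np.clip(minor / major, 0.0, 1.0)))

    # Raw yaw observation: bearing of the tail LED from the disc centre in the image.
    yaw_rad = math.atan2(red_point_px[1] - cy, red_point_px[0] - cx)
    return (cx, cy), tilt_rad, yaw_rad


# Synthetic check: a rotor disc tilted by 30 degrees, tail LED to the "east".
t = np.linspace(0.0, 2.0 * np.pi, 50)
pts = np.c_[200.0 + 80.0 * np.cos(t),
            150.0 + 80.0 * math.cos(math.radians(30.0)) * np.sin(t)]
print(observe_pose(pts, (290.0, 150.0)))
```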
- Published
- 2008
- Full Text
- View/download PDF
22. Employing Android devices for autonomous control of a UGV
- Author
-
Mark Whitty and Russell J. Harding
- Subjects
Software suite ,Data acquisition ,Software ,Computer science ,business.industry ,Control system ,Embedded system ,Global Positioning System ,Android (operating system) ,business ,Mobile device ,Humanoid robot - Abstract
The increasing availability of powerful mobile devices is driving a race to integrate this technology into a diversity of new fields. To explore one such application, which remains largely unexplored, this paper presents an on-board mobile-device-based system for autonomous control of a UGV. Driven by the requirements of the Australian AGVC competition, a multi-application software suite was developed for Android, adapting a number of open-source software libraries for a control system application. The developed system is capable of full soft real-time autonomous control, relying exclusively on the Android device's embedded sensors for data acquisition. Preliminary results indicated a robust capability to perceive, map and plan paths around obstacles, and to perform adequately accurate localisation in real time. Given its minimal computational power and hardware cost, the system not only serves as a proof of concept to drive research for the next generations of mobile devices, but also garnered considerable interest in competition against classical autonomous system setups.
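As a rough, generic illustration of waypoint following on embedded sensors alone, the fragment below turns a GPS fix and a compass heading into speed and steering commands using a proportional controller. It is written in Python for readability (the actual suite is an Android application), and the gains, arrival radius and flat-earth distance approximation are assumptions.

```python
import math

STEER_GAIN = 1.0     # assumed proportional gain on heading error
CRUISE_SPEED = 1.0   # assumed forward speed command, m/s
ARRIVE_RADIUS = 2.0  # metres


def waypoint_command(lat, lon, heading_deg, wp_lat, wp_lon):
    """Compute (speed, steering) towards a waypoint from GPS and compass readings.
    Uses an equirectangular approximation, adequate over competition-scale distances."""
    k = 111_320.0  # metres per degree of latitude (approximate)
    dy = (wp_lat - lat) * k
    dx = (wp_lon - lon) * k * math.cos(math.radians(lat))
    distance = math.hypot(dx, dy)
    if distance < ARRIVE_RADIUS:
        return 0.0, 0.0  # arrived: stop

    bearing = math.degrees(math.atan2(dx, dy))                # 0 deg = north
    error = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return CRUISE_SPEED, STEER_GAIN * math.radians(error)


print(waypoint_command(-33.9170, 151.2310, 45.0, -33.9165, 151.2315))
```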
- Published
- 2015
- Full Text
- View/download PDF
23. Automatic grape bunch detection in vineyards for precise yield estimation
- Author
-
Scarlett Liu, Mark Whitty, and Stephen Cossell
- Subjects
Support vector machine ,Computer science ,business.industry ,Yield (wine) ,Feature extraction ,Computer vision ,Image processing ,Field of view ,Artificial intelligence ,business ,Scale (map) ,Image (mathematics) - Abstract
Precise yield estimation using image processing techniques has been demonstrated conceptually on a small scale. Expanding these solutions to larger-scale applications requires significant computational power, since the entirety of the captured image data must be analyzed; however, many images captured for yield estimation contain only small areas of features useful for analysis. This paper introduces an image processing algorithm that combines color and texture information with a support vector machine to accelerate fruit detection by isolating useful features in images. Experiments carried out on two varieties of red grapes (Shiraz and Cabernet Sauvignon) demonstrate an accuracy of 87% and a recall of 90%. This method is also shown to remove the restrictions on field of view and background that limited existing methods, and is a first step towards precise and reliable yield estimation on a large scale.
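A minimal sketch of the colour-plus-texture classification stage is given below, using scikit-learn's SVM on per-block features. The particular features (mean HSV plus grey-level standard deviation as a texture proxy), the block size and the kernel are assumptions for illustration; they are not the descriptors used in the paper.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

BLOCK = 32  # assumed block size in pixels


def block_features(bgr_image):
    """Split an image into BLOCK x BLOCK tiles and compute a simple
    colour + texture feature per tile: mean H, S, V and grey-level std-dev."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    feats, boxes = [], []
    h, w = grey.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            tile_hsv = hsv[y:y + BLOCK, x:x + BLOCK].reshape(-1, 3)
            tile_grey = grey[y:y + BLOCK, x:x + BLOCK]
            feats.append([*tile_hsv.mean(axis=0), tile_grey.std()])
            boxes.append((x, y, BLOCK, BLOCK))
    return np.array(feats), boxes


# Tiny synthetic training set, standing in for labelled fruit / non-fruit tiles:
rng = np.random.default_rng(0)
X_train = np.r_[rng.normal([110, 120, 60, 25], 5, (50, 4)),   # "fruit-like" tiles
                rng.normal([40, 30, 140, 8], 5, (50, 4))]     # "background-like" tiles
y_train = np.r_[np.ones(50), np.zeros(50)]
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# At detection time, keep only the tiles classified as fruit and run the expensive
# berry-level analysis inside those regions only (file name is a placeholder):
# feats, boxes = block_features(cv2.imread("vine_row.jpg"))
# fruit_boxes = [b for b, label in zip(boxes, clf.predict(feats)) if label == 1]
```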
- Published
- 2015
- Full Text
- View/download PDF
24. Metric-based detection of robot kidnapping
- Author
-
Dylan Campbell and Mark Whitty
- Subjects
Contextual image classification ,business.industry ,Computer science ,Metric (mathematics) ,Robot ,Mobile robot ,Computer vision ,Global Map ,Artificial intelligence ,False positive rate ,business ,Pose ,Mobile robot navigation - Abstract
Kidnapping occurs when a robot is unaware that it has not correctly ascertained its position. As a result, the global map may be severely deformed and the robot may be unable to perform its function. This paper presents a metric-based technique for real-time kidnap detection that utilises a set of binary classifiers to identify all kidnapping events during the autonomous operation of a mobile robot. In contrast, existing techniques either solve specific cases of kidnapping, such as elevator motion, without addressing the general case, or remove dependence on local pose estimation entirely, an inefficient and computationally expensive approach. Four metrics were evaluated and the optimal thresholds for the most suitable metrics were determined, resulting in a combined detector that has a negligible probability of failing to identify kidnapping events and a low false positive rate in both indoor and outdoor environments. While this paper uses metrics specific to 3D point clouds, the approach can be generalised to other forms of data, including visual data, provided that two independent ways of estimating pose are available.
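The metric-plus-threshold idea can be sketched in a few lines: given two independently estimated poses for the same instant (for example, odometry-predicted versus scan-matched), compute a discrepancy and flag kidnapping when it exceeds a threshold. The metric and threshold values below are placeholders, not the four metrics evaluated in the paper.

```python
import math

# Placeholder thresholds; in practice these are tuned per metric from labelled runs.
TRANS_THRESH_M = 0.5
ROT_THRESH_RAD = math.radians(20.0)


def kidnap_detected(pose_a, pose_b):
    """Binary classifier on the discrepancy between two independent 2D pose
    estimates (x, y, theta). Returns True if the robot is likely kidnapped."""
    dx = pose_a[0] - pose_b[0]
    dy = pose_a[1] - pose_b[1]
    dtheta = (pose_a[2] - pose_b[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy) > TRANS_THRESH_M or abs(dtheta) > ROT_THRESH_RAD


# Example: odometry says the robot barely moved, scan matching says it jumped 3 m.
print(kidnap_detected((1.0, 0.0, 0.10), (4.0, 0.2, 0.15)))  # True -> raise kidnap event
```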
- Published
- 2013
- Full Text
- View/download PDF
25. Robotics, Vision and Control. Fundamental Algorithms in MATLAB
- Author
-
Mark Whitty
- Subjects
Control and Systems Engineering ,Computer science ,business.industry ,Control (management) ,Robotics ,Control engineering ,Artificial intelligence ,MATLAB ,business ,computer ,Industrial and Manufacturing Engineering ,Computer Science Applications ,computer.programming_language - Published
- 2012
- Full Text
- View/download PDF
26. Detection of non-flat ground surfaces using V-Disparity images
- Author
-
Jayantha Katupitiya, Jun Zhao, and Mark Whitty
- Subjects
Pixel ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image segmentation ,Object detection ,Image (mathematics) ,Stereopsis ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Motion planning ,Artificial intelligence ,business ,Feature detection (computer vision) ,Ground plane - Abstract
Ground plane detection plays an important role in stereo-vision-based obstacle detection methods, and the V-Disparity image has recently been widely used for this purpose. The existing approach based on the V-Disparity image can detect flat ground successfully but has difficulty detecting non-flat ground. In this paper, we discuss how non-flat ground is represented in the V-Disparity image and, based on this representation, propose a method for detecting non-flat ground using the V-Disparity image.
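The sketch below shows, under simplifying assumptions, how a V-Disparity image is built (one disparity histogram per image row) and how a flat ground plane appears in it as a straight line that can be fitted; the paper's non-flat extension would replace this single line with a piecewise or curved profile.

```python
import numpy as np


def v_disparity(disparity, max_d=128):
    """Accumulate a V-Disparity image: one disparity histogram per image row."""
    rows = disparity.shape[0]
    vd = np.zeros((rows, max_d), dtype=np.int32)
    for v in range(rows):
        d = disparity[v]
        d = d[(d > 0) & (d < max_d)].astype(int)
        np.add.at(vd[v], d, 1)
    return vd


def fit_flat_ground(vd, min_votes=20):
    """Least-squares line v = a*d + b through the dominant disparity of each row.
    A flat ground plane appears as such a line; non-flat ground deviates from it."""
    rows, peaks = [], []
    for v, hist in enumerate(vd):
        if hist.max() >= min_votes:
            rows.append(v)
            peaks.append(hist.argmax())
    a, b = np.polyfit(peaks, rows, 1)
    return a, b


# Synthetic flat ground: disparity grows linearly towards the bottom of the image.
disp = np.tile(np.linspace(1, 60, 240).reshape(-1, 1), (1, 320))
print(fit_flat_ground(v_disparity(disp)))
```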
- Published
- 2009
- Full Text
- View/download PDF
27. Path planning for autonomous bulldozers
- Author
-
Jayantha Katupitiya, Jose Guivant, Masami Hirayama, and Mark Whitty
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Mechanical Engineering ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Development theory ,Grid ,Industrial engineering ,Phase (combat) ,Automation ,Computer Science Applications ,020901 industrial engineering & automation ,Work (electrical) ,Control and Systems Engineering ,Path (graph theory) ,Human operator ,Motion planning ,Electrical and Electronic Engineering ,0210 nano-technology ,business - Abstract
Komatsu Ltd. is reinforcing its R&D of automation technology for earth-movers. In line with this trend, a path planning methodology for autonomous bulldozers is proposed and developed. This methodology autonomously plans an efficient path based on a given material profile. Existing path planning algorithms are designed to be versatile so that they can be applied to any application, typically by using a grid-based map; in practice, however, substantial effort is still required to apply them to specific industrial products. In contrast, the aim of this work is to develop a path planning algorithm specifically suited to bulldozers, starting from the theory development phase. The planning methodology was developed by incorporating industry feedback and successfully resolves the issues that arose when attempting to apply existing planning methodologies. As a result, the developed methodology provides an efficient path that lets bulldozers complete given tasks with minimal operation time, without a human operator on board, and can be applied to commercial machines immediately.
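Purely as an illustration of planning from a material profile, and not the methodology developed in the paper, the fragment below scans a 1D height profile against a target grade and emits contiguous cut segments, deepest first, as candidate push passes.

```python
def plan_push_passes(profile, target, tol=0.05):
    """Given a 1D material height profile (a Python list of heights in metres, one
    value per grid cell along the push direction) and a target grade, return
    contiguous cut segments as (start_index, end_index, mean_depth), deepest first."""
    segments, start = [], None
    for i, h in enumerate(profile + [target]):       # sentinel closes a trailing segment
        if h - target > tol and start is None:
            start = i                                # segment of excess material begins
        elif h - target <= tol and start is not None:
            depths = [profile[j] - target for j in range(start, i)]
            segments.append((start, i - 1, sum(depths) / len(depths)))
            start = None
    return sorted(segments, key=lambda s: s[2], reverse=True)


# Example: a small stockpile between cells 3-6 and a low mound at cell 9.
print(plan_push_passes([0, 0, 0, 0.8, 1.2, 1.0, 0.4, 0, 0, 0.2, 0], target=0.0))
```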
28. A novel Observer-based Architecture for Water Management in Large-Scale (Hazelnut) Orchards
- Author
-
Nicolas Bono Rossello, Andrea Gasparri, Renzo Fabrizio Carpio, Emanuele Garone, Robert Fitch, Jay Katupitiya, and Mark Whitty
- Subjects
0209 industrial biotechnology ,Irrigation ,Estimators ,Computer science ,Greenhouse ,Context (language use) ,02 engineering and technology ,Agricultural engineering ,Water consumption ,Weather station ,020901 industrial engineering & automation ,Water dynamics ,System models ,0202 electrical engineering, electronic engineering, information engineering ,Observers ,Water content ,Automatic control ,Estimator ,Intensive farming ,business.industry ,020208 electrical & electronic engineering ,Agriculture ,Observer ,Control and Systems Engineering ,Precision agriculture ,business ,Kalman Filter - Abstract
Water management is an important aspect of modern agriculture. Irrigation systems are becoming more and more complex, aiming to minimise water consumption while meeting the needs of the plants. A fundamental requirement for defining efficient irrigation policies is the ability to estimate the water status of the plants and of the soil. Precision agriculture addresses this problem by using the latest technological advancements. In particular, most works in the literature aim to develop highly accurate estimates under the assumption that a dense network of sensors is available. Although this assumption may be adequate for intensive farming (e.g. greenhouses), it becomes quite unrealistic in large-scale scenarios. In this work, we propose a novel observer-based architecture for the water management of large-scale (hazelnut) orchards which relies on a network of sparsely deployed soil moisture sensors, a weather station, and remote sensing measurements carried out by drones with a pre-defined periodicity. The contribution is twofold: i) a novel model of the water dynamics in a hazelnut orchard is proposed, which includes the water dynamics in the soil and in the plants, and ii) on the basis of this model and of the available measurements, the use of a Kalman filter with intermittent observations is proposed, taking into account the availability of the weather station measurements. The effectiveness of the proposed solution is validated through simulation.
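The Kalman filter with intermittent observations can be sketched generically as below: the predict step runs at every time step, while the update step is applied only when a soil-moisture or remote-sensing measurement actually arrives. The one-dimensional toy model, its coefficients and the noise levels are invented for illustration; they are not the plant-soil model proposed in the paper.

```python
# Toy 1-D soil-moisture state: x_{k+1} = a*x_k + b*irrigation + w,  z_k = x_k + v
a, b = 0.95, 0.4          # assumed drainage and irrigation coefficients
Q, R = 0.01, 0.25         # assumed process and measurement noise variances


def kf_step(x, P, irrigation, z=None):
    """One Kalman filter cycle; pass z=None on days with no measurement,
    in which case only the prediction is performed (intermittent observations)."""
    # Predict
    x = a * x + b * irrigation
    P = a * P * a + Q
    # Update only if a measurement arrived
    if z is not None:
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1.0 - K) * P
    return x, P


x, P = 0.3, 1.0  # initial soil-moisture estimate and its variance
measurements = [None, None, 0.42, None, None, None, 0.38]  # sparse sensor/drone data
for z in measurements:
    x, P = kf_step(x, P, irrigation=0.05, z=z)
    print(f"estimate={x:.3f}  variance={P:.3f}")
```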
- Published
- 2019
- Full Text
- View/download PDF