Search Results (21 results)
2. Two-Stage Short-Term Power Load Forecasting Based on RFECV Feature Selection Algorithm and a TCN–ECA–LSTM Neural Network.
- Authors: Liang, Hui; Wu, Jiahui; Zhang, Hua; Yang, Jian
- Subjects: LOAD forecasting (Electric power systems); FEATURE selection; CONVOLUTIONAL neural networks; FORECASTING; ALGORITHMS; PREDICTION models; LINEAR network coding
- Abstract
To solve the problem of feature selection and error correction after mode decomposition, and to improve the ability of power load forecasting models to capture complex time-series information, a two-stage short-term power load forecasting method based on recursive feature elimination with cross-validation (RFECV) and a temporal convolutional network–efficient channel attention–long short-term memory network (TCN–ECA–LSTM) is presented. First, the load sequence is decomposed into a relatively stable set of modal components using variational mode decomposition. Then, the RFECV-based method filters the feature set of each modal component to construct the best feature set. Finally, a two-stage prediction model based on TCN–ECA–LSTM is established: the first stage predicts each modal component, and the second stage reconstructs the load forecast from the predictions of the first stage. Using actual data from New South Wales, Australia, as an example, the results show that the proposed method builds the feature set reliably and efficiently and achieves higher accuracy than conventional prediction models. [ABSTRACT FROM AUTHOR]
- Published: 2023
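The RFECV stage described above can be illustrated with a minimal numpy sketch (not the authors' code; the linear scoring model, fold count, and toy data are illustrative assumptions): recursive feature elimination scored by k-fold cross-validation.

```python
import numpy as np

def rfe_cv(X, y, min_features=1, n_folds=3):
    """Greedy recursive feature elimination with cross-validation (RFECV-style):
    repeatedly drop the feature with the smallest least-squares coefficient
    magnitude, score each subset by k-fold CV mean squared error, and return
    the best-scoring subset."""
    rng = np.random.default_rng(0)
    n = X.shape[0]
    active = list(range(X.shape[1]))
    best_score, best_set = np.inf, list(active)
    while True:
        # k-fold cross-validated MSE for the current feature subset
        idx = rng.permutation(n)
        mse = 0.0
        for fold in np.array_split(idx, n_folds):
            train = np.setdiff1d(idx, fold)
            w, *_ = np.linalg.lstsq(X[train][:, active], y[train], rcond=None)
            mse += np.mean((X[fold][:, active] @ w - y[fold]) ** 2)
        mse /= n_folds
        if mse < best_score:
            best_score, best_set = mse, list(active)
        if len(active) == min_features:
            return best_score, best_set
        # eliminate the weakest feature (smallest |coefficient| on all data)
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))

# toy load-forecasting-style data: only features 0 and 1 carry signal
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.normal(size=200)
score, selected = rfe_cv(X, y)
```

On this toy problem the CV score is minimised once the pure-noise features are eliminated, which is the behaviour a per-component feature-selection stage relies on.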
3. Standardising Breast Radiotherapy Structure Naming Conventions: A Machine Learning Approach.
- Authors: Haidar, Ali; Field, Matthew; Batumalai, Vikneswary; Cloak, Kirrily; Al Mouiee, Daniel; Chlap, Phillip; Huang, Xiaoshui; Chin, Vicky; Aly, Farhannah; Carolan, Martin; Sykes, Jonathan; Vinod, Shalini K.; Delaney, Geoffrey P.; Holloway, Lois
- Subjects: SPECIALTY hospitals; HUMAN body; MACHINE learning; RETROSPECTIVE studies; ARTIFICIAL intelligence; CANCER treatment; TERMS & phrases; RESEARCH funding; RADIOTHERAPY; DATA analysis; ARTIFICIAL neural networks; RECEIVER operating characteristic curves; THREE-dimensional printing; BREAST tumors; ONCOLOGY; ALGORITHMS; LONGITUDINAL method; RADIATION dosimetry; DATA mining
- Abstract
Simple Summary: In radiotherapy treatment, organs at risk and target volumes are contoured by clinicians to prepare a dosimetry plan. In retrospective data, these structures are often not standardised to universal names across patients' plans, which is required to enable data mining and analysis. In this paper, a new method was proposed and evaluated to automatically standardise radiotherapy structure names using machine learning algorithms. The proposed approach was deployed over a dataset of 1613 patients collected from Liverpool & Macarthur Cancer Therapy Centres, New South Wales, Australia. It was concluded that machine learning techniques can standardise the dosimetry plan structures, taking into consideration the integration of multiple modalities representing each structure during the training process. In progressing the use of big data in health systems, standardised nomenclature is required to enable data pooling and analyses. In many radiotherapy planning systems and their data archives, target volume (TV) and organ-at-risk (OAR) structure nomenclature has not been standardised. Machine learning (ML) has been utilised to standardise volume nomenclature in retrospective datasets; however, only subsets of the structures have been targeted. In this paper, we propose a new approach for standardising the nomenclature of all structures by using multi-modal artificial neural networks. A cohort of 1613 breast cancer patients treated with radiotherapy was identified from Liverpool & Macarthur Cancer Therapy Centres, NSW, Australia. Four types of volume characteristics were generated to represent each target and OAR volume: textual features, geometric features, dosimetry features, and imaging data. Five datasets were created from the original cohort: the first four represented different subsets of volumes and the last represented the whole list of volumes. For each dataset, 15 combinations of features were generated to investigate the effect of using different characteristics on standardisation performance. The best model reported 99.416% classification accuracy over the hold-out sample when used to standardise all the nomenclatures in a breast cancer radiotherapy plan into 21 classes. Our results show that ML-based automation methods can be used to standardise naming conventions in a radiotherapy plan, taking into consideration the inclusion of multiple modalities to better represent each volume. [ABSTRACT FROM AUTHOR]
- Published: 2023
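As a much-simplified illustration of the standardisation task (the paper uses multi-modal artificial neural networks over textual, geometric, dosimetry, and imaging features; the sketch below uses only textual features, and every structure name and label is hypothetical), a nearest-example classifier over character bigrams already captures the flavour of mapping free-text structure names to standard classes:

```python
from collections import Counter

def bigrams(name):
    """Character-bigram counts of a lowercased structure name."""
    s = name.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    """Cosine similarity between two bigram Counters."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class NameStandardiser:
    """Map free-text structure names to standard labels by cosine similarity
    of character-bigram counts against labelled training examples."""
    def __init__(self, training):
        self.examples = [(bigrams(raw), label) for raw, label in training]

    def predict(self, raw):
        vec = bigrams(raw)
        return max(self.examples, key=lambda e: cosine(vec, e[0]))[1]

# hypothetical raw-name -> standard-label training pairs
training = [
    ("PTV breast", "PTV"), ("ptv_50Gy", "PTV"),
    ("Heart", "HEART"), ("heart_struct", "HEART"),
    ("Lung L", "LUNG_L"), ("lt lung", "LUNG_L"),
]
model = NameStandardiser(training)
```

The paper's result that adding geometric and dosimetry modalities improves accuracy corresponds to concatenating further feature vectors alongside the textual one.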
4. A Clustering Algorithm to Organize Satellite Hotspot Data for the Purpose of Tracking Bushfires Remotely.
- Author
-
Weihao Li, Dodwell, Emily, and Cook, Dianne
- Subjects
- *
WILDFIRES , *ALGORITHMS - Abstract
This paper proposes a spatiotemporal clustering algorithm and its implementation in the R package spotoroo. This work is motivated by the catastrophic bushfires in Australia throughout the summer of 2019-2020 and made possible by the availability of satellite hotspot data. The algorithm is inspired by two existing spatiotemporal clustering algorithms but makes enhancements to cluster points spatially in conjunction with their movement across consecutive time periods. It also allows for the adjustment of key parameters, if required, for different locations and satellite data sources. Bushfire data from Victoria, Australia, is used to illustrate the algorithm and its use within the package. [ABSTRACT FROM AUTHOR]
- Published: 2023
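The clustering idea the abstract describes, grouping hotspots that are close in space across consecutive time periods, can be sketched with union-find (a simplified stand-in for the spotoroo implementation; the distance and time thresholds are hypothetical):

```python
import math

def cluster_hotspots(points, space_km=3.0, time_gap=1):
    """Cluster satellite hotspots spatiotemporally: two hotspots join the same
    cluster (fire) when they lie within space_km of each other and were
    observed at most time_gap periods apart; clusters grow transitively
    via union-find, so a moving fire front stays one cluster."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi, ti) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj, tj = points[j]
            if abs(ti - tj) <= time_gap and math.hypot(xi - xj, yi - yj) <= space_km:
                parent[find(i)] = find(j)

    labels, seen = [], {}
    for i in range(len(points)):
        labels.append(seen.setdefault(find(i), len(seen)))
    return labels

# (x_km, y_km, time_period): one fire moving east, one distant static fire
points = [(0, 0, 0), (1, 0, 1), (2, 0, 2), (50, 50, 0), (51, 50, 1)]
labels = cluster_hotspots(points)
```

Note how the moving fire's first and last hotspots end up in one cluster even though they are more than one time period apart: membership propagates through the intermediate observation, which is the enhancement over purely spatial clustering that the abstract highlights.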
5. A Comparative Review of Recent Kinect-Based Action Recognition Algorithms.
- Authors: Wang, Lei; Huynh, Du Q.; Koniusz, Piotr
- Subjects: HUMAN activity recognition; HUMAN behavior; COMPUTER vision; DEEP learning; ALGORITHMS
- Abstract
Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare 10 recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that the skeleton-based features are more robust for cross-view recognition than the depth-based features, and that the deep learning features are suitable for large datasets. [ABSTRACT FROM AUTHOR]
- Published: 2020
6. A Contextual and Multitemporal Active-Fire Detection Algorithm Based on FengYun-2G S-VISSR Data.
- Authors: Lin, Zhengyang; Chen, Fang; Li, Bin; Yu, Bo; Jia, Huicong; Zhang, Meimei; Liang, Dong
- Subjects: EMERGENCY management; GEOSTATIONARY satellites; ERROR rates; ALGORITHMS; DETECTION limit; FIRE detectors; WILDFIRES
- Abstract
Wildfires are among the most destructive disasters on the planet, and they significantly impact the land surface. Satellite data have been widely used to detect the outbreak and monitor the expansion of fire incidents for damage assessment and disaster management. Polar-orbiting satellite data have been used for several decades, but data from geostationary satellites, which can provide observations with a high temporal resolution, have received much less attention. This paper utilizes data from FengYun-2G, a Chinese geostationary satellite, to detect wildfires in two selected research regions in January 2016. The detection algorithm combines image-based analysis, to filter out obvious nonfire pixels, with temporal analysis to confirm true detections; fire detection is based on comparisons between predicted and observed values. The results show that the proposed method has some advantages over polar-orbiting satellite data, including early detection and continuous observation. Validation was conducted against the Collection 6.1 Global Monthly Fire Location Product generated from fire detections by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The average accuracy within the target time is 56%, while the omission error rate is over 78%. In detail, the algorithm has a lower omission error rate in Australia, while it fails to detect most of the fire pixels in India. The dominance of small fire incidents, together with the low spatial resolution, greatly limits the detection ability: many small fires were beyond the reach of the Stretched Visible and Infrared Spin Scan Radiometer (S-VISSR) data because no significant fire characteristics could be captured. Future development of the algorithm will focus on improving the results by enhancing adaptation to different regions and by including multisource data sets. [ABSTRACT FROM AUTHOR]
- Published: 2019
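A contextual test of the kind the abstract outlines, comparing each pixel's observed value against a background predicted from its neighbours, can be sketched as follows (a generic sketch, not the FengYun-2G algorithm; the window size, k threshold, and 320 K floor are illustrative assumptions):

```python
import numpy as np

def contextual_fire_mask(bt, win=3, k=3.0, abs_min=320.0):
    """Contextual detection sketch: a pixel is flagged as fire when its
    brightness temperature exceeds both an absolute floor and the local
    background mean + k standard deviations, computed over a
    (2*win+1) x (2*win+1) window that excludes the centre pixel."""
    h, w = bt.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            patch = bt[i0:i1, j0:j1].astype(float)
            patch[i - i0, j - j0] = np.nan  # exclude the candidate pixel itself
            mu, sd = np.nanmean(patch), np.nanstd(patch)
            mask[i, j] = bt[i, j] > abs_min and bt[i, j] > mu + k * sd
    return mask

# uniform 300 K background with one hot 400 K pixel
bt = np.full((9, 9), 300.0)
bt[4, 4] = 400.0
mask = contextual_fire_mask(bt)
```

The multitemporal part of the paper's algorithm would then confirm a flagged pixel only if the anomaly persists against values predicted from earlier observations.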
7. Investigation of SMAP Active–Passive Downscaling Algorithms Using Combined Sentinel-1 SAR and SMAP Radiometer Data.
- Authors: He, Lian; Hong, Yang; Wu, Xiaoling; Ye, Nan; Walker, Jeffrey P.; Chen, Xiaona
- Subjects: DATA; SOIL moisture; ALGORITHMS
- Abstract
The aim of this paper was to test the capabilities of Sentinel-1 radar data in downscaling Soil Moisture Active Passive (SMAP) radiometer data for high-resolution soil moisture estimation. Three different active–passive downscaling algorithms, namely the brightness temperature-based downscaling algorithm (BTBDA), the soil moisture-based downscaling algorithm (SMBDA), and a change detection method (CDM), were analyzed using pairs of Sentinel-1 active and SMAP passive observations collected over a semiarid landscape in southeastern Australia from May 2015 to May 2016. While these algorithms have been tested previously, this is the first study to evaluate all three using real Sentinel-1 radar and SMAP radiometer data. The SMAP passive observations were disaggregated to 9-, 3-, and 1-km scales and then compared with ground soil moisture measurements. The results suggest that the root-mean-square error (RMSE) of the downscaled soil moisture at 9-km resolution was 0.057, 0.056, and 0.067 cm3/cm3 for the BTBDA, SMBDA, and CDM, respectively. The accuracy of the downscaling methods generally decreased at finer spatial resolutions. The SMBDA had the best overall performance, correctly detecting the soil moisture pattern with relatively lower RMSE values, and is therefore recommended for the combined Sentinel-1 radar and SMAP radiometer setup for soil moisture monitoring. The influence of incidence angle normalization of the Sentinel-1 SAR data on the downscaled soil moisture was also investigated and found to be minimal. [ABSTRACT FROM AUTHOR]
- Published: 2018
8. An implementation algorithm to improve skin‐to‐skin practice in the first hour after birth.
- Authors: Brimdyr, Kajsa; Cadwell, Karin; Stevens, Jeni; Takahashi, Yuki
- Subjects: ALGORITHMS; CESAREAN section; CHILDBIRTH; DELIVERY (Obstetrics); INFANT health services; OXIMETRY; POSTNATAL care; PUERPERIUM; QUALITY assurance; STATISTICAL sampling; VAGINA; VIDEO recording; PULSE oximeters; SECONDARY analysis; DATA analysis software
- Abstract
Evidence supporting the practice of skin‐to‐skin contact and breastfeeding soon after birth points to physiologic, social, and psychological benefits for both mother and baby. The 2009 revision of Step 4 of the WHO/UNICEF "Ten Steps to Successful Breastfeeding" elaborated on the practice of skin‐to‐skin contact between the mother and her newly born baby, indicating that the practice should be "immediate" and "without separation" unless documented medically justifiable reasons for delayed contact or interruption exist. While in immediate, continuous, uninterrupted skin‐to‐skin contact with the mother in the first hour after birth, babies progress through 9 instinctive, complex, distinct, and observable stages, including self‐attachment and suckling. However, the most recent Cochrane review of early skin‐to‐skin contact cites inconsistencies in the practice; the authors found "inadequate evidence with respect to details … such as timing of initiation and dose." This paper introduces a novel algorithm to analyse the practice of skin‐to‐skin contact in the first hour using two data sets and suggests opportunities for practice improvement. The algorithm considers the mother's Robson criteria, skin‐to‐skin experience, and Widström's 9 Stages. Using data from vaginal births in Japan and caesarean births in Australia, the algorithm utilizes data in a new way to highlight challenges to best practice. The use of a tool to analyse the implementation of skin‐to‐skin care in the first hour after birth illuminates the successes, barriers, and opportunities for improvement in achieving the standard of care for babies. Future application should involve more diverse facilities and Robson's classifications. [ABSTRACT FROM AUTHOR]
- Published: 2018
9. Basic Testing of the duchamp Source Finder.
- Authors: Westmeier, T.; Serra, P.
- Subjects: ALGORITHMS; SIGNALS & signaling; SPECTROMETRY; NOISE
- Abstract
This paper presents and discusses the results of basic source finding tests in three dimensions (using spectroscopic data cubes) with duchamp, the standard source finder for the Australian Square Kilometre Array Pathfinder. For this purpose, we generated different sets of unresolved and extended Hi model sources. These models were then fed into duchamp, using a range of different parameters and methods provided by the software. The main aim of the tests was to study the performance of duchamp on sources with different parameters and morphologies and assess the accuracy of duchamp's source parametrisation. Overall, we find duchamp to be a powerful source finder capable of reliably detecting sources down to low signal-to-noise ratios and accurately measuring their position and velocity. In the presence of noise in the data, duchamp's measurements of basic source parameters, such as spectral line width and integrated flux, are affected by systematic errors. These errors are a consequence of the effect of noise on the specific algorithms used by duchamp for measuring source parameters in combination with the fact that the software only takes into account pixels above a given flux threshold and hence misses part of the flux. In scientific applications of duchamp these systematic errors would have to be corrected for. Alternatively, duchamp could be used as a source finder only, and source parametrisation could be done in a second step using more sophisticated parametrisation algorithms. This paper discusses the results of basic source finding tests with the duchamp source finder on different source models. We find duchamp to be a powerful source finder, capable of reliably detecting sources down to low signal-to-noise ratios. duchamp's measurements of basic source parameters, however, are affected by systematic errors. [ABSTRACT FROM AUTHOR]
- Published: 2012
10. Optimal nutrition therapy in paediatric critical care in the Asia-Pacific and Middle East: a consensus.
- Author
-
Jan Hau Lee, Rogers, Elizabeth, Yek Kee Chorm, Samransamruajkit, Rujipat, Pei Lin Koh, Miqdady, Mohamad, Al-Mehaidib, Ali Ibrahim, Pudjiadi, Antonius, Singhi, Sunit, Mehta, Nilesh M., Lee, Jan Hau, Chor, Yek Kee, and Koh, Pei Lin
- Subjects
- *
DIET therapy , *PEDIATRIC intensive care , *PARENTERAL feeding , *CATASTROPHIC illness , *ALGORITHMS , *CONSENSUS (Social sciences) , *CRITICAL care medicine , *DIETITIANS , *ENTERAL feeding , *INTENSIVE care units , *NUTRITIONAL assessment , *PEDIATRICS , *DIETARY proteins , *SYSTEMATIC reviews , *NUTRITIONAL status , *THERAPEUTICS - Abstract
Background and Objectives: Current practices and available resources for nutrition therapy in paediatric intensive care units (PICUs) in the Asia Pacific-Middle East region are expected to differ from western countries. Existing guidelines for nutrition management in critically ill children may not be directly applicable in this region. This paper outlines consensus statements developed by the Asia Pacific-Middle East Consensus Working Group on Nutrition Therapy in the Paediatric Critical Care Environment. Challenges and recommendations unique to the region are described. Methods and Study Design: Following a systematic literature search from 2004-2014, consensus statements were developed for key areas of nutrient delivery in the PICU. This review focused on evidence applicable to the Asia Pacific-Middle East region. Quality of evidence and strength of recommendations were rated according to the Grading of Recommendation Assessment, Development and Evaluation approach. Results: Enteral nutrition (EN) is the preferred mode of nutritional support. Feeding algorithms that optimize EN should be encouraged and must include: assessment and monitoring of nutritional status, selection of feeding route, time to initiate and advance EN, management strategies for EN intolerance and indications for using parenteral nutrition (PN). Despite heterogeneity in nutritional status of patients, availability of resources and diversity of cultures, PICUs in the region should consider involvement of dieticians and/or nutritional support teams. Conclusions: Robust evidence for several aspects of optimal nutrition therapy in PICUs is lacking. Nutritional assessment must be implemented to document prevalence and impact of malnutrition. Nutritional support must be given greater priority in PICUs, with particular emphasis in optimizing EN delivery. [ABSTRACT FROM AUTHOR]
- Published: 2016
11. A Metric for Performance Evaluation of Multi-Target Tracking Algorithms.
- Authors: Ristic, Branko; Vo, Ba-Ngu; Clark, Daniel; Vo, Ba-Tuong
- Subjects: SIGNAL processing; ALGORITHMS; PERFORMANCE evaluation; ESTIMATION theory; EMAIL systems; NUMERICAL analysis; MATHEMATICAL optimization
- Abstract
Performance evaluation of multi-target tracking algorithms is of great practical importance in the design, parameter optimization and comparison of tracking systems. The goal of performance evaluation is to measure the distance between two sets of tracks: the ground truth tracks and the set of estimated tracks. This paper proposes a mathematically rigorous metric for this purpose. The basis of the proposed distance measure is the recently formulated consistent metric for performance evaluation of multi-target filters, referred to as the OSPA metric. Multi-target filters sequentially estimate the number of targets and their position in the state space. The OSPA metric is therefore defined on the space of finite sets of vectors. The distinction between filtering and tracking is that tracking algorithms output tracks and a track represents a labeled temporal sequence of state estimates, associated with the same target. The metric proposed in this paper is therefore defined on the space of finite sets of tracks and incorporates the labeling error. Numerical examples demonstrate that the proposed metric behaves in a manner consistent with our expectations. [ABSTRACT FROM PUBLISHER]
- Published: 2011
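The base OSPA construction underlying the paper can be sketched for small point sets (the paper's contribution extends OSPA to labelled tracks with a labelling-error term; the sketch below shows only the unlabelled set metric, with brute-force assignment and an illustrative cut-off c):

```python
import itertools
import math

def ospa(X, Y, c=10.0, p=1):
    """OSPA distance between two finite sets of 2-D points (brute-force
    optimal assignment; fine for small sets). Localisation errors are
    cut off at c, and each cardinality mismatch costs c."""
    if len(X) > len(Y):
        X, Y = Y, X          # ensure X is the smaller set
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    # best total localisation cost over all assignments of X into Y
    loc = min(
        sum(min(c, math.dist(x, perm[i])) ** p for i, x in enumerate(X))
        for perm in itertools.permutations(Y, m)
    )
    return ((loc + c ** p * (n - m)) / n) ** (1.0 / p)

truth = [(0.0, 0.0), (5.0, 5.0)]
est_good = [(0.1, 0.0), (5.0, 5.2)]   # both targets found, small errors
est_missed = [(0.0, 0.0)]             # one target missed entirely
```

A missed target is penalised at the full cut-off c (averaged over the larger cardinality), which is why the metric behaves consistently as estimates degrade.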
12. Empirical aspects of record linkage across multiple data sets using statistical linkage keys: the experience of the PIAC cohort study.
- Authors: Karmel, Rosemary; Anderson, Phil; Gibson, Diane; Peut, Ann; Duckett, Stephen; Wells, Yvonne
- Subjects: COMMUNITY services; COHORT analysis; MEDICAL record linkage; ELDER care; ALGORITHMS
- Abstract
Background: In Australia, many community service program data collections developed over the last decade, including several for aged care programs, contain a statistical linkage key (SLK) to enable derivation of client-level data. In addition, a common SLK is now used in many collections to facilitate the statistical examination of cross-program use. In 2005, the Pathways in Aged Care (PIAC) cohort study was funded to create a linked aged care database using the common SLK to enable analysis of pathways through aged care services. Linkage using an SLK is commonly deterministic. The purpose of this paper is to describe an extended deterministic record linkage strategy for situations where there is a general person identifier (e.g. an SLK) and several additional variables suitable for data linkage. This approach can allow for variation in client information recorded on different databases. Methods: A stepwise deterministic record linkage algorithm was developed to link datasets using an SLK and several other variables. Three measures of likely match accuracy were used: the discriminating power of match key values, an estimated false match rate, and an estimated step-specific trade-off between true and false matches. The method was validated through examining link properties and clerical review of three samples of links. Results: The deterministic algorithm resulted in up to an 11% increase in links compared with simple deterministic matching using an SLK. The links identified are of high quality: validation samples showed that less than 0.5% of links were false positives, and very few matches were made using non-unique match information (0.01%). There was a high degree of consistency in the characteristics of linked events. Conclusions: The linkage strategy described in this paper has allowed the linking of multiple large aged care service datasets using a statistical linkage key while allowing for variation in its reporting.
More widely, our deterministic algorithm, based on statistical properties of match keys, is a useful addition to the linker's toolkit. In particular, it may prove attractive when insufficient data are available for clerical review or follow-up, and the researcher has fewer options in relation to probabilistic linkage. [ABSTRACT FROM AUTHOR]
- Published: 2010
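The stepwise deterministic strategy described above can be sketched as follows (the key layout follows the common SLK-581 convention, but the specific steps, records, and padding rule here are illustrative assumptions, not the PIAC algorithm):

```python
def slk581(last, first, dob, sex):
    """Build an SLK-581-style linkage key: letters 2, 3, 5 of the surname,
    letters 2, 3 of the first name, date of birth (YYYYMMDD) and sex,
    padding short names with '2' (a common Australian convention)."""
    def pick(name, idxs):
        s = "".join(ch for ch in name.upper() if ch.isalpha())
        return "".join(s[i] if i < len(s) else "2" for i in idxs)
    return pick(last, [1, 2, 4]) + pick(first, [1, 2]) + dob + sex

def stepwise_link(left, right):
    """Stepwise deterministic linkage: try progressively looser match keys;
    each left record links at the first (strictest) step that matches, and
    key collisions keep the first right record seen."""
    steps = [
        lambda slk: slk,                 # step 1: full SLK must agree
        lambda slk: slk[:5] + slk[13:],  # step 2: ignore DOB (chars 5-12)
    ]
    links = {}
    for step_no, key in enumerate(steps, start=1):
        index = {}
        for rid, slk in right.items():
            index.setdefault(key(slk), rid)
        for lid, slk in left.items():
            if lid not in links and key(slk) in index:
                links[lid] = (index[key(slk)], step_no)
    return links

left = {
    "a": slk581("Smith", "John", "19500317", "M"),
    "b": slk581("Jones", "Mary", "19620101", "F"),
}
right = {
    "x": slk581("Smith", "John", "19500318", "M"),  # DOB differs by one day
    "y": slk581("Jones", "Mary", "19620101", "F"),
}
links = stepwise_link(left, right)
```

Recording which step produced each link is what lets the paper's method weigh the step-specific trade-off between true and false matches.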
13. Field-Scale Assessment of Uncertainties in Drip Irrigation Lateral Parameters.
- Authors: Gyasi-Agyei, Yeboah
- Subjects: IRRIGATION; SOIL erosion; HYDRAULICS; RAILROADS; ALGORITHMS
- Abstract
Grass establishment on railway embankment steep slopes for erosion control in Central Queensland, Australia, is aided by drip lateral irrigation systems. The effective field values of the lateral parameters may differ from the manufacturer-supplied ones due to manufacturing variations of the emitters, environmental factors, and water quality. This paper provides a methodology for estimating drip lateral effective parameter values under field conditions. The hydraulic model takes into account the velocity head change and a proper selection of the friction coefficient formula based on the Reynolds number. Fittings and emitter insertion head losses were incorporated into the hydraulic model. Pressure measurements at some locations within the irrigation system, and the inlet discharges, were used to calibrate the lateral parameters in a statistical framework that allows estimation of parameter uncertainties using the Metropolis algorithm. It is observed that the manufacturer-supplied parameters were significantly different from the calibrated ones, underestimating pressures within the irrigation system for a given inlet discharge, stressing the need for field testing. The parameter posterior distributions were found to be unimodal and nearly normally distributed. The emitter head loss coefficient distribution being very significant suggests the need to incorporate it into the hydraulic modeling. Although the example given in this paper relates to steep slopes, the methodologies are general and can be applied to any use of drip laterals. [ABSTRACT FROM AUTHOR]
- Published: 2007
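The calibration machinery the abstract relies on, a Metropolis sampler over lateral parameters, can be sketched on a one-parameter toy model (the pressure model, noise level, and data here are invented for illustration; the paper's hydraulic model is far richer):

```python
import math
import random

def metropolis(log_post, x0, step=0.1, n_iter=8000, seed=42):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# toy calibration: observed pressures y = a * x with noise; infer a
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]  # synthetic readings generated near a = 2

def log_post(a):
    # flat prior + Gaussian likelihood with assumed sigma = 0.1
    return -sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2 * 0.1 ** 2)

samples = metropolis(log_post, x0=1.0)
post_mean = sum(samples[2000:]) / len(samples[2000:])  # discard burn-in
```

The retained samples approximate the parameter posterior, so their spread directly gives the parameter uncertainties the paper reports (unimodal, near-normal posteriors in its case).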
14. Impact of Carbon Prices on Wholesale Electricity Prices and Carbon Pass-Through Rates in the Australian National Electricity Market.
- Authors: Wild, Phillip; Bell, William Paul; Foster, John
- Subjects: CARBON offsetting; ELECTRIC rates; ELECTRIC industries; MULTIAGENT systems; ALGORITHMS
- Abstract
This paper investigates the effect of a carbon price on wholesale electricity prices and carbon pass-through rates in the states comprising the Australian National Electricity Market (NEM). The methodology utilizes an agent-based model, which contains many features salient to the NEM, including intra-state and inter-state transmission branches, the regional location of generators and load centres, and accommodation of unit commitment features. The model uses a Direct Current Optimal Power Flow (DC OPF) algorithm to determine the optimal dispatch of generation plant, power flows on transmission branches, and wholesale prices. The results include sensitivity analysis of carbon prices on wholesale prices and carbon pass-through rates for different states within the NEM. [ABSTRACT FROM AUTHOR]
- Published: 2015
15. Application of the extreme learning machine algorithm for the prediction of monthly Effective Drought Index in eastern Australia.
- Authors: Deo, Ravinesh C.; Şahin, Mehmet
- Subjects: MACHINE learning; DROUGHTS; CLIMATE change mitigation; WATER supply; PREDICTION theory; ALGORITHMS
- Abstract
The prediction of future drought is an effective mitigation tool for assessing adverse consequences of drought events on vital water resources, agriculture, ecosystems and hydrology. Data-driven model predictions using machine learning algorithms are promising tenets for these purposes as they require less developmental time, minimal inputs and are relatively less complex than the dynamic or physical model. This paper authenticates a computationally simple, fast and efficient non-linear algorithm known as extreme learning machine (ELM) for the prediction of Effective Drought Index (EDI) in eastern Australia using input data trained from 1957–2008 and the monthly EDI predicted over the period 2009–2011. The predictive variables for the ELM model were the rainfall and mean, minimum and maximum air temperatures, supplemented by the large-scale climate mode indices of interest as regression covariates, namely the Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and the Indian Ocean Dipole moment. To demonstrate the effectiveness of the proposed data-driven model a performance comparison in terms of the prediction capabilities and learning speeds was conducted between the proposed ELM algorithm and the conventional artificial neural network (ANN) algorithm trained with Levenberg–Marquardt back propagation. The prediction metrics certified an excellent performance of the ELM over the ANN model for the overall test sites, thus yielding Mean Absolute Errors, Root-Mean Square Errors, Coefficients of Determination and Willmott's Indices of Agreement of 0.277, 0.008, 0.892 and 0.93 (for ELM) and 0.602, 0.172, 0.578 and 0.92 (for ANN) models. Moreover, the ELM model was executed with learning speed 32 times faster and training speed 6.1 times faster than the ANN model. An improvement in the prediction capability of the drought duration and severity by the ELM model was achieved. 
Based on these results we aver that out of the two machine learning algorithms tested, the ELM was the more expeditious tool for prediction of drought and its related properties. [ABSTRACT FROM AUTHOR]
- Published: 2015
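The ELM idea the abstract leverages, and the reason for its training-speed advantage over backpropagation-trained ANNs, can be sketched in a few lines (a generic regression sketch on invented data, not the EDI model or its predictors):

```python
import numpy as np

def elm_train(X, y, hidden=50, seed=0):
    """Extreme learning machine: random (untrained) input weights, sigmoid
    hidden layer, and output weights solved in a single least-squares step,
    which is why ELM trains much faster than backpropagation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

# toy drought-index-style regression: y = sin(x1) + 0.5 * x2
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = elm_train(X[:200], y[:200])
pred = elm_predict(X[200:], W, b, beta)
rmse = float(np.sqrt(np.mean((pred - y[200:]) ** 2)))
```

Because only the output layer is fitted, and by a closed-form solve rather than iterative gradient descent, the large speed-ups over the Levenberg-Marquardt-trained ANN reported in the abstract are plausible by construction.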
16. Spatial interpolation of McArthur's Forest Fire Danger Index across Australia: Observational study.
- Authors: Sanabria, L.A.; Qin, X.; Li, J.; Cechet, R.P.; Lucas, C.
- Subjects: FOREST fire management; INTERPOLATION; SPATIAL ecology; ALGORITHMS; MACHINE learning
- Abstract
Fire danger indices are used by fire management agencies to assess fire weather conditions and issue public warnings. The most widely used fire danger indices in Australia are the McArthur Forest Fire Danger Index and the Grassland Fire Danger Index. These indices are calculated at weather stations using measurements of weather variables and fuel information. For a vast country like Australia, when assessing the risk of severe fire weather events it is also important to calculate the spatial distribution of these indices, considering the extreme tail of the distribution. The spatial distribution of one of the fire weather danger indices regularly used in Australia is presented in this paper. In particular, we present the spatial distribution of the long-term tendency of extreme values of the McArthur Forest Fire Danger Index (FFDI). This indicator of fire weather conditions was assessed by calculating the return period of its extreme values, fitting extreme value distributions to data sets of FFDI at 78 recording stations around Australia. The spatial distribution of these return periods was obtained by applying spatial interpolation algorithms to the recording stations' measurements. Two conventional and two new algorithms based on machine-learning techniques were tested. This study shows that the best interpolation results for the FFDI can be obtained by using a combination of random forest and inverse distance weighting interpolation algorithms. The spatial distribution of the seasonal FFDI return period shows that the highest FFDI over large parts of southern Australia occurs during the summer months, whilst in northern Australia it occurs in spring. The results also show that the FFDI in eastern Australia, the most populated region of the country, is higher inland than in the coastal areas, particularly during spring and summer. [Copyright Elsevier]
- Published: 2013
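One half of the winning combination, inverse distance weighting, is simple enough to sketch directly (the random forest component is not reproduced; the station coordinates and FFDI-style values below are hypothetical):

```python
import math

def idw(stations, values, query, power=2.0):
    """Inverse distance weighting: the interpolated value is the average of
    station values weighted by 1/d^power; a query that coincides exactly
    with a station returns that station's own value."""
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d = math.dist((sx, sy), query)
        if d == 0.0:
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# hypothetical FFDI return-period values at three recording stations
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ffdi = [20.0, 60.0, 40.0]
mid = idw(stations, ffdi, (5.0, 0.0))
```

IDW is exact at the stations and bounded by the station values in between, which makes it a natural partner for a machine-learning regressor that captures broader spatial trends, as the study's random forest + IDW combination does.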
17. Joint registry approach for identification of outlier prostheses.
- Authors: Steiger, Richard N de; Miller, Lisa N; Davidson, David C; Ryan, Philip; Graves, Stephen E
- Subjects: ALGORITHMS; ARTHROPLASTY; ARTIFICIAL joints; REPORTING of diseases; HEALTH outcome assessment; COMPLICATIONS of prosthesis; REGRESSION analysis; TREATMENT effectiveness; PROPORTIONAL hazards models
- Abstract
Background and purpose: Joint Replacement Registries play a significant role in monitoring arthroplasty outcomes by publishing data on survivorship of individual prostheses or combinations of prostheses. The difference in outcomes can be device- or non-device-related, and these factors can be analyzed separately. Although registry data indicate that most prostheses have similar outcomes, some have a higher than anticipated rate of revision when compared to all other prostheses in their class. This report outlines how the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR) has developed a method to report prostheses with a higher than expected rate of revision. These are referred to as 'outlier' prostheses. Material and methods: Since 2004, the AOANJRR has developed a standardized process for identifying outliers. This is based on a 3-stage process consisting of an automated algorithm, an extensive analysis of individual prostheses or combinations by registry staff, and finally a meeting involving a panel from the Australian Orthopaedic Association Arthroplasty Society. Outlier prostheses are listed in the Annual Report as (1) identified but no longer used in Australia, (2) those that have been re-identified and that are still used, and (3) those that are being identified for the first time. Results: 78 prostheses or prosthesis combinations have been identified as being outliers using this approach (AOANJRR 2011 Annual Report). In addition, 5 conventional hip prostheses were initially identified, but after further analysis no longer met the defined criteria. 1 resurfacing hip prosthesis was initially identified, subsequently removed from the list, and then re-identified the following year when further data were available. All unicompartmental and primary total knee prostheses identified as having a higher than expected rate of revision have continued to be re-identified. Interpretation: It is important that registries use a transparent and accountable process to identify an outlier prosthesis. This paper describes the development, implementation, assessment, and impact of the approach used by the Australian Registry. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
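The first, automated stage of the outlier-identification process described in the abstract above can be sketched as a statistical screen over per-device revision counts. The abstract does not specify the actual statistic the AOANJRR uses, so the one-sided Poisson test, the device names, and the rates below are all illustrative assumptions:

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), computed from the exact pmf."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

def flag_outliers(devices, class_rate, alpha=0.01):
    """Flag devices whose observed revision count is improbably high
    given the class-wide revision rate per component-year.

    devices: list of (name, revisions, component_years) tuples.
    """
    flagged = []
    for name, revisions, component_years in devices:
        expected = class_rate * component_years
        # One-sided test: is this many revisions unlikely under the class rate?
        if poisson_sf(revisions, expected) < alpha:
            flagged.append(name)
    return flagged
```

With a class-wide rate of 0.01 revisions per component-year, a device with 30 revisions over 1,000 component-years (expected: 10) would be flagged, while one with 12 would not. In the registry's actual process this automated screen is only the first of three stages, followed by staff analysis and panel review.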
18. The Characterised Noise Hi Source Finder: Detecting Hi Galaxies Using a Novel Implementation of Matched Filtering.
- Author
-
Jurek, R.
- Subjects
- *
GALAXIES , *ALGORITHMS , *SPARSE approximations , *NOISE - Abstract
The spectral line datacubes obtained from the Square Kilometre Array (SKA) and its precursors, such as the Australian SKA Pathfinder (ASKAP), will be sufficiently large to necessitate automated detection and parametrisation of sources. Matched filtering is widely acknowledged as the best possible method for the automated detection of sources. This paper presents the Characterised Noise Hi (cnhi) source finder, which employs a novel implementation of matched filtering. This implementation is optimised for the 3-D nature of the Hi spectral line observations of the planned Wide-field ASKAP Legacy L-band All-sky Blind surveY (WALLABY). The cnhi source finder also employs a novel sparse representation of 3-D objects, with a high compression rate, to implement the Lutz one-pass algorithm on datacubes that are too large to process in a single pass. WALLABY will use ASKAP's phenomenal 30 square degree field of view to image ~70% of the sky. It is expected that WALLABY will find 500 000 Hi galaxies out to z ~ 0.2. The Characterised Noise Hi source finder uses a novel approach to find sources in Hi spectral line datacubes using matched filtering. The concept, initial implementation and testing are presented here. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
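The matched-filtering principle behind the cnhi source finder can be sketched in one dimension: cross-correlate the data with a template of the expected source shape, then threshold the response. This is only a minimal illustration; the finder described above operates on 3-D datacubes with characterised noise, and the signal and template values here are made up:

```python
def matched_filter(signal, template):
    """Cross-correlate the template with the signal at every valid offset."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def detect(signal, template, threshold):
    """Return offsets where the matched-filter response exceeds the threshold."""
    response = matched_filter(signal, template)
    return [i for i, r in enumerate(response) if r > threshold]
```

For a boxcar source embedded at offset 3, `detect([0, 0, 0, 1, 1, 1, 0, 0], [1, 1, 1], 2.5)` returns `[3]`: the response peaks exactly where the template aligns with the source, which is why matched filtering maximises detection significance for sources of known shape.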
19. Compact continuum source finding for next generation radio surveys.
- Author
-
Hancock, P. J., Murphy, T., Gaensler, B. M., Hopkins, A., and Curran, J. R.
- Subjects
- *
RADIO sources (Astronomy) , *CONTINUUM mechanics , *ALGORITHMS , *COMPLETENESS theorem , *RELIABILITY in engineering , *TELESCOPES - Abstract
ABSTRACT We present a detailed analysis of four of the most widely used radio source-finding packages in radio astronomy, and a program being developed for the Australian Square Kilometre Array Pathfinder telescope. The four packages: sextractor, sfind, imsad and selavy are shown to produce source catalogues with high completeness and reliability. In this paper we analyse the small fraction (∼1 per cent) of cases in which these packages do not perform well. This small fraction of sources will be of concern for the next generation of radio surveys which will produce many thousands of sources on a daily basis, in particular for blind radio transient surveys. From our analysis we identify the ways in which the underlying source-finding algorithms fail. We demonstrate a new source-finding algorithm aegean, based on the application of a Laplacian kernel, which can avoid these problems and can produce complete and reliable source catalogues for the next generation of radio surveys. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
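The Laplacian-kernel idea named in the abstract above can be illustrated in one dimension: the discrete second difference (curvature) is strongly negative at local peaks, so it can separate blended sources that a simple flood-fill would merge into one island. The profile values below are invented for illustration; aegean itself works on 2-D radio images:

```python
def laplacian_1d(profile):
    """Discrete second difference; negative values indicate negative
    curvature, i.e. the neighbourhood of a local peak."""
    return [profile[i - 1] - 2 * profile[i] + profile[i + 1]
            for i in range(1, len(profile) - 1)]

def peak_pixels(profile):
    """Indices (in the original profile) where curvature is negative,
    marking candidate component peaks inside a blended island."""
    lap = laplacian_1d(profile)
    return [i + 1 for i, v in enumerate(lap) if v < 0]
```

For two blended bumps, `peak_pixels([0, 2, 5, 2, 3, 6, 3, 0])` returns `[2, 5]`: curvature picks out both component peaks even though the emission between them never drops to zero.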
20. Macrocell Path-Loss Prediction Using Artificial Neural Networks.
- Author
-
Östlin, Erik, Zepernick, Hans-Jürgen, and Suzuki, Hajime
- Subjects
- *
ARTIFICIAL neural networks , *BACK propagation , *CODE division multiple access , *MOBILE communication systems , *ALGORITHMS - Abstract
This paper presents and evaluates artificial neural network (ANN) models used for macrocell path-loss prediction. Measurement data obtained by utilizing the IS-95 pilot signal from a commercial code-division multiple-access (CDMA) mobile network in rural Australia are used to train and evaluate the models. A simple neuron model and feed-forward networks with different numbers of hidden layers and neurons are evaluated regarding their training time, prediction accuracy, and generalization properties. Furthermore, different backpropagation training algorithms, such as gradient descent and Levenberg–Marquardt, are evaluated. The artificial neural network inputs are chosen to be distance to base station, parameters easily obtained from terrain path profiles, land usage, and vegetation type and density near the receiving antenna. The path-loss prediction results obtained by using the ANN models are evaluated against different versions of the semi-terrain based propagation model Recommendation ITU-R P.1546 and the Okumura–Hata model. The statistical analysis shows that a non-complex ANN model performs very well compared with traditional propagation models with regard to prediction accuracy, complexity, and prediction time. The average ANN prediction results were 1) maximum error: 22 dB; 2) mean error: 0 dB; and 3) standard deviation: 7 dB. A multilayered feed-forward network trained using the standard backpropagation algorithm was compared with a neuron model trained using the Levenberg–Marquardt algorithm. It was found that the training time decreases from 150 000 to 10 iterations, while the prediction accuracy is maintained. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
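The simplest model in the family described above, a single neuron trained by gradient descent, can be sketched for the distance input alone. With log-distance as the input feature, fitting the neuron is equivalent to fitting the classic log-distance path-loss model PL = PL₀ + 10·n·log₁₀(d). The learning rate, epoch count, and training data below are illustrative, not from the paper:

```python
import math

def train_neuron(distances_m, losses_db, lr=0.05, epochs=5000):
    """Fit a single linear neuron y = w*log10(d) + b by batch gradient
    descent on mean squared error."""
    xs = [math.log10(d) for d in distances_m]
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, losses_db):
            err = (w * x + b) - y  # prediction error on this sample
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On synthetic losses generated from PL = 40 + 35·log₁₀(d), the fit recovers w ≈ 35 (a path-loss exponent of 3.5) and b ≈ 40. The paper's actual models add terrain, land-usage, and vegetation inputs and compare gradient descent against the much faster-converging Levenberg–Marquardt algorithm.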
21. Pattern Discovery on Australian Medical Claims Data--A Systematic Approach.
- Author
-
Tsoi, Ah Chung, Zhang, Shu, and Hagenbuchner, Markus
- Subjects
- *
MEDICAL records , *HEALTH insurance , *ALGORITHMS , *DATABASES , *MARKOV processes , *PATIENTS - Abstract
The national health insurance system in Australia records details on medical services and claims provided to its population. An effective method for discovering temporal behavioral patterns in the data set is proposed in this paper. The method consists of a two-step approach which is applied recursively to the data set. First, a clustering algorithm is used to segment the data into classes. Then, hidden Markov models are employed to find the underlying temporal behavioral patterns. These steps are applied recursively to features extracted from the data set until convergence. The main objective is to minimize the misclassification of patient profiles into various classes. This results in a hierarchical tree model consisting of a number of classes; each class groups similar patient temporal behavioral patterns together. The capabilities of the proposed method are demonstrated through the application to a subset of the Australian national health insurance data set. It is shown that the proposed method not only clusters data into various categories of interest, but also automatically marks the periods in which similar temporal behavioral patterns occurred. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
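The temporal-pattern step in the approach above relies on Markov modelling of discretized patient behaviour. The paper uses hidden Markov models with latent states; as a simpler illustration of the same idea, the sketch below estimates visible-state transition probabilities from a sequence of cluster labels (the state names are made up):

```python
from collections import defaultdict

def transition_matrix(state_sequence):
    """Estimate first-order Markov transition probabilities from a
    sequence of discrete states, e.g. per-period patient classes
    produced by a clustering step."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(state_sequence, state_sequence[1:]):
        counts[a][b] += 1
    # Normalise each row of counts into probabilities.
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}
```

For the label sequence `["low", "low", "high", "low", "low", "high"]` this yields P(low→low) = P(low→high) = 0.5 and P(high→low) = 1.0; patients whose sequences produce similar matrices exhibit similar temporal behavioral patterns and would fall into the same class.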