30 results
Search Results
2. Adapting to the test: performing algorithmic adaptivity in Danish schools.
- Author
-
Høvsgaard Maguire, Laura
- Subjects
NATIONAL competency-based educational tests ,ALGORITHMS ,SCIENCE education ,STUDENT attitudes ,EDUCATION - Abstract
Algorithmic practices are becoming increasingly central within educational governance. By focusing on the mechanisms of a particular algorithmic testing system in Denmark, this paper highlights how such practices are implicated in the emergence of new accountability infrastructures. It adopts an STS approach drawing specifically upon Michel Callon's concepts of framing, overflowing, and re-framing. The paper examines how algorithmic adaptivity has become central in the framing of the Danish national test and traces the ways in which students, teachers, and schools respond to such proceduralized interactions. While algorithmic adaptivity was introduced as a way of providing students with an equal test experience, it also inscribes student adaptability into test practices, generating new student affectivities and teacher responsibilities in the process. The paper argues that this is a matter of adapting to the test and highlights how the mundane practices of testing situations also become a subject of governance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Information literacy as a site for anticipation: temporal tactics for infrastructural meaning-making and algo-rhythm awareness.
- Author
-
Haider, Jutta and Sundin, Olof
- Subjects
INFORMATION literacy ,EXPECTATION (Psychology) ,SOCIAL responsibility of business ,THEMATIC analysis ,AWARENESS ,HEALTH literacy - Abstract
Purpose: The article makes an empirical and conceptual contribution to understanding the temporalities of information literacies. The paper aims to identify different ways in which anticipation of certain outcomes shapes strategies and tactics for engagement with algorithmic information intermediaries. The paper suggests that, given the dominance of predictive algorithms in society, information literacies need to be understood as sites of anticipation. Design/methodology/approach: The article explores the ways in which the invisible algorithms of information intermediaries are conceptualised, made sense of and challenged by young people in their everyday lives. This is couched in a conceptual discussion of the role of anticipation in understanding expressions of information literacies in algorithmic cultures. The empirical material drawn on consists of semi-structured, pair interviews with 61 17–19 year olds, carried out in Sweden and Denmark. The analysis is carried out by means of a qualitative thematic analysis in three steps and along two sensitising concepts – agency and temporality. Findings: The results are presented through three themes, anticipating personalisation, divergences and interventions. These highlight how articulating an anticipatory stance works towards connecting individual responsibilities, collective responsibilities and corporate interests and thus potentially facilitating an understanding of information as co-constituted by the socio-material conditions that enable it. This has clear implications for the framing of information literacies in relation to algorithmic systems. Originality/value: The notion of algo-rhythm awareness constitutes a novel contribution to the field. By centring the role of anticipation in the emergence of information literacies, the article advances understanding of the temporalities of information. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Decision support and algorithmic support: the construction of algorithms and professional discretion in social work: Beslutningsstøtte og algoritmisk støtte: Konstruktionen af algoritmer og det professionelle skøn i socialt arbejde.
- Author
-
Meilvang, Marie Leth and Dahler, Anne Marie
- Subjects
LOCAL government ,SOCIAL workers ,RESEARCH methodology ,INTERVIEWING ,QUALITATIVE research ,DECISION making ,AT-risk people ,PROFESSIONALISM ,THEMATIC analysis ,SOCIAL case work ,ALGORITHMS - Abstract
Copyright of European Journal of Social Work is the property of Routledge and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
5. Algorithmic assemblages of care: imaginaries, epistemologies and repair work.
- Author
-
Schwennesen, Nete
- Subjects
ALGORITHMS ,FIELDWORK (Educational method) ,THEORY of knowledge ,MEDICAL ethics ,MEDICAL practice ,REHABILITATION centers ,RESPONSIBILITY ,SYSTEMS design ,USER interfaces ,ETHNOLOGY research - Abstract
In the past decade, the figure of the algorithm has emerged as a matter of concern in discussions about the current state of the healthcare sector and what it may become. While analytical focus has mainly centred on 'algorithmic entities', the paper argues that we have to move our analytical focus towards 'algorithmic assemblages', if we are to understand how advanced algorithms will affect health care. Departing from this figure, the paper explores how an algorithmic system, designed to 'take on' the role of a physiotherapist in physical rehabilitation programmes in Denmark, was designed and made to work in practice. On the basis of ethnographic fieldwork, it is demonstrated that the algorithmic system is a fragile accomplishment and outcome of negotiations between the imaginaries embedded in its design and the ongoing adjustments of IT workers, patients and professionals. Drawing on recent work on the fragility and incompleteness of algorithms, it is suggested that the algorithmic system needs to be creatively 'repaired' to build and maintain enabling connections between bodies in‐motion and professionals in arrangements of care. The paper concludes by addressing accountability for the workings of algorithmic systems in medical practice, suggesting that such questions must also be discussed in relation to encounters between algorithmic imaginaries, health professionals and patients, and the various forms of 'repair work' needed to enable algorithmic systems to work in practice. Such acts of accountability cannot be understood within an ethics of transparency, but are better thought of as an ethics of 'response‐ability', given the need to intervene and engage with the open‐ended outcomes of algorithmic systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
6. A vision-based instrument for measuring milk somatic cell count.
- Author
-
Gao, Fei, Wang, Jinchao, Ge, Yisu, and Lu, Shufang
- Subjects
SOMATIC cells ,DAIRY cattle ,HOUGH transforms ,RAW milk ,ALGORITHMS ,MILK ,MASTITIS ,LACTATION - Abstract
The method of rapidly measuring the somatic cell count of raw milk is important for detecting the quality of milk and monitoring the health of dairy cows. This paper investigates a vision-based measurement algorithm and develops an instrument for measuring the somatic cell count. In detail, first, microscopic images of glass slides are automatically captured. Second, the Hough transform is used to calibrate the microscopic image so as to compute the sample volume. Third, a discrete optimization model is set up to determine the image segmentation threshold. Fourth, the features of each milk somatic cell are expressed as area, length-to-width ratio, and circularity. The least-squares circle method is employed to detect the circularity of the milk somatic cell. Then, an automatic milk somatic cell count is achieved via a recursion algorithm. Finally, a milk somatic cell counting instrument is developed based on the proposed algorithm, which is verified to be applicable via comparison with the FossMatic 5000, made in Denmark. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
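The shape features named in the abstract above (area, length-to-width ratio, circularity) can be sketched with the standard circularity measure 4πA/P². The acceptance thresholds below are hypothetical, and the instrument's actual least-squares circle fit is not reproduced here:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Standard shape circularity: 1.0 for a perfect circle, lower otherwise."""
    if perimeter <= 0:
        raise ValueError("perimeter must be positive")
    return 4 * math.pi * area / perimeter ** 2

def looks_like_cell(area, perimeter, aspect_ratio,
                    min_area=50, max_aspect=1.5, min_circ=0.7):
    """Hypothetical acceptance test combining the three features
    named in the abstract (area, length-to-width ratio, circularity)."""
    return (area >= min_area
            and aspect_ratio <= max_aspect
            and circularity(area, perimeter) >= min_circ)
```

A circle of radius 10 (area ≈ 314.16, perimeter ≈ 62.83) scores a circularity of exactly 1.0 and passes the hypothetical test.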
7. A Multi-Agent Reinforcement Learning Approach to Price and Comfort Optimization in HVAC-Systems.
- Author
-
Blad, Christian, Bøgh, Simon, and Kallesøe, Carsten
- Subjects
REINFORCEMENT learning ,ARTIFICIAL intelligence ,HEATING control ,HEAT pumps ,DEEP learning ,ALGORITHMS - Abstract
This paper addresses the challenge of minimizing training time for the control of Heating, Ventilation, and Air-conditioning (HVAC) systems with online Reinforcement Learning (RL). This is done by developing a novel Multi-Agent Reinforcement Learning (MARL) approach to HVAC systems. In this paper, the environment formed by the HVAC system is formulated as a Markov Game (MG) in a general sum setting. The MARL algorithm is designed in a decentralized structure, where only relevant states are shared between agents, and actions are shared in a sequence that is sensible from a system's point of view. The simulation environment is a domestic house located in Denmark and designed to resemble an average house. The heat source in the house is an air-to-water heat pump, and the HVAC system is an Underfloor Heating system (UFH). The house is subjected to weather changes from a data set collected in Copenhagen in 2006, spanning the entire year except for June, July, and August, where heat is not required. It is shown that: (1) When comparing Single Agent Reinforcement Learning (SARL) and MARL, training time can be reduced by 70% for a four temperature-zone UFH system, (2) the agent can learn and generalize over seasons, (3) the cost of heating can be reduced by 19% or the equivalent to 750 kWh of electric energy per year for an average Danish domestic house compared to a traditional control method, and (4) oscillations in the room temperature can be reduced by 40% when comparing the RL control methods with a traditional control method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
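The decentralized learning described in the abstract above can be illustrated with a minimal tabular Q-learning step for one temperature-zone agent that observes its own state plus a state shared by a neighbouring agent. The paper uses deep RL in a Markov Game setting, so this is only a sketch of the underlying value update, with all states, actions, and numbers hypothetical:

```python
def q_update(Q, state, action, reward, next_state, actions=(0, 1),
             alpha=0.1, gamma=0.95):
    """One tabular Q-learning step for a single zone agent.
    Q maps (state, action) pairs to estimated values."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One agent per temperature zone; state = (own zone, shared neighbour state).
zone_Q = {}
q_update(zone_Q, state=("cold", "warm"), action=1, reward=1.0,
         next_state=("ok", "warm"))
```

Starting from an empty table, the update moves the value a fraction `alpha` of the way toward the reward-plus-bootstrap target.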
8. A fast heuristic for large-scale capacitated arc routing problems.
- Author
-
Wøhlk, Sanne and Laporte, Gilbert
- Subjects
HEURISTIC ,OPERATIONS research ,ALGORITHMS ,C++ - Abstract
The purpose of this paper is to develop a fast heuristic called FASTCARP for the solution of large-scale capacitated arc routing problems, with or without duration constraints. This study is motivated by a waste collection problem in Denmark. After a preprocessing phase, FASTCARP creates a giant tour, partitions the graph into districts, and constructs routes within each district. It then iteratively merges and splits adjacent districts and reoptimises the routes. The heuristic was tested on 264 benchmark instances containing up to 11,640 nodes, 12,675 edges, 8581 required edges, and 323 vehicles. FASTCARP was compared with an alternative heuristic called BASE and with several Path-Scanning algorithms. On small graphs, it was better but slower than BASE. On larger graphs, it was much faster and only slightly worse than BASE in terms of solution quality. It also outperforms the Path-Scanning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
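The giant-tour phase of FASTCARP described above can be illustrated with a greedy capacity-based split of the tour into vehicle routes. The actual heuristic partitions the graph into districts and iteratively reoptimises them, so this is only a simplified sketch of the splitting idea, not the paper's algorithm:

```python
def split_giant_tour(demands, capacity):
    """Greedily cut a giant tour (a sequence of required-edge demands,
    indexed by tour position) into routes respecting vehicle capacity."""
    routes, current, load = [], [], 0
    for i, d in enumerate(demands):
        if load + d > capacity and current:
            routes.append(current)       # close the current route
            current, load = [], 0
        current.append(i)
        load += d
    if current:
        routes.append(current)
    return routes
```

For example, demands `[4, 3, 5, 2, 6]` with capacity 8 split into three routes: `[[0, 1], [2, 3], [4]]`.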
9. The multidimensional nD‐GRAS method: Applications for the projection of multiregional input–output frameworks and valuation matrices.
- Author
-
Valderas‐Jaramillo, Juan Manuel and Rueda‐Cantuche, José Manuel
- Subjects
VALUATION ,MATRICES (Mathematics) ,INPUT-output analysis ,ALGORITHMS ,ANALYTICAL solutions - Abstract
Copyright of Papers in Regional Science is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2021
- Full Text
- View/download PDF
10. Responses to Medical Treatment in 192 Patients with Pancreatic Neuroendocrine Neoplasms Referred to the Copenhagen Neuroendocrine Tumour Centre in 2000–2020.
- Author
-
Petersen, Sofie Skovlund, Møller, Stine, Slott, Cecilie, Krogh, Jesper, Hansen, Carsten Palnæs, Kjaer, Andreas, Holmager, Pernille, Oturai, Peter, Garbyal, Rajendra Singh, Langer, Seppo W., Knigge, Ulrich, and Andreassen, Mikkel
- Subjects
RADIOISOTOPE therapy ,CANCER treatment ,RESEARCH funding ,TREATMENT effectiveness ,RETROSPECTIVE studies ,DESCRIPTIVE statistics ,CARBOPLATIN ,PANCREATIC tumors ,ETOPOSIDE ,CANCER chemotherapy ,NEUROENDOCRINE tumors ,PROGRESSION-free survival ,CONFIDENCE intervals ,SPECIALTY hospitals ,MEDICAL referrals ,OVERALL survival ,ALGORITHMS ,EVALUATION - Abstract
Simple Summary: Pancreatic neuroendocrine tumors are a rare and heterogeneous group of neoplasms. Surgical resection is the only curative option. However, there has been an increase in palliative medical options. The aim of this retrospective study was to investigate responses for the most commonly used medical treatments in 192 patients. The current results support the effectiveness of somatostatin analogues in low-grade tumors and showed that they might also be used in patients with Ki-67 ≥ 10%. Treatment with streptozocin and 5-fluorouracil as first-line treatment showed good efficacy for G2 disease. Due to good efficacy and generally good tolerability, PRRT might be considered as first-line treatment for NET G2. The results confirmed poor prognosis in high-grade tumors treated with carboplatin/etoposide or temozolomide. The current results provide valuable knowledge as current treatment algorithms and sequencing are primarily guided by expert opinions with limited evidence. Background: Given the rarity and heterogeneity of pancreatic neuroendocrine neoplasms (pNEN), treatment algorithms and sequencing are primarily guided by expert opinions with limited evidence. Aim: To investigate overall survival (OS), median progression-free survival (mPFS), and prognostic factors associated with the most common medical treatments for pNEN. Methods: Retrospective single-center study encompassing patients diagnosed and monitored between 2000 and 2020 (n = 192). Results: Median OS was 36 (95% CI: 26–46) months (99 months for grade (G) 1, 62 for G2, 14 for G3, and 10 for neuroendocrine carcinomas). Patients treated with somatostatin analogues (SSA) (n = 59, median Ki-67 9%) had an mPFS of 28 months. Treatment line (HR (first line as reference) 4.1, 95% CI: 1.9–9.1, p ≤ 0.001) emerged as an independent risk factor for time to progression. Patients with a Ki-67 index ≥10% (n = 28) had an mPFS of 27 months.
Patients treated with streptozocin/5-fluorouracil (STZ/5FU) (n = 70, first-line treatment n = 68, median Ki-67 10%) had an mPFS of 20 months, with WHO grade serving as an independent risk factor (HR (G1 (n = 8) vs. G2 (n = 57)) 2.8, 95% CI: 1.1–7.2, p-value = 0.031). Median PFS was 21 months for peptide receptor radionuclide therapy (PRRT) (n = 41, first line n = 2, second line n = 29, median Ki-67 8%), 5 months for carboplatin and etoposide (n = 66, first-line treatment n = 60, median Ki-67 80%), and 3 months for temozolomide-based therapy (n = 56, first-line treatment n = 17, median Ki-67 30%). Conclusion: (1) Overall survival was, as expected, highly dependent on grade; (2) median PFS for SSA was around 2.5 years without difference between tumors with Ki-67 above or below 10%; (3) STZ/5FU as first-line treatment exhibited a superior mPFS of 20 months compared to what has historically been reported for targeted treatments; (4) PRRT in G2 pNEN achieved an mPFS similar to first-line chemotherapy; and (5) limited treatment efficacy was observed in high-grade tumors when treated with carboplatin and etoposide or temozolomide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Short-Term Load Probabilistic Forecasting Based on Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Reconstruction and Salp Swarm Algorithm.
- Author
-
Hu, Tianyu, Zhou, Mengran, Bian, Kai, Lai, Wenhao, and Zhu, Ziwei
- Subjects
LOAD forecasting (Electric power systems) ,HILBERT-Huang transform ,FORECASTING ,MACHINE learning ,ELECTRICAL load ,ALGORITHMS ,PROBABILITY density function - Abstract
Short-term load forecasting is an important part of load forecasting, which is of great significance to the optimal power flow and power supply guarantee of the power system. In this paper, we propose a load series reconstruction method combining improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) with sample entropy (SE). The load series is decomposed by ICEEMDAN and is reconstructed into a trend component, periodic component, and random component by comparing with the sample entropy of the original series. An extreme learning machine optimized by the salp swarm algorithm (SSA-ELM) is used to predict each component, and the final prediction value is obtained by superposition of the prediction results of the three components. Then, the prediction error of the training set is divided into four load intervals according to the predicted value, and the kernel probability density is estimated to obtain the error distribution of the training set. Combining the predicted value of the prediction set with the error distribution of the corresponding load interval, the prediction load interval can be obtained. The prediction method is verified by taking the hourly load data of a region in Denmark in 2019 as an example. The final experimental results show that the proposed method has a high prediction accuracy for short-term load forecasting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
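The sample entropy (SE) used above to sort ICEEMDAN modes into trend, periodic, and random components can be sketched as follows. The embedding dimension `m=2` and tolerance `r=0.2` (as a fraction of the series standard deviation) are common defaults, not necessarily the paper's settings:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) with Chebyshev distance; the tolerance
    is r times the series standard deviation (a common convention)."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def match_pairs(mm):
        # Count pairs of length-mm templates within tolerance (i < j).
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    hits += 1
        return hits

    b, a = match_pairs(m), match_pairs(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

A perfectly regular alternating series such as `[1, 2] * 10` yields a low value (ln(81/72) ≈ 0.118), reflecting high predictability; irregular series score higher.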
12. Danish Prostate Registry (DanProst) – an Updated Version of the Danish Prostate Cancer Registry, Methodology, and Early Results.
- Author
-
Stroomberg, Hein Vincent, Larsen, S. Benzon, Lanthén, G. Samsø, Nielsen, T. Kjaer, Helgstrand, J. T., Brasso, K, and Røder, A
- Subjects
REPORTING of diseases ,ULTRASONIC imaging ,BIOPSY ,PROSTATE ,RETROSPECTIVE studies ,COMPARATIVE studies ,RESEARCH funding ,HISTOLOGICAL techniques ,DESCRIPTIVE statistics ,SYSTEMATIZED Nomenclature of Medicine ,PROSTATE-specific antigen ,PROSTATE tumors ,ALGORITHMS - Abstract
In 2016, we introduced the Danish Prostate Cancer Registry (DaPCaR), which was built on the National Pathology Register from 1995 to 2011. DaPCaR was laborious to use as most data had to be manually imputed with no regular updates. Here we present a new comprehensive centralized prostate registry called the Danish Prostate Registry (DanProst), which includes all men having undergone any histological evaluation of prostate tissue merged with laboratory, treatment, and prescription data as well as vital status. Here the data included and the methodology of DanProst are described. DanProst is built upon all men with a histological assessment of the prostate from the Danish National Registry for Pathology. The primary histology and potential prostate cancer histological diagnosis for each unique individual is extracted and translated by newly made algorithms for topography, procedure, diagnostic conclusion, and pathological staging. Further information is added from DaPCaR, the CPR Registry, the Danish Cause of Death Registry, the Danish Cancer Registry, the National Patient Registry, the Danish Register of Laboratory Results for Research, and the Danish National Prescription Registry. The translation algorithms were validated based on the comparison with DaPCaR in the period 2010–2016. DanProst includes 190,422 men. A total of 95,152 (50%) men were diagnosed with prostate cancer up to 2021. Median diagnostic PSA was 11 ng/ml, most men are diagnosed by ultrasound-guided biopsy (N = 63,751; 67%), and the most frequently defined primary treatment was radical prostatectomy (N = 14,778; 19%). DanProst to DaPCaR coherency was > 99%, 95%, and 94% for the primary histological procedure, primary histological conclusion, and diagnostic histological conclusion, respectively. DanProst is a continuously updated, centrally kept, validated registry with automatic integration of data from other national registries, allowing for contemporary nationwide analysis in men with histological assessment of the prostate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Advanced Fully Convolutional Networks for Agricultural Field Boundary Detection.
- Author
-
Taravat, Alireza, Wagner, Matthias P., Bonifacio, Rogerio, Petit, David, and Vanderhoof, Melanie
- Subjects
SIGNAL convolution ,DEEP learning ,CONVOLUTIONAL neural networks ,IMAGE segmentation ,ALGORITHMS - Abstract
Accurate spatial information of agricultural fields is important for providing actionable information to farmers, managers, and policymakers. On the other hand, the automated detection of field boundaries is a challenging task due to their small size, irregular shape and the use of mixed-cropping systems making field boundaries vaguely defined. In this paper, we propose a strategy for field boundary detection based on the fully convolutional network architecture called ResU-Net. The benefits of this model are two-fold: first, residual units ease training of deep networks. Second, rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters but better performance in comparison with the traditional U-Net model. An extensive experimental analysis is performed over the whole of Denmark using Sentinel-2 images and comparing several U-Net and ResU-Net field boundary detection algorithms. The presented results show that the ResU-Net model has a better performance with an average F1 score of 0.90 and average Jaccard coefficient of 0.80 in comparison to the U-Net model with an average F1 score of 0.88 and an average Jaccard coefficient of 0.77. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
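For a single confusion matrix, the two segmentation scores reported above are algebraically linked by J = F1 / (2 - F1). The sketch below shows the standard definitions; note the paper reports averages over many images, so the identity need not hold exactly for the averaged 0.90 and 0.80:

```python
def f1_from_counts(tp, fp, fn):
    """F1 score from true positives, false positives, false negatives."""
    return 2 * tp / (2 * tp + fp + fn)

def jaccard_from_counts(tp, fp, fn):
    """Jaccard (intersection-over-union) from the same counts."""
    return tp / (tp + fp + fn)

def jaccard_from_f1(f1):
    """For one confusion matrix, J = F1 / (2 - F1)."""
    return f1 / (2 - f1)
```

For example, counts tp=9, fp=1, fn=1 give F1 = 0.90 and Jaccard = 9/11 ≈ 0.818, consistent with the conversion formula.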
14. Between a logic of disruption and a logic of continuation: Negotiating the legitimacy of algorithms used in automated clinical decision-making.
- Author
-
Torenholt, Rikke and Langstrup, Henriette
- Subjects
ATTITUDES of medical personnel ,HEALTH outcome assessment ,DECISION support systems ,PATIENTS' attitudes ,CARDIAC rehabilitation ,LOGIC ,ETHNOLOGY ,ALGORITHMS ,BREAST tumors - Abstract
In both popular and academic discussions of the use of algorithms in clinical practice, narratives often draw on the decisive potentialities of algorithms and come with the belief that algorithms will substantially transform healthcare. We suggest that this approach is associated with a logic of disruption. However, we argue that in clinical practice alongside this logic, another and less recognised logic exists, namely that of continuation: here the use of algorithms constitutes part of an established practice. Applying these logics as our analytical framing, we set out to explore how algorithms for clinical decision-making are enacted by political stakeholders, healthcare professionals, and patients, and in doing so, study how the legitimacy of delegating to an algorithm is negotiated and obtained. Empirically we draw on ethnographic fieldwork carried out in relation to attempts in Denmark to develop and implement Patient Reported Outcomes (PRO) tools – involving algorithmic sorting – in clinical practice. We follow the work within two disease areas: heart rehabilitation and breast cancer follow-up care. We show how at the political level, algorithms constitute tools for disrupting inefficient work and unsystematic patient involvement, whereas closer to the clinical practice, algorithms constitute a continuation of standardised and evidence-based diagnostic procedures and a continuation of the physicians' expertise and authority. We argue that the co-existence of the two logics has implications, as both provide a push towards the use of algorithms, and that a logic of continuation may divert attention away from new issues introduced with automated digital decision-support systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. The journey of research data: Accessing nordic health data for the purposes of developing an algorithm.
- Author
-
Cathaoir, Katharina Ó, Gunnarsdóttir, Hrefna Dögg, and Hartlev, Mette
- Subjects
GENERAL Data Protection Regulation, 2016 ,DATA protection ,INFORMATION sharing ,RESEARCH ethics ,ALGORITHMS - Abstract
This article traces the journey of Nordic health data requested for developing a healthcare algorithm. We focus on the legal requirements and highlight that differences in the legislation of Denmark, Norway and Iceland, and the interpretation thereof by responsible bodies, can pose a barrier for scientific researchers. In addition, non-legal institutional requirements or practices may hamper data access. First, despite some European harmonization, the mandate of research ethics committees and the data protection authorities vary in the three countries. Second, domestic institutions impose tailored requirements, sometimes only allowing domestic or affiliated researchers to access data sets. Third, the manner in which a dataset is collected, catalogued and stored has implications for data access. We make several recommendations for increasing transparency in Nordic data access, such as increasing knowledge sharing regarding the interpretation of General Data Protection Regulation (GDPR) criteria, adopting clearer regulations and pursuing greater citizen engagement in the secondary use of health data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Beyond data protection concerns – the European passenger name record system.
- Author
-
Olsen, Henrik Palmer and Wiesener, Cornelius
- Subjects
DATA protection ,PASSENGERS ,RISK assessment ,CRIME statistics ,TERRORISM - Abstract
In this article, we examine the European framework of collecting and analysing flight passenger name record (PNR) data for the purpose of combating terrorism and serious crime. The focus is mainly on the EU PNR Directive of 2016, but we also consider the specific legislative framework in Germany and Denmark. In light of the recent review of the Directive, the article aims at exploring the policy-related, legal and technological challenges. In doing so, it goes beyond established data protection concerns. In particular, we debunk the popular claim that PNR analysis in and of itself entails the risk of discrimination of certain groups – a claim commonly levelled against algorithmic analysis. We also provide useful insights into the specific legal safeguards vis-à-vis automated profiling and decision-making through human review. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Development of a diagnostic algorithm identifying cases of dislocation after primary total hip arthroplasty—based on 31,762 patients from the Danish Hip Arthroplasty Register.
- Author
-
Hermansen, Lars L, Viberg, Bjarke, and Overgaard, Søren
- Subjects
HIP joint diseases ,REPORTING of diseases ,TOTAL hip replacement ,PREDICTIVE tests ,SURGICAL complications ,SURGERY ,PATIENTS ,HIP joint dislocation ,OSTEOARTHRITIS ,DESCRIPTIVE statistics ,ALGORITHMS ,LONGITUDINAL method ,MEDICAL coding - Abstract
Background and purpose — Dislocation of total hip arthroplasties (THA) is often treated with closed reduction and traditionally not registered in orthopedic registers. This study aimed to create an algorithm designed to identify cases of dislocations of THAs with high sensitivity, specificity, and positive predictive value (PPV) based on codes from the Danish National Patient Register (DNPR). Patients and methods — All patients (n = 31,762) with primary osteoarthritis undergoing THA from January 1, 2010 to December 31, 2014 were included from the Danish Hip Arthroplasty Register (DHR). We extracted available data for every hospital contact in the DNPR during a 2-year follow-up period, then conducted a comprehensive nationwide review of 5,096 patient files to register all dislocations and applied codes. Results — We identified 1,890 hip dislocations among 1,094 of the included 31,762 THAs. More than 70 different diagnoses and 55 procedural codes were coupled to the hospital contacts with dislocation. A combination of the correct codes produced a sensitivity of 63% and a PPV of 98%. Adding alternative and often applied codes increased the sensitivity to 91%, while the PPV was maintained at 93%. Additional steps increased sensitivity to 95% but at the expense of an unacceptable decrease in the PPV to 82%. Specificity was, in all steps, greater than 99%. Interpretation — The developed algorithm achieved high and acceptable values for sensitivity, specificity, and predictive values. We found that surgeons in most cases coded correctly. However, the codes were not always transferred to the discharge summary. In perspective, this kind of algorithm may be used in Danish quality registers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
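The validation metrics reported above (sensitivity, specificity, PPV) follow directly from confusion counts obtained when comparing algorithm-identified dislocations against the patient-file review. The helper below uses the standard definitions; the example counts are purely illustrative, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value (PPV)
    from validation counts against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # share of true dislocations found
        "specificity": tn / (tn + fp),   # share of non-cases correctly excluded
        "ppv": tp / (tp + fp),           # share of flagged cases that are real
    }

# Hypothetical counts for illustration only.
metrics = diagnostic_metrics(tp=91, fp=7, fn=9, tn=993)
```

With these made-up counts the sensitivity is 0.91 and the PPV about 0.93, mirroring the trade-off the abstract describes between adding codes (higher sensitivity) and keeping PPV acceptable.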
18. HVDC loss factors in the Nordic power market.
- Author
-
Tosatto, Andrea and Chatzivasileiadis, Spyros
- Subjects
ELECTRICITY markets ,COST functions ,ALGORITHMS - Abstract
• Linear loss factors penalize one HVDC line over the other. • Piecewise-linear loss factors better represent quadratic loss functions. • Piecewise-linear loss factors allow for a better distribution of power flows. • HVDC loss factors alone disproportionately increase AC losses. • HVDC and AC loss factors lead to loss minimization. In the Nordic countries (Sweden, Norway, Finland and Denmark), many interconnectors are formed by long High-Voltage Direct-Current (HVDC) lines. Every year, the operation of such interconnectors costs millions of Euros to Transmission System Operators (TSOs) due to the high amount of losses that are not considered while clearing the market. To counteract this problem, Nordic TSOs (Svenska kraftnät - Sweden, Statnett - Norway, Fingrid - Finland, Energinet - Denmark) have proposed to introduce linear HVDC loss factors in the market clearing algorithm. The assessment of such a measure requires a detailed model of the system under investigation. In this paper we develop and introduce a detailed market model of the Nordic countries and we analyze the impact of different loss factor formulations. We show that linear loss factors penalize one HVDC line over the other, and this can jeopardize revenues of merchant HVDC lines. In this regard, we propose piecewise-linear loss factors: a simple-to-implement but highly effective solution. Moreover, we demonstrate how the introduction of HVDC loss factors alone is a partial solution, since it disproportionately increases the AC losses. Our results show that the inclusion of AC loss factors can eliminate this problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
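The piecewise-linear loss factors proposed above replace a single linear factor with secant segments of the quadratic loss curve P_loss = a·P², which is exact at the breakpoints and overestimates in between. A minimal sketch, with the loss coefficient and breakpoints chosen arbitrarily for illustration:

```python
def piecewise_linear_loss(p, a, breakpoints):
    """Secant (piecewise-linear) approximation of quadratic line losses
    a * p**2, built on the given sorted flow breakpoints."""
    for lo, hi in zip(breakpoints, breakpoints[1:]):
        if lo <= p <= hi:
            slope = a * (lo + hi)            # secant slope of a*p^2 on [lo, hi]
            return a * lo ** 2 + slope * (p - lo)
    raise ValueError("flow outside breakpoint range")
```

With a = 0.01 and breakpoints [0, 100, 200] MW, the approximation returns the exact loss of 100 at p = 100 but 50 at p = 50 (exact: 25), illustrating why more segments give a better representation of the quadratic loss function.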
19. Automatic processing of time domain induced polarization data using supervised artificial neural networks.
- Author
-
Barfod, Adrian S, Lévy, Léa, and Larsen, Jakob Juul
- Subjects
ARTIFICIAL neural networks ,INDUCED polarization ,OUTLIER detection ,ALGORITHMS ,MACHINE learning ,BORING & drilling (Earth & rocks) - Abstract
Processing of geophysical data is a time-consuming task involving many different steps. One approach for accelerating and automating processing of geophysical data is to look towards machine learning (ML). ML encompasses a wide range of tools, which can be used to automate complicated and/or tedious tasks. We present strategies for automating the processing of time-domain induced polarization (IP) data using ML. An IP data set from Grindsted in Denmark is used to investigate the applicability of neural networks for processing such data. The Grindsted data set consists of eight profiles, with approximately 2000 data curves per profile, on average. Each curve needs to be processed, which, using the manual approach, can take 1–2 hr per profile. Around 20 per cent of the curves were manually processed and used to train and validate an artificial neural network. Once trained, the network could process all curves, in 6–15 s for each profile. The accuracy of the neural network, when considering the manual processing as a reference, is 90.8 per cent. At first, the network could not detect outlier curves, that is, where entire chargeability curves were significantly different from their spatial neighbours. Therefore, an outlier curve detection algorithm was developed and implemented to work in tandem with the network. The automatic processing approach developed here, involving the neural network and the outlier curve detection, leads to similar inversion results as the manual processing, with the two significant advantages of reduced processing times and enhanced processing consistency. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
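The outlier-curve detection in entry 19 is only described in outline; below is a minimal sketch of one plausible neighbour-median criterion for flagging chargeability curves that differ strongly from their spatial neighbours. The function name, the neighbour definition, and the threshold are illustrative assumptions, not the authors' published algorithm.

```python
from statistics import median

def flag_outlier_curves(curves, threshold=3.0):
    """Flag curves deviating strongly from their spatial neighbours
    (hypothetical neighbour-median criterion)."""
    n = len(curves)
    flags = []
    for i, curve in enumerate(curves):
        # Neighbours: the adjacent curves along the profile.
        neighbours = [curves[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if not neighbours:
            flags.append(False)
            continue
        # Point-wise median of the neighbour curves as a reference.
        ref = [median(vals) for vals in zip(*neighbours)]
        # Median deviation of the curve from the reference ...
        deviation = median(abs(a - b) for a, b in zip(curve, ref))
        # ... measured against the neighbours' own spread around it.
        spread = median(
            abs(v - r) for nb in neighbours for v, r in zip(nb, ref)
        ) or 1e-9  # avoid division by zero when neighbours are identical
        flags.append(deviation / spread > threshold)
    return flags
```

For example, in a profile of five curves where the middle one jumps an order of magnitude above its neighbours, only the middle curve is flagged.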
20. Missing Data Imputation for Multisite Rainfall Networks: A Comparison between Geostatistical Interpolation and Pattern-Based Estimation on Different Terrain Types.
- Author
- ORIANI, FABIO, STISEN, SIMON, DEMIREL, MEHMET C., and MARIETHOZ, GREGOIRE
- Subjects
INTERPOLATION ,RAINFALL ,MISSING data (Statistics) ,ALGORITHMS ,RESAMPLING (Statistics) ,DATA mining - Abstract
Missing rainfall data are a major limitation for distributed hydrological modeling and climate studies. Practitioners need reliable approaches that can be employed on a daily basis, often with too limited data in space to feed complex predictive models. In this study we compare different automatic approaches for missing data imputation, including geostatistical interpolation and pattern-based estimation algorithms. We introduce two pattern-based approaches based on the analysis of historical data patterns: (i) an iterative version of K-nearest neighbor (IKNN) and (ii) a new algorithm called vector sampling (VS) that combines concepts of multiple-point statistics and resampling. Both algorithms can draw estimations from variably incomplete data patterns, allowing the target dataset to serve simultaneously as the training dataset. Tested on five case studies from Denmark, Australia, and Switzerland, the algorithms show a different performance that seems to be related to the terrain type: on flat terrains with spatially homogeneous rain events, geostatistical interpolation tends to minimize the average error, while in mountainous regions with nonstationary rainfall statistics, data mining can better recover the rainfall patterns. The VS algorithm, requiring minimal parameterization, turns out to be a convenient option for routine application on complex and poorly gauged terrains. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
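The pattern-based estimation in entry 20 draws imputed values from historical data patterns; below is a minimal sketch of the K-nearest-neighbour idea underlying IKNN, under the simplifying assumptions that only the target station is missing on the query day and that distance is plain Euclidean. The iterative refinement and the vector-sampling variant described in the abstract are not shown.

```python
def knn_impute(data, target_idx, day, k=3):
    """Fill data[day][target_idx] (recorded as None) with the average of
    the target station's values on the k historical days whose rainfall
    pattern at the other stations is closest in Euclidean distance."""
    query = [v for i, v in enumerate(data[day]) if i != target_idx]
    candidates = []
    for d, row in enumerate(data):
        if d == day or row[target_idx] is None:
            continue
        other = [v for i, v in enumerate(row) if i != target_idx]
        if any(v is None for v in other):
            continue  # skip incomplete donor patterns
        dist = sum((a - b) ** 2 for a, b in zip(query, other)) ** 0.5
        candidates.append((dist, row[target_idx]))
    candidates.sort(key=lambda pair: pair[0])
    nearest = candidates[:k]
    return sum(value for _, value in nearest) / len(nearest)
```

A day of heavy rain at the surrounding stations thus borrows its missing value from historically similar heavy-rain days rather than from the overall mean.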
21. Addressing priority challenges in the detection and assessment of colorectal polyps from capsule endoscopy and colonoscopy in colorectal cancer screening using machine learning.
- Author
- Blanes-Vidal, Victoria, Baatrup, Gunnar, and Nadimi, Esmaeil S.
- Subjects
TUMOR prevention ,RECTUM tumors ,COLON tumor prevention ,ALGORITHMS ,COLONOSCOPY ,INTESTINAL polyps ,MACHINE learning ,PAIRED comparisons (Mathematics) ,PRIORITY (Philosophy) ,RESEARCH evaluation ,CAPSULE endoscopy ,DESCRIPTIVE statistics ,EARLY detection of cancer ,DATA science - Abstract
Background: Colorectal capsule endoscopy (CCE) is a potentially valuable patient-friendly technique for colorectal cancer screening in large populations. Before it can be widely applied, significant research priorities need to be addressed. We present two innovative data science algorithms which can considerably improve acquisition and analysis of relevant data on colorectal polyps obtained from capsule endoscopy. Material and methods: A fully paired study was performed (2015–2016), in which 255 participants from the Danish national screening program had CCE, colonoscopy, and histopathology of all detected polyps. We developed: (1) a new algorithm to match CCE and colonoscopy polyps, based on objective measures of similarity between polyps, and (2) a deep convolutional neural network (CNN) for autonomous detection and localization of colorectal polyps in colon capsule endoscopy. Results and conclusion: Unlike previous matching methods, our matching algorithm is able to objectively quantify the similarity between CCE and colonoscopy polyps based on their size, morphology and location, and provides a one-to-one unequivocal match between CCE and colonoscopy polyps. Compared to previous methods, the autonomous detection algorithm showed unprecedentedly high accuracy (96.4%), sensitivity (97.1%) and specificity (93.3%), calculated with respect to the number of polyps detected by trained nurses and gastroenterologists after reviewing the CCE videos frame by frame. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Does a Diagnostic Classification Algorithm Help to Predict the Course of Low Back Pain? A Study of Danish Chiropractic Patients With 1-Year Follow-up.
- Author
- HARTVIGSEN, LISBETH, KONGSTED, ALICE, VACH, WERNER, SALMI, LOUIS-RACHID, and HESTBAEK, LISE
- Subjects
ALGORITHMS ,CHIROPRACTIC ,CONFIDENCE intervals ,LONGITUDINAL method ,MEDICAL practice ,SCIENTIFIC observation ,HEALTH outcome assessment ,PATIENTS ,QUESTIONNAIRES ,REGRESSION analysis ,LOGISTIC regression analysis ,DATA analysis ,LUMBAR pain - Abstract
BACKGROUND: A diagnostic classification algorithm, "the Petersen classification," consisting of 12 categories based on a standardized examination protocol, was developed for the primary purpose of identifying clinically homogeneous subgroups of individuals with low back pain (LBP). OBJECTIVES: To investigate whether a diagnostic classification algorithm is associated with activity limitation and LBP intensity at follow-up assessments of 2 weeks, 3 months, and 1 year, and whether the algorithm improves outcome prediction when added to a set of known predictors. METHODS: This was a prospective observational study of 934 consecutive adult patients with new episodes of LBP who were visiting chiropractic practices in primary care and categorized according to the Petersen classification. Outcomes were disability and pain intensity measured with questionnaires at 2 weeks and 3 months, and 1-year trajectories of LBP based on weekly responses to text messages. Associations were analyzed with linear and logistic regression models. In a subgroup of patients, the numbers of visits to primary and secondary care were described. RESULTS: The Petersen classification was statistically significantly associated with all outcomes (P<.001) but explained very little of the variance (R² = 0.00-0.05). Patients in the nerve root involvement category had the most pain and activity limitation and the most visits to primary and secondary care. Patients in the myofascial pain category were the least affected. CONCLUSION: The Petersen classification was not helpful in determining individual prognosis in patients with LBP receiving usual care in chiropractic practice. However, patients should be examined for potential nerve root involvement to improve prediction of likely outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Do women in Europe live longer and happier lives than men?.
- Author
- Solé-Auró, Aïda, Jasilionis, Domantas, Li, Peng, and Oksuzyan, Anna
- Subjects
ALGORITHMS ,HAPPINESS ,LIFE expectancy ,RETIREMENT ,SATISFACTION ,SEX distribution ,SURVEYS ,PSYCHOLOGY of women ,ATTITUDES toward death ,DISEASE prevalence - Abstract
Background: The article examines gender differences in happy life expectancy at age 50 (LE50) and computes the age-specific contributions of mortality and happiness effects to gender differences in happy LE50 in 16 European countries. Methods: Abridged life tables and happy LE50 were calculated using conventional life tables and Sullivan's method. Age-specific death rates were calculated from deaths and population exposures in the Human Mortality Database. Happiness prevalence was estimated using the 2010–11 Survey of Health, Ageing and Retirement in Europe. Happiness was defined using a single question about life satisfaction on a scale of 0–10. A decomposition algorithm was applied to estimate the exact contributions of the differences in mortality and happiness to the overall gender gap in happy LE50. Results: Gender differences in happy LE50 favour women in all countries except Portugal (0.43 years in Italy and 3.55 years in Slovenia). Generally, the contribution of the gender gap in happiness prevalence is smaller than the one in mortality. The male advantage in the prevalence of happiness partially offsets the effects of the female advantage in mortality on the total gender gap in happy LE50. Gender differences in unhappy life years make up the greatest share of the gender gap in total LE50 in all countries except Denmark, Germany, the Netherlands, Slovenia and Sweden. Conclusion: Countries with the largest gender gap in LE are not necessarily the countries with larger differences in happy LE50. The remaining years of life of women are expected to be spent not only in an unhealthy but also in an unhappy state. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
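Sullivan's method, used in entry 23, weights life-table person-years by the age-specific prevalence of the state of interest (here, happiness). A minimal sketch follows; the numbers in the usage note are made up for illustration and do not come from the study.

```python
def sullivan_happy_le(person_years, happiness_prev, survivors_at_start):
    """Sullivan's method: happy life expectancy at the starting age is
    the sum of life-table person-years lived in each age group, weighted
    by the proportion happy in that group, per survivor at the start age."""
    happy_years = sum(
        years * prevalence
        for years, prevalence in zip(person_years, happiness_prev)
    )
    return happy_years / survivors_at_start
```

For instance, with 100000 survivors at age 50, 450000 person-years lived in one age band with 80% reporting happiness and 350000 in a second band with 60%, happy LE50 comes out at 5.7 years.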
24. Exploring the concurrent validity of the nationwide assessment of permanent nursing home residence in Denmark - A cross-sectional data analysis using two administrative registries.
- Author
- Bebe, Anna, Sternhagen Nielsen, Anni Brit, Grauers Willadsen, Tora, Søndergaard, Jens, Siersma, Volkert, Rós Nicolaisdóttir, Dagný, Kragstrup, Jakob, and Boch Waldorff, Frans
- Subjects
NURSING care facilities ,NURSING home patients ,SENIOR housing ,ALGORITHMS ,MEDICAL care for older people ,COMPARATIVE studies ,RESEARCH methodology ,MEDICAL cooperation ,RESEARCH ,EVALUATION research ,ACQUISITION of data ,CROSS-sectional method - Abstract
Background: Many register studies make use of information about permanent nursing home residents. Statistics Denmark (StatD) identifies nursing home residents by two different indirect methods, one based on reports from the municipalities on whether home care has taken place in a nursing home, and the other based on an algorithm created by StatD. The aim of the present study was to validate StatD's nursing home register using dedicated administrative municipality records on individual nursing home residents as the gold standard. Methods: In total, ten Danish municipalities were selected. Within each Danish Region, we randomly selected one municipality reporting to StatD (Method 1) and one not reporting, where instead an algorithm created by StatD was used to identify nursing home residents (Method 2). Method 1 means that municipalities reported to StatD whether home care had taken place in a nursing home or in a private home. Method 2 is based on an algorithm created by StatD for the municipalities where Method 1 is not applicable. Our gold standard was the information from the local administrative system in all ten selected municipalities. Each municipality provided a list of all individuals >65 years living in a nursing home on January 1st, 2013, together with their central personal numbers. This was compared to the list of individuals >65 years living in nursing home facilities in the same ten municipalities on January 1st, 2013, retrieved from StatD. Results: According to the data received directly from the municipalities, which were used as our gold standard, 3821 individuals were identified as nursing home residents. The StatD register identified 6141 individuals as residents. Additionally, 556 of the individuals identified by the municipalities were not identified in the StatD register. Overall sensitivity for the ten municipalities in the StatD nursing home register was 0.85 (95% CI 0.84-0.87) and the PPV was 0.53 (95% CI 0.52-0.54). 
The municipalities for which nursing home status was based on the StatD algorithm (Method 2) had a sensitivity of 0.84 (95% CI 0.82-0.86) and a PPV of 0.48 (95% CI 0.46-0.50), both slightly lower than for the reporting municipalities (Method 1), where the sensitivity was 0.87 (95% CI 0.85-0.88) and the PPV was 0.57 (95% CI 0.56-0.59). Additionally, the sensitivity of the StatD register varied heavily among the ten municipalities, from 0.51 (95% CI 0.43-0.59) to 0.96 (95% CI 0.95-0.98), and the PPV correspondingly from 0.14 (95% CI: 0.11-0.17) to 0.73 (95% CI 0.69-0.77). Conclusions: The overall PPV of the StatD nursing home register was low, and differences between municipalities existed. Even in countries with extensive nation-wide registers, validation studies should be conducted for outcomes based on these registers. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
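The headline figures in entry 24 follow directly from the reported counts (3821 gold-standard residents, 6141 register residents, 556 gold-standard residents missed by the register, hence 3265 true positives). A minimal sketch of the sensitivity and PPV computation, with synthetic IDs chosen only to reproduce those counts:

```python
def sensitivity_ppv(gold, register):
    """Sensitivity and positive predictive value of a register against a
    gold-standard set of IDs."""
    gold, register = set(gold), set(register)
    true_positives = len(gold & register)
    sensitivity = true_positives / len(gold)
    ppv = true_positives / len(register)
    return sensitivity, ppv

# Synthetic IDs reproducing the abstract's counts: 3821 gold-standard
# residents, of whom 556 are missed; the register holds 6141 IDs.
gold_ids = range(3821)
register_ids = list(range(556, 3821)) + list(range(100000, 102876))
sens, ppv = sensitivity_ppv(gold_ids, register_ids)
```

Rounding `sens` and `ppv` to two decimals recovers the abstract's overall 0.85 and 0.53.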
25. Estimating a population cumulative incidence under calendar time trends.
- Author
- Hansen, Stefan N., Overgaard, Morten, Andersen, Per K., and Parner, Erik T.
- Subjects
PATHOLOGICAL psychology ,KAPLAN-Meier estimator ,DISEASE risk factors ,PROPORTIONAL hazards models ,MATHEMATICAL models ,PSYCHIATRIC diagnosis ,DIAGNOSIS of obsessive-compulsive disorder ,PSYCHIATRIC epidemiology ,ALGORITHMS ,ATTENTION-deficit hyperactivity disorder ,COMPUTER simulation ,OBSESSIVE-compulsive disorder ,RISK assessment ,TIME ,THEORY ,TOURETTE syndrome ,DISEASE incidence ,DISEASE prevalence ,DIAGNOSIS - Abstract
Background: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. Methods: We argue that, whenever the disease or disorder of interest is influenced by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. Results: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end of follow-up estimates across all strata. Finally, we argue that a weighted average of these end of follow-up estimates may be a useful summary measure of the disease risk within the study period. Conclusions: Time trends in a disease risk will render total sample estimators less useful in observational studies with staggered entry and administrative censoring. An analysis based on proportional hazards or a stratified analysis may be better alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
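Entry 25 argues that under calendar time trends the Kaplan-Meier estimator should be applied within entry-time strata rather than to the pooled sample. For reference, here is a minimal sketch of the estimator itself, expressed as one minus survival; the competing-risks Aalen-Johansen version discussed in the abstract is not shown.

```python
def km_cumulative_incidence(times, events):
    """Kaplan-Meier cumulative incidence (1 - survival).
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, cumulative incidence) pairs at each event time."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = n_t = 0  # events and total observations at time t
        while i < len(pairs) and pairs[i][0] == t:
            d += pairs[i][1]
            n_t += 1
            i += 1
        if d:
            surv *= 1 - d / at_risk  # multiply by conditional survival
            curve.append((t, 1 - surv))
        at_risk -= n_t  # events and censorings leave the risk set
    return curve
```

Running this separately per entry-time stratum and averaging the end-of-follow-up values, weighted by stratum size, gives the summary measure the abstract advocates.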
26. Acceptance and commitment group therapy (ACT-G) for health anxiety: a randomized controlled trial.
- Author
- Eilenberg, T., Fink, P., Jensen, J. S., Rief, W., and Frostholm, L.
- Subjects
ANXIETY disorders treatment ,ALGORITHMS ,CHI-squared test ,CONFIDENCE intervals ,GROUP psychotherapy ,HEALTH surveys ,HELP-seeking behavior ,LONGITUDINAL method ,PATIENT satisfaction ,QUALITY of life ,QUESTIONNAIRES ,STATISTICAL sampling ,SCALE analysis (Psychology) ,SELF-evaluation ,STATISTICS ,T-test (Statistics) ,DATA analysis ,EFFECT sizes (Statistics) ,EDUCATIONAL attainment ,ACCEPTANCE & commitment therapy ,RANDOMIZED controlled trials ,DATA analysis software ,MEDICAL coding ,DESCRIPTIVE statistics - Abstract
Background. Severe health anxiety is frequent and costly, yet rarely diagnosed or treated. Earlier treatment studies show problems with recruitment, dropout and recovery. In the current study, the authors aimed to test the effect of acceptance and commitment group therapy (ACT-G) compared to waitlist in patients with severe health anxiety. Method. During March 2010 to April 2012, 126 consecutively referred patients meeting research criteria for severe health anxiety were block-randomized (1:1) to ACT-G or a 10 months' waitlist (Clinicaltrials.gov, no. NCT01158430). Patients allocated to ACT-G were treated in seven groups of nine patients between December 2010 and October 2012 and received nine weekly 3-h group sessions and a booster session consisting of ACT techniques. The primary outcome was decided a priori as the mean change in self-reported illness worry on the Whiteley-7 Index (WI) from baseline to 10 months' follow-up. Secondary outcomes were improvement in emotional distress and health-related quality of life at 10 months' follow-up. Results. Intention-to-treat analysis showed a statistically significant mean difference of 20.5 points [95% confidence interval (CI) 11.7–29.4, p < 0.001] on the WI between the groups at 10 months, and the between-group effect sizes were large (Cohen's d = 0.89, 95% CI 0.50–1.29). The number needed to treat was 2.4 (95% CI 1.4–3.4, p < 0.001). Diagnosis and treatment were well accepted by the patients. Conclusions. ACT-G seems feasible, acceptable and effective in treating severe health anxiety. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
27. The 'true' incidence of surgically treated deep prosthetic joint infection after 32,896 primary total hip arthroplasties.
- Author
- Gundtoft, Per Hviid, Overgaard, Søren, Schønheyder, Henrik Carl, Møller, Jens Kjølseth, Kjærsgaard-Andersen, Per, and Pedersen, Alma Becic
- Subjects
ALGORITHMS ,CONFIDENCE intervals ,REPORTING of diseases ,INFECTION ,LONGITUDINAL method ,COMPLICATIONS of prosthesis ,RESEARCH funding ,TOTAL hip replacement ,DISEASE incidence ,DESCRIPTIVE statistics - Abstract
Background and purpose - It has been suggested that the risk of prosthetic joint infection (PJI) in patients with total hip arthroplasty (THA) may be underestimated if based only on arthroplasty registry data. We therefore wanted to estimate the 'true' incidence of PJI in THA using several data sources. Patients and methods - We searched the Danish Hip Arthroplasty Register (DHR) for primary THAs performed between 2005 and 2011. Using the DHR and the Danish National Register of Patients (NRP), we identified first revisions for any reason and those that were due to PJI. PJIs were also identified using an algorithm incorporating data from microbiological, prescription, and clinical biochemistry databases and clinical findings from the medical records. We calculated cumulative incidence with 95% confidence interval. Results - 32,896 primary THAs were identified. Of these, 1,546 had first-time revisions reported to the DHR and/or the NRP. For the DHR only, the 1- and 5-year cumulative incidences of PJI were 0.51% (0.44-0.59) and 0.64% (0.51-0.79). For the NRP only, the 1- and 5-year cumulative incidences of PJI were 0.48% (0.41-0.56) and 0.57% (0.45-0.71). The corresponding 1- and 5-year cumulative incidences estimated with the algorithm were 0.86% (0.77-0.97) and 1.03% (0.87-1.22). The incidences of PJI based on the DHR and the NRP were consistently 40% lower than those estimated using the algorithm covering several data sources. Interpretation - Using several available data sources, the 'true' incidence of PJI following primary THA was estimated to be approximately 40% higher than previously reported by national registries alone. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
28. Satisfaction with daily occupations amongst asylum seekers in Denmark.
- Author
- Morville, Anne-Le, Erlandsson, Lena-Karin, Danneskiold-Samsøe, Bente, Amris, Kirstine, and Eklund, Mona
- Subjects
REFUGEES ,ALGORITHMS ,JOB satisfaction ,QUESTIONNAIRES ,STATISTICAL hypothesis testing ,STATISTICS ,STATISTICAL power analysis ,DATA analysis ,VISUAL analog scale ,DATA analysis software ,DESCRIPTIVE statistics ,MANN Whitney U Test - Abstract
Aim: The aim of this study was to describe asylum seekers' satisfaction with daily occupations and activity level while in a Danish asylum centre, and whether this changed over time. Another aim was to describe whether exposure to torture, self-rated health measures, and ADL ability were related to their satisfaction with daily occupations and activity level. Methods: A total of 43 asylum seekers at baseline and 17 at follow-up were included. The questionnaires Satisfaction with Daily Occupations, Major Depression Inventory, WHO-5 Wellbeing, Pain Detect, a questionnaire covering torture, and basic social information were used, as well as the Assessment of Motor and Process Skills (AMPS). Results: The results showed a low level of satisfaction with daily occupations at both baseline and follow-up. There was no statistically significant change in satisfaction or activity level between baseline and follow-up. Associations between AMPS process skills and education, worst pain, and activity level were present at baseline, as was a relationship between AMPS process skills and satisfaction. At follow-up, associations between WHO-5 and both satisfaction and activity level, and between MDI scores and activity level, were found. Conclusion: Asylum seekers experience a low level of satisfaction with daily occupations, both at arrival and after 10 months in an asylum centre. There is a need for further research and development of occupation-focused rehabilitation methods for the asylum seeker population. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
29. Automatic segmentation of the heart in radiotherapy for breast cancer.
- Author
- Lorenzen, Ebbe L., Brink, Carsten, and Ewertz, Marianne
- Subjects
PREVENTION of heart diseases ,RESEARCH evaluation ,ALGORITHMS ,BREAST tumors ,COMPARATIVE studies ,COMPUTED tomography ,CONFIDENCE intervals ,T-test (Statistics) ,DATA analysis software ,ACCURACY ,DESCRIPTIVE statistics - Abstract
Background. The aim of this study was to evaluate two fully automatic segmentation methods in comparison with manual delineations for their use in delineating the heart on planning computed tomography (CT) used in radiotherapy for breast cancer. Material and methods. Automatic delineation of the heart in 15 breast cancer patients was performed by two different automatic delineation systems. The accuracy and precision of the differences between manual and automatic delineations were evaluated in terms of volume, mean dose, maximum dose and spatial distance. Two sets of manual delineations were used in the evaluation: 1) a set prior to common delineation guidelines; and 2) a second set repeated with a common set of guidelines. Results. Systematic differences between automatic and manual delineations were small for volume as well as dose. The uncertainty of the difference in volume was smaller than or similar to the inter-observer variation in manual delineations. For dose, the uncertainty was similar to manual delineations performed without common guidelines but slightly higher than the variation in manual delineations with common guidelines. Spatial differences between average manual and automatic delineations were largest at the base of the heart, where also large variations are observed in the manual delineations. Both algorithms could be improved slightly at the apex of the heart, where the variation of automatic delineation was larger than for the manual delineations. Conclusion. Automatic delineation is an equal alternative to manual delineation when compared to the inter-observer variation. The reduction in precision of measured dose was small compared to other uncertainties affecting the estimated heart dose and would for most applications be outweighed by the benefits of fully automated delineations. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. Zero problems with compositional data of physical behaviors: a comparison of three zero replacement methods.
- Author
- Rasmussen, Charlotte Lund, Palarea-Albaladejo, Javier, Johansson, Melker Staffan, Crowley, Patrick, Stevens, Matthew Leigh, Gupta, Nidhi, Karstad, Kristina, and Holtermann, Andreas
- Subjects
ACCELEROMETERS ,ALGORITHMS ,HEALTH behavior ,STATISTICS ,DATA analysis ,SEDENTARY lifestyles ,PHYSICAL activity ,DESCRIPTIVE statistics - Abstract
Background: Researchers applying compositional data analysis to time-use data (e.g., time spent in physical behaviors) often face the problem of zeros, that is, recordings of zero time spent in any of the studied behaviors. Zeros hinder the application of compositional data analysis because the analysis is based on log-ratios. One way to overcome this challenge is to replace the zeros with sensible small values. The aim of this study was to compare the performance of three existing replacement methods used within physical behavior time-use epidemiology: simple replacement, multiplicative replacement, and log-ratio expectation-maximization (lrEM) algorithm. Moreover, we assessed the consequence of choosing replacement values higher than the lowest observed value for a given behavior. Method: Using a complete dataset based on accelerometer data from 1310 Danish adults as reference, multiple datasets were simulated across six scenarios of zeros (5–30% zeros in 5% increments). Moreover, four examples were produced based on real data, in which 10% and 20% zeros were imposed and replaced using a replacement value of 0.5 min, 65% of the observation threshold, or an estimated value below the observation threshold. For the simulation study and the examples, the zeros were replaced using the three replacement methods and the degree of distortion introduced was assessed by comparison with the complete dataset. Results: The lrEM method outperformed the other replacement methods as it had the smallest influence on the structure of relative variation of the datasets. Both the simple and multiplicative replacements introduced higher distortion, particularly in scenarios with more than 10% zeros; although the latter, like the lrEM, does preserve the ratios between behaviors with no zeros. 
Conclusions: Given our findings, we encourage the use of replacement methods that preserve the relative structure of physical behavior data, as achieved by the multiplicative and lrEM replacements, and to avoid simple replacement. Moreover, we do not recommend replacing zeros with values higher than the lowest observed value for a behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
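Entry 30 compares zero-replacement strategies for compositional data; below is a minimal sketch of one standard form of the multiplicative replacement, which scales the non-zero parts so that the row total is preserved and their mutual ratios are unchanged. Real implementations (e.g. in the zCompositions R package) handle per-part detection limits; the single uniform `delta` here is a simplification.

```python
def multiplicative_replacement(row, delta):
    """Replace zeros in a composition with delta and scale the non-zero
    parts down so the row total is preserved; ratios between the
    non-zero parts are left unchanged."""
    total = sum(row)
    n_zeros = sum(1 for v in row if v == 0)
    scale = 1 - n_zeros * delta / total
    return [delta if v == 0 else v * scale for v in row]
```

For example, a day recorded as 0 min standing, 600 min sedentary and 400 min active (1000 min total), replaced with delta = 0.5 min, still sums to 1000 min and keeps the 3:2 sedentary-to-active ratio.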