Search Results (34 results)
2. Prognostic value of a novel artificial intelligence-based coronary CTA-derived ischemia algorithm among patients with normal or abnormal myocardial perfusion.
- Author
-
Bär S, Maaniitty T, Nabeta T, Bax JJ, Earls JP, Min JK, Saraste A, and Knuuti J
- Subjects
- Humans, Female, Male, Retrospective Studies, Middle Aged, Aged, Prognosis, Finland, Time Factors, Coronary Stenosis diagnostic imaging, Coronary Stenosis physiopathology, Coronary Stenosis mortality, Coronary Vessels diagnostic imaging, Coronary Vessels physiopathology, Reproducibility of Results, Risk Factors, Severity of Illness Index, Positron-Emission Tomography, Adenosine administration & dosage, Vasodilator Agents, Angina, Unstable diagnostic imaging, Angina, Unstable etiology, Angina, Unstable mortality, Angina, Unstable physiopathology, Myocardial Perfusion Imaging methods, Predictive Value of Tests, Coronary Angiography, Coronary Artery Disease diagnostic imaging, Coronary Artery Disease physiopathology, Coronary Artery Disease mortality, Coronary Circulation, Computed Tomography Angiography, Algorithms, Artificial Intelligence
- Abstract
Background: Among patients with obstructive coronary artery disease (CAD) on coronary computed tomography angiography (CTA), downstream positron emission tomography (PET) perfusion imaging can be performed to assess the presence of myocardial ischemia. A novel artificial-intelligence-guided quantitative computed tomography ischemia algorithm (AI-QCTischemia) aims to predict ischemia directly from coronary CTA images. We aimed to study the prognostic value of AI-QCTischemia among patients with obstructive CAD on coronary CTA and normal or abnormal downstream PET perfusion., Methods: AI-QCTischemia was calculated by blinded analysts among patients from the retrospective coronary CTA cohort at Turku University Hospital, Finland, with obstructive CAD on initial visual reading (diameter stenosis ≥50%) being referred for downstream 15O-H2O-PET adenosine stress perfusion imaging. All coronary arteries with their side branches were assessed by AI-QCTischemia. Absolute stress myocardial blood flow ≤2.3 ml/g/min in ≥2 adjacent segments was considered abnormal. The primary endpoint was death, myocardial infarction, or unstable angina pectoris. The median follow-up was 6.2 [IQR 4.4-8.3] years., Results: 662 of 768 (86%) patients had a conclusive AI-QCTischemia result. In patients with normal 15O-H2O-PET perfusion, an abnormal AI-QCTischemia result (n = 147/331) vs. a normal AI-QCTischemia result (n = 184/331) was associated with significantly higher crude and adjusted rates of the primary endpoint (adjusted HR 2.47, 95% CI 1.17-5.21, p = 0.018). This did not pertain to patients with abnormal 15O-H2O-PET perfusion (abnormal AI-QCTischemia result (n = 269/331) vs. normal AI-QCTischemia result (n = 62/331); adjusted HR 1.09, 95% CI 0.58-2.02, p = 0.794) (p-interaction = 0.039)., Conclusion: Among patients with obstructive CAD on coronary CTA referred for downstream 15O-H2O-PET perfusion imaging, AI-QCTischemia showed incremental prognostic value among patients with preserved perfusion by 15O-H2O-PET imaging, but not among those with reduced perfusion., Competing Interests: Declaration of competing interest Dr. Bär received research grants to the institution from Medis Medical Imaging Systems, Bangerter-Rhyner Stiftung (Basel, Switzerland) and Abbott, outside the submitted work. Dr. Saraste received consultancy fees from Astra Zeneca and Pfizer, and speaker fees from Abbott, Astra Zeneca, Janssen, Novartis and Pfizer. Dr. Bax received speaker fees from Abbott. Drs Earls and Min are employees of and hold equity in Cleerly Inc. Dr. Knuuti received consultancy fees from GE Healthcare and Synektik and speaker fees from Bayer, Lundbeck, Boehringer-Ingelheim, Pfizer and Siemens, outside of the submitted work. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose., (Copyright © 2024 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
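The adjusted hazard ratios in the record above come from Cox proportional-hazards regression. Purely as a hedged illustration of that kind of analysis, the sketch below fits a Cox model with the lifelines library on synthetic data; every variable name and value is invented and only stands in for the study's endpoint, AI-QCTischemia result, and adjustment covariates.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data (invented): follow-up years, endpoint flag,
# binary AI-QCTischemia result, and one adjustment covariate (age).
rng = np.random.default_rng(1)
n = 331
abnormal = rng.integers(0, 2, n)
age = rng.normal(65, 9, n)
years = np.clip(rng.exponential(8.0 / (1.0 + abnormal)), 0.1, 10.0)
event = (rng.random(n) < 0.25).astype(int)

df = pd.DataFrame({"years": years, "event": event,
                   "abnormal_aiqct": abnormal, "age": age})
cph = CoxPHFitter().fit(df, duration_col="years", event_col="event")
print(cph.hazard_ratios_)  # HR for abnormal_aiqct, adjusted for age
```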
3. Physiological State Evaluation in Working Environment Using Expert System and Random Forest Machine Learning Algorithm.
- Author
-
Butkevičiūtė, Eglė, Bikulčienė, Liepa, and Žvironienė, Aušra
- Subjects
MENTAL illness prevention, WORK environment, EXPERT systems, LIFESTYLES, OBESITY, WELL-being, EMPLOYEE attitudes, CHRONIC diseases, WORK-related injuries, SELF-evaluation, MOBILE apps, RANDOM forest algorithms, MACHINE learning, EVALUATION research, HUMAN body, HEALTH status indicators, DISABILITY evaluation, PHYSICAL activity, HEALTH behavior, QUESTIONNAIRES, DECISION making, DESCRIPTIVE statistics, INFANT mortality, LOGIC, ALGORITHMS, CHRONIC fatigue syndrome, COVID-19 pandemic - Abstract
Healthy lifestyle is one of the most important factors in the prevention of premature deaths, chronic diseases, productivity loss, obesity, and other economic and social problems. The workplace plays an important role in promoting the physical activity and wellbeing of employees. Previous studies have mostly relied on individual interviews and questionnaires, which provide conceptual information about an individual's health state and may vary with question formulation, specialist competence, and other aspects. In this paper, work ability is related primarily to the employee's physiological state, which comprises three separate systems: cardiovascular, muscular, and neural. Each system is assessed with several exercises or tests performed one after another. The proposed data transformation uses fuzzy logic and different membership functions with three or five thresholds, according to the analyzed physiological feature. The transformed datasets are then classified into three stages that correspond to good, moderate, and poor health condition using machine learning techniques. A three-part Random Forest method was applied, where each part corresponds to a separate system. The obtained testing accuracies were 93%, 87%, and 73% for the cardiovascular, muscular, and neural human body systems, respectively. The results indicate that the proposed work ability evaluation process may become a good tool for the prevention of possible accidents at work, chronic fatigue, or other health problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
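The pipeline in the record above, fuzzifying each physiological feature with threshold-based membership functions and then classifying with one Random Forest per body system, can be sketched as follows. The thresholds, feature names, and labels are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuzzy3(x, lo, mid, hi):
    """Three-threshold piecewise-linear (triangular) membership in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - lo) / (mid - lo), (hi - x) / (hi - mid)), 0.0, 1.0)

# Synthetic cardiovascular features (hypothetical resting/recovery heart rate);
# the study uses several tests per system and analogous muscular/neural models.
rng = np.random.default_rng(0)
raw = rng.normal([70.0, 100.0], [12.0, 15.0], size=(300, 2))
X = np.column_stack([fuzzy3(raw[:, 0], 40, 65, 110),
                     fuzzy3(raw[:, 1], 60, 95, 150)])
y = rng.integers(0, 3, 300)  # 0 good, 1 moderate, 2 poor (placeholder labels)

rf_cardio = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf_cardio.predict(X[:5]))
```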
4. Contextual patterns in mobile service usage.
- Author
-
Verkasalo, Hannu
- Subjects
MOBILE communication systems, CELL phones, ALGORITHMS, GLOBAL Positioning System, EMAIL, SOCIAL conditions in Great Britain, SOCIAL history - Abstract
Mobile services differ from other services because of their temporal and spatial attributes. Mobile services additionally differ from each other in their value-added to the end-user. Some services—such as emailing and voice—are more business oriented, whereas various free-time oriented services, such as imaging and music playback, are provided in new smartphones. The present paper studies how mobile services are used in different contexts. For this, the paper develops a specialized algorithm that works with handset-based usage data acquired straight from end-users in an established panel study process. The developed algorithm supports educated inferences about the user's context. In the present exercise, usage contexts were divided into home, office and "on the move". The algorithm is applied to exemplary data from Finland and the UK covering 324 consumers in 2006. More than 70% of contextual use cases are correctly classified based on raw data. According to the exemplary results, multimedia services in particular are used "on the move", whereas legacy mobile services experience more evenly distributed usage across all contexts. The algorithm that identifies context based on raw data provides a new angle to mobile end-user research. In the future, the accuracy of the algorithm will be improved with the integration of seamless cell-id logging and GPS data. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
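The record above does not spell the algorithm out, so the following is only a plausible sketch of handset-based context inference: label the cells seen most often overnight as "home", known workplace cells as "office", and everything else as "on the move". The two-set heuristic and all names are assumptions, not the paper's method.

```python
from collections import Counter

def likely_home_cells(night_cell_ids, k=2):
    """Heuristic: the cells observed most often overnight are 'home'."""
    return {cell for cell, _ in Counter(night_cell_ids).most_common(k)}

def classify_context(cell_id, home_cells, office_cells):
    """Map one usage event's serving cell to a coarse context label."""
    if cell_id in home_cells:
        return "home"
    if cell_id in office_cells:
        return "office"
    return "on the move"

home = likely_home_cells(["c1", "c1", "c7", "c1", "c7"])
print(classify_context("c1", home, {"c9"}))  # -> "home"
```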
5. Customer Classification and Load Profiling Method for Distribution Systems.
- Author
-
Mutanen, Antti, Ruska, Maija, Repo, Sami, and Jarventausta, Pertti
- Subjects
ELECTRIC power distribution, TEMPERATURE effect, ELECTRONIC indexes, ENERGY consumption, STATE estimation in electric power systems, RATIOMETER (Electric meter), ALGORITHMS - Abstract
In Finland, customer class load profiles are used extensively in distribution network calculation. State estimation systems, for example, use the load profiles to estimate the state of the network. Load profiles are also needed to predict future loads in distribution network planning. In general, customer class load profiles are obtained through sampling in load research projects. Currently, in Finland, customer classification is based on the uncertain customer information found in the customer information system. Customer information, such as customer type, heating solution, and tariff, is used to connect the customers with corresponding customer class load profiles. Now that automatic meter-reading systems are becoming more common, customer classification and load profiling can be done according to actual consumption data. This paper proposes the use of the ISODATA algorithm for customer classification. The proposed customer classification and load profiling method also includes temperature dependency correction and outlier filtering. The method is demonstrated in this paper by studying a set of 660 hourly metered customers. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
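ISODATA is essentially k-means extended with cluster discarding, splitting, and merging. A minimal sketch of the clustering stage on hourly load-profile vectors, omitting the paper's temperature-dependency correction and outlier filtering; all parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform

def isodata_like(profiles, k_init=10, min_size=5, merge_dist=0.5):
    """ISODATA-flavoured clustering of customer load profiles: k-means,
    discard tiny clusters, then merge near-duplicate centroids."""
    km = KMeans(n_clusters=k_init, n_init=10, random_state=0).fit(profiles)
    sizes = np.bincount(km.labels_, minlength=k_init)
    centers = km.cluster_centers_[sizes >= min_size]
    while len(centers) > 1:
        d = squareform(pdist(centers))
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] >= merge_dist:
            break
        centers[i] = (centers[i] + centers[j]) / 2.0   # merge closest pair
        centers = np.delete(centers, j, axis=0)
    return centers  # one representative load profile per customer class
```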
6. Detecting the onset of snow-melt using SSM/I data and the self-organizing map.
- Author
-
Takala, M., Pulliainen, J., Huttunen, M., and Hallikainen, M.
- Subjects
SELF-organizing maps, ARTIFICIAL neural networks, SELF-organizing systems, ALGORITHMS, ROBUST control, AUTOMATIC control systems, RADIOMETERS, RADIATION measurement instruments - Abstract
In this paper, we present an algorithm to estimate the onset of seasonal snow-melt using space-borne microwave radiometer data. We have earlier developed a simple model called the Channel Difference Algorithm (CDA) to estimate the beginning of the snow-melt. The new algorithm, the SOM Detection Algorithm (SDA), is based on the use of an artificial neural network called a Self-Organizing Map (SOM). The purpose of this research is to develop a robust and simple algorithm feasible for operative use. The algorithm is tested using SSM/I data with hydrological predictions as reference data. The reference data cover two winters, 1997 and 1998, for the boreal forest zone in Finland. The results are promising: the SDA is able to estimate the beginning of the final snow-melt well, especially when the snow water equivalent exhibits large values. The estimates can be improved further by low-pass filtering the SDA output time series. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
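A Self-Organizing Map is a grid of weight vectors trained to quantize the input space while preserving topology. Below is a minimal hand-rolled SOM trainer for radiometer feature vectors; it is a generic sketch, not the authors' operational SDA. In an SDA-like use, map units would afterwards be labelled "melting" or "dry" from the reference winters, and onset flagged once incoming observations consistently hit melt-labelled units.

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D SOM trained on feature vectors (e.g. SSM/I channel
    differences); grid size and schedules are illustrative."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        lr = lr0 * np.exp(-t / iters)                      # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                # shrinking neighbourhood
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
w = train_som(rng.normal(size=(500, 4)))
print(w.shape)  # (6, 6, 4)
```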
7. Analysis of a statistically initialized fuzzy logic scheme for classifying the severity of convective storms in Finland.
- Author
-
Rossi, Pekka J., Hasu, Vesa, Koistinen, Jarmo, Moisseev, Dmitri, Mäkelä, Antti, and Saltikoff, Elena
- Subjects
STORMS, FUZZY logic, DATA quality, ALGORITHMS, RADAR meteorology - Abstract
This paper proposes a method for classifying the severity of individual convective storms with real-time weather radar and lightning location data. The algorithm is based on a statistically initialized fuzzy logic model with human-oriented linguistic inference rules. When combined with an object-oriented convective storm tracking algorithm, the severity classification uses the past severity values in addition to the current state of the storm. Furthermore, the proposed method can be customized to correspond to the user-specific needs of different end user groups. The membership functions of the fuzzy logic model are initialized using the statistical analysis of various storm attributes derived from radar and lightning data, which potentially allows the adaptation of the model to different climates. The statistically initialized severity classification is also stable with respect to systematic errors in the measurements. To adjust the model to the Finnish climate, approximately 40 000 storms were tracked with a weather radar-based object-oriented convective storm tracking algorithm. The tracking was performed over a relatively large study area, with the composite of eight C-band weather radars, also enabling the analysis of relatively long-lived intense storms, such as mesoscale convective systems. In addition, the presented work illustrates the statistical characteristics of convective storms in Finland. The study also discusses the importance of careful data quality control when conducting a statistical analysis with an object-oriented storm tracking algorithm. Copyright © 2013 Royal Meteorological Society [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
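"Statistical initialization" of a fuzzy membership function can be read as: place the membership breakpoints at climatological percentiles of each storm attribute, which is what makes the scheme portable across climates and robust to systematic measurement offsets. A minimal sketch under that reading; percentile choices, attribute names, and the min-combination rule are illustrative, not the paper's rule base.

```python
import numpy as np

def severity_membership(climatological_samples, lo_q=75.0, hi_q=98.0):
    """Membership ramps from 0 at the lo_q percentile to 1 at the hi_q
    percentile of the local climatology of one storm attribute."""
    lo, hi = np.percentile(climatological_samples, [lo_q, hi_q])
    return lambda x: np.clip((np.asarray(x, float) - lo) / (hi - lo), 0.0, 1.0)

# Combine attribute memberships with a fuzzy AND (minimum) as one simple rule.
rng = np.random.default_rng(0)
mu_dbz = severity_membership(rng.gamma(9.0, 5.0, 40000))    # ~40k storm samples
mu_flash = severity_membership(rng.gamma(2.0, 3.0, 40000))
severity = np.minimum(mu_dbz(55.0), mu_flash(12.0))
print(severity)
```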
8. The Entanglement of Dialectal Variation and Speaker Normalization.
- Author
-
Rankinen, Wil and de Jong, Kenneth
- Subjects
- *VOWELS, *LINGUISTICS, *PHONETICS, *ETHNIC groups, *ALGORITHMS, PHYSIOLOGICAL aspects of speech - Abstract
This paper explores the relationship between speaker normalization and dialectal identity in sociolinguistic data, examining a database of vowel formants collected from 88 monolingual American English speakers in Michigan's Upper Peninsula. Audio recordings of Finnish- and Italian-heritage American English speakers reading a passage and a word list were normalized using two normalization procedures. These algorithms are based on different concepts of normalization: Lobanov, which models normalization as based on experience with individual talkers, and Labov ANAE, which models normalization as based on experience with scale-factors inherent in acoustic resonators of all kinds. The two procedures yielded different results; while the Labov ANAE method reveals a cluster shifting of low and back vowels that correlated with heritage, the Lobanov procedure seems to eliminate this sociolinguistic variation. The difference between the two procedures lies in how they treat relations between formant changes, suggesting that dimensions of variation in the vowel space may be treated differently by different normalization procedures, raising the question of how anatomical variation and dialectal variation interact in the real world. The structure of the sociolinguistic effects found with the Labov ANAE normalized data, but not in the Lobanov normalized data, suggests that the Lobanov normalization does over-normalize formant measures and remove sociolinguistically relevant information. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
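The two normalization concepts in the record above differ in a way that is easy to state in code: Lobanov standardizes each formant within a speaker (removing per-formant means and variances), while the ANAE approach applies one multiplicative scale factor per speaker. A sketch under the usual formulations (implementations differ in which tokens enter the grand mean):

```python
import numpy as np

def lobanov(speaker_formants):
    """Lobanov normalization: z-score F1 and F2 within one speaker.
    `speaker_formants` is an (n_tokens, 2) array for a single talker."""
    f = np.asarray(speaker_formants, float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

def anae_scale_factor(speaker_formants, grand_log_mean):
    """ANAE-style normalization: a single multiplicative factor per speaker
    derived from mean log formant values, so vowel-space shape is preserved
    and only overall vocal-tract scale is removed."""
    s = np.log(np.asarray(speaker_formants, float)).mean()
    return np.exp(grand_log_mean - s)

# grand_log_mean is computed once over ALL speakers' pooled formants:
# grand_log_mean = np.log(all_formants).mean()
```

Because Lobanov subtracts per-formant, per-speaker means, any group-level offset shared by a heritage community is absorbed into those means, which is one way to read the paper's finding that it over-normalizes.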
9. Novel estimation of aerosol processes with particle size distribution measurements: a case study with TOMAS algorithm.
- Author
-
McGuffin, Dana L., Huang, Yuanlong, Flagan, Richard C., Petäjä, Tuukka, Ydstie, B. Erik, and Adams, Peter J.
- Subjects
- *PARTICLE size determination, *AEROSOLS, *ALGORITHMS, *ATMOSPHERIC aerosols, *CHEMICAL models, *PARTICULATE matter - Abstract
Atmospheric aerosol microphysical processes are a significant source of uncertainty in predicting climate change. Specifically, aerosol nucleation, emissions, and growth rates, which are simulated in chemical transport models to predict the particle size distribution, are not understood well. However, long-term size distribution measurements made at several ground-based sites across Europe implicitly contain information about the processes that created those size distributions. This work aims to extract that information by developing and applying an inverse technique to constrain aerosol emissions as well as nucleation and growth rates based on hourly size distribution measurements. We developed an inverse method based upon process control theory into an online estimation technique to scale aerosol emissions, growth, and nucleation so that the model-measurement bias in three measured aerosol properties exponentially decays. The properties, which are calculated from the measured and predicted size distributions, used to constrain aerosol nucleation, emission, and growth rates are the number of particles with diameter between 3 nm and 6 nm, the number with diameter greater than 10 nm, and the total dry volume of aerosol (N3-6, N10, Vdry), respectively. In this paper, we focus on developing and applying the estimation methodology in a zero-dimensional "box" model as a proof-of-concept before applying it to a three-dimensional simulation in subsequent work. The methodology is first tested on a dataset of synthetic and perfect measurements that span diverse environments in which the true particle emissions, growth, and nucleation rates are known. The inverse technique accurately estimates the aerosol microphysical process rates with an average and maximum error of 2% and 13%, respectively. Next, we investigate the effect that measurement noise has on the estimated rates. The method is robust to typical instrument noise in the aerosol properties as there is a negligible increase in bias of the estimated process rates. Finally, the methodology is applied to long-term datasets of in-situ size distribution measurements in Western Europe from May 2006 through June 2007. At Melpitz, Germany and Hyytiälä, Finland, the average diurnal profiles of estimated 3 nm particle formation rates are reasonable, having peaks near noon local time with average peak values of 1 and 0.15 cm⁻³ s⁻¹, respectively. The normalized absolute error in estimated N3-6, N10, and Vdry at three European measurement sites is less than 15%, showing that the estimation framework developed here has potential to decrease model-measurement bias while constraining uncertain aerosol microphysical processes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
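The core control idea in the record above is to scale each uncertain process rate so that the corresponding model-measurement bias decays exponentially. A conceptual one-step sketch of that feedback, not the authors' TOMAS implementation; the pairing of scaling factors with properties (nucleation with N3-6, emissions with N10, growth with Vdry) follows the abstract, everything else is illustrative.

```python
import numpy as np

def rate_scaling_step(theta, modeled, measured, dt, tau=3600.0):
    """Multiplicatively adjust a process-rate scaling factor so that the
    log model-measurement bias in the paired aerosol property decays
    roughly exponentially with time constant tau."""
    bias = np.log(modeled / measured)
    return theta * np.exp(-bias * dt / tau)

theta_nuc = 1.0
for modeled, measured in [(120.0, 90.0), (115.0, 92.0), (108.0, 95.0)]:
    theta_nuc = rate_scaling_step(theta_nuc, modeled, measured, dt=3600.0)
print(theta_nuc)  # drifts below 1: the model overpredicts N3-6, so scale down
```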
10. Automatic stem mapping by merging several terrestrial laser scans at the feature and decision levels.
- Author
-
Liang X and Hyyppä J
- Subjects
- Automation, Finland, Image Processing, Computer-Assisted, Algorithms, Decision Making, Lasers, Plant Stems anatomy & histology, Trees anatomy & histology
- Abstract
Detailed up-to-date ground reference data have become increasingly important in quantitative forest inventories. Field reference data are conventionally collected at the sample plot level by means of manual measurements, which are both labor-intensive and time-consuming. In addition, the number of attributes collected from the tree stem is limited. More recently, terrestrial laser scanning (TLS), using both single-scan and multi-scan techniques, has proven to be a promising solution for efficient stem mapping at the plot level. In the single-scan method, the laser scanner is placed at the center of the plot, creating only one scan, and all trees are mapped from the single-scan point cloud. Consequently, the occlusion of stems increases as the range of the scanner increases, depending on the forest's attributes. In the conventional multi-scan method, several scans are made simultaneously inside and outside of the plot to collect point clouds representing all trees within the plot, and these scans are accurately co-registered by using artificial reference targets manually placed throughout the plot. The additional difficulty of applying the multi-scan method is due to the point-cloud registration of several scans not being fully automated yet. This paper proposes a multi-single-scan (MSS) method to map the sample plot. The method does not require artificial reference targets placed on the plot or point-level registration. The MSS method is based on the fully automated processing of each scan independently and on the merging of the stem positions automatically detected from multiple scans to accurately map the sample plot. The proposed MSS method was tested on five dense forest plots. The results show that the MSS method significantly improves the stem-detection accuracy compared with the single-scan approach and achieves a mapping accuracy similar to that achieved with the multi-scan method, without the need for the point-level registration.
- Published
- 2013
- Full Text
- View/download PDF
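The decision-level part of the MSS method described above amounts to pooling stem positions detected independently in each scan and fusing detections of the same tree. A greedy, distance-based sketch of that fusion step; the detection stage, scan alignment, and the paper's actual merging rule are not shown.

```python
import numpy as np

def merge_stem_maps(scan_maps, merge_radius=0.5):
    """Pool stem (x, y) positions from several scans and merge detections
    closer than merge_radius metres into single stems (greedy averaging)."""
    points = np.vstack(scan_maps)          # all detections, all scans
    merged = []
    used = np.zeros(len(points), bool)
    for i in range(len(points)):
        if used[i]:
            continue
        close = np.linalg.norm(points - points[i], axis=1) < merge_radius
        close &= ~used
        merged.append(points[close].mean(axis=0))
        used |= close
    return np.array(merged)

scan1 = np.array([[0.0, 0.0], [5.0, 5.0]])
scan2 = np.array([[0.1, -0.1], [9.0, 1.0]])
print(merge_stem_maps([scan1, scan2]))  # 3 stems: one merged pair + 2 singles
```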
11. First principles modeling of nonlinear incidence rates in seasonal epidemics.
- Author
-
Ponciano JM and Capistrán MA
- Subjects
- Child, Cohort Studies, Computational Biology, Databases, Factual, Finland, Gambia, Humans, Measles epidemiology, Models, Statistical, Research Design, Respiratory Syncytial Virus Infections epidemiology, Seasons, Stochastic Processes, Algorithms, Epidemics, Epidemiologic Methods, Nonlinear Dynamics
- Abstract
In this paper we used a general stochastic processes framework to derive from first principles the incidence rate function that characterizes epidemic models. We investigate a particular case, the Liu-Hethcote-van den Driessche (LHD) incidence rate function, which results from modeling the number of successful transmission encounters as a pure birth process. This derivation also takes into account heterogeneity in the population with regard to the per individual transmission probability. We adjusted a deterministic SIRS model with both the classical and the LHD incidence rate functions to time series of the number of children infected with respiratory syncytial virus in Banjul, Gambia and Turku, Finland. We also adjusted a deterministic SEIR model with both incidence rate functions to the famous measles data sets from the UK cities of London and Birmingham. Two lines of evidence supported our conclusion that the model with the LHD incidence rate may very well be a better description of the seasonal epidemic processes studied here. First, our model was repeatedly selected as best according to two different information criteria and two different likelihood formulations. The second line of evidence is qualitative in nature: contrary to what the SIRS model with classical incidence rate predicts, the solution of the deterministic SIRS model with LHD incidence rate will reach either the disease free equilibrium or the endemic equilibrium depending on the initial conditions. These findings along with computer intensive simulations of the models' Poincaré map with environmental stochasticity contributed to attain a clear separation of the roles of the environmental forcing and the mechanics of the disease transmission in shaping seasonal epidemics dynamics.
- Published
- 2011
- Full Text
- View/download PDF
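A commonly cited form of the Liu-Hethcote-van den Driessche nonlinear incidence is βI^pS/(1 + αI^q); assuming that form (the paper's exact parameterization may differ), a minimal SIRS integration with scipy looks like this, with all parameter values invented for illustration.

```python
from scipy.integrate import solve_ivp

def sirs_lhd(t, y, beta, alpha, p, q, gamma, xi):
    """SIRS with LHD-style nonlinear incidence beta*I**p*S/(1 + alpha*I**q);
    classical mass action is the special case p = 1, alpha = 0."""
    S, I, R = y
    inc = beta * I**p * S / (1.0 + alpha * I**q)
    return [-inc + xi * R,          # susceptibles: waning immunity refills S
            inc - gamma * I,        # infectives
            gamma * I - xi * R]     # recovered

sol = solve_ivp(sirs_lhd, (0, 365), [0.99, 0.01, 0.0],
                args=(0.4, 10.0, 1.2, 2.0, 0.1, 0.01), dense_output=True)
print(sol.y[:, -1])  # state after one year
```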
12. Evaluation of forest nutrition based on large-scale foliar surveys: are nutrition profiles the way of the future?
- Author
-
Luyssaert S, Sulkava M, Raitio H, and Hollmén J
- Subjects
- Data Collection, Ecosystem, Finland, Minerals analysis, Algorithms, Environmental Monitoring methods, Micronutrients analysis, Trees
- Abstract
This paper introduces the use of nutrition profiles as a first step in the development of a concept that is suitable for evaluating forest nutrition on the basis of large-scale foliar surveys. The nutrition profile of a tree or stand was defined as its nutrient status, which accounts for all element concentrations, contents and interactions between two or more elements. A nutrition profile therefore overcomes the shortcomings associated with the commonly used concepts for evaluating forest nutrition. Nutrition profiles can be calculated by means of a neural network, i.e. a self-organizing map, and an agglomerative clustering algorithm with pruning. As an example, nutrition profiles were calculated to describe the temporal variation in the mineral composition of Scots pine and Norway spruce needles in Finland between 1987 and 2000. The temporal trends in the frequency distribution of the nutrition profiles of Scots pine indicated that, between 1987 and 2000, the N, S, P, K, Ca, Mg and Al concentrations decreased, whereas the needle mass (NM) increased or remained unchanged. As there were no temporal trends in the frequency distribution of the nutrition profiles of Norway spruce, the mineral composition of Norway spruce needles did not change over this period. Interpretation of the (lack of) temporal trends was outside the scope of this example. However, nutrition profiles will prove to be a new and better concept for the evaluation of the mineral composition in large-scale surveys only when a biological interpretation of the nutrition profiles can be provided.
- Published
- 2004
- Full Text
- View/download PDF
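The second stage of the pipeline above (after SOM training) is agglomerative clustering of the map's node vectors into a small set of profiles. A minimal sketch with scipy's hierarchical clustering; the SOM training and the paper's pruning step are not shown, and the Ward linkage and profile count are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def nutrition_profiles(som_node_vectors, n_profiles=8):
    """Cluster trained SOM node vectors (element concentrations and needle
    mass per node) into a handful of nutrition profiles."""
    Z = linkage(som_node_vectors, method="ward")
    return fcluster(Z, t=n_profiles, criterion="maxclust")

rng = np.random.default_rng(0)
labels = nutrition_profiles(rng.normal(size=(100, 8)))  # 100 nodes, 8 elements
print(np.bincount(labels)[1:])  # profile sizes (fcluster labels start at 1)
```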
13. Extraction of full absorption peaks in airborne gamma-spectrometry by filtering techniques coupled with a study of the derivatives. Comparison with the window method.
- Author
-
Guillot L
- Subjects
- Cesium Radioisotopes analysis, Finland, Humans, Potassium Radioisotopes analysis, Radiation Monitoring instrumentation, Radiation Monitoring standards, Spectrometry, Gamma instrumentation, Spectrometry, Gamma standards, Thorium analysis, Uranium analysis, Air Pollution, Radioactive analysis, Algorithms, Radiation Monitoring methods, Signal Processing, Computer-Assisted, Software Validation, Spectrometry, Gamma methods
- Abstract
In this paper, an adaptation of a spectral profile analysis method, currently used in high-resolution spectrometry, to airborne gamma measurements is presented. A new algorithm has been developed for extraction of full absorption peaks by studying the variations in the spectral profile of data recorded with large-volume NaI detectors (16 l) with a short sampling time (2 s). The use of digital filters, taking into consideration the characteristics of the absorption peaks, significantly reduced the counting fluctuations, making detection possible based on study of the first and second derivatives. The absorption peaks are then obtained by modelling, followed by subtraction of the Compton continuum in the detection window. Compared to the conventional stripping ratio method, spectral profile analysis offers similar performance for the natural radioelements. The 137Cs 1SD detection limit is approximately 1200 Bq/m2 in a natural background of 200 Bq/kg 40K, 33 Bq/kg 238U and 33 Bq/kg 232Th. At low energy the very high continuum leads to detection limits similar to those obtained by the window method, but the results obtained are more reliable. In the presence of peak overlaps, however, analysis of the spectral profile alone is not sufficient to separate the peaks, and further processing is necessary. Within the framework of environmental monitoring studies, spectral profile analysis is of great interest because it does not require any assumptions about the nature of the nuclides. The calculation of the concentrations from the results obtained is simple and reliable, since only the full absorption contributions are taken into consideration. A quantitative estimate of radioactive anomalies can thus be obtained rapidly.
- Published
- 2001
- Full Text
- View/download PDF
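Filtering followed by a study of the derivatives, as in the record above, is a generic technique: smooth the spectrum, estimate its second derivative, and locate peak centres where the curvature is most negative. A short sketch with scipy (window length, polynomial order, and the prominence criterion are illustrative, not the paper's filters).

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def detect_peaks(spectrum, window=11, poly=3):
    """Smooth a NaI gamma spectrum and locate full-absorption peak centres
    as pronounced minima of the second derivative."""
    smooth = savgol_filter(spectrum, window, poly)
    d2 = savgol_filter(spectrum, window, poly, deriv=2)
    # Maxima of -d2 correspond to strongly negative curvature (peak centres).
    idx, _ = find_peaks(-d2, prominence=np.std(d2))
    return idx, smooth
```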
14. PHASED MULTI-TARGET AREAL DEVELOPMENT COMPETITIONS: ALGORITHMS FOR COMPETITOR ALLOCATION.
- Author
-
Lahdenperä, Pertti
- Subjects
REAL estate development, ALGORITHMS, ECONOMIC competition, URBAN planning, HOUSING development, PUBLIC lands
- Published
- 2009
- Full Text
- View/download PDF
15. The Workforce Scheduling Process Using the PEAST Algorithm.
- Author
-
Kyngäs, Nico R. M., Nurmi, Kimmo J., and Kyngäs, Jari R.
- Subjects
LABOR supply, PRIVATE companies, METAHEURISTIC algorithms, COMPUTATIONAL intelligence, ALGORITHMS - Abstract
Workforce scheduling has become increasingly important for both the public sector and private companies. Good rosters have many benefits for an organization, such as lower costs, more effective utilization of resources and fairer workloads and distribution of shifts. This paper presents a framework and an algorithm that have been successfully used to model and solve workforce scheduling problems in Finnish companies. The algorithm has been integrated into market-leading workforce management software in Finland. [ABSTRACT FROM AUTHOR]
- Published
- 2013
16. Real-world data on diffuse large B-cell lymphoma in 2010-2019: usability of large data sets of Finnish hospital data lakes.
- Author
-
Tuominen, Samuli, Uusi-Rauva, Kristiina, Blom, Tea, Jyrkkiö, Sirkku, Tuppurainen, Kaisa, and Alanne, Erika
- Subjects
THERAPEUTIC use of antineoplastic agents, HOSPITALS, ACQUISITION of data, B cell lymphoma, RETROSPECTIVE studies, SURVIVAL analysis (Biometry), RESEARCH funding, ALGORITHMS - Abstract
Background: Real-world data on diffuse large B-cell lymphoma (DLBCL) have remained incomplete. In Finland, health record data originally recorded in different hospital data record systems are collectively available via data lake technology, enabling efficient extraction and analysis of large data sets. The usability of Finnish data lake data in the assessment of DLBCL was evaluated. Methods: Adult DLBCL patients diagnosed between 2010 and 2019, with a home municipality in the Hospital District of Southwest Finland and data available in the respective data lake, were included. Results: The algorithmic determination of treatment lines and respective survival was successful. Patient characterization was feasible, albeit partly incomplete because of limited data content, availability and coverage. Stage, International Prognostic Index and cell of origin were available for 63.0%, 68.3% and 28.4% of patients, respectively. Genetic aberrations were not structurally available or feasible to extract without a manual chart review. Conclusion: Finnish data lakes represent an efficient way to analyze large DLBCL data sets. The current study provides a tool for developing recording practices in routine care. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. MARITIME TRAFFIC EXTERNALITIES IN THE GULF OF FINLAND UNTIL 2030.
- Author
-
Kalli, Juha, Saikku, Reetta, Repka, Sari, and Tapaninen, Ulla
- Subjects
- *ENVIRONMENTAL impact analysis, *SUSTAINABLE development, *ALGORITHMS, *MARITIME shipping, *TRANSPORTATION costs, *SHIPBORNE automatic identification systems, *CARBON dioxide mitigation - Abstract
Maritime traffic in the Gulf of Finland has grown remarkably during the 2000s. This increase has an impact on the environment and exposes it to risks. These problems should be controlled to guarantee sustainable development and the welfare of inhabitants in the area. A method for estimating the impact of ship-originated air emissions on the environment is to calculate their environmental externalities, which are a part of the total marginal social costs of shipping. The internalization of externalities as a control method of transport would comply with the polluter pays principle and act as a fair traffic control method between transport modes. In this paper, we present the results of CO2, NOx, SOx and PM emissions originating from ships and their externalities in the Gulf of Finland up to 2015. The calculation algorithm developed for this study produces emission estimates per annum and converts them into externalities. We focus on passenger, tanker, general cargo, Ro-Ro, container and bulk vessel ship types, representing almost 90% of the total NOx emissions of shipping in the area. Scenario modelling is a method for estimating the effects of forthcoming or planned regulations and helps with targeting emission abatement actions to maximize their profit. The results of the calculation algorithm show that externalities can be used as a consultative tool for transport-related decision-making. The costs are given at the price levels of the year 2000. The total external cost of ship-originated CO2, NOx, SOx and PM emissions in the Gulf of Finland was almost €175 million in 2007. Due to increased traffic volumes, these costs will increase to nearly €214 million in 2015. The majority of externalities are produced by CO2 emissions. If we deduct CO2 externalities from the results, we get total externalities of €57 million in 2007. Eight years later (2015), these externalities would be 28% lower, at €41 million. This would be a result of regulation reducing the sulphur content of marine fuels. Regulating SOx and PM emissions will slow down the increasing trend of shipborne externalities in the Gulf of Finland; however, the externalities are still growing. In order to achieve a downward trend, the two major compounds resulting in externalities must be reduced, which requires strict actions to lower shipborne CO2 and NOx emissions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
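The conversion step described above (annual emissions into externalities) is, at its core, a sum of tonnage times a pollutant-specific unit cost. A toy sketch of that arithmetic; the unit costs and emission totals below are placeholders, not the paper's year-2000 valuations.

```python
# Placeholder unit external costs in EUR per tonne (NOT the paper's values).
UNIT_COST_EUR_PER_TONNE = {"CO2": 20.0, "NOx": 3000.0, "SOx": 2500.0, "PM": 8000.0}

def externalities_eur(annual_emissions_tonnes):
    """Total external cost: sum over pollutants of tonnes x unit cost."""
    return sum(t * UNIT_COST_EUR_PER_TONNE[p]
               for p, t in annual_emissions_tonnes.items())

# Invented annual totals for one scenario year.
print(externalities_eur({"CO2": 5.0e6, "NOx": 1.2e5, "SOx": 4.0e4, "PM": 5.0e3}))
```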
18. Customized frequent patterns mining algorithms for enhanced Top-Rank-K frequent pattern mining.
- Author
-
Abdelaal, Areej Ahmad, Abed, Sa'ed, Al-Shayeji, Mohammad, and Allaho, Mohammad
- Subjects
- *SEQUENTIAL pattern mining, *ALGORITHMS, *DATA mining - Abstract
• Customizing general frequent pattern mining algorithms to efficient Top-Rank-K ones. • Employing Dynamic Minimum Support Threshold Raising strategy to ensure efficiency. • Outperforming BTK algorithm with a 90% runtime improvement. • Experiments on real and synthetic datasets including Connect and Retail. Mining frequent patterns (FP) is an essential task in data mining. The parameter required for this task is typically the minimum support threshold. Tuning this parameter to a suitable value is a difficult task, especially for inexperienced users. Thus, the Top-Rank-K frequent patterns mining problem was introduced. It requires the user to input an easily evaluated parameter, K, in order to obtain the set of all frequent patterns from the most frequent to the Kth rank of frequency. In this paper, we customize three general Frequent Pattern Mining (FPM) algorithms, namely FIN, PrePost, and PrePost+, to develop specialized Top-Rank-K FP mining algorithms: TK_FIN, TK_PrePost, and TK_PrePost+. The Dynamic Minimum Support Raising strategy is applied on these algorithms to ensure efficiency. Experimentally, we evaluate the performance of these algorithms against an original, efficient, Top-Rank-K algorithm, BTK. The three presented algorithms perform 90% better than BTK in most of the experiments, with respect to runtime. Between the three Top-Rank-K FPM algorithms we present, TK_FIN achieves the best performance from both runtime and memory consumption perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
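To make the Top-Rank-K problem statement concrete: keep every itemset whose support falls within the K highest distinct support values. The toy enumerator below does this by brute force; the paper's algorithms (TK_FIN and friends) avoid full enumeration by raising the minimum-support threshold dynamically as the rank table fills.

```python
from itertools import combinations
from collections import Counter

def top_rank_k_patterns(transactions, K, max_len=2):
    """Brute-force Top-Rank-K miner: enumerate itemsets up to max_len,
    rank by support, keep itemsets within the K highest DISTINCT supports."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for n in range(1, max_len + 1):
            counts.update(combinations(items, n))
    ranks = sorted(set(counts.values()), reverse=True)[:K]
    threshold = ranks[-1]
    return {p: s for p, s in counts.items() if s >= threshold}

print(top_rank_k_patterns([["a", "b"], ["a", "b", "c"], ["a", "c"], ["b"]], K=2))
```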
19. HVDC loss factors in the Nordic power market.
- Author
-
Tosatto, Andrea and Chatzivasileiadis, Spyros
- Subjects
- *ELECTRICITY markets, *COST functions, *ALGORITHMS - Abstract
• Linear loss factors penalize one HVDC line over the other. • Piecewise-linear loss factors better represent quadratic loss functions. • Piecewise-linear loss factors allow for a better distribution of power flows. • HVDC loss factors only disproportionately increase AC losses. • HVDC and AC loss factors lead to losses minimization. In the Nordic countries (Sweden, Norway, Finland and Denmark), many interconnectors are formed by long High-Voltage Direct-Current (HVDC) lines. Every year, the operation of such interconnectors costs millions of Euros to Transmission System Operators (TSOs) due to the high amount of losses that are not considered while clearing the market. To counteract this problem, Nordic TSOs (Svenska kraftnät - Sweden, Statnett - Norway, Fingrid - Finland, Energinet - Denmark) have proposed to introduce linear HVDC loss factors in the market clearing algorithm. The assessment of such a measure requires a detailed model of the system under investigation. In this paper we develop and introduce a detailed market model of the Nordic countries and we analyze the impact of different loss factor formulations. We show that linear loss factors penalize one HVDC line over the other, and this can jeopardize revenues of merchant HVDC lines. In this regard, we propose piecewise-linear loss factors: a simple to implement but highly effective solution. Moreover, we demonstrate how the introduction of only HVDC loss factors is a partial solution, since it disproportionately increases the AC losses. Our results show that the inclusion of AC loss factors can eliminate this problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
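The piecewise-linear idea above is worth a worked sketch: HVDC losses grow roughly quadratically with flow, so chord slopes over successive flow segments give per-segment loss factors that rise with loading. Because the quadratic is convex, the slopes are increasing and a market-clearing LP fills segments in order without integer variables. All numbers below are illustrative.

```python
import numpy as np

def piecewise_loss_factors(p_max, c, segments=4):
    """Approximate quadratic losses L(p) = c * p**2 with `segments` linear
    pieces; each segment's loss factor is the slope of the chord."""
    edges = np.linspace(0.0, p_max, segments + 1)
    losses = c * edges**2
    slopes = np.diff(losses) / np.diff(edges)   # MW loss per MW flow, per segment
    return edges, slopes

def losses_from_segments(p, edges, slopes):
    """Evaluate the piecewise-linear loss approximation at flow p (MW)."""
    seg_fill = np.clip(p - edges[:-1], 0.0, np.diff(edges))
    return float(np.sum(seg_fill * slopes))

edges, slopes = piecewise_loss_factors(p_max=700.0, c=4e-5, segments=4)
print(losses_from_segments(350.0, edges, slopes))  # exact: 4e-5 * 350**2 = 4.9 MW
```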
20. A benchmark case study for seismic event relative location.
- Author
-
Gibbons, S J, Kværna, T, Tiira, T, and Kozlovskaya, E
- Subjects
SEISMIC event location, ALGORITHMS, SIGNAL-to-noise ratio - Abstract
'Precision seismology' encompasses a set of methods which use differential measurements of time-delays to estimate the relative locations of earthquakes and explosions. Delay-times estimated from signal correlations often allow far more accurate estimates of one event location relative to another than is possible using classical hypocentre determination techniques. Many different algorithms and software implementations have been developed and different assumptions and procedures can often result in significant variability between different relative event location estimates. We present a Ground Truth (GT) dataset of 55 military surface explosions in northern Finland in 2007 that all took place within 300 m of each other. The explosions were recorded with a high signal-to-noise ratio to distances of about 2°, and the exceptional waveform similarity between the signals from the different explosions allows for accurate correlation-based time-delay measurements. With exact coordinates for the explosions, we are able to assess the fidelity of relative location estimates made using any location algorithm or implementation. Applying double-difference calculations using two different 1-D velocity models for the region results in hypocentre-to-hypocentre distances which are too short and it is clear that the wavefield leaving the source region is more complicated than predicted by the models. Using the GT event coordinates, we are able to measure the slowness vectors associated with each outgoing ray from the source region. We demonstrate that, had such corrections been available, a significant improvement in the relative location estimates would have resulted. In practice we would of course need to solve for event hypocentres and slowness corrections simultaneously, and significant work will be needed to upgrade relative location algorithms to accommodate uncertainty in the form of the outgoing wavefield. We present this data set, together with GT coordinates, raw waveforms for all events on six regional stations, and tables of time-delay measurements, as a reference benchmark by which relative location algorithms and software can be evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
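The differential time-delay measurements at the heart of the benchmark above are commonly obtained from the peak of a waveform cross-correlation. A minimal sketch for one station and one event pair; real pipelines add sub-sample interpolation and quality weighting before the delays enter a double-difference system.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def delay_seconds(wave_a, wave_b, fs):
    """Relative delay (s) between two highly similar waveforms, e.g. one
    regional station recording two co-located explosions, taken from the
    integer-lag peak of their cross-correlation."""
    cc = correlate(wave_a, wave_b, mode="full")
    lags = correlation_lags(len(wave_a), len(wave_b), mode="full")
    return lags[np.argmax(cc)] / fs
```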
21. Clouds over Hyytiälä, Finland: an algorithm to classify clouds based on solar radiation and cloud base height measurements.
- Author
-
Ylivinkka, Ilona, Kaupinmäki, Santeri, Virman, Meri, Peltola, Maija, Taipale, Ditte, Petäjä, Tuukka, Kerminen, Veli-Matti, Kulmala, Markku, and Ezhova, Ekaterina
- Subjects
SOLAR radiation, ALGORITHMS, HEIGHT measurement, GLOBAL radiation, CUMULUS clouds, SOLAR chimneys - Abstract
We developed a simple algorithm to classify clouds based on global radiation and cloud base height measured by pyranometer and ceilometer, respectively. We separated clouds into seven different classes (stratus, stratocumulus, cumulus, nimbostratus, altocumulus + altostratus, cirrus + cirrocumulus + cirrostratus and clear sky + cirrus). We also included classes for cumulus and cirrus clouds causing global radiation enhancement, and we classified multilayered clouds, when captured by the ceilometer, based on their height and characteristics (transmittance, patchiness and uniformity). The overall performance of the algorithm was nearly 70 % when compared with classification by an observer using total-sky images. The performance was best for clouds having well-distinguishable effects on solar radiation: nimbostratus clouds were classified correctly in 100 % of the cases. The worst performance corresponds to cirriform clouds (50 %). Although the overall performance of the algorithm was good, it is likely to miss the occurrences of high and multilayered clouds. This is due to the technical limits of the instrumentation: the vertical detection range of the ceilometer and occultation of the laser pulse by the lowest cloud layer. We examined the use of clearness index, which is defined as a ratio between measured global radiation and modeled radiation at the top of the atmosphere, as an indicator of clear-sky conditions. Our results show that cumulus, altocumulus, altostratus and cirriform clouds can be present when the index indicates clear-sky conditions. Those conditions have previously been associated with enhanced aerosol formation under clear skies. This is an important finding especially in the case of low clouds coupled to the surface, which can influence aerosol population via aerosol–cloud interactions. Overall, caution is required when the clearness index is used in the analysis of processes affected by partitioning of radiation by clouds. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
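The clearness index discussed above is the ratio of measured global radiation to modeled top-of-atmosphere radiation on a horizontal surface. A minimal sketch; the Sun-Earth distance (eccentricity) term is the standard textbook approximation and may differ from the paper's exact formula.

```python
import numpy as np

S0 = 1361.0  # total solar irradiance, W m^-2

def clearness_index(ghi, cos_zenith, day_of_year):
    """Measured global horizontal irradiance over TOA horizontal irradiance;
    NaN when the sun is below the horizon."""
    ecc = 1.0 + 0.033 * np.cos(2.0 * np.pi * day_of_year / 365.0)
    toa = S0 * ecc * np.maximum(cos_zenith, 0.0)
    return np.where(toa > 0, ghi / np.where(toa > 0, toa, np.inf), np.nan)

print(clearness_index(420.0, 0.5, 172))  # midsummer, sun at 60 deg zenith
```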
22. Enlarging the Severe Hail Database in Finland by Using a Radar-Based Hail Detection Algorithm and Email Surveys to Limit Underreporting and Population Biases.
- Author
-
Tuovinen, Jari-Petteri, Hohti, Harri, and Schultz, David M.
- Subjects
HAIL, CLIMATOLOGY, EMAIL, TELEPHONE calls, INDUSTRIAL location, FAKE news, POPULATION density, DATABASES - Abstract
Collecting hail reports to build a climatology is challenging in a sparsely populated country such as Finland. To expand an existing database, a new approach involving daily verification of a radar- and numerical weather prediction–based hail detection algorithm was trialed during late May–August for the 10-yr period, 2008–17. If the algorithm suggested a high likelihood of hail from each identified convective cell in specified locations, then an email survey was sent to people and businesses in these locations. Telephone calls were also used occasionally. Starting from 2010, the experiment was expanded to include trained storm spotters performing the surveys (project called TATSI). All the received hail reports were documented (severe or ≥2 cm, and nonsevere, excluding graupel), giving a more complete depiction of hail occurrence in Finland. In combination with reports from the general public, news, and social media, our hail survey resulted in a 292% increase in recorded severe hail days and a 414% increase in observed severe hail cases compared to a climatological study (1930–2006). More than 2200 email surveys were sent, and responses to these surveys accounted for 53% of Finland's severe hail cases during 2008–17. Most of the 2200 emails were sent into rural locations with low population density. These additional hail reports allowed problems with the initial radar-based hail detection algorithm to be identified, leading to the introduction of a new hail index in 2009 with improved detection and nowcasting of severe hail. This study shows a way to collect hail reports in a sparsely populated country to mitigate underreporting and population biases. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Development and validation of a supervised machine learning radar Doppler spectra peak-finding algorithm.
- Author
-
Kalesse, Heike, Vogl, Teresa, Paduraru, Cosmin, and Luke, Edward
- Subjects
DOPPLER radar, SUPERVISED learning, MACHINE learning, ATMOSPHERIC radiation measurement, ALGORITHMS - Abstract
In many types of clouds, multiple hydrometeor populations can be present at the same time and height. Studying the evolution of these different hydrometeors in a time–height perspective can give valuable information on cloud particle composition and microphysical growth processes. However, as a prerequisite, the number of different hydrometeor types in a certain cloud volume needs to be quantified. This can be accomplished using cloud radar Doppler velocity spectra from profiling cloud radars if the different hydrometeor types have sufficiently different terminal fall velocities to produce individual Doppler spectrum peaks. Here we present a newly developed supervised machine learning radar Doppler spectra peak-finding algorithm (named PEAKO). In this approach, three adjustable parameters (spectrum smoothing span, prominence threshold, and minimum peak width at half-height) are varied to obtain the set of parameters which yields the best agreement of user-classified and machine-marked peaks. The algorithm was developed for Ka-band ARM zenith-pointing radar (KAZR) observations obtained in thick snowfall systems during the Atmospheric Radiation Measurement Program (ARM) mobile facility AMF2 deployment at Hyytiälä, Finland, during the Biogenic Aerosols – Effects on Clouds and Climate (BAECC) field campaign. The performance of PEAKO is evaluated by comparing its results to existing Doppler peak-finding algorithms. The new algorithm consistently identifies Doppler spectra peaks and outperforms other algorithms by reducing noise and increasing temporal and height consistency in detected features. In the future, the PEAKO algorithm will be adapted to other cloud radars and other types of clouds consisting of multiple hydrometeors in the same cloud volume. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
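PEAKO's three tunable parameters map naturally onto a standard peak finder: smoothing span, prominence threshold, and minimum peak width (scipy's find_peaks measures width at half prominence by default, close to the paper's "width at half-height"). A sketch for one Doppler spectrum; the parameter values shown are placeholders for what PEAKO would learn from user-classified peaks.

```python
import numpy as np
from scipy.signal import find_peaks

def find_spectrum_peaks(doppler_spectrum_db, span=5, prominence_db=1.0, min_width=2):
    """Smooth one cloud-radar Doppler spectrum with a moving average of
    length `span`, then detect peaks subject to prominence and width."""
    kernel = np.ones(span) / span
    smooth = np.convolve(doppler_spectrum_db, kernel, mode="same")
    peaks, props = find_peaks(smooth, prominence=prominence_db, width=min_width)
    return peaks, props
```

PEAKO's training loop then varies (span, prominence_db, min_width) and keeps the combination that best reproduces the human-marked peaks.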
24. Development and validation of classifiers and variable subsets for predicting nursing home admission.
- Author
-
Nuutinen, Mikko, Leskelä, Riikka-Leena, Suojalehto, Ella, Tirronen, Anniina, and Komssi, Vesa
- Subjects
NURSING home care, NURSING home residents, NURSING care facilities, SICK people, MEDICAL care, INSTITUTIONAL care, HOME care service statistics, ALGORITHMS, COMPARATIVE studies, EPIDEMIOLOGY, FORECASTING, HOSPITAL care, RESEARCH methodology, MEDICAL cooperation, RESEARCH, RISK assessment, EVALUATION research, SENIOR housing, STATISTICAL models - Abstract
Background: In previous years a substantial number of studies have identified statistically important predictors of nursing home admission (NHA). However, as far as we know, these analyses have been done at the population level. No prior research has analysed the prediction accuracy of an NHA model for individuals. Methods: This study is an analysis of 3056 longer-term home care customers in the city of Tampere, Finland. Data were collected from the records of social and health service usage and the RAI-HC (Resident Assessment Instrument - Home Care) assessment system between January 2011 and September 2015. The aim was to find the most efficient variable subsets for predicting NHA for individuals and to validate their accuracy. The variable subsets for predicting NHA were searched by the sequential forward selection (SFS) method, a variable ranking metric and the classifiers of logistic regression (LR), support vector machine (SVM) and Gaussian naive Bayes (GNB). The validity of the results was ensured using randomly balanced data sets and cross-validation. The primary performance metrics for the classifiers were the prediction accuracy and AUC (average area under the curve). Results: The LR and GNB classifiers achieved 78% accuracy for predicting NHA. The most important variables were RAI MAPLe (Method for Assigning Priority Levels), functional impairment (RAI IADL, Activities of Daily Living), cognitive impairment (RAI CPS, Cognitive Performance Scale), memory disorders (diagnoses G30-G32 and F00-F03), and the use of community-based health services and prior hospital use (emergency visits and periods of care). Conclusion: The accuracy of the classifier for individuals was high enough to convince the officials of the city of Tampere to integrate the predictive model based on the findings of this study as a part of the home care information system. Further work needs to be done to evaluate variables that are modifiable and responsive to interventions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
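Sequential forward selection wrapped around one of the study's classifier families is readily sketched with scikit-learn. The feature matrix, outcome vector, and the AUC scoring choice below are assumptions (the study also reports accuracy and uses class-balanced resamples, which are not shown).

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def select_and_score(X, y, n_features=5):
    """Forward selection of predictors for nursing home admission with a
    logistic-regression base model, scored by cross-validated AUC.
    X: array of RAI-HC / service-usage features (hypothetical), y: NHA flag."""
    base = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    sfs = SequentialFeatureSelector(base, n_features_to_select=n_features,
                                    direction="forward", scoring="roc_auc", cv=5)
    sfs.fit(X, y)
    X_sel = sfs.transform(X)
    return sfs.get_support(), cross_val_score(base, X_sel, y, scoring="roc_auc", cv=5)
```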
25. Estimating a population cumulative incidence under calendar time trends.
- Author
-
Hansen, Stefan N., Overgaard, Morten, Andersen, Per K., and Parner, Erik T.
- Subjects
PATHOLOGICAL psychology, KAPLAN-Meier estimator, DISEASE risk factors, PROPORTIONAL hazards models, MATHEMATICAL models, PSYCHIATRIC diagnosis, DIAGNOSIS of obsessive-compulsive disorder, PSYCHIATRIC epidemiology, ALGORITHMS, ATTENTION-deficit hyperactivity disorder, COMPUTER simulation, OBSESSIVE-compulsive disorder, RISK assessment, TIME, THEORY, TOURETTE syndrome, DISEASE incidence, DISEASE prevalence, DIAGNOSIS - Abstract
Background: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk.Methods: We argue that, whenever the disease or disorder of interest is influenced by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis.Results: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end of follow-up estimates across all strata. Finally, we argue that a weighted average of these end of follow-up estimates may be a useful summary measure of the disease risk within the study period.Conclusions: Time trends in a disease risk will render total sample estimators less useful in observational studies with staggered entry and administrative censoring. An analysis based on proportional hazards or a stratified analysis may be better alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
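The summary measure proposed above, a weighted average of stratum-specific end-of-follow-up estimates, is simple arithmetic once the per-stratum cumulative incidences are in hand. A worked sketch with invented numbers (three entry-year strata, sizes as weights):

```python
import numpy as np

def pooled_end_of_followup_risk(stratum_estimates, stratum_sizes):
    """Weight each entry-time stratum's end-of-follow-up cumulative
    incidence by its share of the study population."""
    est = np.asarray(stratum_estimates, float)
    w = np.asarray(stratum_sizes, float)
    return float(np.sum(est * w) / np.sum(w))

# (0.021*4000 + 0.018*5200 + 0.012*6100) / 15300 ~= 0.0164
print(pooled_end_of_followup_risk([0.021, 0.018, 0.012], [4000, 5200, 6100]))
```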
26. Improving the Well-Being and Safety of Children with Sensors and Mobile Technology.
- Author
-
Kinnunen, Matti, Ervasti, Mari, Jutila, Mirjami, Pantsar, Susanna, Sesay, Adama M., Pääkkönen, Satu, Mäki, Marianne, Mian, Salman Qayyum, Oinas-Kukkonen, Harri, Oduor, Michael, Kuonanoja, Liisa, Riekki, Jukka, Juho, Anita, Ahokangas, Petri, Perälä-Heape, Maritta, Kotovaara, Hanna, and Alasaarela, Esko
- Subjects
ALGORITHMS, APPLICATION software, CHILDREN'S accident prevention, GEOGRAPHIC information systems, RADIO frequency identification systems, USER interfaces, WORLD Wide Web, WEARABLE technology, MOBILE apps - Abstract
The well-being and safety of children and young people are important aspects in all contexts of everyday life. In particular, a feeling of insecurity might be a problem when being alone. Bullying is also common among school-age children and teenagers. Hence, there is a great need for personalized support systems to resolve these problems. This article describes a new area of research in sensor and social web development to help indicate children's insecurity in their daily environment. Deeper integration of sensors and the social web would allow us to foresee drastic changes in communities, and new social–ethical scenarios will emerge. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
27. Map of science with topic modeling: Comparison of unsupervised learning and human-assigned subject classification.
- Author
-
Suominen, Arho and Toivanen, Hannes
- Subjects
ALGORITHMS, CLASSIFICATION, METADATA, RESEARCH funding, SCIENCE, STATISTICS, SUBJECT headings, TIME series analysis, DATA analysis, DATA analysis software - Abstract
The delineation of coordinates is fundamental for the cartography of science, and accurate and credible classification of scientific knowledge presents a persistent challenge in this regard. We present a map of Finnish science based on unsupervised-learning classification, and discuss the advantages and disadvantages of this approach vis-à-vis those generated by human reasoning. We conclude that from theoretical and practical perspectives there exist several challenges for human reasoning-based classification frameworks of scientific knowledge, as they typically try to fit new-to-the-world knowledge into historical models of scientific knowledge, and cannot easily be deployed for new large-scale data sets. Automated classification schemes, in contrast, generate classification models only from the available text corpus, thereby identifying credibly novel bodies of knowledge. They also lend themselves to versatile large-scale data analysis, and enable a range of Big Data possibilities. However, we also argue that it is neither possible nor fruitful to declare one or another method a superior approach in terms of realism to classify scientific knowledge, and we believe that the merits of each approach are dependent on the practical objectives of analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
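Unsupervised classification of a text corpus of the kind described above is commonly done with a topic model. The record does not name a specific library, so the following LDA sketch with scikit-learn is only one plausible realization; vectorizer cut-offs and topic count are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_map(abstracts, n_topics=50):
    """Learn topics directly from the text corpus (no a priori subject
    scheme) and return per-document topic weights plus topic-word loadings."""
    vec = CountVectorizer(max_df=0.5, min_df=5, stop_words="english")
    X = vec.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(X)       # documents x topics
    return doc_topics, vec.get_feature_names_out(), lda.components_
```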
28. Polynomial estimation of the smoothing splines for the new Finnish reference values for spirometry.
- Author
-
Kainu, Annette and Timonen, Kirsi
- Subjects
SPIROMETRY, PULMONOLOGY, PULMONARY function tests, REFERENCE values, MATHEMATICAL models, LUNG physiology, ALGORITHMS, CLINICAL trials, COMPARATIVE studies, RESEARCH methodology, MEDICAL cooperation, REGRESSION analysis, RESEARCH, EVALUATION research, STATISTICAL models - Abstract
Background: Discontinuity of spirometry reference values from childhood into adulthood has been a problem with traditional reference values; modern modelling approaches using smoothing spline functions to better depict the transition during growth and ageing have therefore been introduced recently. Following the publication of the new international Global Lung Initiative (GLI2012) reference values, new national Finnish reference values have also been calculated using similar GAMLSS modelling, with spline estimates for the mean (Mspline) and standard deviation (Sspline) provided in lookup tables. The aim of this study was to produce polynomial estimates for these spline functions to use in lieu of lookup tables and to assess their validity in the reference population of healthy non-smokers. Methods: Linear regression modelling was used to approximate the estimated values for Mspline and Sspline using similar polynomial functions as in the international GLI2012 reference values. Estimated values were compared to the original calculations in absolute values, the derived predicted mean and individually calculated z-scores using both values. Results: Polynomial functions were estimated for all 10 spirometry variables. The agreement between the original lookup-table values and the polynomial estimates was very good, with no significant differences found. The variation increased slightly at larger predicted volumes, but the maximum difference ranged from −0.018 to +0.022 litres of FEV1, representing ±0.4% of the predicted mean. Conclusions: Polynomial approximations were very close to the original lookup tables and are recommended for use in clinical practice to facilitate the use of the new reference values. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
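Replacing a lookup table with a polynomial, as the record above describes, is a one-liner with numpy, and the z-score step follows the LMS convention used by GLI2012. Both functions below are a hedged sketch: the polynomial degree is illustrative and the z-score formula assumes the standard LMS parameterization rather than the paper's published coefficients.

```python
import numpy as np

def fit_spline_polynomial(age, mspline_values, degree=5):
    """Approximate a tabulated spline (e.g. Mspline vs. age for FEV1) with
    a polynomial, so predictions need no table lookups."""
    coeffs = np.polyfit(age, mspline_values, degree)
    return np.poly1d(coeffs)

def lms_zscore(measured, M, S, L):
    """LMS z-score, as in GLI2012: z = ((y/M)**L - 1) / (L*S)."""
    return ((measured / M) ** L - 1.0) / (L * S)

print(lms_zscore(3.2, 3.5, 0.12, 1.0))  # ~ -0.71 for these invented values
```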
29. External validation of the Norwegian survival prediction model in trauma after major trauma in Southern Finland.
- Author
-
Raj, R., Brinck, T., Skrifvars, M. B., and Handolin, L.
- Subjects
WOUNDS & injuries, PREDICTION models, HOSPITALS, DEATH rate, CONFIDENCE intervals, PATIENTS, ALGORITHMS, COMPARATIVE studies, LONGITUDINAL method, RESEARCH methodology, MEDICAL cooperation, RESEARCH, RESEARCH evaluation, SURVIVAL analysis (Biometry), EVALUATION research, TREATMENT effectiveness, PREDICTIVE tests, ACQUISITION of data, RETROSPECTIVE studies, STATISTICAL models, HOSPITAL mortality, TRAUMA severity indices - Abstract
Background: The Norwegian Survival Prediction Model in Trauma (NORMIT) is a newly developed outcome prediction model for trauma patients. We aimed to compare the novel NORMIT to the more commonly used Trauma and Injury Severity Score (TRISS) in Finnish trauma patients. Methods: We performed a retrospective open-cohort study using the trauma registry of the Helsinki University Hospital trauma unit, including severely injured patients (New Injury Severity Score > 15) admitted from 2007 to 2011. We used 30-day in-hospital mortality as the primary outcome and functional outcome at discharge as a secondary outcome of interest. Model performance was evaluated by comparing discrimination (area under the receiver operating characteristic curve [AUC]), using a re-sample bootstrap technique, and by assessing calibration (GiViTI calibration belt). Results: We identified 1111 patients fulfilling the study inclusion criteria. Overall mortality was 13% (n = 147). NORMIT showed slightly better discrimination than TRISS for mortality prediction (AUC = 0.83, 95% confidence interval [CI] = 0.80-0.86 vs. AUC = 0.79, 95% CI = 0.75-0.83, P = 0.004) and functional outcome prediction (AUC = 0.78, 95% CI = 0.76-0.82 vs. AUC = 0.75, 95% CI = 0.72-0.78, P < 0.001). Calibration testing revealed poor calibration for both NORMIT and TRISS (P < 0.001), with both giving overly pessimistic predictions (predicted survival significantly lower than actual survival). Conclusion: NORMIT and TRISS showed good discrimination but poor calibration in this mixed cohort of severely injured trauma patients from Southern Finland. We found NORMIT to be a feasible alternative to TRISS for trauma patient outcome prediction, but trauma prediction models with improved calibration are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
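The discrimination comparison reported in entry 29 (AUC with a re-sample bootstrap) can be sketched as follows. The synthetic outcome, the two score columns, and the bootstrap size are assumptions for illustration, not the registry data or the authors' exact procedure.

```python
# Sketch: compare the discrimination (AUC) of two mortality-prediction
# scores with a paired bootstrap, loosely mirroring the method in entry 29.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                     # outcome: 30-day mortality
score_a = y * 0.6 + rng.normal(0, 0.5, n)     # e.g., a NORMIT-like score
score_b = y * 0.4 + rng.normal(0, 0.5, n)     # e.g., a TRISS-like score

diffs = []
for _ in range(2000):                         # paired bootstrap resamples
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:            # need both classes for AUC
        continue
    diffs.append(roc_auc_score(y[idx], score_a[idx])
                 - roc_auc_score(y[idx], score_b[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```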
30. Reactive aggression among children with and without autism spectrum disorder.
- Author
-
Kaartinen, Miia, Puura, Kaija, Helminen, Mika, Salmelin, Raili, Pelkonen, Erja, and Juujärvi, Petri
- Subjects
AGE distribution ,AGGRESSION (Psychology) ,ALGORITHMS ,AUTISM ,COMPARATIVE studies ,COMPUTER adaptive testing ,INTELLIGENCE tests ,PERSONALITY tests ,RESEARCH funding ,SELF-management (Psychology) ,SEX distribution ,STATISTICS ,T-test (Statistics) ,DATA analysis ,PROMPTS (Psychology) ,CASE-control method ,SYMPTOMS ,CHILDREN - Abstract
Twenty-seven boys and eight girls with ASD and thirty-five controls matched for gender, age, and total intelligence score were studied to ascertain whether boys and girls with ASD display stronger reactive aggression than boys and girls without ASD. Participants performed a computerized version of the Pulkkinen aggression machine, which examines the intensity of reactive aggression against attackers of varying gender and age. Relative to the boys in the control group, the boys with ASD reacted with more serious forms of aggression when subjected to mild aggressive attacks and did not treat a child attacker's opposite sex as an inhibitory factor. The girls with ASD, on the other hand, reacted less aggressively than the girls without ASD. According to the results, boys with ASD may not follow typical development in the cognitive regulation of reactive aggression. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
31. Measurement of Physical Fitness and 24/7 Physical Activity, Standing, Sedentary Behavior, and Time in Bed in Working-Age Finns: Study Protocol for FINFIT 2021.
- Author
-
Husu, Pauliina, Vähä-Ypyä, Henri, Tokola, Kari, Sievänen, Harri, Mänttäri, Ari, Kokko, Sami, Kaikkonen, Kaisu M., Savonen, Kai, and Vasankari, Tommi
- Subjects
PHYSICAL fitness ,PHYSICAL activity ,SEDENTARY behavior ,ALGORITHMS - Abstract
Background: Population studies gathering measured data on fitness and physical behavior, covering physical activity, standing, sedentary behavior, and time in bed, are scarce. This article describes the protocol of the FINFIT 2021 study that measures fitness and physical behavior in a population-based sample of adults and analyzes their associations and dose–response relationships with several health indicators. Methods: The study comprises a stratified random sample of 20–69-year-old men and women (n = 16,500) from seven city-centered regions in Finland. Physical behavior is measured 24/7 by tri-axial accelerometry and analyzed with validated MAD-APE algorithms. Health and fitness examinations include fasting blood samples, measurements of blood pressure, anthropometry, and health-related fitness. Domains of health, functioning, well-being, and socio-demographics are assessed by a questionnaire. The data are being collected between September 2021 and February 2022. Discussion: The study provides population data on physical fitness and physical behavior 24/7. Physical behavior patterns by intensity and duration on an hour-by-hour basis will be provided. In the future, the baseline data will be assessed against prospective register-based data on incident diseases, healthcare utilization, sickness absence, premature retirement, and death. A similar study will be conducted every fourth year with a new random population sample. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
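Entry 31 analyzes raw tri-axial accelerometry with the validated MAD-APE algorithms; the MAD (mean amplitude deviation) half can be sketched in a few lines. The sampling rate, epoch length, and synthetic signal below are assumptions, and the APE (posture estimation) half is omitted.

```python
# Sketch: mean amplitude deviation (MAD) of the acceleration resultant,
# computed per epoch, the intensity metric underlying the MAD-APE analysis.
# Sampling rate, epoch length, and the synthetic signal are assumptions.
import numpy as np

fs = 100                                  # samples per second (assumed)
epoch_s = 6                               # epoch length in seconds (assumed)
rng = np.random.default_rng(1)
xyz = rng.normal(0, 0.1, (fs * 60, 3))    # one minute of synthetic data (g)
xyz[:, 2] += 1.0                          # gravity on the z-axis

r = np.linalg.norm(xyz, axis=1)           # resultant acceleration per sample
epochs = r.reshape(-1, fs * epoch_s)      # split into 6-second epochs

# MAD: mean absolute deviation of the resultant around its epoch mean.
mad = np.mean(np.abs(epochs - epochs.mean(axis=1, keepdims=True)), axis=1)
print("MAD per epoch (g):", np.round(mad, 4))
```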
32. Evaluation of consensus methods in predictive species distribution modelling.
- Author
-
Marmion, Mathieu, Parviainen, Miia, Luoto, Miska, Heikkinen, Risto K., and Thuiller, Wilfried
- Subjects
SPECIES distribution ,FORECASTING ,PLANT species ,MEDIAN (Mathematics) ,PROBABILITY theory ,ALGORITHMS ,CONSERVATION biology - Abstract
Aim: Spatial modelling techniques are increasingly used in species distribution modelling. However, the implemented techniques differ in their modelling performance, and consensus methods are needed to reduce the uncertainty of predictions. In this study, we tested the predictive accuracies of five consensus methods, namely Weighted Average (WA), Mean(All), Median(All), Median(PCA), and Best, for 28 threatened plant species. Location: North-eastern Finland, Europe. Methods: The spatial distributions of the plant species were forecasted using eight state-of-the-art single-modelling techniques, providing an ensemble of predictions. The probability values of occurrence were then combined using the five consensus algorithms. The predictive accuracies of the single-model and consensus methods were assessed by computing the area under the curve (AUC) of the receiver-operating characteristic plot. Results: The mean AUC values varied between 0.697 (classification tree analysis) and 0.813 (random forest) for the single models, and from 0.757 to 0.850 for the consensus methods. The WA and Mean(All) consensus methods provided significantly more robust predictions than all the single models and the other consensus methods. Main conclusions: Consensus methods based on average-function algorithms may significantly increase the accuracy of species distribution forecasts, and thus show considerable promise for conservation biological and biogeographical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
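The consensus algorithms compared in entry 32 reduce to simple operations over an ensemble of predicted occurrence probabilities. The sketch below shows three of them under assumed data, with WA weighting each model by its AUC; the ensemble matrix and AUC values are synthetic.

```python
# Sketch of three of the consensus methods in entry 32: Mean(All),
# Median(All), and Weighted Average (WA, weighted by single-model AUC).
# The ensemble matrix and AUCs are synthetic assumptions.
import numpy as np

# preds[i, j]: probability of occurrence from model i at site j.
preds = np.array([[0.2, 0.8, 0.6],
                  [0.3, 0.7, 0.5],
                  [0.1, 0.9, 0.4]])
auc = np.array([0.70, 0.81, 0.76])     # single-model AUCs (assumed)

mean_all = preds.mean(axis=0)                          # Mean(All)
median_all = np.median(preds, axis=0)                  # Median(All)
wa = (auc[:, None] * preds).sum(axis=0) / auc.sum()    # Weighted Average

print("Mean(All):  ", mean_all)
print("Median(All):", median_all)
print("WA:         ", wa)
```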
33. Landsat ETM+ Images in the Estimation of Seasonal Lake Water Quality in Boreal River Basins.
- Author
-
Kallio, Kari, Attila, Jenni, Härmä, Pekka, Koponen, Sampsa, Pulliainen, Jouni, Hyytiäinen, Ulla-Maija, and Pyhälahti, Timo
- Subjects
WATER quality management ,ATMOSPHERIC effects on remote sensing ,LANDSAT satellites ,ORGANIC compounds ,REFLECTANCE ,ALGORITHMS ,TURBIDITY ,LAKES - Abstract
We investigated the use of Landsat ETM+ images in the monitoring of turbidity, colored dissolved organic matter (CDOM), and Secchi disk transparency (Z_SD) in lakes of two river basins located in southern Finland. The ETM+ images were acquired in May, June, and September 2002 and were corrected for atmospheric disturbance using the simplified method of atmospheric correction (SMAC) model. The in situ measurements consisted of water sampling in the largest lake of the region, routine monitoring results for the whole study area, and Z_SD observations made by volunteers. The ranges of the water quality variables in the dataset were as follows: turbidity, 0.6–25 FNU; absorption coefficient of CDOM at 400 nm, 1.0–12.2 m⁻¹; Z_SD, 0.5–5.5 m; and chlorophyll a concentration, 2.4–80 μg L⁻¹. The estimation accuracies of the image-specific empirical algorithms, expressed as relative errors, were 23.0% for turbidity, 17.4% for CDOM, and 21.1% for Z_SD. If concurrent in situ measurements had not been used for algorithm training, the average error would have been about 37%. The atmospheric correction improved the estimation accuracy only slightly compared with the use of top-of-atmosphere reflectances. The accuracy of the water quality estimates without concurrent in situ measurements could have been improved if in-image atmospheric parameters had been available. The underwater reflectance simulations of the ETM+ channel wavelengths using water quality typical for Finnish lakes (data from 1113 lakes) indicated that region-specific algorithms may be needed in other parts of the country, particularly in the case of Z_SD. Despite the limitations in spectral and radiometric resolution, ETM+ imagery can be an effective aid, particularly in the monitoring and management of small lakes (<1 km²), which are often not included in routine monitoring programs. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
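The "image-specific empirical algorithms" in entry 33 are, in essence, regressions from atmospherically corrected band reflectances to in situ water-quality measurements. A hedged least-squares sketch follows; the band values, turbidity data, and station count are synthetic.

```python
# Sketch: an image-specific empirical algorithm in the spirit of entry 33,
# regressing in situ turbidity on ETM+ band reflectances, then reporting
# the relative error. Band values and turbidity data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 40                                        # lake stations with samples
refl = rng.uniform(0.02, 0.15, (n, 2))        # e.g., ETM+ bands 2 and 3
turbidity = 5 + 120 * refl[:, 1] + rng.normal(0, 0.8, n)   # FNU

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), refl])
coef, *_ = np.linalg.lstsq(X, turbidity, rcond=None)
est = X @ coef

rel_err = np.mean(np.abs(est - turbidity) / turbidity) * 100
print(f"mean relative error: {rel_err:.1f}%")
```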
34. Acute middle ear infection in small children: a Bayesian analysis using multiple time scales.
- Author
-
Andreev, A. and Arjas, E.
- Subjects
ALGORITHMS ,COMPARATIVE studies ,LONGITUDINAL method ,MATHEMATICS ,RESEARCH methodology ,MEDICAL cooperation ,OTITIS media ,PROBABILITY theory ,RESEARCH ,EVALUATION research ,DISEASE incidence ,ACUTE diseases ,STATISTICAL models - Abstract
The study is based on a sample of 965 children living in the Oulu region of Finland who were monitored for acute middle ear infections from birth to the age of two years. We introduce a nonparametrically defined intensity model for ear infections which involves both fixed and time-dependent covariates, such as calendar time, current age, duration of breast-feeding to date, and current type of day care. Unmeasured heterogeneity, which manifests itself in frequent infections in some children and rare infections in others and cannot be explained in terms of the known covariates, is modelled using individual frailty parameters. A Bayesian approach is proposed to solve the inferential problem. The numerical work is carried out by Monte Carlo integration (the Metropolis-Hastings algorithm). [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
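Entry 34 solves its Bayesian inference problem by Metropolis-Hastings sampling; the core of that algorithm, propose, evaluate, then accept or reject, can be sketched for a toy posterior over a single frailty-like rate parameter. The Gamma prior, Poisson likelihood, and data below are illustrative assumptions, not the authors' model.

```python
# Sketch: random-walk Metropolis-Hastings for a toy posterior, the
# sampling machinery entry 34 uses for its intensity/frailty model.
# The Gamma prior, Poisson likelihood, and data are assumptions.
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([0, 2, 1, 3, 0, 1])        # e.g., infection episodes

def log_post(lam):
    if lam <= 0:
        return -np.inf
    # Gamma(2, 1) prior plus Poisson likelihood, up to a constant.
    return (2 - 1) * np.log(lam) - lam + np.sum(counts * np.log(lam) - lam)

lam, chain = 1.0, []
for _ in range(5000):
    prop = lam + rng.normal(0, 0.3)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                            # accept; otherwise keep lam
    chain.append(lam)

print("posterior mean rate:", np.mean(chain[1000:]))   # drop burn-in
```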