939 results
Search Results
2. Short Paper: Terrorist Fraud in Distance Bounding: Getting Around the Models
- Author
-
David Gerault
- Subjects
Computer science, Computer security, Short paper, Terrorism, RFID authentication, Key (cryptography), Adversary - Abstract
Terrorist fraud is an attack against distance bounding protocols, whereby a malicious prover allows an adversary to authenticate on their behalf without revealing their secret key. In this paper, we propose new attack strategies that lead to successful terrorist frauds on proven-secure protocols.
- Published
- 2021
- Full Text
- View/download PDF
3. Hostile Blockchain Takeovers (Short Paper)
- Author
-
Joseph Bonneau
- Subjects
Blockchain, Computer science, Short paper, Adversary, Computer security, Protocol - Abstract
Most research modelling Bitcoin-style decentralised consensus protocols has assumed profit-motivated participants. Complementary to this analysis, we revisit the notion of attackers with an extrinsic motivation to disrupt the consensus process (Goldfinger attacks). We outline several routes for obtaining a majority of decision-making power in the consensus protocol (a hostile takeover). Our analysis suggests several fundamental differences between proof-of-work and proof-of-stake systems in the face of such an adversary.
- Published
- 2019
- Full Text
- View/download PDF
4. A Short Paper on the Incentives to Share Private Information for Population Estimates
- Author
-
Jens Grossklags, Patrick Loiseau, and Michela Chessa
- Subjects
Population estimates, Incentives, Analytics, Computer science, Internet privacy, Short paper, Data analysis, Computer security, Private information - Abstract
Consumers are often willing to contribute their personal data for analytics projects that may create new insights into societal problems. However, consumers also have justified privacy concerns about the release of their data.
- Published
- 2015
- Full Text
- View/download PDF
5. A Short Paper on Blind Signatures from Knowledge Assumptions
- Author
-
Lucjan Hanzlik and Kamil Kluczniak
- Subjects
Theoretical computer science, Computer science, Okamoto–Uchiyama cryptosystem, Signature, Random oracle, Blind signature, Impossibility, Standard model (cryptography) - Abstract
This paper concerns blind signature schemes. We focus on two-move constructions, which imply concurrent security. Efficient blind signature schemes are known in the random oracle model and in the common reference string model. However, constructing two-move blind signatures in the standard model is a challenging task, as shown by the impossibility results of Fischlin et al. The recent construction by Garg et al. (Eurocrypt’14) bypasses this result by using complexity leveraging, but it is impractical due to the signature size (≈100 kB). Fuchsbauer et al. (Crypto’15) presented a more practical construction, but with a security argument based on interactive assumptions. We present a blind signature scheme that is two-move, setup-free and comparable in efficiency with the results of Fuchsbauer et al. Its security is based on a knowledge assumption.
- Published
- 2017
- Full Text
- View/download PDF
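To make the primitive concrete, here is a minimal sketch of Chaum's classic RSA-based blind signature, illustrating the blind/sign/unblind flow the abstract refers to. This shows the general primitive only, not the knowledge-assumption scheme of Hanzlik and Kluczniak, and the toy parameters are far too small for real use.

```python
# Chaum-style RSA blind signature (illustration only; toy parameters).
from math import gcd

p, q = 61, 53                 # toy primes (never use sizes like this)
n = p * q                     # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

m = 42                        # message (in practice, a hash of the message)
r = 19                        # user's blinding factor, gcd(r, n) == 1
assert gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # user blinds m before sending
blind_sig = pow(blinded, d, n)          # signer signs without learning m
sig = (blind_sig * pow(r, -1, n)) % n   # user removes the blinding factor

assert pow(sig, e, n) == m % n          # standard RSA verification succeeds
```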
6. Private eCash in Practice (Short Paper)
- Author
-
Sébastien Gambs, Solenn Brunet, Nicolas Desmoulins, Jacques Traore, Saïd Gharout, and Amira Barki
- Subjects
Computer science, Computer security, Payment, Security token, Subscriber identity module (SIM), eCash, Blind signature, Use case, Anonymity - Abstract
Most electronic payment systems for applications such as eTicketing and eToll involve a single entity acting as both merchant and bank. In this paper, we propose an efficient privacy-preserving post-payment eCash system suitable for this particular use case, which we hereafter refer to as private eCash. To this end, we introduce a new partially blind signature scheme based on a recent algebraic MAC scheme due to Chase et al. Unlike previous constructions, it allows multiple presentations of the same signature in an unlinkable way. Using it, our system is the first versatile private eCash system in which users need only hold a single reusable token (i.e. a reusable coin spendable at a single merchant). It also enables identity and token revocation as well as flexible payments. Indeed, our payment tokens are updated in a partially blinded way to collect refunds without invading users' privacy. By implementing the system on a GlobalPlatform-compliant SIM card, we show its efficiency and suitability for real-world use cases, even for delay-sensitive applications and constrained devices: a transaction can be performed in only 205 ms.
- Published
- 2017
- Full Text
- View/download PDF
7. What is the Best Way to Allocate Teacher’s Efforts: How Accurately Should We Write on the Board? When Marking Comments on Student Papers?
- Author
-
Olga Kosheleva and Karen Villaverde
- Subjects
Multimedia, Computer science, Handwriting, Legibility - Abstract
Writing on the board is an important part of a lecture, and lecturers' handwriting is not always perfect. A lecturer can usually write more slowly and more legibly; this increases understandability but slows down the lecture. In this chapter, we analyze the optimal trade-off between speed and legibility.
- Published
- 2017
- Full Text
- View/download PDF
8. DroidAuditor: Forensic Analysis of Application-Layer Privilege Escalation Attacks on Android (Short Paper)
- Author
-
Ahmad-Reza Sadeghi, Stephan Heuser, Marco Negro, and Praveen Kumar Pendyala
- Subjects
Logic bomb, Computer science, Access control, Static analysis, Computer security, Application layer, Dynamic program analysis, Android (operating system), Mobile device, Privilege escalation - Abstract
Smart mobile devices process and store a vast amount of security- and privacy-sensitive data. To protect this data from malicious applications, mobile operating systems such as Android adopt fine-grained access control architectures. However, related work has shown that these access control architectures are susceptible to application-layer privilege escalation attacks. Both automated static and dynamic program analysis promise to proactively detect such attacks. Yet while state-of-the-art static analysis frameworks cannot adequately address native and highly obfuscated code, dynamic analysis is vulnerable to malicious applications that use logic bombs to avoid early detection.
- Published
- 2017
- Full Text
- View/download PDF
9. KBID: Kerberos Bracelet Identification (Short Paper)
- Author
-
Michael Rushanan, Joseph Carrigan, and Paul D. Martin
- Subjects
Password, Authentication, Computer science, Wearable computer, Computer security, Password strength, Identification, Ticket, Kerberos - Abstract
The most common method for a user to gain access to a system, service, or resource is to provide a secret, often a password, that verifies her identity and thus authenticates her. Password-based authentication is considered strong only when the password meets certain length and complexity requirements, or when it is combined with other methods in multi-factor authentication. Unfortunately, many authentication systems do not enforce strong passwords due to a number of limitations; for example, the time taken to enter complex passwords. We present an authentication system that addresses these limitations by prompting a user for credentials once and then storing an authentication ticket in a wearable device that we call Kerberos Bracelet Identification (KBID).
- Published
- 2017
- Full Text
- View/download PDF
10. ScienceWISE: A Web-Based Interactive Semantic Platform for Paper Annotation and Ontology Editing
- Author
-
Alexey Boyarsky, Roman Prokofyev, Anton Astafiev, Oleg Ruchayskiy, and Christophe Guéret
- Subjects
Information retrieval, Computer science, Ontology-based data integration, Linked Data, Ontology (information science), World Wide Web, Annotation, Simple Knowledge Organization System (SKOS), Web application, Upper ontology - Abstract
The ScienceWISE system is a collaborative ontology editor and paper annotation tool designed to support researchers in their discovery process. In this paper, we describe the system currently deployed at sciencewise.info and the exposure of its data as Linked Data. During the "RDFization" process, we faced issues encoding the knowledge base in SKOS and finding resources to link to in the LOD cloud. We discuss these issues and the remaining open challenges in implementing some target features.
- Published
- 2015
- Full Text
- View/download PDF
11. Cryptographic Assumptions: A Position Paper
- Author
-
Shafi Goldwasser and Yael Tauman Kalai
- Subjects
Cryptographic primitive, Theoretical computer science, Cryptography, Cryptographic protocol, Computer security, Mathematical proof, Computational hardness assumption, Random oracle, Security of cryptographic hash functions, Mathematics - Abstract
The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them. We believe that the lack of standards for what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures by which we can classify and compare assumptions, as to which are safe and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows in the footsteps of Naor (Crypto 2003). Our governing principle is reliance on hardness assumptions that are independent of the cryptographic constructions.
- Published
- 2015
- Full Text
- View/download PDF
12. Social Media on a Piece of Paper: A Study of Hybrid and Sustainable Media Using Active Infrared Vision
- Author
-
Thitirat Siriborvornratanakul
- Subjects
Engineering, Active infrared, Multimedia, Social media - Abstract
In this world of digital and social media booms, many people spend their valuable time buried in smartphones, unintentionally widening the gap in physical relationships with the people nearby. A hybrid digital-physical medium is a possible solution to this problem: it externalizes social media data and integrates them into a physical medium. In this way, using social media will simultaneously connect us with both the virtual and physical worlds.
- Published
- 2015
- Full Text
- View/download PDF
13. Hyperdata: Update APIs for RDF Data Sources (Vision Paper)
- Author
-
Jacek Kopecký
- Subjects
World Wide Web, Metadata, Open data, Information retrieval, Named graph, Computer science, Hyperdata, Linked Data, Hyperlink, RDF - Abstract
The Linked Data effort has been focusing on how to publish open data sets on the Web, and it has had great results. However, mechanisms for updating Linked Data sources have been neglected in research. We propose structuring Linked Data resources into named graphs, connected through hyperlinks and self-described with light metadata, a natural match for using standard HTTP methods to implement application-specific (high-level) public update APIs.
- Published
- 2015
- Full Text
- View/download PDF
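The update mechanism the Hyperdata abstract describes, named graphs manipulated with standard HTTP methods, is close in spirit to the SPARQL 1.1 Graph Store HTTP Protocol. A rough Python sketch follows; the endpoint URL and graph IRI are hypothetical placeholders, not taken from the paper.

```python
# Sketch: read and replace a named graph over HTTP (hypothetical endpoint).
import requests

GRAPH_STORE = "http://example.org/rdf-graph-store"
GRAPH_IRI = "http://example.org/graphs/alice"

turtle = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/people/alice> foaf:name "Alice" .
"""

# GET retrieves the current state of the named graph ...
current = requests.get(GRAPH_STORE, params={"graph": GRAPH_IRI},
                       headers={"Accept": "text/turtle"})

# ... and PUT replaces it wholesale, the simplest high-level update API.
resp = requests.put(GRAPH_STORE, params={"graph": GRAPH_IRI},
                    data=turtle.encode("utf-8"),
                    headers={"Content-Type": "text/turtle"})
resp.raise_for_status()
```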
14. Does NICE influence the adoption and uptake of generics in the UK?
- Author
-
Victoria Serra-Sastre, Simona Bianchi, P. O'Neill, and Jorge Mestre-Ferrandiz
- Subjects
Generic entry, Generic competition, Drug costs, Drugs (generic), Competition (economics), NICE, Market share, Health economics, Health policy, Health technology, Public economics, Public finance, Commerce, United Kingdom, Original Paper - Abstract
The aim of this paper is to examine generic competition in the UK, with a special focus on the role of Health Technology Assessment (HTA) in generic market entry and diffusion. In the UK, where no direct price regulation on pharmaceuticals exists, HTA plays a leading role in recommending the use of medicines, providing a non-regulatory channel that may influence the dynamics of the generic market. The paper focuses on the role of Technology Appraisals issued by the National Institute for Health and Care Excellence (NICE). We follow a two-step approach. First, we examine the probability of generic entry. Second, conditional on generic entry, we examine the determinants of generic market share. We use data from the IQVIA British Pharmaceutical Index (BPI) for the primary care market for 60 products that lost patent protection between 2003 and 2012. Our results suggest that market size remains one of the main drivers of generic entry. After controlling for market size, intermolecular substitution and difficulty of manufacturing increase the likelihood of generic entry. After generic entry, our estimates suggest that generic market share is highly state dependent. Our findings also suggest that while NICE recommendations do influence generic uptake, there is only marginal evidence that they affect generic entry.
- Published
- 2020
15. Predicting COVID-19 statistics using machine learning regression model: Li-MuLi-Poly
- Author
-
Seema Bawa and Hari Singh
- Subjects
Mean squared error, Computer science, Machine learning, Statistics, Linear regression, Polynomial regression, Regression analysis, Minimum mean square error, Accuracy, t-test, Pearson product-moment correlation coefficient, COVID-19, Regular Paper - Abstract
In this paper, linear regression (LR), multi-linear regression (MLR) and polynomial regression (PR) techniques are applied to propose a model, Li-MuLi-Poly. The model predicts COVID-19 deaths in the United States of America. The experiments were carried out on the machine learning model, the minimum mean square error model, and the maximum likelihood ratio model. The best-fitting model was selected according to mean square error, adjusted mean square error, root mean square error (RMSE) and maximum likelihood ratio, and the statistical t-test was used to verify the results. Data sets were analyzed and cleaned before being applied to the proposed regression models. The correlation of the selected independent parameters was determined with a heat map and the Karl Pearson correlation matrix. It was found that the LR model best fits the dataset when all the independent parameters are used in modeling; however, its RMSE and mean absolute error (MAE) are high compared to the PR models. PR models of high degree are required to best fit the dataset when few independent parameters are considered in modeling, whereas PR models of low degree best fit the dataset when independent parameters from all dimensions are considered.
- Published
- 2021
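As a toy illustration of the linear-versus-polynomial comparison described in the entry above, the sketch below fits regressions of several degrees to synthetic data and reports RMSE and MAE. The data and degrees are invented; this is not the paper's dataset or exact selection procedure.

```python
# Compare linear (degree 1) and polynomial regression fits on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 100).reshape(-1, 1)
y = 3.0 + 0.5 * X.ravel() ** 2 + rng.normal(0, 2, 100)   # quadratic trend

for degree in (1, 2, 3):
    Xp = PolynomialFeatures(degree).fit_transform(X)
    pred = LinearRegression().fit(Xp, y).predict(Xp)
    rmse = mean_squared_error(y, pred) ** 0.5
    mae = mean_absolute_error(y, pred)
    print(f"degree {degree}: RMSE={rmse:.2f}  MAE={mae:.2f}")
```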
16. Predicting the pandemic: sentiment evaluation and predictive analysis from large-scale tweets on Covid-19 by deep convolutional neural network
- Author
-
Sourav Das and Anup Kumar Kolya
- Subjects
Text corpus, Predictive analysis, Computer science, Twitter, Machine learning, Convolutional neural network, Sentiment analysis, Artificial neural network, Deep learning, Coronavirus, COVID-19, Research Paper - Abstract
Engaging deep neural networks for textual sentiment analysis is an extensively practiced research domain. Textual sentiment classification harnesses the full computational potential of deep learning models. Typically, such work is carried out either on a popular open-source data corpus or on short phrase texts self-extracted from Twitter, Reddit, or web-scraped text data from other resources. Rarely is a large amount of data on a current, ongoing event collected and curated further. An even more complex task is to model the data from a currently ongoing event, not only to scale the sentiment accuracy but also to make a predictive analysis for the same. In this paper, we propose a novel approach to sentiment evaluation using a deep neural network on live-streamed tweets about Coronavirus, together with prediction of future case growth. We develop a large tweet corpus based exclusively on Coronavirus tweets. We split the data into train and test sets and perform polarity classification and trend analysis. The refined outcome of the trend analysis helps train the data to provide an incremental learning curve for our neural network, and we obtain an accuracy of 90.67%. Finally, we provide a statistics-based prediction of future Coronavirus case growth. Not only does our model outperform several previous state-of-the-art experiments in overall sentiment accuracy on similar tasks, but it also maintains consistent performance across all test cases when tested on several popular open-source text corpora.
- Published
- 2021
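The embedding-plus-LSTM style of architecture mentioned in the entry above takes only a few lines of Keras to sketch. The vocabulary size, layer widths, and dummy data below are placeholders, not the authors' configuration.

```python
# Skeleton of an embedding + LSTM polarity classifier (placeholder values).
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM

VOCAB, MAXLEN = 20000, 60              # vocabulary size, padded tweet length

model = Sequential([
    Embedding(VOCAB, 128),             # learned word vectors
    LSTM(64),                          # contextual relations among words
    Dense(1, activation="sigmoid"),    # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy integer-encoded "tweets" stand in for a real tokenized corpus.
X = np.random.randint(1, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```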
17. Optimal resource allocation for multiclass services in peer-to-peer networks via successive approximation
- Author
-
Wei Sun, Huan Liu, and Shiyong Li
- Subjects
Mathematical optimization, Optimization problem, Computer science, Peer-to-peer (P2P) networks, Nonlinear programming, Resource allocation, Elastic and inelastic services, Convex optimization, Successive approximation, Original Paper - Abstract
Peer-to-peer (P2P) networks support a wide variety of network services, including elastic services such as file sharing and downloading and inelastic services such as real-time multiparty conferencing. Each peer who acquires a service receives a certain level of satisfaction if the service is provided with a certain amount of resource; the utility function describes this satisfaction. In this paper we consider optimal resource allocation for elastic and inelastic services and formulate a utility maximization model, which is an intractable non-convex optimization problem. To solve it, we apply the successive approximation method and approximate the non-convex problem by a series of equivalent convex optimization problems. We then develop a gradient-based resource allocation scheme to achieve the optimal solutions of the approximations. After a series of approximations, the proposed scheme converges to an optimal solution of the primal utility maximization model that satisfies the Karush–Kuhn–Tucker conditions.
- Published
- 2021
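In generic network-utility-maximization form, the problem class treated in the entry above can be written as follows (a standard textbook formulation, not necessarily the authors' exact model), where \(x_s\) is the resource allocated to service \(s\), \(c_\ell\) is the capacity of resource \(\ell\), and \(L(s)\) is the set of resources service \(s\) uses:

\[
\begin{aligned}
\max_{x \ge 0} \quad & \sum_{s \in \mathcal{S}} U_s(x_s) \\
\text{s.t.} \quad & \sum_{s \,:\, \ell \in L(s)} x_s \le c_\ell \qquad \forall \ell .
\end{aligned}
\]

For elastic services the utilities \(U_s\) are concave and the problem is convex; the sigmoid-like utilities of inelastic services make it non-convex, which is what the series of convex approximations is designed to handle.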
18. Using data mining techniques to fight and control epidemics: A scoping review
- Author
-
Soheila Saeedi, Reza Safdari, Marsa Gholamzadeh, Sorayya Rezayi, and Mozhgan Tanhapour
- Subjects
Public health, COVID-19, Pandemics, Disease, Systematic review, Checklist, Knowledge extraction, Health care, Data mining, Review Paper - Abstract
The main objective of this survey is to study the published articles to determine the most favored data mining methods and gaps in knowledge. Since the threat of pandemics has raised concerns for public health, researchers have applied data mining techniques to reveal hidden knowledge. The Web of Science, Scopus, and PubMed databases were selected for systematic searches. All retrieved articles were then screened in a stepwise process according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist to select appropriate articles. The results were analyzed and summarized by classification. Of the 335 citations retrieved, 50 articles were determined to be eligible through the scoping review. The review showed that the most favored data mining method was natural language processing (22%) and the most commonly proposed approach was revealing disease characteristics (22%). The most addressed disease was COVID-19. The studies show a predominance of supervised learning techniques (90%). Concerning healthcare scopes, we found infectious disease (36%) to be the most frequent, closely followed by the epidemiology discipline. The most common software used in the studies was SPSS (22%) and R (20%). The results reveal valuable research conducted by employing knowledge discovery methods to understand the unknown dimensions of diseases in pandemics, but more research is needed on treatment and disease control.
- Published
- 2021
19. PASCAL mitral valve repair system versus MitraClip: comparison of transcatheter edge-to-edge strategies in complex primary mitral regurgitation
- Author
-
Muhammed Gerçek, Volker Rudolph, Kai Friedrichs, Fabian Roder, Armin Zittermann, Vera Fortmeier, and Tanja K. Rudolph
- Subjects
Cardiac catheterization, Primary mitral regurgitation, MitraClip, PASCAL, Effective regurgitant orifice area, Transcatheter therapy, Mitral valve repair, Mitral valve insufficiency, Heart valve prosthesis implantation, Patient selection, Equipment design, Treatment outcome, Retrospective studies, Cardiology, Original Paper - Abstract
Background: The PASCAL system is a novel device for edge-to-edge treatment of mitral regurgitation (MR). The aim of this study was to compare the safety and efficacy of the PASCAL and MitraClip systems in a highly selected group of patients with complex primary mitral regurgitation (PMR), defined as an effective regurgitant orifice area (MR-EROA) ≥ 0.40 cm², a large flail gap (≥ 5 mm) or width (≥ 7 mm), or Barlow's disease. Methods: 38 patients with complex PMR undergoing mitral intervention using PASCAL (n = 22) or MitraClip (n = 16) were enrolled. The primary efficacy endpoints were procedural success and degree of residual MR at discharge. The rate of major adverse events (MAE) according to the Mitral Valve Academic Research Consortium (MVARC) criteria was chosen as the primary safety endpoint. Results: The patient collectives did not differ relevantly in pertinent baseline parameters. Patients' median age was 83.0 [77.5–85.3] years (PASCAL) and 82.5 [76.5–86.5] years (MitraClip). MR-EROA at baseline was 0.70 [0.68–0.83] cm² (PASCAL) and 0.70 [0.50–0.90] cm² (MitraClip), respectively. 3D-echocardiographic morphometry of the mitral valve apparatus revealed no relevant differences between the groups. Procedural success was achieved in 95.5% (PASCAL) and 87.5% (MitraClip), respectively. A residual MR grade ≤ 1+ was achieved in 86.4% of patients with PASCAL and in 62.5% with MitraClip. Neither procedure time, number of implanted devices, nor transmitral gradient differed significantly. No periprocedural MAE according to MVARC occurred. Conclusion: In this highly selected patient group with complex PMR, both systems exhibited equal procedural safety. MitraClip and PASCAL reduced qualitative and semi-quantitative parameters of MR to an at least comparable extent.
- Published
- 2021
20. Application of artificial neural networks for automated analysis of cystoscopic images: a review of the current status and future prospects
- Author
-
Alexander Reiterer, Rodrigo Suarez-Ibarrola, Arkadiusz Miernik, Simon Hein, and Misgana Negassi
- Subjects
Urology, Machine learning, Convolutional neural network, Artificial neural network, Deep learning, Medical image analysis, Image processing (computer-assisted), Data acquisition, Bladder cancer, Cystoscopy, Cystoscopic images, Visualization, Forecasting, Topic Paper - Abstract
Background: Optimal detection and surveillance of bladder cancer (BCa) rely primarily on cystoscopic visualization of bladder lesions. AI-assisted cystoscopy may improve image recognition and accelerate data acquisition. Objective: To provide a comprehensive review of machine learning (ML), deep learning (DL) and convolutional neural network (CNN) applications in cystoscopic image recognition. Evidence acquisition: A detailed search of original articles was performed using the PubMed-MEDLINE database to identify recent English literature relevant to ML, DL and CNN applications in cystoscopic image recognition. Evidence synthesis: In total, two articles and one conference abstract were identified addressing the application of AI methods in cystoscopic image recognition. These investigations showed accuracies exceeding 90% for tumor detection; however, future work is necessary to incorporate these methods into AI-aided cystoscopy and compare them to other tumor visualization tools. Furthermore, we present results from the RaVeNNA-4pi consortium initiative, which has extracted 4200 frames from 62 videos, analyzed them with the U-Net network and achieved an average Dice score of 0.67. Improvements in its precision can be achieved by augmenting the video/frame database. Conclusion: AI-aided cystoscopy has the potential to outperform urologists at recognizing and classifying bladder lesions. To ensure real-life implementation, however, these algorithms require external validation to generalize their results across other data sets.
- Published
- 2020
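The "average Dice score of 0.67" quoted above refers to the Dice coefficient, a standard overlap measure between a predicted segmentation mask and the ground truth; a minimal NumPy version:

```python
# Dice coefficient for binary segmentation masks: 2|A∩B| / (|A| + |B|).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                    # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(dice_score(a, b))               # 0.666...
```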
21. A framework for sensitivity analysis of decision trees
- Author
-
Bogumił Kamiński, Przemysław Szufel, and Michał Jakubczyk
- Subjects
Computer science, Machine learning, Decision trees, Decision tree learning, Decision rule, Decision optimization, Decision sensitivity, Influence diagram, Decision analysis, Optimal decision, Original Paper - Abstract
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
- Published
- 2017
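As a toy illustration of why sensitivity analysis of decision-tree probabilities matters, the sketch below shows the expected-value-maximizing strategy flipping once a success probability crosses a threshold. It is a deliberately simple stand-in for the paper's framework; the payoffs and probabilities are invented.

```python
# Two strategies; EV of the risky one depends on an uncertain probability.
def expected_values(p_success: float) -> dict:
    risky = p_success * 100 + (1 - p_success) * (-40)  # uncertain payoff
    safe = 20.0                                        # certain payoff
    return {"risky": risky, "safe": safe}

for p in (0.30, 0.40, 0.43, 0.50):     # pessimistic -> optimistic estimates
    ev = expected_values(p)
    best = max(ev, key=ev.get)
    print(f"p={p:.2f}  EV(risky)={ev['risky']:6.1f}  best: {best}")

# The optimum switches from "safe" to "risky" once 100p - 40(1-p) > 20,
# i.e. p > 3/7 ~ 0.43: exactly the kind of threshold that a robustness
# analysis of perturbed probabilities is designed to expose.
```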
22. Comparison of data science workflows for root cause analysis of bioprocesses
- Author
-
Christoph Herwig, Yvonne E. Thomassen, Daniel Borchert, Diego A. Suarez-Zuluaga, and Patrick Sagmeister
- Subjects
Drug industry, Computer science, Data analysis, Bioengineering, Machine learning, Workflow, Bioreactors, Robustness, Root cause analysis, Raw data analysis, Feature-based analysis, Partial least squares regression, Principal component analysis, Multivariate analysis, Regression analysis, Data science, Fermentation, Research Paper - Abstract
Root cause analysis (RCA) is one of the most prominent tools used to comprehensively evaluate a biopharmaceutical production process. Despite its widespread use in industry, the Food and Drug Administration has observed many unsuitable approaches to RCA in recent years. The reasons are the use of incorrect variables during the analysis and a lack of process understanding, which impede correct model interpretation. Two major approaches to RCA currently dominate the chemical and pharmaceutical industry: raw data analysis and the feature-based approach. Both techniques can identify the significant variables causing the variance of the response. Although they differ in data unfolding, the same tools, such as principal component analysis and partial least squares regression, are used in both concepts. In this article we demonstrate the strengths and weaknesses of both approaches. We show that a fusion of the two yields a comprehensive and effective workflow that increases process understanding, and we demonstrate this workflow with an example. The presented workflow saves analysis time and reduces the effort of data mining by easily detecting the most important variables within a given dataset. The resulting process knowledge can then be translated into new hypotheses, which can be tested experimentally and thereby effectively improve process robustness.
- Published
- 2018
23. Automatic classification of histopathological diagnoses for building a large scale tissue catalogue
- Author
-
Robert Reihs, Stefan Sauer, Kurt Zatloukal, and Heimo Müller
- Subjects
Text mining, Computer science, Big data, Decision trees, Ontology (information science), Medical diagnoses, Biobank, Information retrieval, Unstructured data, Reference data, Information extraction, Automatic classification, Original Paper - Abstract
In this paper an automatic classification system for pathological findings is presented. The starting point for our undertaking was a pathology tissue collection with about 1.4 million tissue samples described by free-text records over 23 years. Extracting knowledge from this "big data" pool is a challenging task, especially when dealing with unstructured data spanning many years. The classification is based on ontology-based term extraction and decision trees built with a manually curated classification system. The information extraction system is based on regular expressions and a text substitution system. We describe the generation of the decision trees by medical experts using a visual editor, as well as the evaluation of the classification process with a reference data set. We achieved an F-score of 89.7% for ICD-10 and an F-score of 94.7% for ICD-O classification. For the information extraction of tumor staging and receptors we achieved F-scores ranging from 81.8% to 96.8%.
- Published
- 2016
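A minimal sketch of regex-based term extraction of the kind the entry describes, together with the F-score used to evaluate it. The patterns, codes, and report text are invented examples, not the authors' rule set.

```python
# Toy regex "classifier" for pathology reports plus an F-score check.
import re

PATTERNS = {
    "C50": re.compile(r"\bbreast\b.*\bcarcinoma\b", re.I),         # invented rule
    "C61": re.compile(r"\bprostate\b.*\badenocarcinoma\b", re.I),  # invented rule
}

def classify(report: str) -> set:
    return {code for code, pat in PATTERNS.items() if pat.search(report)}

predicted = classify("Invasive breast ductal carcinoma, grade 2.")
truth = {"C50"}

tp = len(predicted & truth)
precision = tp / len(predicted) if predicted else 0.0
recall = tp / len(truth) if truth else 0.0
f_score = (2 * precision * recall / (precision + recall)
           if precision + recall else 0.0)
print(predicted, f"F-score={f_score:.2f}")    # {'C50'} F-score=1.00
```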
24. Multi-purpose, multi-level feature modeling of large-scale industrial software systems
- Author
-
Paul Grünbacher, Daniela Rabiser, Herbert Prähofer, Andreas Grimmer, Florian Angerer, Klaus Eder, Michael Petruzelka, and Mario Kromoser
- Subjects
Computer science, Modeling language, Case study, Modularity, Automation, Feature model, Feature modeling, Consistency (database systems), Product management, Large-scale software systems, Software system, Special Section Paper - Abstract
Feature models are frequently used to capture the knowledge about configurable software systems and product lines. However, feature modeling of large-scale systems is challenging as models are needed for diverse purposes. For instance, feature models can be used to reflect the perspectives of product management, technical solution architecture, or product configuration. Furthermore, models are required at different levels of granularity. Although numerous approaches and tools are available, it remains hard to define the purpose, scope, and granularity of feature models. This paper first reports results and experiences of an exploratory case study on developing feature models for two large-scale industrial automation software systems. We report results on the characteristics and modularity of the feature models, including metrics about model dependencies. Based on the findings from the study, we developed FORCE, a modeling language, and tool environment that extends an existing feature modeling approach to support models for different purposes and at multiple levels, including mappings to the code base. We demonstrate the expressiveness and extensibility of our approach by applying it to the well-known Pick and Place Unit example and an injection molding subsystem of an industrial product line. We further show how our approach supports consistency between different feature models. Our results and experiences show that considering the purpose and level of features is useful for modeling large-scale systems and that modeling dependencies between feature models is essential for developing a system-wide perspective.
- Published
- 2016
25. Privacy online: up, close and personal
- Author
-
Eneken Tikk
- Subjects
Information privacy, Privacy by design, Privacy policy, Internet privacy, Information privacy law, Computer security, Data protection, Data processing, Personally identifiable information, Security, Original Paper - Abstract
In the era of information, administration of personal data protection mingles with expectations of access to information as well as the overall sense of cyber (in)security. A failure to appropriately consider the system of data processing relationships easily reduces personal data protection to assurances on paper. The complexity of contemporary data transactions demands a systemic and structured normative approach to personal data protection. Any evaluation of relevant norms should not be isolated from factors that determine or condition their implementation. As privacy is an intrinsically subjective claim, enforcing data privacy is premised on the data subject's personal participation in the protection of her data.
- Published
- 2017
26. Towards infield, live plant phenotyping using a reduced-parameter CNN
- Author
-
John Atanbori, Tony P. Pridmore, and Andrew P. French
- Subjects
Computer science, Machine learning, Convolutional neural network, Separable convolutions, Singular value decomposition, Image segmentation, Pixel-wise segmentation for plant phenotyping, Lightweight deep convolutional neural networks, Computer vision, Mobile device, Original Paper - Abstract
There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on some millions of model parameters and generate very large weight matrices, thus making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce model parameter numbers and weight matrices of these very deep CNN-based models. Using our combined method (separable convolution and SVD) reduced the weight matrix by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
- Published
- 2019
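Truncated SVD, one of the two compression techniques combined in the entry above, can be illustrated on a single weight matrix: keep only the top k singular values and store two thin factors instead of the full matrix. The random weights and the rank k below are arbitrary illustration choices.

```python
# Compress a dense layer's weights with truncated SVD.
import numpy as np

W = np.random.default_rng(0).normal(size=(512, 512))   # stand-in weights
U, s, Vt = np.linalg.svd(W, full_matrices=False)

k = 32                                     # retained singular values (rank)
W_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # rank-k reconstruction

full = W.size                              # 512 * 512 = 262,144 parameters
compressed = k * (W.shape[0] + W.shape[1]) + k         # two thin factors + s
print(f"parameters: {full} -> {compressed} "
      f"({100 * (1 - compressed / full):.1f}% reduction)")
```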
27. Matching events and activities by integrating behavioral aspects and label analysis
- Author
-
Jan Mendling, Claudio Di Ciccio, Thomas Baier, and Mathias Weske
- Subjects
Business process, Computer science, Process mining, Constraint satisfaction, Conformance checking, Business Process Model and Notation, Business process discovery, Business process intelligence, Business process modeling, Declare, Event mapping, Natural language processing, Data mining, Special Section Paper - Abstract
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.
- Published
- 2018
28. Development and application of a machine learning algorithm for classification of elasmobranch behaviour from accelerometry data
- Author
-
Samuel H. Gruber, Alexander C. Hansell, Lauran R. Brewster, Michael Elliott, Ian G. Cowx, Jonathan J. Dale, Nicholas M. Whitney, Tristan L. Guttridge, and Adrian C. Gleiss
- Subjects
Ecology, Machine learning, Artificial neural network, Logistic regression, Random forest, Gradient boosting, Accelerometer, Headshaking, Negaprion brevirostris, Classifier, Original Paper - Abstract
Discerning behaviours of free-ranging animals allows for quantification of their activity budget, providing important insight into ecology. Over recent years, accelerometers have been used to unveil the cryptic lives of animals. The increased ability of accelerometers to store large quantities of high resolution data has prompted a need for automated behavioural classification. We assessed the performance of several machine learning (ML) classifiers to discern five behaviours performed by accelerometer-equipped juvenile lemon sharks (Negaprion brevirostris) at Bimini, Bahamas (25°44′N, 79°16′W). The sharks were observed to exhibit chafing, burst swimming, headshaking, resting and swimming in a semi-captive environment and these observations were used to ground-truth data for ML training and testing. ML methods included logistic regression, an artificial neural network, two random forest models, a gradient boosting model and a voting ensemble (VE) model, which combined the predictions of all other (base) models to improve classifier performance. The macro-averaged F-measure, an indicator of classifier performance, showed that the VE model improved overall classification (F-measure 0.88) above the strongest base learner model, gradient boosting (0.86). To test whether the VE model provided biologically meaningful results when applied to accelerometer data obtained from wild sharks, we investigated headshaking behaviour, as a proxy for prey capture, in relation to the variables: time of day, tidal phase and season. All variables were significant in predicting prey capture, with predations most likely to occur during early evening and less frequently during the dry season and high tides. These findings support previous hypotheses from sporadic visual observations. Electronic supplementary material The online version of this article (10.1007/s00227-018-3318-y) contains supplementary material, which is available to authorized users.
- Published
- 2018
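The voting-ensemble idea in the entry above has a direct scikit-learn analogue: train several base classifiers and aggregate their predictions. The sketch below uses synthetic data, not the study's shark accelerometer records, and the base models only loosely mirror those in the paper.

```python
# Soft-voting ensemble over three base classifiers, scored by macro F-measure.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft",                       # average predicted probabilities
)
vote.fit(Xtr, ytr)
print("macro F-measure:", f1_score(yte, vote.predict(Xte), average="macro"))
```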
29. Tracing thyroid hormone-disrupting compounds: database compilation and structure-activity evaluation for an effect-directed analysis of sediment
- Author
-
Timo Hamers, Marja H. Lamoree, Jana M. Weiss, Patrik L. Andersson, Pim E.G. Leonards, Jin Zhang, Eszter Simon, Chemistry and Biology, and Amsterdam Global Change Institute
- Subjects
Endocrine system, Thyroid hormones, Transthyretin (TTR), Endocrine disruptors, Biochemistry, Analytical chemistry, Structure-activity relationship (SAR), Effect-directed analysis (EDA), Thyroid hormone-disrupting compound (THDC), Nonylphenol, Triclosan, Sediment, Potency, Research Paper - Abstract
A variety of anthropogenic compounds has been found to be capable of disrupting the endocrine systems of organisms, in laboratory studies as well as in wildlife. The most widely described endpoint is estrogenicity, but other hormonal disturbances, e.g., thyroid hormone disruption, are gaining more and more attention. Here, we present a review and chemical characterization, using principal component analysis, of organic compounds that have been tested for their capacity to bind competitively to the thyroid hormone transport protein transthyretin (TTR). The database contains 250 individual compounds and technical mixtures, of which 144 compounds are defined as TTR binders. Almost one third of these compounds (n = 52) were even more potent than the natural hormone thyroxine (T4). The database was used as a tool to assist in the identification of thyroid hormone-disrupting compounds (THDCs) in an effect-directed analysis (EDA) study of a sediment sample. Two compounds could be confirmed to contribute to the detected TTR-binding potency in the sediment sample, i.e., triclosan and nonylphenol technical mixture. They constituted less than 1 % of the TTR-binding potency of the unfractionated extract. The low rate of explained activity may be attributed to the challenges related to identification of unknown contaminants in combination with the limited knowledge about THDCs in general. This study demonstrates the need for databases containing compound-specific toxicological properties. In the framework of EDA, such a database could be used to assist in the identification and confirmation of causative compounds focusing on thyroid hormone disruption. Electronic supplementary material The online version of this article (doi:10.1007/s00216-015-8736-9) contains supplementary material, which is available to authorized users.
- Published
- 2015
- Full Text
- View/download PDF
30. Design of computer big data processing system based on genetic algorithm
- Author
-
Chen, Song
- Published
- 2023
- Full Text
- View/download PDF
31. Application of computer image technology in 3D painting based on cloud computing
- Author
-
Li, Yumei
- Published
- 2023
- Full Text
- View/download PDF
32. Technical framework of energy-saving construction management of intelligent building based on computer vision algorithm
- Author
-
Ma, Weini
- Published
- 2023
- Full Text
- View/download PDF
33. Application of home nursing based on computer medical image detection in the treatment of open fracture wounds
- Author
-
Qiao, Linxi and Chen, Lin
- Published
- 2023
- Full Text
- View/download PDF
34. RETRACTED ARTICLE: System simulation of computer image recognition technology application by using improved neural network algorithm
- Author
-
Wang, Xin
- Published
- 2023
- Full Text
- View/download PDF
35. Application of computer-aided CAD in urban planning heritage protection
- Author
-
Wang, Song
- Published
- 2023
- Full Text
- View/download PDF
36. Simulation of computer image recognition technology based on image feature extraction
- Author
-
Ying, Weiqiang, Zhang, Lingyan, Luo, Shijian, Yao, Cheng, and Ying, Fangtian
- Published
- 2023
- Full Text
- View/download PDF
37. Improving Sentiment Analysis for Social Media Applications Using an Ensemble Deep Learning Language Model
- Author
-
Ahmed Alsayat
- Subjects
Word embedding, Computer science, Machine learning, Social media, Sentiment analysis, Ensemble algorithms, Deep learning, Statistical classification, Language model, COVID-19, Coronavirus, Pandemic, Research Article - Abstract
As data grow rapidly on social media through users' contributions, especially during the recent coronavirus pandemic, the need to understand user behavior is in high demand. The opinions behind posts on the pandemic are the scope of the dataset tested in this study. Finding the most suitable classification algorithms for this kind of data is challenging. Within this context, deep learning models for sentiment analysis can offer detailed representation capabilities and enhanced performance compared to existing feature-based techniques. In this paper, we focus on enhancing the performance of sentiment classification using a customized deep learning model with an advanced word embedding technique and a long short-term memory (LSTM) network. Furthermore, we propose an ensemble model that combines our baseline classifier with other state-of-the-art classifiers used for sentiment analysis. The contributions of this paper are twofold. (1) We establish a robust framework based on word embedding and an LSTM network that learns the contextual relations among words and understands unseen or rare words in relatively emerging situations, such as the coronavirus pandemic, by recognizing suffixes and prefixes from training data. (2) We capture and utilize the significant differences among state-of-the-art methods by proposing a hybrid ensemble model for sentiment analysis. We conduct several experiments using our own Twitter coronavirus hashtag dataset as well as public review datasets from Amazon and Yelp. For concluding results, a statistical study is carried out indicating that the performance of the proposed models surpasses that of other models in terms of classification accuracy.
- Published
- 2021
38. Visualizing the Microscopic World
- Author
-
Cerqueira, Nuno M. F. S. A., Fernandes, Pedro A., and Ramos, Maria João
- Published
- 2018
- Full Text
- View/download PDF
39. SARS-CoV-2: a systematic review of indoor air sampling for virus detection
- Author
-
Liane Yuri Kondo Nakada, José Roberto Guimarães, João Tito Borges, and Milena Guedes Maniero
- Subjects
COVID-19, SARS-CoV-2, Indoor air, Air sampler, Biological air sampler, Impactor, Impinger, Cyclone, Aerosols, Sampling (statistics), Virus detection, Indoor air pollution, Environmental science, Pandemics - Abstract
In a post-pandemic scenario, indoor air monitoring may be required to safeguard public health, so well-defined methods, protocols, and equipment play an important role. Considering the COVID-19 pandemic, this manuscript presents a literature review of indoor air sampling methods to detect viruses, especially SARS-CoV-2. The review was conducted using the following online databases: Web of Science, Science Direct, and PubMed, with the Boolean operators "AND" and "OR" combining the following keywords: air sampler, coronavirus, COVID-19, indoor, and SARS-CoV-2. The review included 25 published papers reporting sampling and detection methods for SARS-CoV-2 in indoor environments. Most of the papers focused on sampling and analysis of viruses in aerosols present in contaminated areas and their potential transmission to adjacent areas. Ten studies reported only negative results, while 15 papers reported positive results in at least one sample. Overall, the papers describe several sampling devices and methods for SARS-CoV-2 detection, using different choices of distance, height from the floor, flow rate, and sampled air volume. Regarding the efficacy of each mechanism, measured as the percentage of investigations with positive samples, the literature indicates that solid impactors are more effective than liquid impactors or filters, and a combination of methods may be recommended. As a final remark, determining the sampling method is not a trivial task, as both the sampler and the environment influence the presence and viability of viruses in the samples, so a case-by-case assessment is required when selecting a sampling system.
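For illustration, one plausible way the keywords above could be combined into a Boolean query; the exact search string used by the authors is an assumption here.

```python
# Compose one candidate Boolean query from the keywords listed in the abstract.
virus_terms = ["coronavirus", "COVID-19", "SARS-CoV-2"]
query = '("air sampler") AND (indoor) AND (' + " OR ".join(virus_terms) + ")"
print(query)  # ("air sampler") AND (indoor) AND (coronavirus OR COVID-19 OR SARS-CoV-2)
```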
- Published
- 2021
40. Imaging-based patient-reported outcomes (PROs) database: How we do it
- Author
-
Soterios Gyftopoulos, Adam Jacobs, and Mohammad Samim
- Subjects
Diagnostic Imaging ,medicine.medical_specialty ,Quality management ,education ,Review Article ,computer.software_genre ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,PROMs ,0302 clinical medicine ,Patient-Centered Care ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Patient Reported Outcome Measures ,Patient-reported outcomes ,Database ,medicine.diagnostic_test ,business.industry ,Interventional radiology ,humanities ,Patient feedback ,Outcomes research ,030220 oncology & carcinogenesis ,Scale (social sciences) ,Anterior shoulder instability ,business ,Radiology ,computer - Abstract
Patient-reported outcomes (PROs) provide an essential understanding of the impact a condition or treatment has on a patient, complementing more traditional outcome measures such as survival and time to symptom resolution. PROs have become increasingly important in medicine with the push toward patient-centered care. Creating a PROs database within an institution or practice provides a way to collect, understand, and use this kind of patient feedback to inform quality improvement and to build the evidence base for medical decision-making; on a larger scale, it could help determine national standards of care and treatment guidelines. This paper provides a first-hand account of our experience setting up an imaging-based PROs database at our institution and is organized into steps the reader can follow to create a PROs database of their own. Given the limited use of PROs within both diagnostic and interventional radiology, we hope our paper stimulates new interest among radiologists who may never have considered outcomes work in the past.
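As a minimal sketch of the storage layer such a database needs, here is an illustrative SQLite schema in Python; the table layout and field names are assumptions, not the institution's actual design.

```python
import sqlite3

conn = sqlite3.connect("pros.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS pro_responses (
    response_id   INTEGER PRIMARY KEY,
    patient_id    TEXT NOT NULL,   -- de-identified patient key
    exam_id       TEXT NOT NULL,   -- ties the PRO to a specific imaging exam
    instrument    TEXT NOT NULL,   -- e.g. a validated shoulder-instability scale
    score         REAL NOT NULL,
    collected_at  TEXT NOT NULL    -- ISO-8601 timestamp of the patient response
)""")
conn.commit()
```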
- Published
- 2020
41. Role of computer-assisted surgery in osteotomies around the knee
- Author
-
Saragaglia, D., Chedal-Bornu, B., Rouchy, R. C., Rubens-Duval, B., Mader, R., and Pailhé, R.
- Published
- 2016
- Full Text
- View/download PDF
42. Current state of the art in total knee arthroplasty computer navigation
- Author
-
Picard, Frederic, Deep, Kamal, and Jenny, Jean Yves
- Published
- 2016
- Full Text
- View/download PDF
43. Relay Cost Bounding for Contactless EMV Payments
- Author
-
Chothia, T., Garcia, F.D., De Ruiter, J., Van Den Breekel, J.M., and Thompson, M.
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Cryptography ,Payment ,Computer security ,computer.software_genre ,Relay attack ,Payment card ,law.invention ,Payment protocol ,Relay ,law ,Mobile phone ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Overhead (computing) ,Lecture Notes in Computer Science ,Smart card ,Digital Security ,business ,computer ,media_common - Abstract
This paper looks at relay attacks against contactless payment cards, which could be used to wirelessly pickpocket money from victims. We discuss the two leading contactless EMV payment protocols (Visa’s payWave and MasterCard’s PayPass). Stopping a relay attack against cards using these protocols is hard: either the overhead of the communication is low compared to the (cryptographic) computation by the card or the messages can be cached before they are requested by the terminal. We propose a solution that fits within the EMV Contactless specification to make a payment protocol that is resistant to relay attacks from commercial off-the-shelf devices, such as mobile phones. This solution does not require significant changes to the cards and can easily be added to existing terminals. To prove that our protocol really does stop relay attacks, we develop a new method of automatically checking defences against relay attacks using the applied pi-calculus and the tool ProVerif.
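A minimal sketch of the timing principle behind such relay defences: the terminal times one challenge-response round and rejects the card when the round trip exceeds a bound that off-the-shelf relay hardware cannot meet. The 2 ms bound and the card interface below are assumptions, not the paper's calibrated protocol.

```python
import os
import time

MAX_ROUND_TRIP_S = 0.002  # assumed bound; a real terminal would calibrate this

class DummyCard:
    # Stand-in for the contactless card; respond() would normally return a MAC
    # over the challenge, computed from a pre-generated session nonce so that
    # the timed round contains no heavy cryptographic work.
    def respond(self, challenge: bytes) -> bytes:
        return challenge  # placeholder response

def timed_auth(card) -> bool:
    challenge = os.urandom(8)
    start = time.perf_counter()
    response = card.respond(challenge)
    elapsed = time.perf_counter() - start
    # Cryptographic verification of `response` would follow; here we only
    # illustrate the relay-cost bound on the time-critical round.
    return elapsed <= MAX_ROUND_TRIP_S

print(timed_auth(DummyCard()))  # True locally; a phone-based relay adds tens of ms
```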
- Published
- 2015
- Full Text
- View/download PDF
44. Pay to Win: Cheap, Cross-Chain Bribing Attacks on PoW Cryptocurrencies
- Author
-
Ittay Eyal, Edgar Weippl, Sarah Meiklejohn, Alexei Zamyatin, Nicholas Stifter, Itay Tsabary, Aljosha Judmayer, and Peter Gaži
- Subjects
Receipt ,Cryptocurrency ,Smart contract ,Computer science ,media_common.quotation_subject ,Python (programming language) ,Payment ,Computer security ,computer.software_genre ,Technical feasibility ,Collusion ,computer ,Database transaction ,media_common ,computer.programming_language - Abstract
In this paper we extend the attack landscape of bribing attacks on cryptocurrencies by presenting a new method, which we call Pay-To-Win (P2W). To the best of our knowledge, it is the first approach capable of facilitating double-spend collusion across different blockchains. Moreover, our technique can also be used to specifically incentivize transaction exclusion or (re)ordering. Our construction relies on smart contracts to render the payment and receipt of bribes trustless for both the briber and the bribee. Attacks using our approach are operated and financed out-of-band, i.e., on a funding cryptocurrency, while the consequences are induced in a different target cryptocurrency. The main requirement is that smart contracts on the funding cryptocurrency are able to verify the consensus rules of the target. For a concrete instantiation of our P2W method, we choose Bitcoin as the target and Ethereum as the funding cryptocurrency. Our P2W method is designed to reimburse collaborators even if an attack fails. Interestingly, this actually renders our approach approximately one order of magnitude cheaper than comparable bribing techniques (e.g., the whale attack). We demonstrate the technical feasibility of P2W attacks by publishing all relevant artifacts of this paper, ranging from calculations of success probabilities to a fully functional proof-of-concept implementation consisting of an Ethereum smart contract and a Python client.
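A minimal sketch of the stated requirement that the funding chain can verify the target chain's consensus rules: a standalone Python version of Bitcoin's proof-of-work check on an 80-byte block header (in the paper, the corresponding logic lives inside an Ethereum smart contract).

```python
import hashlib

def bits_to_target(bits: int) -> int:
    # Decode Bitcoin's compact "nBits" encoding into the full 256-bit target
    # (sketch assumes the common case where the exponent is at least 3).
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def check_header_pow(header80: bytes) -> bool:
    assert len(header80) == 80, "Bitcoin block headers are exactly 80 bytes"
    digest = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    block_hash = int.from_bytes(digest, "little")     # header hash, little-endian
    bits = int.from_bytes(header80[72:76], "little")  # nBits field at offset 72
    return block_hash <= bits_to_target(bits)
```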
- Published
- 2021
- Full Text
- View/download PDF
45. A Brief Evaluation of Freewheeling Motor at P4 Position: Retrofit Approach to Electrification
- Author
-
Jérôme Mortal and Ashwin Charles
- Subjects
Powertrain ,business.industry ,Computer science ,Automotive industry ,Residual value ,Co-simulation ,Automotive engineering ,Electrification ,Retrofitting ,Freewheel ,business ,MATLAB ,computer ,computer.programming_language - Abstract
Several approaches are being taken toward the electrification of the future automotive fleet. This paper explores a possibility for electrifying existing on-road vehicles with high residual value. The premise is that electrification of certain sectors of the vehicle fleet is economically feasible when existing vehicles are retrofitted with electric components rather than replaced with pure electric vehicles; retrofitting refers to the addition of components that were not present at the time of manufacture. The paper presents the benefits of retrofitting a "P0 hybrid" vehicle with a freewheeling motor at the P4 position. This configuration is evaluated through detailed simulation of CO2 savings and performance using MATLAB and AMESim. We also assess the scope of vehicle modifications the retrofit requires and summarize the benefits.
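A back-of-envelope sketch, not the paper's MATLAB/AMESim co-simulation, of how recuperated braking energy at the P4 axle can translate into CO2 savings; every number below is an illustrative assumption.

```python
BRAKING_KWH_PER_100KM = 1.5      # assumed recoverable braking energy per 100 km
RECUPERATION_EFFICIENCY = 0.65   # assumed motor + inverter + battery round trip
ENGINE_KWH_PER_LITRE = 2.8       # assumed useful work per litre of petrol
CO2_G_PER_LITRE = 2310           # tailpipe CO2 per litre of petrol

saved_litres = BRAKING_KWH_PER_100KM * RECUPERATION_EFFICIENCY / ENGINE_KWH_PER_LITRE
print(f"~{saved_litres * CO2_G_PER_LITRE:.0f} g CO2 saved per 100 km")  # ~804 g
```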
- Published
- 2021
- Full Text
- View/download PDF
46. DeFi as an Information Aggregator
- Author
-
Jiasun Li
- Subjects
Microeconomics ,Smart contract ,Computer science ,Process (engineering) ,Market clearing ,Financial market ,Investment (macroeconomics) ,Asset (computer security) ,computer.software_genre ,Private information retrieval ,computer ,News aggregator - Abstract
This paper aims to draw attention to the information aggregation role of DeFi, which has so far received less attention in community discussions than many other DeFi topics. A study in this direction seems important, however, given that DeFi intends to rebuild financial markets on smart contracts, while a large literature in financial economics has studied information aggregation via the market. In those papers, investors submit demand schedules for a risky asset during the trading process: equilibrium trading quantities are contingent on the realized price, which is an implicit function of all investors' information, determined by market clearing. Similarly, when agents with dispersed private information interact in a more general setting, they may also want their actions to be contingent on others', as the aggregate action profile in equilibrium is likewise an implicit function of everyone's information. For example, investors in a risky project may want their individual investment amounts to be contingent on the total investment amount. A well-designed smart contract that appropriately divides payoffs may thus induce contingent actions that efficiently use the aggregated information, leading to efficient allocations. Therefore, DeFi may improve the information aggregation role of financial markets, in addition to streamlining operations or cutting out the middleman.
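A toy sketch of the fixed-point logic described above: each investor's stake depends on the aggregate stake, so the equilibrium profile is an implicit function of everyone's private signals. The linear response rule and its parameters are illustrative assumptions, not a model from the paper.

```python
def equilibrium_investments(signals, weight_on_total=0.3, iters=200):
    # Iterate the response map until individual stakes are consistent with the
    # aggregate they themselves produce (a fixed point).
    amounts = list(signals)  # start from autarky: invest on your own signal
    for _ in range(iters):
        total = sum(amounts)
        amounts = [s + weight_on_total * (total - a) / len(amounts)
                   for s, a in zip(signals, amounts)]
    return amounts

print(equilibrium_investments([1.0, 2.0, 0.5]))
```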
- Published
- 2021
- Full Text
- View/download PDF
47. Aus Wilhelms Brieftasche. Dichtung und Kredit in Goethes Roman Wilhelm Meisters Wanderjahre
- Author
-
Cornelia Zumbusch
- Subjects
Literature ,business.industry ,Poetics ,Debt ,media_common.quotation_subject ,Novella ,FAUST ,Art ,business ,computer ,computer.programming_language ,media_common ,Character design - Abstract
Goethe’s novellas and novels time and again associate love and guilt with economic debt. Against this background, this paper explores the microeconomic forms of credit and indebtedness, liquidation and payback, recounted in the entangled stories of the late novel Wilhelm Meisters Wanderjahre. Economic models and metaphors not only shape the plot, the character design, and the depicted forms of communication, but also reflect the form of the novel itself. The paper describes this as a poetics of commitment which, as a prosaic variant, should be placed alongside the poetics of wastefulness developed in Faust II.
- Published
- 2020
- Full Text
- View/download PDF
48. Application of Density Clustering Algorithm Based on Greedy Strategy in Hot Spot Mining of Taxi Passengers
- Author
-
Jianglin Luo, Qingqing Wang, and Yiping Bao
- Subjects
Density distribution ,Computer science ,Taxis ,Hot spot (veterinary medicine) ,Noise (video) ,Data mining ,Cluster analysis ,computer.software_genre ,computer - Abstract
In this paper, a greedy strategy is used to improve a density clustering algorithm so that it can separate noise points and handle uneven density distributions. To further evaluate the efficiency of the greedy-strategy-based density clustering algorithm, it is applied to mining taxi passenger hot spots. First, the large-scale data sets are sampled with reservoir sampling to obtain effective hot spot data. Then, data from 8,000 taxis in an urban area during December 4–8, 2018 are clustered to verify the validity of the proposed algorithm.
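A minimal sketch of the reservoir sampling step mentioned above (the classic Algorithm R), which draws a uniform fixed-size sample from a stream too large to hold in memory; the greedy density-clustering stage itself is not reproduced here.

```python
import random

def reservoir_sample(stream, k, seed=None):
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randint(0, i)    # keep each later item with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=5, seed=42))
```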
- Published
- 2020
- Full Text
- View/download PDF
49. Orion: A Generic Model and Tool for Data Mining
- Author
-
Julien Soler, Cédric Buche, and Cindy Even
- Subjects
Computer science ,Control (management) ,InformationSystems_DATABASEMANAGEMENT ,Behavior Trees ,020206 networking & telecommunications ,Context (language use) ,02 engineering and technology ,computer.software_genre ,Range (mathematics) ,Unified Modeling Language ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,computer ,computer.programming_language - Abstract
This paper focuses on the design of autonomous behaviors based on observation of human behavior. In this context, the contribution of the Orion model is to combine and exploit two approaches: data mining techniques (to extract knowledge from humans) and behavior models (to control the autonomous behaviors). In this paper, the Orion model is described with UML diagrams. More than a model, Orion is an operational tool for representing, transforming, visualizing, and predicting data; it also integrates standard operational behavioral models. Orion is illustrated by controlling a bot in the game Unreal Tournament. Using Orion, we collect data on low-level behaviors through three scenarios performed by human players: movement, long-range aiming, and close combat. We can easily transform the data and apply data mining techniques to learn behaviors from observation of human players. Orion allows us to build a complete behavior using an extension of a Behavior Tree that integrates ad hoc features to manage the aspects of behavior we were not able to learn automatically.
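A minimal sketch of the Behavior Tree structure Orion extends: composite Sequence and Selector nodes ticked over callable leaves. Orion's ad hoc extensions and the learned low-level behaviors are not reproduced; the example leaves are placeholders.

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        return self.fn(ctx)

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:      # all children must succeed, in order
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:      # first succeeding child wins
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

# Aim at a visible enemy, otherwise wander; leaves are placeholders.
tree = Selector(
    Sequence(Action(lambda c: SUCCESS if c.get("enemy_visible") else FAILURE),
             Action(lambda c: SUCCESS)),  # stand-in for "aim and shoot"
    Action(lambda c: SUCCESS),            # stand-in for "wander"
)
print(tree.tick({"enemy_visible": True}))  # -> success
```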
- Published
- 2020
- Full Text
- View/download PDF
50. MagiPlay: An Augmented Reality Serious Game Allowing Children to Program Intelligent Environments
- Author
-
Margherita Antona, Evropi Stefanidi, Dimitrios Arampatzis, George Papagiannakis, Asterios Leonidis, and Maria Korozi
- Subjects
Human–computer interaction ,Smart objects ,Computer science ,Computational thinking ,Intelligent environment ,Augmented reality ,Dialog system ,User interface ,computer.software_genre ,computer ,Mobile device ,Natural language - Abstract
A basic understanding of problem-solving and computational thinking is undoubtedly a benefit for all ages. At the same time, the proliferation of Intelligent Environments has raised the need to configure their behaviors to address their users' needs. This configuration can take the form of programming and, coupled with advances in Augmented Reality and Conversational Agents, can enable users to take control of their intelligent surroundings in an efficient and natural manner. Focusing on children, who can greatly benefit from being immersed in programming from an early age, this paper presents an authoring framework in the form of an Augmented Reality serious game, named MagiPlay, that allows children to manipulate and program their Intelligent Environment. This is achieved through a handheld device whose camera children use to capture smart objects and subsequently create rules dictating their behavior. An intuitive user interface permits players to combine LEGO-like 3D bricks as part of the rule-creation process, aiming to make the experience more natural. Additionally, children can communicate with the system in natural language through a Conversational Agent in order to configure rules by talking with a human-like agent, which also serves as a guide for the player, providing context-sensitive tips at every step of rule creation. Finally, MagiPlay supports networked collaboration to allow parental and teacher guidance and support. The main objective of this research is to provide young learners with a fun and engaging way to program their intelligent surroundings. This paper describes the game logic of MagiPlay and its implementation details, and discusses the results of an evaluation, with statistically significant findings, conducted with end-users: a group of children aged seven to twelve.
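A minimal sketch of the trigger-action rules children assemble in MagiPlay, assuming a simple event-condition-action form; the game's actual rule schema and smart-object API are assumptions here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]   # predicate over a smart-object event
    action: Callable[[], None]        # effect on some other smart object

def dispatch(event: dict, rules: list) -> None:
    for rule in rules:
        if rule.trigger(event):
            rule.action()

# "When the lamp turns on, the speaker chimes", as two LEGO-like bricks.
rules = [Rule(trigger=lambda e: e.get("device") == "lamp" and e.get("state") == "on",
              action=lambda: print("speaker: chime!"))]
dispatch({"device": "lamp", "state": "on"}, rules)
```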
- Published
- 2020
- Full Text
- View/download PDF