37 results for "Expert analysis"
Search Results
2. Approach to Estimate the Skills of an Operator During Human-Robot Cooperation
- Author
-
Adrian Couvent, Christophe Debain, and Nicolas Tricot
- Subjects
Operator (computer programming) ,Work (electrical) ,Computer science ,business.industry ,Expert analysis ,Artificial intelligence ,Machine learning ,computer.software_genre ,business ,computer ,Human–robot interaction ,Task (project management) - Abstract
This work presents a new approach to evaluating operators' skills with respect to their activities. The approach is based on an activity model composed of three primary activities, and an indicator is proposed for each primary activity. The method has been applied to a picking task; the results, compared with expert analysis, appear consistent. The approach shows that there is no clear link between performance and skills.
- Published
- 2021
- Full Text
- View/download PDF
3. Analysis of the main criteria used in expert handwriting analysis of signatures
- Author
-
Ana Patrícia Carvalho de Melo, Byron Leite Dantas Bezerra, Celso Antônio Marcionilo Lopes Júnior, Fernanda Gabrielle Andrade Lima, Luciana Vaz de Oliveira Lucena, Murilo Campanhol Stodolni, Denise Costa Meneses, and Karina Paes Advíncula
- Subjects
Handwriting ,Multivariate statistics ,R language ,Relation (database) ,Computer science ,Expert analysis ,P1-1091 ,computer.software_genre ,Frequency ,Escrita Manual ,Set (psychology) ,Philology. Linguistics ,Expert Testimony ,General Environmental Science ,Univariate analysis ,business.industry ,Fonoaudiologia ,General Medicine ,Prova Pericial ,Otorhinolaryngology ,RF1-547 ,Speech, Language and Hearing Sciences ,General Earth and Planetary Sciences ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
Purpose: to analyze the criteria most used by experts in handwriting analysis reports. Methods: a descriptive, quantitative, inferential, cross-sectional study, with statistical analysis of the results of a form administered to the experts. The statistical calculations were made in R, version 4.0.1, with statistical significance set at 5%. Results: the absolute frequency analysis indicated a greater occurrence of the use of initial and final pen strokes and handwriting progress, with a relative frequency above 70%. A detailed evaluation with univariate analysis showed that these criteria are not relevant to correct conclusions in the expert analysis report, and that morphology is a relevant criterion for inferring whether an evaluation is correct. The data also showed that initial pen stroke, inclination, dynamism, and evolution, when observed under multivariate modeling, were not significant, indicating that subjectivity is essential for the experts to make correct analyses. Conclusion: the expert handwriting analysis criteria most reported in relation to the experts' correct analyses were not statistically relevant for the development of the analysis reports.
- Published
- 2021
- Full Text
- View/download PDF
4. On the Feasibility of Adversarial Sample Creation Using the Android System API
- Author
-
Davide Maiorca, Michele Scalas, Fabrizio Cara, and Giorgio Giacinto
- Subjects
021110 strategic, defence & security studies ,lcsh:T58.5-58.64 ,Computer science ,lcsh:Information technology ,0211 other engineering and technologies ,Expert analysis ,problem space ,020207 software engineering ,02 engineering and technology ,Adversarial machine learning ,computer.software_genre ,Computer security ,adversarial machine learning ,Adversarial system ,malware detection ,Android ,0202 electrical engineering, electronic engineering, information engineering ,Ransomware ,Malware ,evasion attack ,Android (operating system) ,computer ,Information Systems ,Problem space ,Interpretability - Abstract
Due to its popularity, the Android operating system is a critical target for malware attacks, and multiple security efforts have gone into designing malware detection systems that identify potentially harmful applications. Machine learning-based systems, leveraging both static and dynamic analysis, have been increasingly adopted to discriminate between legitimate and malicious samples because of their ability to identify novel variants of malware. At the same time, attackers have been developing techniques to evade such systems, such as generating evasive apps, i.e., carefully perturbed samples that the classifiers label as legitimate. Previous work has shown the vulnerability of detection systems, including those designed for Android malware detection, to such evasion attacks. However, most works neglected to bring the attacks into the so-called problem space, i.e., to generate concrete Android adversarial samples, which requires preserving the app's semantics and remaining realistic under human expert analysis. In this work, we study the feasibility of generating adversarial samples specifically through the injection of system API calls, which are typical discriminating characteristics for malware detectors. We perform our analysis on a state-of-the-art ransomware detector that employs the occurrence of system API calls as the features of its machine learning algorithm. In particular, we discuss the constraints that are necessary to generate real samples, and we use techniques inherited from interpretability research to assess the impact of specific API calls on evasion. We assess the vulnerability of the detector against mimicry and random-noise attacks, and we propose a basic implementation that generates concrete, working adversarial samples. The results suggest that injecting system API calls can be a viable strategy for attackers to generate concrete adversarial samples. However, we point out the low suitability of mimicry attacks and the necessity of building more sophisticated evasion attacks.
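The key constraint the abstract describes, that system API calls can be injected into an app but existing behavior cannot easily be removed, can be illustrated in feature space. The detector weights, five-API feature set, and samples below are invented for illustration and are not those of the paper:

```python
def predict_malicious(features, weights, bias):
    """Toy linear detector over API-call occurrence features."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0

def inject_apis(sample, benign_target):
    """Feature-space mimicry under the problem-space constraint that
    system API calls can only be added to an app (0 -> 1), never removed."""
    return [s | b for s, b in zip(sample, benign_target)]

# Hypothetical 5-API detector: positive weights mark suspicious calls,
# negative weights mark calls typical of goodware.
w, b = [1.5, 1.0, -2.0, -1.5, 0.5], -0.5
malware = [1, 1, 0, 0, 1]                 # score  2.5 -> flagged
benign  = [0, 0, 1, 1, 0]                 # score -4.0 -> clean
evasive = inject_apis(malware, benign)    # adds the two 'goodware' calls
```

Because the detector assigns negative weights to calls typical of benign apps, additions alone are enough to flip its decision, which is why API injection is a plausible evasion channel even when nothing can be removed.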
- Published
- 2020
5. Retinal vascular tortuosity assessment: inter-intra expert analysis and correlation with computational measurements
- Author
-
Marcos Ortega, Stephanie Romeo, María D. Álvarez, José Rouco, Lucía Ramos, and Jorge Novo
- Subjects
Epidemiology ,Computer science ,Retinal circulation ,Expert analysis ,Health Informatics ,030204 cardiovascular system & hematology ,Fundus images ,computer.software_genre ,Tortuosity ,Image analysis ,Clinical biomarker ,Correlation ,03 medical and health sciences ,0302 clinical medicine ,Retinal Diseases ,Humans ,Diagnosis, Computer-Assisted ,Practice Patterns, Physicians' ,Observer Variation ,lcsh:R5-920 ,Ophthalmologists ,Reproducibility of Results ,Retinal Vessels ,Retinal vascular tortuosity ,Computer-aided diagnosis ,Ophthalmology ,Vascular tortuosity ,030221 ophthalmology & optometry ,Metric (unit) ,Data mining ,lcsh:Medicine (General) ,computer ,Kappa ,Research Article - Abstract
Background: The retinal vascular tortuosity can be a potential indicator of relevant vascular and non-vascular diseases. However, the lack of a precise, standard guide for tortuosity evaluation hinders its use for diagnostic and treatment purposes. This work aims to advance the standardization of retinal vascular tortuosity as a clinical biomarker with diagnostic potential, thereby allowing the validation of objective computational measurements against the entire spectrum of expert knowledge. Methods: This paper describes a multi-expert validation process for the reference computational vascular tortuosity measurements. A group of five experts, covering the different clinical profiles of an ophthalmological service, rated the images on a four-grade scale from non-tortuous to severe tortuosity, as well as on non-tortuous/tortuous and asymptomatic/symptomatic binary classifications. The rating process comprised two rounds involving all the experts and a joint round to establish consensual rates. Expert agreement was analyzed throughout the rating procedure, and the consensual rates were then set as the reference to validate the prognostic performance of four reference computational tortuosity metrics. Results: The Kappa indexes for the intra-rater agreement analysis were between 0.35 and 0.83, whereas those for the inter-rater agreement in the asymptomatic/symptomatic classification were between 0.22 and 0.76. The Area Under the Curve (AUC) for each expert against the consensual rates was between 0.61 and 0.83, whereas the prognostic performance of the best objective tortuosity metric was 0.80. Conclusions: There is high inter- and intra-rater variability, especially for the four-grade scale. The prognostic performance of the tortuosity measurements is close to the experts' performance, especially for the Grisan measurement. However, there is a gap between the automatic effectiveness and the expert perception, given the lack of clinical criteria in the computational measurements.
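The agreement figures quoted in the abstract are Cohen's kappa values. As a reference for how such numbers arise, here is a minimal kappa computation for two raters; the example labels are invented, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labelled independently
    # at their own marginal rates.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters on the binary asymptomatic (0) / symptomatic (1) scale:
a = [0, 0, 1, 1, 1, 0, 1, 0]
b = [0, 1, 1, 1, 0, 0, 1, 0]
print(round(cohens_kappa(a, b), 3))  # prints 0.5
```

A kappa of 0.5 despite 75% raw agreement shows why chance correction matters when class marginals are balanced, which is the regime the study's 0.22-0.83 range should be read in.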
- Published
- 2018
- Full Text
- View/download PDF
6. Lorenzo v. SEC: the supreme court rules on scheme liability under the federal securities laws
- Author
-
Susan Hurd, Mel Gworek, and Evan Glustrom
- Subjects
Scheme (programming language) ,media_common.quotation_subject ,Liability ,Expert analysis ,Securities fraud ,Supreme court ,Shareholder ,Originality ,Law ,Value (economics) ,Business ,computer ,computer.programming_language ,media_common - Abstract
Purpose To analyze the impact of the Supreme Court's decision in Lorenzo v. SEC. Design/methodology/approach Discusses the lead-up to the decision, the arguments made by both sides, and the opinion of the Court, and makes predictions about the likely impact of the decision. Findings The holding is unlikely to have a significant impact on private securities litigation because shareholders, unlike the SEC, are required to prove reliance, and under the Lorenzo fact pattern reliance cannot be shown. Originality/value Expert analysis and guidance from experienced securities litigation counsel.
- Published
- 2019
- Full Text
- View/download PDF
7. Multi-expert analysis and validation of objective vascular tortuosity measurements
- Author
-
Lucía Ramos, José Rouco, Stephanie Romeo, Marina Álvarez, Jorge Novo, and Marcos Ortega
- Subjects
Computer science ,0206 medical engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Expert analysis ,Retinal ,Image processing ,02 engineering and technology ,Retinal vascular tortuosity ,Fundus (eye) ,computer.software_genre ,020601 biomedical engineering ,Tortuosity ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,chemistry ,General Earth and Planetary Sciences ,Data mining ,computer ,General Environmental Science - Abstract
The retinal vascular tortuosity is a commonly used parameter for the early diagnosis of several diseases that affect the circulatory system. Manual analysis of fundus images for tortuosity characterization is a time-consuming and subjective task with high inter-rater variability, so automatic image processing methods allow the efficient computation of objective and stable parameters. Validating these methods is crucial to ensure an objective and reliable environment for retinal experts. This paper describes a multi-expert analysis that measures the clinical performance, as well as a validation procedure, of the computational tortuosity module of the Sirius framework, a computer-aided diagnosis platform for analyzing retinal images.
- Published
- 2018
- Full Text
- View/download PDF
8. Entire Frequency Domain Analysis of Rodent EEG and EMG Recordings Using Relative Thresholds
- Author
-
Xin-Hong Xu, Wei-Min Qu, Zhi-Li Huang, Ming-Hui Yao, and Ming-Ming Yan
- Subjects
Speech recognition ,media_common.quotation_subject ,Fast Fourier transform ,Expert analysis ,Electroencephalography ,Non-rapid eye movement sleep ,03 medical and health sciences ,0302 clinical medicine ,medicine ,MATLAB ,computer.programming_language ,media_common ,medicine.diagnostic_test ,business.industry ,Eye movement ,Pattern recognition ,Psychiatry and Mental health ,030228 respiratory system ,Neurology ,Frequency domain ,Neurology (clinical) ,Artificial intelligence ,business ,Psychology ,computer ,psychological phenomena and processes ,030217 neurology & neurosurgery ,Vigilance (psychology) - Abstract
The aim of this work was to develop a simple computer-based sleep-scoring algorithm that detects the three vigilance states—rapid eye movement (REM) sleep, non-REM (NREM) sleep, and wakefulness—using the entire frequency domain with relative thresholds. A variety of frequency- and time-domain features were extracted from each 4-s epoch of retrospective 24-h sleep data sets from mice using an algorithm developed in Matlab version 7.0. This algorithm is composed of five steps: (1) determining the EMG power ratio; (2) determining the three energy areas (high, middle, and low) using EMG power ratio thresholds (e.g., 5.5 and 6); (3) determining the θ/δ ratio; (4) distinguishing wakefulness from NREM sleep using the θ/δ ratio in the middle-energy area; and (5) distinguishing REM from NREM sleep using the θ/δ ratio in the low-energy area. We achieved a high degree (92%) of agreement between the results of this algorithm and those of a waveform-recognition procedure. This algorithm should overcome the inconsistencies inherent in manual scoring and reduce the time required for expert analysis; it is a reliable and efficient tool for automated detection of the three vigilance states.
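The five-step decision logic condenses to a single rule function per epoch. The EMG-ratio cut-offs below reuse the example values quoted in the abstract (5.5 and 6), but the θ/δ threshold and the orientation of the comparisons are illustrative assumptions, not the paper's calibration:

```python
def score_epoch(emg_power_ratio, theta_delta_ratio,
                low_th=5.5, high_th=6.0, td_th=1.0):
    """Classify one 4-s epoch as WAKE / NREM / REM with relative thresholds.
    low_th / high_th split the EMG-power ratio into low, middle and high
    energy areas; td_th is an assumed theta/delta cut-off."""
    if emg_power_ratio > high_th:
        return "WAKE"                     # high-energy area -> wakefulness
    if emg_power_ratio < low_th:
        # low-energy area: theta-dominant epochs are scored as REM
        return "REM" if theta_delta_ratio > td_th else "NREM"
    # middle-energy area: theta/delta separates wakefulness from NREM
    return "WAKE" if theta_delta_ratio > td_th else "NREM"
```

Applying the function epoch by epoch over a 24-h recording yields a hypnogram without any manual scoring pass.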
- Published
- 2017
- Full Text
- View/download PDF
9. Artificial Intelligence: Problems and Prospects of Development
- Author
-
Aleksey B. Simonov, Natalia V. Ketko, Aleksey G. Gagarin, Irina A. Tislenkova, and Natalia N. Skeeter
- Subjects
Structure (mathematical logic) ,System development ,Development (topology) ,Computer science ,business.industry ,Artificial systems ,Expert analysis ,Artificial intelligence ,computer.software_genre ,business ,computer ,Expert system - Abstract
The purpose of this article is to identify the main problems and prospects in the advancement of artificial intelligence systems, and to study algorithmic methods that simulate the human brain and its overall structure in this type of system. As the methodology for addressing these problems, we chose a systematic approach together with expert analysis methods to determine the probability that the identified options for artificial intelligence system development will be implemented.
- Published
- 2020
- Full Text
- View/download PDF
10. Extended Vulnerability Feature Extraction Based on Public Resources
- Author
-
Olga Sinelnikova and Yulia Tatarinova
- Subjects
Focus (computing) ,Computer science ,Common Vulnerabilities and Exposures ,Feature extraction ,Vulnerability ,Expert analysis ,computer.software_genre ,CVE ,vulnerability analysis ,Set (abstract data type) ,security risk assessment ,information and feature extraction ,Data mining ,Risk assessment ,Feature set ,computer - Abstract
The focus of this research is to define a framework that automatically analyzes Common Vulnerabilities and Exposures (CVE) entries from public and disclosed resources and maps them to a target computer system. The framework calculates a risk assessment and estimates vulnerability impact according to the features of the target platform. In this paper, we describe the main vulnerability feature set and provide approaches for its automatic extraction from databases and open resources. We evaluated and improved each extraction approach on a recent set of security vulnerabilities (the 2018 database) and validated the obtained results against those of manual expert analysis.
- Published
- 2019
11. Discovering Systematic Relations between Alarms for Alarm Flows Reduction
- Author
-
H. Sabot, Jean-Marc Faure, Jean-Jacques Lesage, Y. Laumonier, Laboratoire Universitaire de Recherche en Production Automatisée (LURPA), École normale supérieure - Cachan (ENS Cachan)-Université Paris-Sud - Paris 11 (UP11), General Electric Digital Foundry Europe, and SUPMECA - Institut supérieur de mécanique de Paris (SUPMECA)
- Subjects
0209 industrial biotechnology ,Computer science ,Industrial alarm systems ,Expert analysis ,Mutual dependency ,Petri nets ,02 engineering and technology ,Petri net ,Computer security ,computer.software_genre ,[SPI.AUTO]Engineering Sciences [physics]/Automatic ,Reduction (complexity) ,ALARM ,020901 industrial engineering & automation ,13. Climate action ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electric power industry ,Discrete Event Systems ,computer ,Alarm rationalization - Abstract
Alarm systems play an important role in the safe and efficient operation of modern industrial plants. However, in most industrial alarm systems, alarm flows cannot always be correctly managed by the operators, as they often turn into alarm floods: sequences of numerous alarms occurring in a short period of time. To reduce the alarm flows, this paper focuses on detecting redundant alarms that could be removed. This objective is met by first looking for two systematic causality relations between alarms, called domination and mutual dependency, in the alarm log. Once these relations are discovered, alarm removal requires an expert analysis; to ease this analysis, the discovered relations are depicted in the form of Petri nets. The proposed method has been applied to a dataset coming from the power industry and provided by General Electric, and the results of this case study show the benefits of the approach.
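To give a feel for mining such relations from a log, here is a sketch of a windowed check; the timing-window definition of "domination" used here is a simplifying assumption, not the formal relation defined in the paper:

```python
def dominates(log, a, b, window=5.0):
    """Candidate 'A dominates B' relation: every occurrence of alarm b
    follows some occurrence of alarm a within `window` seconds.
    `log` is a list of (timestamp, alarm_id) pairs. A positive result is
    only a redundancy candidate, to be confirmed by expert analysis."""
    times_a = [t for t, x in log if x == a]
    times_b = [t for t, x in log if x == b]
    if not times_b:
        return False
    return all(any(0 <= tb - ta <= window for ta in times_a)
               for tb in times_b)

log = [(0.0, "A"), (1.2, "B"), (10.0, "A"), (11.5, "B"), (20.0, "C")]
```

Mutual dependency would then correspond to `dominates(log, a, b) and dominates(log, b, a)`, and the discovered pairs can be rendered as Petri-net fragments for the expert review step.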
- Published
- 2019
12. Mutual Information and Delay Embeddings in Polysomnography Studies
- Author
-
Piotr Paprzycki, Tomasz Rymarczyk, and Andres Vejar
- Subjects
Measurement method ,medicine.diagnostic_test ,business.industry ,Computer science ,Sleep apnea ,Expert analysis ,Polysomnography ,Mutual information ,medicine.disease ,Machine learning ,computer.software_genre ,Correlation ,Time delayed ,medicine ,Sleep (system call) ,Artificial intelligence ,business ,computer - Abstract
In polysomnography, multiple biosignals are acquired to conduct human sleep studies. The sensors and measurement methods correspond to qualitatively different communication channels and therefore represent integrative perspectives on the sleep dynamics. Many pathologies associated with sleep disorders can be studied, ranging from sleep apneas, parasomnias, and bruxism to neurological disorders such as epilepsy or Parkinson's disease. The polysomnography study is complex and requires highly specialized expert analysis. In this work we use data from sleep apnea patients. Given the nonlinear nature of polysomnography biosignals, correlation-based methods are not well suited for signal study or classification, so a non-linear time-series analysis is performed on the polysomnographic data using time-delayed mutual information and delay embeddings. This procedure estimates important characteristics of the respiratory dynamics, enabling signal comparison and parameter selection for the design of predictive models and machine learning algorithms.
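Time-delayed mutual information of a signal with itself is the standard tool for choosing the delay of a delay-embedding. A histogram-based estimate, one common estimator among several, can be sketched as:

```python
import math
from collections import Counter

def delayed_mutual_information(x, lag, bins=8):
    """Histogram estimate of I(x_t ; x_{t+lag}) in nats.
    The first local minimum over `lag` is a common choice of
    embedding delay for delay-coordinate reconstruction."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0          # guard against constant signals
    digit = [min(int((v - lo) / width), bins - 1) for v in x]
    pairs = list(zip(digit[:-lag], digit[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(c / n * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())
```

Scanning `lag` over, say, 1 to 100 samples of a respiratory trace and taking the first minimum gives the delay at which successive coordinates are maximally informative, which is the parameter-selection step the abstract refers to.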
- Published
- 2019
- Full Text
- View/download PDF
13. An analysis of the results of the inductive formation of diagnostic medical knowledge databases
- Author
-
M. V. Petryaeva, S. V. Smagin, Alexander Kleshchev, and M. Yu. Chernyakhovskaya
- Subjects
Medical knowledge ,Medical diagnostic ,Training set ,Database ,business.industry ,Computer science ,05 social sciences ,Intelligent decision support system ,Expert analysis ,Ontology (information science) ,050905 science studies ,computer.software_genre ,Software ,Acute appendicitis ,Data mining ,0509 other social sciences ,050904 information & library sciences ,business ,General Economics, Econometrics and Finance ,computer - Abstract
This paper proposes a problem-oriented method for the objective formation of easily interpretable knowledge databases for intelligent systems. We describe the InForMedKB software complex, which is designed for the inductive formation of medical diagnostics knowledge databases and was used to carry out the proposed method. We present an expert analysis of the results of using the developed software complex, namely the inductively formed Acute Appendicitis database of medical diagnostic knowledge, built for a parameterized mathematical dependence model that is close to the real-life ontology of medical diagnostics.
- Published
- 2016
- Full Text
- View/download PDF
14. Method of the Reflexive Analysis of Expert Data
- Author
-
N.A. Isaeva and V.B. Gusev
- Subjects
Measure (data warehouse) ,Transitive relation ,Computer science ,business.industry ,Expert analysis ,computer.software_genre ,Object (computer science) ,Logical consequence ,Reflexivity ,Artificial intelligence ,Closing (morphology) ,business ,computer ,Mutual influence ,Natural language processing - Abstract
Calculation procedures are developed for estimating the lasting effects of factor interactions on the basis of expert data. Quantitative assessments of the influence of some factors on others are interpreted as objects of multi-valued logic. The subject of examination is a method of expert analysis that uses reflexive procedures of multi-valued logical inference to obtain the transitive closure of the mutual-influence assessments of the factors in question.
- Published
- 2017
- Full Text
- View/download PDF
15. Myocardial infarction detection system from PTB diagnostic ECG database using Fuzzy inference system for S-T waves
- Author
-
A N Ardan, M Olivia, S M Titin, Miftahul Ma'arif, and Z H Aisyah
- Subjects
History ,Database ,Heart disease ,business.industry ,Expert analysis ,Disease ,medicine.disease ,computer.software_genre ,Computer Science Applications ,Education ,Coronary arteries ,medicine.anatomical_structure ,Fuzzy inference system ,T wave ,medicine ,cardiovascular diseases ,Myocardial infarction ,business ,computer - Abstract
Heart disease is one of the major health problems in the world and causes many deaths. Myocardial infarction (MI) is a type of heart disease caused by a blockage in the coronary arteries. The disease can be detected by reading the electrocardiogram (ECG) waveform, but reading the PQRST waves in electrocardiograms requires knowledge and expert analysis. A fuzzy inference system was used in this detection system because of its flexibility with linguistic variables; it can be applied once the S and T peaks have been located, since the characteristics of myocardial infarction appear in the condition of the S and T waves. The detection system was tested on the Physikalisch-Technische Bundesanstalt (PTB) diagnostic ECG database from PhysioNet, collected from 52 healthy subjects and 148 patients diagnosed with MI. The tests showed that the detection system had a sensitivity of 73%.
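To illustrate why fuzzy inference suits graded linguistic variables like "S-T elevated" or "T inverted", here is a toy two-rule Mamdani-style scorer over S and T features. The membership shapes, thresholds, and rules are invented for illustration and are not those used in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mi_risk(st_elevation_mv, t_amplitude_mv):
    """Crisp MI score in [0, 1] from two fuzzy rules over S-T features."""
    st_high = tri(st_elevation_mv, 0.05, 0.2, 0.4)   # "S-T segment elevated"
    t_low = tri(t_amplitude_mv, -0.4, -0.1, 0.1)     # "T wave inverted/flat"
    # Rule 1: IF S-T elevated OR T inverted THEN abnormal.
    abnormal = max(st_high, t_low)
    # Rule 2: IF S-T normal AND T normal THEN healthy.
    normal = min(1.0 - st_high, 1.0 - t_low)
    # Defuzzify by a weighted average of the rule strengths.
    total = abnormal + normal
    return abnormal / total if total else 0.0
```

The graded memberships mean borderline tracings produce intermediate scores instead of a hard flip at a single threshold, which is the flexibility the abstract credits to fuzzy inference.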
- Published
- 2019
- Full Text
- View/download PDF
16. Evaluation of the Accuracy of Computer Automated Analysis of Esophageal 24- hour Impedance pH Studies
- Author
-
Stephen A Petty, Amine Hila, Mouhamed Amr Sabouni, Alon Yarkoni, Mohamad Ghalayini, Sayf Bala, and Nasser Hajar
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Concordance ,Reflux ,Expert analysis ,computer.software_genre ,03 medical and health sciences ,0302 clinical medicine ,Software ,Scientific Equipment ,030225 pediatrics ,Medicine ,Analysis software ,030211 gastroenterology & hepatology ,Medical physics ,Data mining ,business ,Esophageal pH monitoring ,computer ,Reliability (statistics) - Abstract
Background: Esophageal pH monitoring in conjunction with multichannel intraluminal impedance (MII-pH) is now considered the most accurate method for the detection and characterization of gastro-esophageal reflux (GER), with higher sensitivity and specificity in detecting reflux than esophageal pH monitoring alone. Aims: One possible limiting factor for MII-pH testing is the time required to analyze the results, and automatic interpretation software has been produced to reduce it. In this study, we assessed the reliability of two 24-hour MII-pH analysis software packages compared with the interpretation provided by an expert. Methods: We performed a retrospective review of 200 MII-pH studies done on patients with reflux symptoms between September 2009 and September 2014. The studies were split into two groups of 100 patients: one group's testing was performed using MMS equipment and software, and the other group's used Sandhill Scientific equipment and software. All tracings were additionally analyzed by an expert, and the interpretations were compared. Results: Our data indicated a strong correlation between the expert's analysis and both automatic software packages in all positions, DeMeester score, reflux episodes, and symptom index (p
- Published
- 2017
- Full Text
- View/download PDF
17. Genetic Algorithm Optimisation of An Agent-Based Model for Simulating a Retail Market
- Author
-
Andrew J. Evans, Alison J. Heppenstall, and Mark Birkin
- Subjects
Agent-based model ,Engineering ,business.industry ,Spatial interaction ,05 social sciences ,Geography, Planning and Development ,Retail market ,0211 other engineering and technologies ,0507 social and economic geography ,Expert analysis ,021107 urban & regional planning ,Regression analysis ,02 engineering and technology ,Machine learning ,computer.software_genre ,Nonlinear system ,Genetic algorithm ,Artificial intelligence ,business ,050703 geography ,computer ,General Environmental Science - Abstract
Traditionally, researchers have used elaborate regression models to simulate the retail petrol market. Such models are limited in their ability to capture individual behaviour and geographical influences. Heppenstall et al. presented a novel agent-based framework that models individual petrol stations as agents and integrates important additional system behaviour through established methodologies such as spatial interaction models. The parameters for this model were initially determined through real-data analysis and experimentation. This paper explores the parameterisation and verification of the model through data analysis and the use of a genetic algorithm (GA). The results show that a GA can produce not just an optimised match, but results that match those derived by expert analysis through rational exploration. This may suggest that, despite the apparently nonlinear and complex nature of the system, there is a limited number of optimal or near-optimal behaviours given its constraints, and that both user-driven and GA solutions converge on them.
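The paper's GA calibrates agent parameters against observed data. A minimal real-coded GA of the kind involved, with an invented quadratic loss standing in for the model-vs-data error, might look like this sketch:

```python
import random

random.seed(0)  # deterministic run for the example below

def calibrate(loss, bounds, pop_size=30, generations=60,
              mutation_sd=0.1, elite=4):
    """Minimal real-coded genetic algorithm for model calibration:
    truncation selection, blend crossover, Gaussian mutation.
    `loss` maps a parameter vector to the error to minimise."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:elite]                    # keep the best candidates
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, mutation_sd)
                     for x, y in zip(a, b)]      # blend crossover + mutation
            child = [min(max(v, lo), hi)         # clamp to the search bounds
                     for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return min(pop, key=loss)

# Hypothetical calibration target: recover the parameters (2.0, -1.0).
best = calibrate(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                 bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting, `loss` would instead run the agent-based petrol-market model with the candidate parameters and score the output against observed price data; elitism guarantees the best match never degrades between generations.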
- Published
- 2007
- Full Text
- View/download PDF
18. Structural Analysis of Technical-Tactical Elements in Table Tennis and their Role in Different Playing Zones
- Author
-
Miran Kondrič, Lidija Petrinović, and Goran Munivrana
- Subjects
Kruskal-Wallis test ,racquet sports ,motor skills ,expert analysis ,Kruskal–Wallis one-way analysis of variance ,Computer science ,Section III – Sports Training ,Expert analysis ,Physical Therapy, Sports Therapy and Rehabilitation ,computer.software_genre ,World class ,Group (periodic table) ,Physiology (medical) ,Table (database) ,Data mining ,lcsh:Sports medicine ,Arithmetic ,lcsh:RC1200-1245 ,computer ,Research Article - Abstract
For the purpose of determining the overall structure of technical-tactical elements in table tennis and evaluating their role in different playing zones around the table, a new measuring instrument (a questionnaire) was formulated that took advantage of the expert knowledge of top, world-class table tennis coaches. The results of the hierarchical taxonomic (cluster) analysis showed that the overall structure of the technical-tactical elements forming table tennis technique can be divided into three basic groups: a group of elements (A) used in the phase of preparing one's own attack and disabling the opponent's; a group of elements (B) used in the phase of attack and counterattack; and a group of elements (C) used in the phase of defense. The differences among the obtained groups of table tennis elements were determined by applying the Kruskal-Wallis test, while the relations between the groups and their role in different playing zones around the table were analyzed by comparing the average values of the experts' scores.
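For reference, the Kruskal-Wallis H statistic used to compare the element groups is computed directly from pooled ranks. This sketch assigns midranks to ties and omits the (usually small) tie-correction factor:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H over k independent samples (midranks for ties,
    tie correction omitted). Compare H against a chi-squared distribution
    with k-1 degrees of freedom to get a p-value."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # Midrank of each distinct value (ranks are 1-based).
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2
        i = j
    # H = 12 / (n (n+1)) * sum_g R_g^2 / n_g  -  3 (n+1)
    return 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups) - 3 * (n + 1)

h = kruskal_wallis_h([1, 2, 3], [4, 5, 6])   # fully separated groups: ~3.857
```

A rank-based test is the right choice here because the experts' questionnaire scores are ordinal, so group means cannot be compared with a parametric ANOVA.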
- Published
- 2015
19. Computational Methods for RNA Structure Validation and Improvement
- Author
-
Swati Jain, David S. Richardson, and Jane S. Richardson
- Subjects
business.industry ,RNA ,Expert analysis ,Structure validation ,Nanotechnology ,Biology ,Machine learning ,computer.software_genre ,Automation ,Variety (cybernetics) ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial intelligence ,Rna folding ,High current ,Nucleic acid structure ,business ,computer - Abstract
With increasing recognition of the roles RNA molecules and RNA/protein complexes play in an unexpected variety of biological processes, understanding of RNA structure-function relationships is of high current importance. To make clean biological interpretations from three-dimensional structures, it is imperative to have high-quality, accurate RNA crystal structures available, and the community has thoroughly embraced that goal. However, due to the many degrees of freedom inherent in RNA structure (especially for the backbone), it is a significant challenge to succeed in building accurate experimental models for RNA structures. This chapter describes the tools and techniques our research group and our collaborators have developed over the years to help RNA structural biologists both evaluate and achieve better accuracy. Expert analysis of large, high-resolution, quality-conscious RNA datasets provides the fundamental information that enables automated methods for robust and efficient error diagnosis in validating RNA structures at all resolutions. The even more crucial goal of correcting the diagnosed outliers has steadily developed toward highly effective, computationally based techniques. Automation enables solving complex issues in large RNA structures, but cannot circumvent the need for thoughtful examination of local details, and so we also provide some guidance for interpreting and acting on the results of current structure validation for RNA.
- Published
- 2015
- Full Text
- View/download PDF
20. Intelligent Screening Systems for Cervical Cancer
- Author
-
Yessi Jusman, Siew-Cheok Ng, and Noor Azuan Abu Osman
- Subjects
medicine.medical_specialty ,Screening techniques ,Digital data ,Expert analysis ,lcsh:Medicine ,Uterine Cervical Neoplasms ,Image processing ,Feature selection ,Review Article ,Machine learning ,computer.software_genre ,lcsh:Technology ,General Biochemistry, Genetics and Molecular Biology ,medicine ,Image Processing, Computer-Assisted ,Humans ,Diagnosis, Computer-Assisted ,lcsh:Science ,General Environmental Science ,Gynecology ,Cervical cancer ,Cervical screening ,business.industry ,lcsh:T ,Second opinion ,lcsh:R ,General Medicine ,medicine.disease ,lcsh:Q ,Female ,Artificial intelligence ,business ,computer - Abstract
The advent of medical image digitalization has led to image processing and computer-aided diagnosis systems in numerous clinical applications. These technologies could be used to automatically diagnose patients or to serve as a second opinion to pathologists. This paper briefly reviews cervical screening techniques and their advantages and disadvantages. The digital data from the screening techniques serve as input to the computer screening system, replacing expert analysis. Four stages of the computer system (enhancement, feature extraction, feature selection, and classification) are reviewed in detail. Computer systems based on cytology data and electromagnetic spectra data achieved better accuracy than those based on other data.
- Published
- 2014
21. Search in games with incomplete information: a case study using Bridge card play
- Author
-
Ian Frank and David Basin
- Subjects
Linguistics and Language ,Theoretical computer science ,Computer science ,Computational Mechanics ,Combinatorial game theory ,Expert analysis ,Machine learning ,computer.software_genre ,Bridge (nautical) ,Language and Linguistics ,Extensive-form game ,Bayesian game ,Simple (abstract algebra) ,Search algorithm ,Complete information ,Artificial Intelligence ,Game tree search ,Computer Science (miscellaneous) ,Game tree ,Game theory ,Equilibrium point ,business.industry ,Computer bridge ,Normal-form game ,Computer Graphics and Computer-Aided Design ,Human-Computer Interaction ,Repeated game ,State (computer science) ,Artificial intelligence ,business ,computer ,Computer Bridge ,Incomplete information - Abstract
We examine search algorithms in games with incomplete information, formalising a best defence model of such games based on the assumptions typically made when incomplete information problems are analysed in expert texts. We show that equilibrium point strategies for optimal play exist for this model, and define an algorithm capable of computing such strategies. Using this algorithm as a reference we then analyse search architectures that have been proposed for the incomplete information game of Bridge. These architectures select strategies by analysing some statistically significant collection of complete information sub-games. Our model allows us to clearly state the limitations of such architectures in producing expert analysis, and to precisely formalise and distinguish the problems that lead to sub-optimality. We illustrate these problems with simple game trees and with actual play situations from Bridge itself.
- Published
- 1998
- Full Text
- View/download PDF
22. Using Video Capture and Image Analysis to Quantify Apparel Fit
- Author
-
Susan P. Ashdown and Inez L. Kohn
- Subjects
010302 applied physics ,High contrast ,Polymers and Plastics ,Computer science ,business.industry ,Video capture ,Interface (computing) ,Expert analysis ,Objective method ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Clothing ,Machine learning ,computer.software_genre ,01 natural sciences ,Image (mathematics) ,0103 physical sciences ,Chemical Engineering (miscellaneous) ,Artificial intelligence ,Dimension (data warehouse) ,0210 nano-technology ,business ,computer - Abstract
The apparel industry relies on subjective traditional fit measures (expert analysis and subject responses) and on objective body dimension measurements to quantify the fit and size of garment prototypes. In this study, an objective method for the analysis of fit is validated by comparison with traditional measures. Assessment of apparel fit for all measures focuses on the effect of postural differences and garment fit for mature women between the ages of 55 and 65. The objective method uses slashed garments and image analysis tools. Results of the analysis of high-contrast video-captured images of subjects dressed in test jackets are compared to traditional fit measures. Expert analysts and the image analysis method developed for this study are both capable of defining the complex interactions of the garment/body interface.
- Published
- 1998
- Full Text
- View/download PDF
23. Sample Captures
- Author
-
Robert Shimonski
- Subjects
Dynamic Host Configuration Protocol ,computer.internet_protocol ,Computer science ,Network packet ,Control flow graph ,Expert analysis ,Sample (statistics) ,Data mining ,Bootstrap Protocol ,computer.software_genre ,computer - Abstract
In this chapter we expand on what we learned in the Filters chapter by covering some advanced problems, how to solve them using Wireshark, and more complex analysis through additional filters and the review of expert analysis reports.
- Published
- 2013
- Full Text
- View/download PDF
24. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells
- Author
-
Han Sang Park, Adam Wax, Jen Tsan Ashley Chi, Matthew T. Rinehart, and Katelyn A. Walzer
- Subjects
0301 basic medicine ,Plasmodium ,Pathology ,Erythrocytes ,lcsh:Medicine ,computer.software_genre ,01 natural sciences ,Machine Learning ,Automation ,Symmetry ,Animal Cells ,Red Blood Cells ,Medicine and Health Sciences ,lcsh:Science ,Analysis method ,Protozoans ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Malarial Parasites ,Morphological descriptors ,3. Good health ,Physical Sciences ,Phase imaging ,Cellular Types ,Algorithm ,Algorithms ,Research Article ,Computer and Information Sciences ,medicine.medical_specialty ,Plasmodium falciparum ,Geometry ,Expert analysis ,Biology ,Research and Analysis Methods ,Machine learning ,Phase image ,010309 optics ,Machine Learning Algorithms ,03 medical and health sciences ,Artificial Intelligence ,Parasite Groups ,parasitic diseases ,0103 physical sciences ,Parasitic Diseases ,medicine ,Humans ,Trophozoites ,Blood Cells ,business.industry ,lcsh:R ,Organisms ,Biology and Life Sciences ,Cell Biology ,Tropical Diseases ,Linear discriminant analysis ,biology.organism_classification ,Parasitic Protozoans ,Malaria ,030104 developmental biology ,Blood smear ,Cognitive Science ,lcsh:Q ,Parasitology ,Artificial intelligence ,business ,Apicomplexa ,computer ,Mathematics ,Neuroscience - Abstract
Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity: early trophozoites were most often mistaken for the late trophozoite or schizont stage, while late trophozoites and schizonts were most often confused for each other.
Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis.
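The classifier comparison described in this abstract can be sketched with scikit-learn. The data below are synthetic stand-ins for the study's 23 morphological phase descriptors, so the sample counts, class shift, and cross-validation setup are illustrative assumptions only, not the authors' protocol.

```python
# Hedged sketch (not the authors' code): comparing the three classifier
# families named in the abstract -- LDC, LR, and NNC -- on synthetic
# stand-ins for the 23 morphological phase descriptors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 400, 23  # hypothetical: 400 cells per class, 23 descriptors each
X_uninfected = rng.normal(0.0, 1.0, size=(n, d))
X_infected = rng.normal(0.8, 1.0, size=(n, d))  # shifted morphology
X = np.vstack([X_uninfected, X_infected])
y = np.array([0] * n + [1] * n)

for name, clf in [("LDC", LinearDiscriminantAnalysis()),
                  ("LR", LogisticRegression(max_iter=1000)),
                  ("NNC", KNeighborsClassifier(n_neighbors=5))]:
    # 5-fold stratified cross-validation accuracy for each classifier
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

On real phase images the descriptors are correlated and less cleanly separated, which is precisely why the paper combines all 23 rather than thresholding any one of them.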
- Published
- 2016
- Full Text
- View/download PDF
25. GcLite: An Expert Tool for Analyzing Garbage Collection Behavior
- Author
-
Vasileios Angelopoulos, Pattrick O'Sullivan, John Murphy, and Trevor Parsons
- Subjects
Engineering ,Program testing ,Database ,Java ,business.industry ,Expert analysis ,computer.software_genre ,Storage management ,System monitoring ,Technical skills ,Software engineering ,business ,computer ,computer.programming_language ,Garbage collection ,Heap (data structure) - Abstract
Creating finely tuned and stable systems is very important and requires the use of a list of testing tools that analyze various resources (such as GC logs, heap dumps, and native memory). Due to the nature of these tools, this kind of analysis can only be performed by a small group of expert users with high technical skills. In this paper we present an approach for expert tool development in the field of performance testing. The result of this approach is the creation of the GcLite tool, an expert tool for analyzing garbage collection logs. A case study was carried out in a real industry environment, showing the benefits to a number of testing teams. The benefit of the tool is that it allows a wider range of testers to carry out expert analysis.
- Published
- 2012
- Full Text
- View/download PDF
26. Troubleshooting Assistance Services in Community Wireless Networks
- Author
-
Filip Maly and Pavel Kriz
- Subjects
Article Subject ,Computer Networks and Communications ,Computer science ,Wireless network ,Expert analysis ,Troubleshooting ,Computer security ,computer.software_genre ,lcsh:QA75.5-76.95 ,World Wide Web ,Use case ,lcsh:Electronic computers. Computer science ,State (computer science) ,computer ,Information Systems ,Network analysis - Abstract
We have identified new services intended for users and administrators of community wireless networks. Troubleshooting assistance services will assist the users during the solution of communication problems, gathering data for expert analysis, informing the user about the state of the network (including outages), and so forth. Network administrators will be provided with a unique tool supporting network analysis, operation, and development. We have mainly focused on the use cases and pre-requirements, in particular the problem of topology discovery.
- Published
- 2012
- Full Text
- View/download PDF
27. Web-based remote monitoring of live EEG
- Author
-
Ruairi O'Reilly, John P. Morrison, Philip D. Healy, and Geraldine B. Boylan
- Subjects
Web browser ,medicine.diagnostic_test ,Multimedia ,business.industry ,Computer science ,Expert analysis ,Electroencephalography ,computer.software_genre ,Eeg recording ,Data acquisition ,medicine ,Web application ,The Internet ,Plug-in ,business ,computer - Abstract
In a critical care setting, delays in the diagnosis of neurological conditions can have a significant impact on patient outcomes. Recording the electrical activity of the brain (EEG) is often used to diagnose and monitor neurological conditions. A trained neurophysiologist is then required to analyse these signals. However, in many cases this expertise is not available on-site. A web-based remote monitoring system for EEG data is presented that reduces the delays that can arise when off-site expert analysis is required. The system allows EEG data to be streamed to a remote location in near-real-time and viewed while acquisition is ongoing. Data streamed earlier may also be reviewed, providing the user with a continually updating view of the entire EEG recording. All communications are performed using web technologies in order to minimise issues with firewalls and to enable analysis on most web-connected PCs. The only tool required for viewing EEG data is a modern web browser with the commonly available Adobe Flash plugin installed. Since these are ubiquitous, data analysis is not limited by geographic location. Moreover, multiple users can independently view the data simultaneously.
- Published
- 2010
- Full Text
- View/download PDF
28. Application of Association Rules Data Mining in Effect Rules Discovery of Healthy Housing
- Author
-
Liang Wang, Chong-chong Yu, and Jie Liu
- Subjects
Prediction algorithms ,Association rule learning ,Construction industry ,Knowledge extraction ,Computer science ,Expert analysis ,Human factors and ergonomics ,Algorithm design ,Data mining ,computer.software_genre ,Data science ,computer - Published
- 2009
- Full Text
- View/download PDF
29. Entropy Based Anomaly Detection Applied to Space Shuttle Main Engines
- Author
-
Adrian Agogino and Kagan Tumer
- Subjects
Engineering ,business.industry ,Expert analysis ,Space Shuttle ,Sensor fusion ,computer.software_genre ,User input ,Fault detection and isolation ,Anomaly detection ,Data mining ,Aerospace systems ,Cluster analysis ,business ,computer ,Simulation - Abstract
Automated model-free anomaly and fault detection using large collections of sensor suites is vital to increasing safety and reducing maintenance costs of complex aerospace systems, such as the space shuttle main engine. Current anomaly and fault detection methods are deficient in that they either require huge amounts of laborious expert analysis or rely on models that fail to capture unmodelled anomalies. To overcome these deficiencies, model-free statistical approaches to this analysis are needed that do not require significant user input. This paper presents two general automated analysis methods that detect anomalies in sensor data taken from large sets of sensors. The first approach uses entropy analysis over the entire set of sensors at once to detect anomalies that have broad system-wide impact. The global nature of this approach reduces its sensitivity to faulty sensors. The second approach uses automated clustering of sensors combined with intra-cluster entropy analysis to detect anomalies and faults that have more local impact. Results derived from the application of these approaches to sensor data recorded from test-stand runs of the space shuttle main engine show that they can be effective in finding faults and anomalies. With test-stand data consisting of time-series derived from 147 sensors, the system-wide approach was able to reveal an anomalous mixture ratio programmed by the test-engineers, but not revealed to the authors. Using similar data from a different engine test, the localized clustering approach revealed a fault in the high pressure fuel turbo-pump early in the test-run and subsequent cascaded faults later in the test run. In addition, the clustering approach was able to separate sensors that contained little analytic value from more important sensors, potentially reducing the burden of subsequent expert analysis.
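The entropy-based scoring described in this abstract can be sketched as follows. The window size, bin count, z-score threshold, and injected fault are illustrative assumptions, not values from the paper.

```python
# Hedged sketch (assumptions, not the paper's code): entropy-based anomaly
# scoring for a single sensor time-series.  Readings in each window are
# binned, the Shannon entropy of the bin histogram is computed, and windows
# whose entropy deviates sharply from the norm are flagged.
import numpy as np

def window_entropy(window, bins=16):
    """Shannon entropy (in bits) of the value histogram of one window."""
    counts, _ = np.histogram(window, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def flag_anomalies(series, win=50, z_thresh=3.0):
    """Return indices of windows whose entropy lies more than
    z_thresh standard deviations from the mean window entropy."""
    ents = np.array([window_entropy(series[i:i + win])
                     for i in range(0, len(series) - win + 1, win)])
    z = (ents - ents.mean()) / (ents.std() + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0]

rng = np.random.default_rng(1)
sensor = rng.normal(0, 1, 2000)   # nominal noisy sensor readings
sensor[1500:1550] = 5.0           # injected flat-line fault (entropy drops)
print(flag_anomalies(sensor))     # flags the window covering the fault
```

A flat-lined sensor collapses all readings into one histogram bin, so its window entropy falls to zero and stands out against the noisy baseline; the paper's intra-cluster variant applies the same idea across groups of correlated sensors.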
- Published
- 2006
- Full Text
- View/download PDF
30. Navy Health Care Executive Competencies
- Author
-
Stephen A Marty
- Subjects
business.industry ,Delphi method ,Core competency ,Expert analysis ,Navy ,Interpersonal relationship ,Social skills ,Nursing ,Health care ,Medicine ,business ,computer ,Delphi ,computer.programming_language - Abstract
The purpose of this paper is to update the core competencies and associated skills, knowledge, and abilities (SKAs) required by Navy health care executives. Three waves of the Delphi technique were employed. In Wave I, senior Navy health care executives identified the five most important competencies and their associated SKAs believed to be required for Navy health care executives over the next decade. An expert panel of senior health care executives reviewed and sorted the identified competencies from Wave I into six domain categories and gave each domain an appropriate title. From the expert analysis, the researcher developed a questionnaire for use in Delphi waves II and III. In Wave II, senior executives from Wave I rated the competencies from each domain. During Wave III, junior Navy health care executives completed the same questionnaire given to the senior executives. Results indicated that competencies surrounding interpersonal skills and understanding the environment emerged as most critical for Navy health care executives into the next decade. In addition, statistically significant differences in opinion emerged between groups and among 20 of the 100 individual SKAs rated, indicating that senior and junior health care executives have very real differences in opinion regarding required executive skills.
- Published
- 2006
- Full Text
- View/download PDF
31. Obstetrical Decision-Making Based on Predictive Expert Analysis
- Author
-
J.R. Searle, Lawrence D. Devoe, Amparo Alonso-Betanzos, and V. Moret-Borrillo
- Subjects
medicine.medical_specialty ,business.industry ,Medicine ,Expert analysis ,Medical physics ,Data mining ,Outcome prediction ,business ,computer.software_genre ,computer ,Health informatics ,Outcome (game theory) ,Expert system - Abstract
This article describes the validation results of the prognostic module for the obstetrical expert system NST-EXPERT. The validation method consists of calculating the percentages of man/machine agreement for the expert system and a team of three perinatologists. This comparison was carried out for the following three categories: man neonatal outcome prediction vs. machine neonatal outcome prediction, man/machine outcome prediction vs. real outcome, and man/machine bad outcome prediction vs. real bad outcome. The results obtained suggest that the use of prognostic
- Published
- 2005
- Full Text
- View/download PDF
32. Digital learning material for model building in molecular biology
- Author
-
Fred Janssen, Ton Bisseling, Tinri Aegerter-Wilmsen, and Rob Hartog
- Subjects
Science instruction ,Logical reasoning ,Computer science ,Molecular biology ,EPS-4 ,General Engineering ,Educational technology ,Expert analysis ,Toegepaste Informatiekunde ,Science education ,Model building ,Education ,Computer ,Mathematics education ,Laboratorium voor Moleculaire Biologie ,Laboratory of Molecular Biology ,Digital learning ,Information Technology ,Curriculum - Abstract
Building models to describe processes forms an essential part of molecular biology research. However, in molecular biology curricula little attention is generally paid to the development of this skill. In order to provide students the opportunity to improve their model building skills, we decided to develop a number of digital cases about developmental biology. In these cases, the students are guided to build a model according to a method that is based on expert analysis and historical data; they first build a simplified model based on the wild type only, and then they extend this model step by step based on experimental results. After each extension, the biological implications of the extension are evaluated. The first case was evaluated three times during a regular course at Wageningen University, The Netherlands, and once at the University of Zurich, Switzerland. The analysis of audiotapes revealed that students did indeed engage in the reasoning processes that are typical for model building. Furthermore, exam results seem to suggest that working with the case facilitates model building in analogous situations, and the students judged working with the case positively.
- Published
- 2005
33. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation
- Author
-
Shivangi Tiwari, Cassandra J. Wesseln, Surangi W. Punyasena, Sarah J. Baker, Dunia H. Urrego, Jacklyn Rodriguez, Derek S. Haselhorst, Jessica L. Thorn, Claire M. Belcher, and Luke Mander
- Subjects
0106 biological sciences ,010506 paleontology ,Context (language use) ,Plant Science ,Biology ,medicine.disease_cause ,computer.software_genre ,010603 evolutionary biology ,01 natural sciences ,expert analysis ,Consistency (statistics) ,lcsh:Botany ,Pollen ,Grass pollen ,Statistics ,medicine ,Limited capacity ,Taxonomic rank ,palynology ,lcsh:QH301-705.5 ,Ecology, Evolution, Behavior and Systematics ,automation ,0105 earth and related environmental sciences ,lcsh:QK1-989 ,Identification (information) ,classification ,lcsh:Biology (General) ,Electron micrographs ,identification ,Data mining ,computer - Abstract
• Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification.
• Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images.
• Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%.
• Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias.
- Published
- 2014
- Full Text
- View/download PDF
34. System analysis of actual applied tasks for ground-based and aerospace monitoring of natural-technological objects in ELRI-184 project
- Subjects
Engineering ,Scope (project management) ,Computer Networks and Communications ,business.industry ,Applied Mathematics ,Expert analysis ,Monitoring system ,Image processing ,computer.software_genre ,Thematic map ,Artificial Intelligence ,Control and Systems Engineering ,Systems engineering ,Data mining ,business ,Aerospace ,computer - Abstract
The article focuses on the system analysis of thematic tasks that are solved on the basis of aerospace image processing and ground-based measurement processing. It discusses the use of aerospace surveillance in dealing with topical application tasks within the scope of interest of the ELRI-184 project «Integrated Intelligent Platform for Monitoring the Cross-Border Natural-Technological Systems». Directions of particular utility in the border areas of the project participants are also considered. Numerical values of the indicators that describe the monitoring system, based on expert analysis and simulation, were obtained.
- Published
- 2014
- Full Text
- View/download PDF
35. EASY--an Expert Analysis SYstem for interpreting database search outputs
- Author
-
Julian N. Selley, Teresa K. Attwood, and J. Swift
- Subjects
Statistics and Probability ,Protein family ,Databases, Factual ,Computer science ,Expert analysis ,Computational Biology ,Proteins ,Sequence alignment ,Expert Systems ,Proteomics ,computer.software_genre ,Biochemistry ,Computer Science Applications ,Variety (cybernetics) ,Computational Mathematics ,Protein sequencing ,Computational Theory and Mathematics ,Sequence Analysis, Protein ,Database search engine ,Data mining ,Molecular Biology ,computer ,Software - Abstract
Summary: With the ever-increasing need to handle large volumes of sequence data efficiently and reliably, we have developed the EASY system for performing combined protein sequence and pattern database searches. EASY runs searches simultaneously and distils results into a concise 1-line diagnosis. By bringing together results of several different analyses, EASY provides a rapid means of evaluating biological significance, minimising the risk of inferring false relationships, for example from relying exclusively on top BLAST hits. The program has been tested using a variety of protein families and was instrumental in resolving family assignments in a major update of the PRINTS database. Availability: http://www.bioinf.man.ac.uk/dbbrowser/easy/ Contact: selley@bioinf.man.ac.uk * To whom correspondence should be addressed.
- Published
- 2001
36. Apparatus and method for expert analysis of metal failure with automated visual aids
- Author
-
Gurvinder P. Singh
- Subjects
Knowledge base ,Artificial Intelligence ,Computer science ,business.industry ,General Engineering ,Expert analysis ,Data mining ,computer.software_genre ,business ,computer ,Expert system ,Computer Science Applications - Abstract
Apparatus and a method for providing a micro-computer-based expert system having a knowledge base of failure analysis as it pertains to metallic components. The apparatus and method include an interactive initialization procedure that involves communications between the user and the knowledge base. The system and method incorporate automated visual aids for the analysis of metal failure.
- Published
- 1992
- Full Text
- View/download PDF
37. Automated Message Filtering System in Online Social Network
- Author
-
V. Indragandhi, V. Vijayakumar, R. Logesh, and V. Subramaniyaswamy
- Subjects
Information retrieval ,Social network ,Text mining ,business.industry ,Computer science ,Offensive ,Rule-based system ,Computer security ,computer.software_genre ,OSN (Online Social Network) ,Text classification ,General Earth and Planetary Sciences ,Expert analysis ,business ,computer ,General Environmental Science - Abstract
In this generation, online social networks (OSNs) are an unavoidable and powerful medium for people to express their views and ideas. Users, depending upon their interests, can select the persons who may post or comment messages on their wall. The present shortcoming of the OSN user wall is that abusive messages are not filtered: the selected persons can post any sort of message on the wall. In this paper, we therefore propose a filtered wall that screens out offensive messages using rule-based and text-classification techniques. We have evaluated the performance using standard metrics, which show that the proposed method performs better.
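The two-stage filtering idea (a rule-based stage followed by a text classifier) can be sketched as follows. The blacklist, the toy training messages, and the Naive Bayes classifier choice are all hypothetical illustrations, not the authors' system.

```python
# Hedged sketch (an illustration, not the authors' system): a two-stage
# wall filter combining a hand-written blacklist rule with a simple
# Naive Bayes text classifier trained on toy labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

BLACKLIST = {"idiot", "stupid"}  # hypothetical rule-based stage

train_msgs = ["have a great day", "nice photo, well done",
              "you are awful and worthless", "what a terrible useless post"]
train_labels = ["ok", "ok", "offensive", "offensive"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_msgs, train_labels)

def allow_on_wall(msg: str) -> bool:
    """Reject a message if either stage flags it as offensive."""
    if BLACKLIST & set(msg.lower().split()):
        return False                            # rule-based stage hit
    return bool(clf.predict([msg])[0] == "ok")  # text-classification stage

print(allow_on_wall("nice photo, well done"))  # True
print(allow_on_wall("you idiot"))              # False (blacklist hit)
```

In practice the classifier would be trained on a large labelled corpus, and the rule stage lets moderators block known abusive terms immediately without retraining.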
- Full Text
- View/download PDF