Search Results (48 results)
2. Insider employee-led cyber fraud (IECF) in Indian banks: from identification to sustainable mitigation planning.
- Author
-
Roy, Neha Chhabra and Prabhakaran, Sreeleakha
- Subjects
- *
BANKING laws , *FRAUD prevention , *CORRUPTION , *ORGANIZATIONAL behavior , *RISK assessment , *DATA security , *RANDOM forest algorithms , *COMPUTERS , *FOCUS groups , *DATA security failures , *INTERVIEWING , *DEBT , *QUESTIONNAIRES , *ARTIFICIAL intelligence , *LOGISTIC regression analysis , *IDENTITY theft , *SECURITY systems , *FINANCIAL stress , *RESEARCH methodology , *CONCEPTUAL structures , *JOB stress , *ARTIFICIAL neural networks , *MACHINE learning , *ALGORITHMS - Abstract
This paper explores the different insider employee-led cyber frauds (IECF) based on the recent large-scale fraud events of prominent Indian banking institutions. Examining the different types of fraud and appropriate control measures will protect the banking industry from fraudsters. In this study, we identify and classify Cyber Fraud (CF), map the severity of the fraud on a scale of priority, test the mitigation effectiveness, and propose optimal mitigation measures. The identification and classification of CF losses were based on a literature review and focus group discussions with risk and vigilance officers and cyber cell experts. The CF was analyzed using secondary data. We predicted and prioritized CF based on machine learning-derived Random Forest (RF). An efficient fraud mitigation model was developed based on an offender-victim-centric approach. Mitigation is advised both before and after fraud occurs. Through the findings of this research, banks and fraud investigators can prevent CF by detecting it quickly and controlling it on time. This study proposes a structured, sustainable CF mitigation plan that protects banks, employees, regulators, customers, and the economy, thus saving time, resources, and money. Further, these mitigation measures will improve the reputation of the Indian banking industry and ensure its survival. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
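The Random-Forest-based fraud prioritization the abstract describes can be illustrated with a toy majority vote of randomized decision stumps — a drastically simplified stand-in for a real Random Forest. The feature names (access-anomaly score, after-hours logins), thresholds, and data below are hypothetical, not from the paper:

```python
import random

def train_stump(data, feature):
    """Pick the threshold on `feature` that best separates the labels."""
    best_thr, best_acc = None, -1.0
    for row, _ in data:
        thr = row[feature]
        acc = sum((r[feature] >= thr) == y for r, y in data) / len(data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return feature, best_thr

def predict(stumps, row):
    votes = sum(row[f] >= t for f, t in stumps)
    return votes * 2 >= len(stumps)  # majority vote -> high risk?

random.seed(0)
# (access_anomaly_score, after_hours_logins) -> insider-fraud label
data = [((0.9, 12), True), ((0.8, 9), True), ((0.2, 1), False),
        ((0.1, 0), False), ((0.7, 8), True), ((0.3, 2), False)]
# each stump sees a random subsample and a random feature
stumps = [train_stump(random.sample(data, 4), random.randrange(2))
          for _ in range(15)]
print(predict(stumps, (0.95, 13)))  # True: flagged high-risk
```

A production system would use a trained Random Forest (many full decision trees, feature importances for prioritization); the vote-of-weak-learners mechanism is the shared idea.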
3. A video-based real-time adaptive vehicle-counting system for urban roads.
- Author
-
Liu, Fei, Zeng, Zhiyuan, and Jiang, Rong
- Subjects
TRAFFIC flow ,CITY traffic ,ROADS ,COMPUTER vision ,TRAFFIC congestion ,DEVELOPING countries - Abstract
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
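The running-average background-update idea behind such vehicle detectors can be sketched in a few lines. This is a 1-D toy, not the paper's algorithm: the background estimate B is blended toward each new frame F (B = (1-a)·B + a·F), and pixels far from B are marked foreground. The frame values, learning rate, and threshold are invented for illustration:

```python
ALPHA = 0.1      # background learning rate (hypothetical)
THRESH = 40      # foreground intensity threshold (hypothetical)

def update_background(bg, frame):
    # exponential moving average: background slowly absorbs scene changes
    return [(1 - ALPHA) * b + ALPHA * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame):
    # pixels that deviate strongly from the background are "vehicle"
    return [abs(f - b) > THRESH for b, f in zip(bg, frame)]

bg = [100.0] * 5                      # learned empty-road background
frame = [100, 100, 200, 210, 100]     # bright vehicle over pixels 2-3
mask = foreground_mask(bg, frame)
print(mask)                           # [False, False, True, True, False]
bg = update_background(bg, frame)
```

The virtual-loop and detection-line counting stages would then track such foreground blobs across frames; that logic is omitted here.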
4. Detection of precancerous cervical cells using ensemble classification on Pap smear images.
- Author
-
Lotfi, Marzieh and Momenzad, Mohammadreza
- Subjects
COMPUTERS ,PAP test ,ARTIFICIAL intelligence ,EARLY detection of cancer ,MACHINE learning ,CANCER ,RISK assessment ,DESCRIPTIVE statistics ,CERVIX uteri tumors ,PRECANCEROUS conditions ,ALGORITHMS ,DISEASE risk factors - Abstract
Background. Cervical cancer begins in superficial cells and over time can invade deeper tissues and surrounding tissues. This paper presents a creative idea of using an ensemble classification algorithm that improves the predictive performance of an artificial intelligence system based on cervical cancer screening. This study aimed to classify Pap-smear images by different machine learning methods to achieve high accuracy detection. Methods. This study was performed on 917 Pap-smear images from the Herlev public database. In the feature extraction stage, 20 geometric features and 76 texture features were extracted. After that, using an ensemble classification method, the images were classified into two categories (i.e., normal and abnormal) and then into seven categories (i.e., superficial epithelial, intermediate epithelial, columnar epithelial, mild dysplasia, moderate dysplasia, severe dysplasia and carcinoma) and the accuracy of the proposed method was evaluated. Results. The algorithm in the ensemble classification was able to achieve accuracy of 99.9% with a processing time of 0.028 seconds in the two-class classification and accuracy of 76.5% with a processing time of 0.033 seconds in the seven-class classification. Conclusion. Based on the results, the designed algorithm can be used as a computer-aided diagnostic tool to increase the accuracy and speed of predicting the risk of cervical cancer. Practical Implications. Cervical cancer is one of the most common cancers among women. Early diagnosis of the disease can save various costs and prevent the patients’ frequent visits to medical centers. This research proposed an artificial intelligence method for automatic classification of cervical cells and improving the accuracy of diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
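The ensemble (majority-vote) classification idea the study applies to Pap-smear features can be sketched as follows. The three rule-based "classifiers" and the feature names (`nucleus_area`, `texture_contrast`, `nc_ratio`) are placeholders for trained models operating on the paper's geometric and texture features:

```python
from collections import Counter

def clf_area(x):    return "abnormal" if x["nucleus_area"] > 0.3 else "normal"
def clf_texture(x): return "abnormal" if x["texture_contrast"] > 0.5 else "normal"
def clf_ratio(x):   return "abnormal" if x["nc_ratio"] > 0.4 else "normal"

def ensemble_predict(x, classifiers):
    # each base classifier votes; the most common class label wins
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

cell = {"nucleus_area": 0.45, "texture_contrast": 0.2, "nc_ratio": 0.6}
print(ensemble_predict(cell, [clf_area, clf_texture, clf_ratio]))  # abnormal
```

Combining diverse base learners this way tends to outperform any single one when their errors are not strongly correlated, which is the usual motivation for ensemble screening systems.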
5. Research Results from Amirkabir University of Technology Update Understanding of Machine Learning (Data driven models to predict pore pressure using drilling and petrophysical data)
- Subjects
Oil well drilling ,Artificial intelligence ,Technical institutes ,Machine learning ,Algorithms ,Algorithm ,Artificial intelligence ,Computers - Abstract
2022 NOV 8 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- A new study on artificial intelligence is now available. According to news originating from [...]
- Published
- 2022
6. Research in the Area of Machine Learning Reported from China Research Institute of Radiowave Propagation (Automatic GNSS Ionospheric Scintillation Detection with Radio Occultation Data Using Machine Learning Algorithm)
- Subjects
Artificial intelligence ,Ionosphere ,Machine learning ,Data mining ,Algorithms ,Data warehousing/data mining ,Algorithm ,Artificial intelligence ,Computers - Abstract
2024 JAN 23 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- A new study on artificial intelligence is now available. According to news reporting out [...]
- Published
- 2024
7. Reports Summarize Machine Learning Study Results from National University of Defense Technology (CTTGAN: Traffic Data Synthesizing Scheme Based on Conditional GAN)
- Subjects
Artificial intelligence ,Machine learning ,Data mining ,Algorithms ,Military electronics industry ,Data warehousing/data mining ,Algorithm ,Artificial intelligence ,Computers - Abstract
2022 AUG 9 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Investigators publish new report on artificial intelligence. According to news reporting from Hefei, People's [...]
- Published
- 2022
8. A comprehensive review of machine learning algorithms and their application in geriatric medicine: present and future.
- Author
-
Woodman, Richard J. and Mangoni, Arduino A.
- Subjects
COMPUTERS ,MACHINE learning ,ARTIFICIAL intelligence ,AGING ,ACCESS to information ,HEALTH ,INFORMATION resources ,DECISION making in clinical medicine ,ALGORITHMS ,ELDER care - Abstract
The increasing access to health data worldwide is driving a resurgence in machine learning research, including data-hungry deep learning algorithms. More computationally efficient algorithms now offer unique opportunities to enhance diagnosis, risk stratification, and individualised approaches to patient management. Such opportunities are particularly relevant for the management of older patients, a group that is characterised by complex multimorbidity patterns and significant interindividual variability in homeostatic capacity, organ function, and response to treatment. Clinical tools that utilise machine learning algorithms to determine the optimal choice of treatment are slowly gaining the necessary approval from governing bodies and being implemented into healthcare, with significant implications for virtually all medical disciplines during the next phase of digital medicine. Beyond obtaining regulatory approval, a crucial element in implementing these tools is the trust and support of the people that use them. In this context, an increased understanding by clinicians of artificial intelligence and machine learning algorithms provides an appreciation of the possible benefits, risks, and uncertainties, and improves the chances for successful adoption. This review provides a broad taxonomy of machine learning algorithms, followed by a more detailed description of each algorithm class, their purpose and capabilities, and examples of their applications, particularly in geriatric medicine. Additional focus is given on the clinical implications and challenges involved in relying on devices with reduced interpretability and the progress made in counteracting the latter via the development of explainable machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Reports from Ferdowsi University of Mashhad Describe Recent Advances in Machine Learning (A reinforcement learning algorithm for scheduling parallel processors with identical speedup functions)
- Subjects
Artificial intelligence ,Multiprocessing ,Machine learning ,Data mining ,Algorithms ,Data warehousing/data mining ,Algorithm ,Artificial intelligence ,Multiprocessing ,Computers - Abstract
2023 SEP 12 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Current study results on artificial intelligence have been published. According to news reporting from [...]
- Published
- 2023
10. Algorithmic Literacy and the Role for Libraries.
- Author
-
Ridley, Michael and Pawlick-Potts, Danica
- Subjects
COMPUTERS ,LIBRARIES ,ARTIFICIAL intelligence ,PUBLIC libraries ,DECISION making ,PROBLEM solving ,SOFTWARE analytics ,INFORMATION literacy ,MACHINE learning ,ALGORITHMS - Abstract
Artificial intelligence (AI) is powerful, complex, ubiquitous, often opaque, sometimes invisible, and increasingly consequential in our everyday lives. Navigating the effects of AI as well as utilizing it in a responsible way requires a level of awareness, understanding, and skill that is not provided by current digital literacy or information literacy regimes. Algorithmic literacy addresses these gaps. In arguing for a role for libraries in algorithmic literacy, the authors provide a working definition, a pressing need, a pedagogical strategy, and two specific contributions that are unique to libraries. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Improved support vector machine classification algorithm based on adaptive feature weight updating in the Hadoop cluster environment.
- Author
-
Cao, Jianfang, Wang, Min, Li, Yanfei, and Zhang, Qi
- Subjects
SUPPORT vector machines ,CLASSIFICATION algorithms ,VERNACULAR architecture ,PARALLEL programming ,IMAGE processing ,PHYSICAL sciences - Abstract
An image classification algorithm based on adaptive feature weight updating is proposed to address the low classification accuracy of the current single-feature classification algorithms and simple multifeature fusion algorithms. The MapReduce parallel programming model on the Hadoop platform is used to perform an adaptive fusion of hue, local binary pattern (LBP) and scale-invariant feature transform (SIFT) features extracted from images to derive optimal combinations of weights. The support vector machine (SVM) classifier is then used to perform parallel training to obtain the optimal SVM classification model, which is then tested. The Pascal VOC 2012, Caltech 256 and SUN databases are adopted to build a massive image library. The speedup, classification accuracy and training time are tested in the experiment, and the results show that a linear growth tendency is present in the speedup of the system in a cluster environment. In consideration of the hardware costs, time, performance and accuracy, the algorithm is superior to mainstream classification algorithms, such as the power mean SVM and convolutional neural network (CNN). As the number and types of images both increase, the classification accuracy rate exceeds 95%. When the number of images reaches 80,000, the training time of the proposed algorithm is only 1/5 that of traditional single-node architecture algorithms. This result reflects the effectiveness of the algorithm, which provides a basis for the effective analysis and processing of image big data. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
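The weighted multi-feature fusion described above (hue, LBP and SIFT combined under adaptive weights) can be illustrated as scaling each feature vector by a weight and concatenating the results. The weights here are renormalized from hypothetical per-feature scores; the real algorithm derives them adaptively on the Hadoop cluster:

```python
def normalize_weights(scores):
    # rescale raw per-feature scores so the weights sum to 1
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def fuse(features, weights):
    # weighted concatenation of all feature vectors into one descriptor
    fused = []
    for name, vec in features.items():
        fused.extend(weights[name] * v for v in vec)
    return fused

# hypothetical tiny descriptors for one image
features = {"hue": [0.2, 0.8], "lbp": [0.5, 0.1, 0.4], "sift": [0.9]}
weights = normalize_weights({"hue": 0.6, "lbp": 0.9, "sift": 0.5})
vec = fuse(features, weights)
print(len(vec))  # 6
```

The fused descriptor would then be fed to the SVM; in the paper this fusion and the SVM training are both parallelized with MapReduce, which this single-process sketch does not attempt.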
12. Finite Automata Approach to Computing All Seeds of Strings with the Smallest Hamming Distance.
- Author
-
Guth, Ondřej and Melichar, Bořivoj
- Subjects
SEQUENTIAL machine theory ,ALGORITHMS ,MATHEMATICAL models ,ROBOTICS ,COMPUTERS ,ARTIFICIAL intelligence ,INFORMATION storage & retrieval systems ,INFORMATION theory ,MACHINE learning - Abstract
A seed is a type of regularity of strings. A restricted approximate seed w of string T is a factor of T such that w covers a superstring of T under some distance rule. In this paper, the problem of all restricted seeds with the smallest Hamming distance is studied and a polynomial time and space algorithm for solving the problem is presented. It searches for all restricted approximate seeds of a string with given limited approximation using Hamming distance and it computes the smallest distance for each found seed. The solution is based on a finite (suffix) automata approach that provides a straightforward way to design algorithms for many problems in stringology. Therefore, it is shown that the set of problems solvable using finite automata includes the one studied in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2009
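Two building blocks behind restricted approximate seeds are the Hamming distance between equal-length strings and the enumeration of factors of T that match a pattern within distance k. The automaton-based algorithm in the paper is far more efficient; this brute-force scan only illustrates the distance rule:

```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def approx_occurrences(w, t, k):
    """Start positions where w matches a factor of t with <= k mismatches."""
    m = len(w)
    return [i for i in range(len(t) - m + 1) if hamming(w, t[i:i+m]) <= k]

print(hamming("karolin", "kathrin"))            # 3
print(approx_occurrences("aba", "abacaba", 1))  # [0, 2, 4]
```

A seed computation additionally has to verify that the approximate occurrences of w cover a superstring of T; the suffix-automaton construction handles that part in polynomial time.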
13. Data from Al-Azhar University Broaden Understanding of Machine Learning (Meta-Heuristic Optimization Algorithm-Based Hierarchical Intrusion Detection System)
- Subjects
Artificial intelligence ,Security software ,Mathematical optimization ,Cyberterrorism ,Machine learning ,Algorithms ,Network security software ,Algorithm ,Artificial intelligence ,Computers ,News, opinion and commentary ,Al-Azhar University - Abstract
2023 JAN 11 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Investigators publish new report on artificial intelligence. According to news reporting from Cairo, Egypt, [...]
- Published
- 2023
14. YITU Attended ICCV 2019 with Self-developed Cloud AI Chip, Showcasing Integrated Software and Hardware Strength
- Subjects
Algorithms ,Computer vision ,Artificial intelligence ,Machine learning ,Natural language processing ,Editors ,Algorithm ,Artificial intelligence ,Computers ,News, opinion and commentary - Abstract
2019 NOV 20 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- 'The peak of artificial intelligence algorithm performance has shifted from academia to industry,' said [...]
- Published
- 2019
15. Artificial Intelligence and Machine Learning Based Intervention in Medical Infrastructure: A Review and Future Trends.
- Author
-
Kumar, Kamlesh, Kumar, Prince, Deb, Dipankar, Unguresan, Mihaela-Ligia, and Muresan, Vlad
- Subjects
HEALTH care industry ,DEEP learning ,PUBLIC health surveillance ,COMPUTERS ,ARTIFICIAL intelligence ,MACHINE learning ,SURVEYS ,COST analysis ,TUBERCULOSIS ,POLYPS ,ALGORITHMS - Abstract
People in the life sciences who work with Artificial Intelligence (AI) and Machine Learning (ML) are under increased pressure to develop algorithms faster than ever. The possibility of revealing innovative insights and speeding breakthroughs lies in using large datasets integrated on several levels. However, even if there is more data at our disposal than ever, only a meager portion is being filtered, interpreted, integrated, and analyzed. The subject of this technology is the study of how computers may learn from data and imitate human mental processes. Both an increase in the learning capacity and the provision of a decision support system at a size that is redefining the future of healthcare are enabled by AI and ML. This article offers a survey of the uses of AI and ML in the healthcare industry, with a particular emphasis on clinical, developmental, administrative, and global health implementations to support the healthcare infrastructure as a whole, along with the impact and expectations of each component of healthcare. Additionally, possible future trends and scopes of the utilization of this technology in medical infrastructure have also been discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Studies from Faculty of Sciences and Technics Reveal New Findings on Machine Learning (Intrusion Detection System Using machine learning Algorithms)
- Subjects
Artificial intelligence ,Detectors ,Machine learning ,Data mining ,Social networks ,Algorithms ,Data warehousing/data mining ,Algorithm ,Artificial intelligence ,Computers - Abstract
2022 JUL 5 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Data detailed on artificial intelligence have been presented. According to news reporting originating from [...]
- Published
- 2022
17. Machine Learning in Medicine.
- Author
-
Deo, Rahul C.
- Subjects
- *
MACHINE learning , *DIGITAL resources in medicine , *ARTIFICIAL intelligence , *COMPUTERS in medicine , *PROGNOSIS , *DATA analysis , *LITERATURE reviews , *ALGORITHMS , *LEARNING strategies , *MEDICINE - Abstract
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games - tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
18. XAOM: A method for automatic alignment and orientation of radiographs for computer-aided medical diagnosis
- Author
-
Franko Hržić, Erich Sorantin, Ivan Štajduhar, and Sebastian Tschauner
- Subjects
0301 basic medicine ,Computer science ,Health Informatics ,Convolutional neural network ,Hough transform ,law.invention ,Machine Learning ,03 medical and health sciences ,0302 clinical medicine ,law ,Image Processing, Computer-Assisted ,Humans ,Preprocessor ,Segmentation ,Diagnosis, Computer-Assisted ,Child ,Computers ,business.industry ,Orientation (computer vision) ,X-ray image ,Image alignment and rotation ,Data preprocessing ,Deep CNN ,Pattern recognition ,Computer Science Applications ,030104 developmental biology ,Test set ,Body region ,Neural Networks, Computer ,Artificial intelligence ,Data pre-processing ,business ,Algorithms ,030217 neurology & neurosurgery - Abstract
Background and objectives Computer-aided diagnosis relies on machine learning algorithms that require filtered and preprocessed data as the input. Aligning the image in the desired direction is an additional manual step in post-processing, commonly overlooked due to workload issues. Several state-of-the-art approaches for fracture detection and disease-struck region segmentation benefit from correctly oriented images, thus requiring such preprocessing of X-ray images. Furthermore, it is desirable to have archived studies in a standardized format. Radiograph hanging protocols also differ from case to case, which means that images are not always aligned and oriented correctly. As a solution, the paper proposes XAOM, an X-ray Alignment and Orientation Method for images from 21 different body regions. Methods Typically, other methods are crafted for this purpose to suit a specific body region and form of usage. In contrast, the method proposed in this paper is comprehensive and easily tuned to align and orient X-ray images of any body region. XAOM consists of two stages. For the first stage of the method, aligning X-ray images, we experimented with the following approaches: Hough transform, Fast line detection algorithm, and Principal Component Analysis method. For the second stage, we have experimented with the adaptations of several well known convolutional neural network topologies for correctly predicting image orientation: LeNet5, AlexNet, VGG16, VGG19, and ResNet50. Results In the first stage, the PCA-based approach performed best. The average difference between the angle detected by the algorithm and the angle marked by the experts on the test set containing 200 pediatric X-ray images was 1.65°, while the median value was 0.11°. In the second stage, the VGG16-based network topology achieved the best accuracy of 0.993 on a test set containing 4,221 images.
Conclusion XAOM is highly accurate at aligning and orienting pediatric X-ray images of 21 common body regions according to a set standard. The proposed method is also robust and can be easily adjusted to the different alignment and rotation criteria. Availability The Python source code of the best performing implementation of XAOM is publicly available at https://github.com/fhrzic/XAOM.
- Published
- 2021
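The PCA idea reportedly used in XAOM's best-performing alignment stage can be sketched directly: the orientation of a cloud of foreground-pixel coordinates is the angle of the principal eigenvector of their covariance matrix, which for a 2x2 covariance has the closed form atan2(2·cxy, cxx - cyy)/2. The points below are synthetic, not radiograph data:

```python
import math

def principal_angle(points):
    """Angle (degrees) of the principal axis of a 2-D point cloud."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    # closed-form eigenvector angle of the 2x2 covariance matrix
    return math.degrees(0.5 * math.atan2(2 * cxy, cxx - cyy))

# points along a 45-degree line: the principal axis angle is 45
pts = [(i, i) for i in range(10)]
print(round(principal_angle(pts), 1))  # 45.0
```

Rotating the image by the negative of this angle would align the dominant anatomical axis; the CNN stage then resolves the remaining 0/90/180/270-degree orientation ambiguity that PCA alone cannot.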
19. Reports Outline Artificial Intelligence Study Results from Arak University (Using Memetic Algorithm for Robustness Testing of Contract-based Software Models)
- Subjects
Algorithms ,Artificial intelligence ,Machine learning ,Algorithm ,Contract agreement ,Artificial intelligence ,Computers ,News, opinion and commentary - Abstract
2020 SEP 2 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Current study results on Machine Learning - Artificial Intelligence have been published. According to [...]
- Published
- 2020
20. Data from King's College London Advance Knowledge in Artificial Intelligence (Learning the Language of Software Errors)
- Subjects
Artificial intelligence ,Machine learning ,Bugs (Software) ,Algorithms ,Robots ,Editors ,Artificial intelligence ,Computers ,News, opinion and commentary - Abstract
2020 MAY 20 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Current study results on Machine Learning - Artificial Intelligence have been published. According to [...]
- Published
- 2020
21. Researchers' Work from International Business Machines Corporation (IBM) Focuses on Artificial Intelligence (Artificial intelligence approaches to predicting and detecting cognitive decline in older adults: A conceptual review)
- Subjects
International Business Machines Corp. -- Company forecasts ,Artificial intelligence ,Machine learning ,Natural language processing ,Computer industry ,Algorithms ,Cognition ,Mental health ,Neurophysiology ,Technology ,Editors ,Microcomputer industry ,Artificial intelligence ,Company business forecast/projection ,Computer industry ,Computers - Abstract
2020 FEB 11 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Investigators publish new report on Artificial Intelligence. According to news originating from Tokyo, Japan, [...]
- Published
- 2020
22. Xanadu awarded DARPA grant to further advance quantum machine learning
- Subjects
United States. Defense Advanced Research Projects Agency ,Artificial intelligence ,Machine learning ,Software ,Collaborative learning ,Algorithms ,Research funding ,Software quality ,Artificial intelligence ,Computers ,News, opinion and commentary - Abstract
2019 DEC 4 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Xanadu, a full-stack quantum computing and advanced AI company developing quantum hardware and software [...]
- Published
- 2019
23. A video-based real-time adaptive vehicle-counting system for urban roads
- Author
-
Zhiyuan Zeng, Fei Liu, and Rong Jiang
- Subjects
Computer science ,Image Processing ,Computer Vision ,Intelligence ,Social Sciences ,lcsh:Medicine ,Transportation ,02 engineering and technology ,Field (computer science) ,Machine Learning ,0202 electrical engineering, electronic engineering, information engineering ,Psychology ,lcsh:Science ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Transportation Infrastructure ,Traffic flow ,Motor Vehicles ,Physical Sciences ,Line (geometry) ,Engineering and Technology ,020201 artificial intelligence & image processing ,Algorithms ,Research Article ,Computer and Information Sciences ,Real-time computing ,Image processing ,Research and Analysis Methods ,Civil Engineering ,Machine Learning Algorithms ,Artificial Intelligence ,Computer Systems ,Cities ,Developing Countries ,Video based ,Models, Statistical ,Vehicle counting ,Computers ,Urbanization ,020208 electrical & electronic engineering ,lcsh:R ,Cognitive Psychology ,Biology and Life Sciences ,Noise Reduction ,Roads ,Traffic congestion ,Signal Processing ,Cognitive Science ,lcsh:Q ,State (computer science) ,Mathematics ,Neuroscience - Abstract
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
- Published
- 2017
24. DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning.
- Author
-
Senaras, Caglar, Niazi, M. Khalid Khan, Lozanski, Gerard, and Gurcan, Metin N.
- Subjects
DIGITAL images ,DEEP learning ,MEDICAL imaging systems ,IMAGE analysis ,DATA flow computing - Abstract
The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. These areas are typically identified by visual inspection, which leads to a subjective evaluation causing high intra- and inter-observer variability. Moreover, this process is both tedious, and time-consuming. The aim of this study is to develop a deep learning based software called, DeepFocus, which can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open source library that exploits data flow graphs for efficient numerical computation. DeepFocus was trained by using 16 different H&E and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus resulted in an average accuracy of 93.2% (± 9.6%), which is a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, hence improving the overall image quality for pathologists and image analysis algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
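DeepFocus learns blur detection with a CNN; a common classical baseline for the same out-of-focus task, useful for intuition, is the variance of the Laplacian: sharp regions produce high-variance second derivatives, blurry ones do not. The tiny synthetic tiles below are illustrative only:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 255, 0, 255], [255, 0, 255, 0],
         [0, 255, 0, 255], [255, 0, 255, 0]]       # high-contrast tile
blurry = [[128, 130, 128, 130], [130, 128, 130, 128],
          [128, 130, 128, 130], [130, 128, 130, 128]]  # near-flat tile
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

A learned model like DeepFocus outperforms such fixed filters because "blur" in stained tissue depends on texture content, but thresholding this score per tile is a reasonable first-pass detector.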
25. ScoreCentre: a computer program to assist with collection and calculation of BBB locomotor scale data
- Author
-
Richard Evans and M. Davies
- Subjects
Blinding ,Scale (ratio) ,Computer science ,Standardized test ,Motor Activity ,Machine learning ,computer.software_genre ,User-Computer Interface ,Double-Blind Method ,Microcomputers ,Rating scale ,Bbb score ,Humans ,Longitudinal Studies ,Simulation ,Spinal Cord Injuries ,Observer Variation ,Computer program ,business.industry ,Computers ,General Neuroscience ,Data Collection ,Usability ,Data Display ,Observational study ,Artificial intelligence ,business ,computer ,Algorithms ,Locomotion ,Software - Abstract
The Basso, Beattie and Bresnahan (BBB) Locomotor Rating Scale is a standardized assessment scale for use in experimental spinal cord injury (SCI) research. This paper describes a computer program, ScoreCentre, which aims to simplify the recording and handling of BBB locomotor scale data. The program assists with the recording of observational data from open-field testing and then automatically calculates BBB scores. Possible errors associated with data entry and manual calculation of scores are thus essentially eliminated. In addition, significant time is saved by the automated derivation of scores and subscores and elimination of the need to manually transfer data from paper records to a computer. ScoreCentre can also be used as a training aid, to help familiarize users with the BBB scale and to explore how changes in the observations impact on overall BBB score. ScoreCentre includes simple experiment management functions such as control of trial blinding, administration of drugs in a blinded fashion and longitudinal data analysis. ScoreCentre provides all the advantages of electronic records, such as ease of use, analysis and archiving, and allows the elimination of paper records if appropriate. When paper records are required, for example for archiving and auditing, they can be automatically produced by ScoreCentre. ScoreCentre will assist with both the learning and use of the BBB locomotor scale, thus facilitating the use of this standardized outcome measure in SCI research. ScoreCentre is available to download from www.rmeonline.net/scorecentre.
- Published
- 2010
26. Distributed optimization of multi-class SVMs.
- Author
-
Alber, Maximilian, Zimmert, Julian, Dogan, Urun, and Kloft, Marius
- Subjects
SUPPORT vector machines ,MATHEMATICAL optimization ,THEORY of distributions (Functional analysis) ,COMPUTER algorithms ,QUADRATIC programming - Abstract
Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straight forward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program of size quadratically in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee et al. and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs on unprecedented scale. The results indicate superior accuracy on text classification data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
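The one-vs.-rest parallelization the abstract contrasts with all-in-one SVMs can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: a simple perceptron replaces the binary SVM solver, and the names `train_binary`, `train_ovr`, and `predict` are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_binary(X, y, cls, epochs=50, lr=0.1):
    """Train one 'cls vs. rest' linear classifier (perceptron stand-in for a binary SVM)."""
    t = np.where(y == cls, 1.0, -1.0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            if ti * (xi @ w + b) <= 0:  # misclassified: perceptron update
                w += lr * ti * xi
                b += lr * ti
    return cls, w, b

def train_ovr(X, y, n_jobs=4):
    """Train all per-class binary problems in parallel, one task per class."""
    classes = np.unique(y)
    with ThreadPoolExecutor(max_workers=n_jobs) as ex:
        results = list(ex.map(lambda c: train_binary(X, y, c), classes))
    W = np.stack([w for _, w, _ in results])
    b = np.array([bi for _, _, bi in results])
    return classes, W, b

def predict(X, classes, W, b):
    """Assign each sample to the class whose binary scorer is most confident."""
    return classes[np.argmax(X @ W.T + b, axis=1)]
```

Because each class's binary problem is independent, the training loop parallelizes trivially over classes; the all-in-one formulations in the paper do not decompose this way, which is what motivates their distributed algorithms.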
27. Schizophrenia: A Survey of Artificial Intelligence Techniques Applied to Detection and Classification
- Author
-
Candice Ke En Ang, U. Rajendra Acharya, Joel Weijia Lai, and Kang Hao Cheong
- Subjects
Computer science ,Process (engineering) ,Health, Toxicology and Mutagenesis ,Schizophrenia (object-oriented programming) ,Review ,03 medical and health sciences ,0302 clinical medicine ,Software ,Health care ,Daily living ,Humans ,business.industry ,Computers ,Public Health, Environmental and Occupational Health ,Cognition ,artificial intelligence ,machine Learning ,Mental health ,030227 psychiatry ,schizophrenia ,Medicine ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Algorithms ,mental health - Abstract
Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence draws on the ability of computer algorithms and software, operating within allowable thresholds, to reach deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as prediction of disease and assessment of current prevention methods. These efforts are carried out in the hope of assisting with diagnosis and providing viable options for the individuals affected. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.
- Published
- 2021
28. Machine Learning Associated With Respiratory Oscillometry: A Computer-Aided Diagnosis System for the Detection of Respiratory Abnormalities in Systemic Sclerosis
- Author
-
Agnaldo José Lopes, Jorge Amaral, Pedro Lopes de Melo, Domingos Savio Mattos de Andrade, and Luigi Maciel Ribeiro
- Subjects
Male ,Clinical decision support system ,computer.software_genre ,030218 nuclear medicine & medical imaging ,Machine Learning ,0302 clinical medicine ,Diagnosis, Computer-Assisted ,AdaBoost ,Respiratory system ,Radiological and Ultrasound Technology ,medicine.diagnostic_test ,General Medicine ,Middle Aged ,Random forest ,Forced oscillation technique ,lcsh:R855-855.5 ,Systemic sclerosis ,Female ,Algorithms ,Adult ,Spirometry ,lcsh:Medical technology ,Biometry ,Adolescent ,Biomedical Engineering ,Decision tree ,Feature selection ,Diagnostic of respiratory diseases ,Machine learning ,Biomaterials ,Young Adult ,03 medical and health sciences ,Artificial Intelligence ,Oscillometry ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,System identification techniques ,Aged ,Scleroderma, Systemic ,Computers ,business.industry ,Research ,Respiration Disorders ,Respiratory oscillometry ,030228 respiratory system ,Computer-aided diagnosis ,Artificial intelligence ,business ,computer - Abstract
Introduction: The use of machine learning (ML) methods could improve the diagnosis of respiratory changes in systemic sclerosis (SSc). This paper evaluates the performance of several ML algorithms combined with respiratory oscillometry analysis to aid in the diagnosis of respiratory changes in SSc, and identifies the best configuration for this task. Methods: Oscillometric and spirometric exams were performed on 82 individuals, including controls (n = 30) and patients with systemic sclerosis with normal (n = 22) and abnormal (n = 30) spirometry. Multiple-instance classifiers and different supervised machine learning techniques were investigated, including k-Nearest Neighbors (KNN), Random Forests (RF), AdaBoost with decision trees (ADAB), and Extreme Gradient Boosting (XGB). Results and discussion: The first experiment of this study showed that the best oscillometric parameter (BOP) was dynamic compliance, which provided moderate accuracy (AUC = 0.77) in the scenario of the control group versus patients with sclerosis and normal spirometry (CGvsPSNS). In the scenario of the control group versus patients with sclerosis and altered spirometry (CGvsPSAS), the BOP obtained high accuracy (AUC = 0.94). In the second experiment, the ML techniques were used. In CGvsPSNS, KNN achieved the best result (AUC = 0.90), significantly improving the accuracy in comparison with the BOP (p ...). Conclusions: Oscillometric principles combined with machine learning algorithms provide a new method for diagnosing respiratory changes in patients with systemic sclerosis. The present study's findings provide evidence that this combination may help in the early diagnosis of respiratory changes in these patients.
- Published
- 2021
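The two building blocks of the study above, a KNN score over oscillometric features and the AUC used to report accuracy, can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' pipeline; `auc` and `knn_scores` are hypothetical names, and the AUC is computed via the Mann-Whitney U equivalence.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic (labels are 0/1)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def knn_scores(X_train, y_train, X_test, k=5):
    """Fraction of positive-class samples among the k nearest neighbours (a probability-like score)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)
```

Scoring a single oscillometric parameter with `auc` mirrors the paper's BOP baseline; feeding the same feature through `knn_scores` mirrors how a classifier can lift the AUC above that baseline.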
29. Small Network for Lightweight Task in Computer Vision: A Pruning Method Based on Feature Representation
- Author
-
Shufang Lu, Ge Yisu, and Fei Gao
- Subjects
Speedup ,General Computer Science ,Article Subject ,Computer science ,General Mathematics ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Inference ,Neurosciences. Biological psychiatry. Neuropsychiatry ,02 engineering and technology ,Machine learning ,computer.software_genre ,Convolutional neural network ,Image (mathematics) ,Task (project management) ,0202 electrical engineering, electronic engineering, information engineering ,Pruning (decision trees) ,Representation (mathematics) ,Computers ,business.industry ,General Neuroscience ,General Medicine ,Data Compression ,020202 computer hardware & architecture ,Feature (computer vision) ,020201 artificial intelligence & image processing ,Neural Networks, Computer ,Artificial intelligence ,business ,computer ,Algorithms ,Research Article ,RC321-571 - Abstract
Many current convolutional neural networks fail to meet practical application requirements because of their enormous numbers of parameters. To accelerate the inference speed of networks, more and more attention has been paid to network compression. Network pruning is one of the most efficient and simplest ways to compress and speed up a network. In this paper, a pruning algorithm for lightweight tasks is proposed, and a pruning strategy based on feature representation is investigated. Unlike other pruning approaches, the proposed strategy is guided by the practical task and eliminates the filters that are irrelevant to it. After pruning, the network is compacted to a smaller size and easily recovers its accuracy with fine-tuning. The performance of the proposed pruning algorithm is validated on acknowledged image datasets, and the experimental results show that the proposed algorithm is well suited to pruning the filters that are irrelevant to the fine-tuning dataset.
- Published
- 2021
- Full Text
- View/download PDF
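The idea of ranking filters by how strongly they respond on the target task's data, then dropping the weakest, can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the relevance score here is mean absolute activation, and `filter_relevance` and `prune_filters` are hypothetical names.

```python
import numpy as np

def filter_relevance(activations):
    """Score each filter by its mean absolute activation on task-specific inputs.

    activations: array of shape (N, C, H, W) collected by running the task dataset
    through the layer being pruned.
    """
    return np.abs(activations).mean(axis=(0, 2, 3))

def prune_filters(weights, activations, keep_ratio=0.5):
    """Keep only the `keep_ratio` most task-relevant filters of a conv layer.

    weights: (C_out, C_in, kH, kW). Returns the pruned weights and the kept indices,
    which can then be used to slice the following layer's input channels.
    """
    scores = filter_relevance(activations)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # highest scores, original order
    return weights[keep], keep
```

After such a structural prune, the smaller network is fine-tuned on the task dataset to recover accuracy, which matches the workflow the abstract describes.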
30. Addressless: A new internet server model to prevent network scanning
- Author
-
Xing Li, Shanshan Hao, Weng Zhe, Liu Renjie, Chang Deliang, and Congxiao Bao
- Subjects
FOS: Computer and information sciences ,Computer and Information Sciences ,Computer science ,Entropy ,Science ,Encryption ,Information Storage and Retrieval ,Cryptography ,Research and Analysis Methods ,Computer Science - Networking and Internet Architecture ,Machine Learning ,Machine Learning Algorithms ,Artificial Intelligence ,Prototypes ,Humans ,Isolation (database systems) ,Computer Networks ,Computer Security ,Networking and Internet Architecture (cs.NI) ,Internet ,Multidisciplinary ,IPv6 address ,Computers ,business.industry ,Physics ,Applied Mathematics ,Simulation and Modeling ,Load balancing (computing) ,IPv6 ,Technology Development ,Physical Sciences ,Thermodynamics ,Engineering and Technology ,Medicine ,The Internet ,business ,Mathematics ,Network Analysis ,Algorithms ,Software ,Research Article ,Computer network - Abstract
Eliminating unnecessary exposure is a principle of server security. The huge IPv6 address space enhances security by making scanning infeasible; however, with recent advances in IPv6 scanning technologies, network scanning is again threatening server security. In this paper, we propose a new model named the addressless server, which separates the server into an entrance module and a main service module, and assigns an IPv6 prefix instead of a single IPv6 address to the main service module. The entrance module generates a legitimate IPv6 address under this prefix by encrypting the client address, so that the client accesses the main server on a destination address that differs in each connection. In this way, the model isolates the main server, prevents network scanning, and minimizes exposure. Moreover, it provides a novel framework that supports flexible load balancing, high availability, and other desirable features. The model is simple and does not require any modification to the client or the network. We implement a prototype, and experiments show that our model can prevent the main server from being scanned at a slight performance cost.
- Published
- 2021
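The core mechanism, encrypting the client address into the 64-bit interface identifier of an address under the server's prefix, can be sketched as below. This is a toy sketch under stated assumptions: a 4-round SHA-256 Feistel permutation stands in for whatever cipher the authors use, the key and all function names (`encrypt_iid`, `destination_for`, etc.) are hypothetical, and a real deployment would need a proper keyed cipher and key management.

```python
import hashlib
import ipaddress

KEY = b"demo-secret"  # hypothetical shared secret between entrance and main module

def _round(half: int, key: bytes, i: int) -> int:
    """Feistel round function: keyed hash of one 32-bit half, truncated to 32 bits."""
    digest = hashlib.sha256(key + bytes([i]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def encrypt_iid(client_id: int, rounds: int = 4) -> int:
    """Permute a 64-bit value with a Feistel network (stand-in for a real cipher)."""
    left, right = client_id >> 32, client_id & 0xFFFFFFFF
    for i in range(rounds):
        left, right = right, left ^ _round(right, KEY, i)
    return (left << 32) | right

def decrypt_iid(value: int, rounds: int = 4) -> int:
    """Invert encrypt_iid, letting the main module recover the client identity."""
    left, right = value >> 32, value & 0xFFFFFFFF
    for i in reversed(range(rounds)):
        left, right = right ^ _round(left, KEY, i), left
    return (left << 32) | right

def destination_for(prefix: str, client_ip: str) -> str:
    """Map a client IPv4 address to a client-specific IPv6 destination under `prefix`."""
    client_id = int(ipaddress.IPv4Address(client_ip))
    iid = encrypt_iid(client_id)
    return str(ipaddress.IPv6Network(prefix)[iid])
```

Because a scanner does not hold the key, the 2^64 addresses under the prefix look uniformly random, while the entrance module can deterministically hand each client a valid destination.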
31. A Cybersecure P300-Based Brain-to-Computer Interface against Noise-Based and Fake P300 Cyberattacks
- Author
-
Giovanni Mezzina, Daniela DE VENUTO, and Valerio Francesco Annese
- Subjects
cybersecurity ,brainjacking ,Computers ,Chemical technology ,Brain ,Electroencephalography ,TP1-1185 ,Event-Related Potentials, P300 ,Biochemistry ,Article ,Atomic and Molecular Physics, and Optics ,Analytical Chemistry ,machine learning ,classification ,Artificial Intelligence ,Brain-Computer Interfaces ,P300 ,brain-to-computer interface ,BCI ,EEG ,hacking ,Electrical and Electronic Engineering ,Instrumentation ,Algorithms - Abstract
In a progressively interconnected world where the Internet of Things (IoT), ubiquitous computing, and artificial intelligence are leading to groundbreaking technology, cybersecurity remains an underdeveloped aspect. This is particularly alarming for brain-to-computer interfaces (BCIs), where hackers can threaten the user’s physical and psychological safety. In fact, standard algorithms currently employed in BCI systems are inadequate to deal with cyberattacks. In this paper, we propose a solution to improve the cybersecurity of BCI systems. As a case study, we focus on P300-based BCI systems using support vector machine (SVM) algorithms and EEG data. First, we verified that SVM algorithms are incapable of identifying hacking by simulating a set of cyberattacks using fake P300 signals and noise-based attacks. This was achieved by comparing the performance of several models when validated using real and hacked P300 datasets. Then, we implemented our solution to improve the cybersecurity of the system. The proposed solution is based on an EEG channel mixing approach to identify anomalies in the transmission channel due to hacking. Our study demonstrates that the proposed architecture can successfully identify 99.996% of simulated cyberattacks, implementing a dedicated counteraction that preserves most of the BCI's functions.
- Published
- 2021
32. Autonomous drone hunter operating by deep learning and all-onboard computations in GPS-denied environments
- Author
-
Robert Kwiatkowski, Richard Kennedy, Arjun Mangla, Xiaotian Hu, Tomer Aharoni, Philippe M. Wyder, Edith O. A. Comas, Yan-Song Chen, Tzu-Chan Chuang, Rafael J. Pelles, Zhiyao Xiong, Adrian J. Lasrado, Hod Lipson, and Zixi Huang
- Subjects
0209 industrial biotechnology ,Computer science ,Computer Vision ,02 engineering and technology ,Machine Learning ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Image Processing, Computer-Assisted ,Multidisciplinary ,Artificial neural network ,Applied Mathematics ,Simulation and Modeling ,Flight Testing ,Cameras ,Optical Equipment ,Physical Sciences ,Global Positioning System ,Medicine ,Engineering and Technology ,020201 artificial intelligence & image processing ,Algorithms ,Research Article ,Computer and Information Sciences ,Neural Networks ,Imaging Techniques ,Science ,Computation ,Real-time computing ,Aerospace Engineering ,Equipment ,Color ,Research and Analysis Methods ,Machine Learning Algorithms ,Deep Learning ,Artificial Intelligence ,Computer Imaging ,SIMPLE (military communications protocol) ,business.industry ,Computers ,Deep learning ,Biology and Life Sciences ,Frame rate ,Drone ,Target Detection ,Geographic Information Systems ,Artificial intelligence ,business ,Mathematics ,Neuroscience - Abstract
This paper proposes a UAV platform that autonomously detects, hunts, and takes down other small UAVs in GPS-denied environments. The platform detects, tracks, and follows another drone within its sensor range using a pre-trained machine learning model. We collect and generate a 58,647-image dataset and use it to train a Tiny YOLO detection algorithm. This algorithm, combined with a simple visual-servoing approach, was validated on a physical platform. Our platform was able to successfully track and follow a target drone at an estimated speed of 1.5 m/s. Performance was limited by the detection algorithm's 77% accuracy in cluttered environments, the eight-frames-per-second frame rate, and the camera's field of view.
- Published
- 2019
33. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system
- Author
-
Yawei Wei and Ganesh K. Venayagamoorthy
- Subjects
Neurons ,Brownout ,Computers ,Computer science ,020209 energy ,Cognitive Neuroscience ,Distributed computing ,Intelligence ,Intelligent decision support system ,02 engineering and technology ,Perceptron ,Grid ,Cascading failure ,Machine Learning ,Electric power system ,SCADA ,Computer Systems ,Artificial Intelligence ,Multilayer perceptron ,0202 electrical engineering, electronic engineering, information engineering ,Neural Networks, Computer ,Algorithms ,Simulation - Abstract
To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) systems mean they can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize grid operational data in near-real-time and with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing units of the CCN framework make it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multilayer perceptron network (CCMLPN), for the purpose of providing multi-timescale frequency predictions ranging from 16.67 ms to 2 s. The two developed CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which included a large photovoltaic plant. A real-time power system simulator and a weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results.
- Published
- 2017
34. Overview of Smart Aquaculture System: Focusing on Applications of Machine Learning and Computer Vision.
- Author
-
Vo, Thi Thu Em, Ko, Hyeyoung, Huh, Jun-Ho, and Kim, Yonghoon
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,AQUACULTURE ,ALGORITHMS ,COMPUTERS ,COMPUTER vision - Abstract
Smart aquaculture is nowadays one of the sustainable development trends for the aquaculture industry in intelligence and automation. Modern intelligent technologies have brought huge benefits to many fields, including aquaculture, by reducing labor, enhancing production, and being friendly to the environment. Machine learning is a subdivision of artificial intelligence (AI) that uses trained algorithmic models to recognize and learn traits from the data they observe. To date, there have been several studies on applications of machine learning for smart aquaculture, including measuring size, weight, and grade, detecting disease, and classifying species. This review provides an overview of the development of smart aquaculture and intelligent technology. We collected and summarized 100 articles from nearly 10 years of research on machine learning in smart aquaculture, covering the methodology, the results, and the recent technology that should be used for the development of smart aquaculture. We hope that this review will give readers interested in this field useful information. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. TKRD: Trusted kernel rootkit detection for cybersecurity of VMs based on machine learning and memory forensic analysis
- Author
-
Xiao Wang, Jian Biao Zhang, Ai Zhang, and Jinchang Ren
- Subjects
QA75 ,Software_OPERATINGSYSTEMS ,Support Vector Machine ,Computer science ,TK ,Decision Making ,Decision tree ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Computer security ,Machine learning ,Machine Learning ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,False Positive Reactions ,Computer Security ,business.industry ,Computers ,Applied Mathematics ,05 social sciences ,Rootkit ,Bayes Theorem ,General Medicine ,Cloud Computing ,Random forest ,Support vector machine ,Computational Mathematics ,Virtual machine ,Modeling and Simulation ,Kernel (statistics) ,Area Under Curve ,Malware ,020201 artificial intelligence & image processing ,Artificial intelligence ,General Agricultural and Biological Sciences ,business ,computer ,050203 business & management ,Algorithms - Abstract
The promotion of cloud computing makes the virtual machine (VM) an increasingly frequent target of malware attacks, such as those by kernel rootkits. Memory forensics, which looks for malicious traces in memory, is a useful approach to malware detection. In this paper, we propose a novel method, TKRD, to automatically detect kernel rootkits in VMs in a private cloud by combining VM memory forensic analysis with bio-inspired machine learning technology. Malicious features are extracted from memory dumps of the VM through memory forensic analysis. Based on these features, various machine learning classifiers are trained, including decision trees, rule-based classifiers, Bayesian classifiers, and support vector machines (SVM). The experiment results show that the Random Forest classifier has the best performance and can effectively detect unknown kernel rootkits with an accuracy of 0.986 and an AUC value (the area under the receiver operating characteristic curve) of 0.998.
- Published
- 2019
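The classification step, training an ensemble of bagged trees on feature vectors extracted from memory dumps, can be sketched as below. This is a minimal stdlib-only stand-in, not the TKRD pipeline: depth-1 decision stumps replace full trees, the toy features (e.g. counts of hidden processes or hooked syscalls) are invented for illustration, and all names are hypothetical.

```python
import random

def train_stump(X, y):
    """Exhaustively pick the (feature, threshold, polarity) with best training accuracy."""
    best = (0.0, 0, 0.0, 1)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for s in (1, -1):  # polarity: which side of the threshold is 'malicious'
                acc = sum((1 if s * (x[f] - t) > 0 else 0) == yi
                          for x, yi in zip(X, y)) / len(y)
                if acc > best[0]:
                    best = (acc, f, t, s)
    _, f, t, s = best
    return f, t, s

def train_forest(X, y, n_trees=15, seed=0):
    """Bootstrap-aggregated stumps: a minimal stand-in for a Random Forest."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap sample
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, x):
    """Majority vote of the trees: 1 = rootkit-like memory dump, 0 = clean."""
    votes = sum(1 if s * (x[f] - t) > 0 else 0 for f, t, s in forest)
    return int(votes * 2 > len(forest))
```

A production system would use full decision trees with random feature sub-sampling, but the bagging-plus-voting structure that makes Random Forests robust to noisy memory features is the same.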
36. Machine Learning in Medicine
- Author
-
Rahul C. Deo
- Subjects
Clinical Sciences ,Cardiorespiratory Medicine and Haematology ,Machine learning ,computer.software_genre ,Article ,Machine Learning ,Physiology (medical) ,computers ,Behavioral and Social Science ,Health care ,risk factors ,Humans ,Medicine ,Relevance (information retrieval) ,Clinical care ,Physical law ,business.industry ,Statistical learning ,artificial intelligence ,Cardiovascular System & Hematology ,statistics ,Public Health and Health Services ,prognosis ,Generic health relevance ,Artificial intelligence ,Cardiology and Cardiovascular Medicine ,business ,computer ,Algorithms - Abstract
Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games − tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome.
- Published
- 2015
37. IOPA: I/O-aware parallelism adaption for parallel programs
- Author
-
Tao Liu, Yi Liu, Chen Qian, and Depei Qian
- Subjects
Optimization ,Computer and Information Sciences ,Computer science ,Interface (computing) ,Operating Systems ,Image Processing ,lcsh:Medicine ,02 engineering and technology ,Parallel computing ,Thread (computing) ,Research and Analysis Methods ,01 natural sciences ,Bottleneck ,Machine Learning ,Machine Learning Algorithms ,Artificial Intelligence ,0103 physical sciences ,Differential Equations ,0202 electrical engineering, electronic engineering, information engineering ,Computer Networks ,lcsh:Science ,010302 applied physics ,Input/output ,Multidisciplinary ,Computing Systems ,Computers ,Applied Mathematics ,Simulation and Modeling ,lcsh:R ,Partial Differential Equations ,Physical Sciences ,Signal Processing ,Parallelism (grammar) ,Bandwidth (Computing) ,Engineering and Technology ,020201 artificial intelligence & image processing ,lcsh:Q ,Mathematics ,Algorithms ,Software ,Research Article - Abstract
With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.
- Published
- 2017
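The adaption loop the abstract describes, finding a thread count that balances compute against I/O bandwidth, can be sketched as a measurement-driven search. This is an illustrative simplification of the idea, not the IOPA interface: `measure_throughput` and `adapt_parallelism` are hypothetical names, and a real mechanism would adapt online rather than benchmark candidates up front.

```python
import concurrent.futures
import time

def measure_throughput(task, n_threads, n_items=64):
    """Items completed per second when `task` runs under `n_threads` workers."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as ex:
        list(ex.map(task, range(n_items)))
    return n_items / (time.perf_counter() - start)

def adapt_parallelism(task, candidates=(1, 2, 4, 8, 16)):
    """Pick the thread count with the best measured throughput for this workload."""
    return max(candidates, key=lambda n: measure_throughput(task, n))
```

For an I/O-bound task (threads mostly waiting on reads/writes) the best candidate is high; for a CPU-bound task under the GIL, or once the disk saturates, extra threads stop paying off and the measured optimum drops, which is exactly the trade-off IOPA automates.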
38. Feature selection using mutual information based uncertainty measures for tumor classification
- Author
-
Lin Sun and Jiucheng Xu
- Subjects
Cancer classification ,Support Vector Machine ,Databases, Factual ,Biomedical Engineering ,Feature selection ,Machine learning ,computer.software_genre ,Biomaterials ,Predictive Value of Tests ,Neoplasms ,Humans ,Entropy (information theory) ,Greedy algorithm ,Time complexity ,Mathematics ,Computers ,business.industry ,Gene Expression Profiling ,Conditional mutual information ,Uncertainty ,Computational Biology ,Reproducibility of Results ,General Medicine ,Mutual information ,Programming Languages ,Rough set ,Artificial intelligence ,Data mining ,business ,computer ,Algorithms - Abstract
Feature selection is a key problem in tumor classification and related tasks. This paper presents a tumor classification approach with neighborhood rough set-based feature selection. First, some uncertainty measures, such as neighborhood entropy, conditional neighborhood entropy, neighborhood mutual information, and neighborhood conditional mutual information, are introduced to evaluate the relevance between genes and the related decision in a neighborhood rough set. Then some important properties and propositions of these measures are investigated, and the relationships among them are established as well. By using improved minimal-redundancy-maximal-relevance combined with a sequential forward greedy search strategy, a novel feature selection algorithm with low time complexity is proposed. Finally, several cancer classification tasks are demonstrated using the proposed approach. Experimental results show that the proposed algorithm is efficient and effective.
- Published
- 2014
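The minimal-redundancy-maximal-relevance idea underlying the paper's algorithm can be illustrated with plain (non-neighborhood) mutual information on discrete features. This is a sketch of the classic greedy mRMR criterion, not the paper's neighborhood rough set variant; `mutual_info` and `mrmr` are hypothetical names.

```python
import numpy as np

def mutual_info(a, b):
    """Mutual information (in nats) between two discrete variables, from empirical counts."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

def mrmr(X, y, k):
    """Greedy selection: maximize relevance to y minus mean redundancy with chosen features."""
    n = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected
```

Each greedy step is linear in the number of remaining features, which is what keeps this family of algorithms cheap enough for gene expression data with thousands of genes.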
39. Development of a FPGA based fuzzy neural network system for early diagnosis of critical health condition of a patient
- Author
-
Hiranmay Saha and Shubhajit Roy Chowdhury
- Subjects
Computer science ,Critical Illness ,Interface (computing) ,Health Informatics ,Interval (mathematics) ,Machine learning ,computer.software_genre ,Sensitivity and Specificity ,Fuzzy logic ,Fuzzy Logic ,Humans ,False Positive Reactions ,Diagnosis, Computer-Assisted ,Medical diagnosis ,Field-programmable gate array ,False Negative Reactions ,Artificial neural network ,Computers ,business.industry ,Prognosis ,Computer Science Applications ,Early Diagnosis ,Function model ,Neural Networks, Computer ,State (computer science) ,Artificial intelligence ,business ,computer ,Algorithms - Abstract
The paper describes the design and training of a fuzzy neural network used for early diagnosis of a patient through an FPGA-based implementation of a smart instrument. The system employs a fuzzy interface cascaded with a feed-forward neural network. In order to obtain an optimum decision regarding the future pathophysiological state of a patient, the optimal weights of the synapses between the neurons have been determined using an inverse delayed function model of neurons. The neurons in the proposed network are devoid of self-connections, instead of the commonly used self-connected neurons. The current work also finds the optimal number of neurons in the hidden layer for accurate diagnosis, given the available number of CLBs in the FPGA. The system has been trained and tested with renal data of patients taken at 10-day intervals. Applying the methodology, the chance of a patient reaching a critical renal condition has been predicted with an accuracy of 95.2%, 30 days ahead of the critical condition actually being reached. The system has also been tested for pathophysiological state prediction multiple time steps ahead, and the prediction at the next instant of time proves to be the most accurate.
- Published
- 2010
40. Continuous robust sound event classification using time-frequency features and deep learning
- Author
-
Huy Phan, Ian McLoughlin, Yan Song, Zhipeng Xie, Wei Xiao, and Haomin Zhang
- Subjects
Computer science ,Speech recognition ,Markov models ,lcsh:Medicine ,Social Sciences ,Otology ,02 engineering and technology ,Field (computer science) ,Machine Learning ,Mathematical and Statistical Techniques ,Hearing ,Medicine and Health Sciences ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Psychology ,Hidden Markov models ,lcsh:Science ,Hidden Markov model ,Sound (geography) ,Multidisciplinary ,geography.geographical_feature_category ,Event (computing) ,Physics ,Audiology ,Sound ,Background Noise (Acoustics) ,Physical Sciences ,Engineering and Technology ,Sensory Perception ,0305 other medical science ,Algorithms ,Research Article ,Computer and Information Sciences ,Ambient noise level ,Research and Analysis Methods ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Humans ,geography ,Computers ,business.industry ,Deep learning ,lcsh:R ,Biology and Life Sciences ,Probability theory ,020206 networking & telecommunications ,Acoustics ,Models, Theoretical ,Convolution ,Otorhinolaryngology ,Speech Signal Processing ,Signal Processing ,lcsh:Q ,Artificial intelligence ,business ,Mathematical Functions ,Mathematics ,Neuroscience - Abstract
The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded, or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated-sound classifiers on continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, providing the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
- Published
- 2017
41. Driving Profile Modeling and Recognition Based on Soft Computing Approach
- Author
-
Abdul Wahab, Chai Quek, Kazuya Takeda, and Chin Keong Tan
- Subjects
Male ,Automobile Driving ,Biometry ,Computer Networks and Communications ,Computer science ,Feature extraction ,Normal Distribution ,Models, Psychological ,Machine learning ,computer.software_genre ,Fuzzy logic ,Fuzzy Logic ,Artificial Intelligence ,Surveys and Questionnaires ,Task Performance and Analysis ,Pressure ,Humans ,Soft computing ,Behavior ,Sex Characteristics ,Adaptive neuro fuzzy inference system ,Artificial neural network ,Computers ,business.industry ,System identification ,Recognition, Psychology ,General Medicine ,Fuzzy control system ,Mixture model ,Computer Science Applications ,Multilayer perceptron ,Female ,Neural Networks, Computer ,Artificial intelligence ,business ,Automobiles ,computer ,Algorithms ,Software - Abstract
Advancements in biometrics-based authentication have led to its increasing prominence, and it is being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted, and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with bus and truck drivers.
- Published
- 2009
42. Texture Segmentation by Genetic Programming
- Author
-
Andy Song and Vic Ciesielski
- Subjects
Time Factors ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-space segmentation ,Genetic programming ,Machine learning ,computer.software_genre ,Evolutionary computation ,Pattern Recognition, Automated ,Image texture ,Artificial Intelligence ,Image Processing, Computer-Assisted ,Computer Simulation ,Segmentation ,Vision, Ocular ,Models, Statistical ,Models, Genetic ,Contextual image classification ,Computers ,business.industry ,Computational Biology ,Pattern recognition ,Image segmentation ,Computational Mathematics ,Neural Networks, Computer ,Artificial intelligence ,business ,computer ,Algorithms ,Software - Abstract
This paper describes a texture segmentation method using genetic programming (GP), one of the most powerful evolutionary computation algorithms. By choosing an appropriate representation, texture classifiers can be evolved without computing texture features. Because time-consuming feature extraction is absent, the evolved classifiers enable the proposed texture segmentation algorithm to achieve a segmentation speed significantly higher than that of conventional methods. The method does not require a human expert to manually construct models for texture feature extraction. An analysis of the evolved classifiers shows that they are not arbitrary: certain textural regularities are captured by these classifiers to discriminate between different textures. This study shows GP to be a feasible and powerful approach to texture classification and segmentation, which are generally considered complex vision tasks.
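To make the idea concrete, here is a toy version of learning pixel-level texture classifiers without explicit feature extraction: two synthetic stripe textures, a primitive space of raw pixel-difference operators, and an exhaustive search standing in for the GP run. Every texture, operator, and name here is invented for illustration and is not the authors' method.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def patch(kind):
    # Two synthetic 3x3 textures: vertical ("v") or horizontal ("h") stripes.
    g = np.tile(np.array([0.0, 1.0, 0.0]), (3, 1))  # columns valued 0, 1, 0
    if kind == "h":
        g = g.T
    return g + rng.normal(scale=0.05, size=(3, 3))

train_v = [patch("v") for _ in range(20)]
train_h = [patch("h") for _ in range(20)]

# Primitive space: absolute differences between raw pixel pairs (no hand-built
# texture features). Exhaustive search stands in for the GP evolution loop.
ops = [lambda p, i=i, j=j: abs(p.ravel()[i] - p.ravel()[j])
       for i, j in itertools.combinations(range(9), 2)]

def margin(op):
    # Separation between the mean operator response on the two textures.
    mv = np.mean([op(p) for p in train_v])
    mh = np.mean([op(p) for p in train_h])
    return abs(mv - mh)

best = max(ops, key=margin)
mv = np.mean([best(p) for p in train_v])
mh = np.mean([best(p) for p in train_h])

def classify(p):
    # Assign the texture whose training response is nearer to the output.
    return "v" if abs(best(p) - mv) < abs(best(p) - mh) else "h"
```

Sliding such a classifier over every pixel neighborhood is what turns per-patch classification into segmentation.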
- Published
- 2008
43. SVS: Data and knowledge integration in computational biology
- Author
-
Annalisa Barla, Grzegorz Zycinski, and Alessandro Verri
- Subjects
Databases, Factual ,Microarray ,Computer science ,Knowledge engineering ,Feature selection ,Genomics ,Mass spectrometry ,computer.software_genre ,Machine learning ,Mass Spectrometry ,DNA sequencing ,Artificial Intelligence ,Knowledge integration ,Gene expression ,Data Mining ,Humans ,Oligonucleotide Array Sequence Analysis ,Models, Statistical ,Computers ,business.industry ,Microarray analysis techniques ,Gene Expression Profiling ,Computational Biology ,Parkinson Disease ,Modular design ,Gene expression profiling ,Schema (genetic algorithms) ,Programming Languages ,Database theory ,Artificial intelligence ,Data mining ,business ,computer ,Algorithms ,Software ,Data integration - Abstract
In this paper we present a framework for structured variable selection (SVS). The main idea of the proposed schema is to take a step towards integrating two different aspects of data mining: the database and the machine learning perspectives. The framework is flexible enough to work not only with microarray data but with other high-throughput data of choice (e.g. from mass spectrometry or next-generation sequencing). Moreover, the feature selection phase incorporates prior biological knowledge from various repositories in a modular way and is ready to host different statistical learning techniques. We present a proof of concept of SVS, illustrating some implementation details and describing current results on high-throughput microarray data.
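One way to picture the knowledge-integration step is to restrict statistical variable selection to features annotated in a prior-knowledge repository before ranking them. The sketch below is a minimal stand-in for that idea; the gene names, pathway annotation, and expression values are all invented.

```python
import statistics

# Hypothetical expression values per gene across four samples (synthetic).
expression = {
    "TP53":  [2.1, 2.0, 5.9, 6.2],
    "BRCA1": [1.0, 1.1, 1.0, 0.9],
    "ACTB":  [3.0, 3.1, 2.9, 3.0],
}
labels = [0, 0, 1, 1]

# Prior knowledge: only genes annotated to the pathway are candidates.
pathways = {"dna_damage": {"TP53", "BRCA1"}}

def select(pathway, k=1):
    # Rank candidate genes by absolute class-mean difference (a simple filter
    # standing in for whatever statistical learner the framework hosts).
    def class_margin(gene):
        values = expression[gene]
        m0 = statistics.mean(v for v, y in zip(values, labels) if y == 0)
        m1 = statistics.mean(v for v, y in zip(values, labels) if y == 1)
        return abs(m1 - m0)
    candidates = pathways[pathway] & expression.keys()
    return sorted(candidates, key=class_margin, reverse=True)[:k]
```

Note that `ACTB` is never considered, however it scores, because the prior-knowledge filter excludes it; that is the "modular knowledge integration" in miniature.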
- Published
- 2011
44. Design and on-line evaluation of type-2 fuzzy logic system-based framework for handling uncertainties in BCI classification
- Author
-
TM McGinnity, Girijesh Prasad, and Pawel Herman
- Subjects
Computer science ,Machine learning ,computer.software_genre ,Fuzzy logic ,Feedback ,Pattern Recognition, Automated ,User-Computer Interface ,Software ,Motor imagery ,Fuzzy Logic ,Robustness (computer science) ,Computer Systems ,Humans ,Brain–computer interface ,Signal processing ,business.industry ,Computers ,Stroke Rehabilitation ,Brain ,Reproducibility of Results ,Electroencephalography ,Signal Processing, Computer-Assisted ,Equipment Design ,Support vector machine ,Artificial intelligence ,business ,computer ,Classifier (UML) ,Algorithms - Abstract
The practical applicability of brain-computer interface (BCI) technology is limited by its insufficient reliability and robustness. One of the major problems in this regard is the extensive variability and inconsistency of brain signal patterns, observed especially in the electroencephalogram (EEG). This paper presents a fuzzy logic (FL) approach to handling the resultant uncertainty effects. In particular, it outlines the design of a novel type-2 FL system (T2FLS) classifier within the framework of an EEG-based BCI and examines its on-line applicability in the presence of short- and long-term nonstationarities of spectral EEG correlates of motor imagery (imagination of left vs. right hand movement). The developed system is shown to cope effectively with real-time constraints. In addition, a comparative post hoc analysis reveals that the proposed T2FLS classifier outperforms conventional BCI methods, such as LDA and SVM, in terms of maximum classification accuracy (CA) rates by a relatively small, yet statistically significant, margin.
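A minimal interval type-2 flavor of the idea: each class gets a Gaussian membership function with an uncertain width, so an input produces a membership interval rather than a single degree, and a crude midpoint type reduction replaces full Karnik-Mendel processing. The rule parameters and the single band-power feature below are invented for illustration, not taken from the paper's design.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def it2_firing(x, mu, sigma_lo, sigma_hi):
    # Interval type-2 Gaussian MF with uncertain width: the two widths bound
    # the footprint of uncertainty, giving a [lower, upper] membership pair.
    return gauss(x, mu, sigma_lo), gauss(x, mu, sigma_hi)

# One rule per motor-imagery class over a single (synthetic) band-power feature.
rules = {"left": 0.2, "right": 0.8}

def classify(band_power):
    strengths = {}
    for label, mu in rules.items():
        lo, hi = it2_firing(band_power, mu, 0.05, 0.15)
        strengths[label] = (lo + hi) / 2.0  # crude midpoint type reduction
    return max(strengths, key=strengths.get)
```

The interval is what gives type-2 systems their slack against nonstationary inputs: a drifting feature value degrades the firing interval gradually instead of collapsing a crisp membership.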
- Published
- 2009
45. Automated design of image operators that detect interest points
- Author
-
Leonardo Trujillo and Gustavo Olague
- Subjects
Feature extraction ,Genetic programming ,Image processing ,Machine learning ,computer.software_genre ,Pattern Recognition, Automated ,Automation ,Operator (computer programming) ,Artificial Intelligence ,Image Processing, Computer-Assisted ,Humans ,Computer Simulation ,Vision, Ocular ,Feature detection (computer vision) ,Mathematics ,Electronic Data Processing ,Models, Statistical ,business.industry ,Computers ,Search engine indexing ,Computational Biology ,Interest point detection ,Computational Mathematics ,Feature (computer vision) ,Artificial intelligence ,business ,computer ,Algorithms - Abstract
This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved man-made designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered unconventional operators for point detection, these results provide a new perspective on feature extraction research.
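For flavor, here is a hand-composed interest operator built from the kind of primitives (derivatives, local smoothing, arithmetic) that such a GP search combines. It is a standard Harris-style corner response evaluated on a synthetic image, not one of the evolved operators from the paper.

```python
import numpy as np

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0        # white square: its corners are the interest points

gy, gx = np.gradient(img)    # primitive derivative operators

def box3(a):
    # Primitive local smoothing: 3x3 box filter via zero padding.
    p = np.pad(a, 1)
    return sum(p[i:i + 32, j:j + 32] for i in range(3) for j in range(3)) / 9.0

# Structure-tensor entries, then a Harris-style corner response:
# det(M) - k * trace(M)^2, high only where gradients vary in two directions.
ixx, iyy, ixy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
response = ixx * iyy - ixy ** 2 - 0.05 * (ixx + iyy) ** 2
y, x = np.unravel_index(np.argmax(response), response.shape)
```

A GP search over these same primitives explores compositions like this one automatically, scoring candidates by repeatability rather than by a hand-derived corner criterion.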
- Published
- 2008
46. Knowledge and intelligent computing system in medicine
- Author
-
Babita Pandey and Ravi Bhushan Mishra
- Subjects
Medical Records Systems, Computerized ,Computer science ,Logic ,Medical Informatics Computing ,Health Informatics ,Context (language use) ,computer.software_genre ,Machine learning ,Fuzzy logic ,Domain (software engineering) ,Software ,Fuzzy Logic ,Artificial Intelligence ,Genetic algorithm ,Humans ,Medical diagnosis ,Problem Solving ,Artificial neural network ,business.industry ,Computers ,Reproducibility of Results ,Expert system ,Computer Science Applications ,Artificial intelligence ,Neural Networks, Computer ,business ,computer ,Algorithms - Abstract
Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis, and treatment. KBS comprise rule-based reasoning (RBR), case-based reasoning (CBR), and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass the genetic algorithm (GA), artificial neural network (ANN), fuzzy logic (FL), and others. Combinations of methods within KBS include CBR-RBR, CBR-MBR, and RBR-CBR-MBR; combinations within ICM include ANN-GA, fuzzy-ANN, fuzzy-GA, and fuzzy-ANN-GA. Combinations spanning KBS and ICM include RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR, and fuzzy-CBR-ANN. In this paper, we study the different singular and combined methods (185 in number) applied to the medical domain from the mid-1970s to 2008. The study is presented in tabular form, showing each method with its salient features, processes, and application areas in the medical domain (diagnosis, treatment, and planning). It is observed that most of the methods are used in medical diagnosis, very few for planning, and a moderate number in treatment. The study and its presentation in this context should be helpful for novice researchers in the area of medical expert systems.
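The simplest of the surveyed hybrids, RBR-CBR, can be pictured as: apply explicit diagnostic rules first, and fall back to the nearest stored case when no rule fires. The rules, cases, and thresholds below are invented toy values, not clinical knowledge.

```python
# Rule base: (condition, diagnosis) pairs, checked in priority order.
rules = [
    (lambda s: s["temp"] > 39.0 and s["cough"], "pneumonia_suspected"),
    (lambda s: s["temp"] > 38.0, "fever"),
]

# Case base: previously solved (symptoms, diagnosis) pairs for CBR fallback.
case_base = [
    ({"temp": 36.8, "cough": True}, "bronchitis"),
    ({"temp": 36.6, "cough": False}, "healthy"),
]

def diagnose(symptoms):
    for condition, diagnosis in rules:      # RBR pass
        if condition(symptoms):
            return diagnosis
    def distance(case):                     # CBR fallback: nearest case
        c = case[0]
        return abs(c["temp"] - symptoms["temp"]) + (c["cough"] != symptoms["cough"])
    return min(case_base, key=distance)[1]
```

The richer combinations in the survey follow the same pattern, with an ANN, GA, or fuzzy stage replacing or refining one of these two steps.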
- Published
- 2008
47. ESVM: evolutionary support vector machine for automatic feature selection and classification of microarray data
- Author
-
Fang-Lin Chang and Hui-Ling Huang
- Subjects
Statistics and Probability ,Optimal design ,Computer science ,Feature selection ,Machine learning ,computer.software_genre ,General Biochemistry, Genetics and Molecular Biology ,Chromosomes ,Extractor ,Evolution, Molecular ,Automation ,Animals ,Control parameters ,Oligonucleotide Array Sequence Analysis ,Models, Statistical ,Models, Genetic ,business.industry ,Computers ,Applied Mathematics ,Small number ,Systems Biology ,Estimator ,Computational Biology ,Pattern recognition ,General Medicine ,Models, Theoretical ,Biological Evolution ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Gene Expression Regulation ,Modeling and Simulation ,Artificial intelligence ,business ,Classifier (UML) ,computer ,Algorithms ,Software - Abstract
An optimal design of support vector machine (SVM)-based classifiers for prediction aims to optimize the combination of feature selection, SVM parameter setting, and cross-validation methods. However, SVMs do not offer a mechanism for automatic internal detection of relevant features, and the appropriate setting of their control parameters is often treated as a separate, independent problem. This paper proposes an evolutionary approach to designing an SVM-based classifier (named ESVM) by simultaneous optimization of automatic feature selection and parameter tuning using an intelligent genetic algorithm, combined with k-fold cross-validation regarded as an estimator of generalization ability. To illustrate and evaluate the efficiency of ESVM, a typical application to microarray classification using 11 multi-class datasets is adopted. To account for model uncertainty, a frequency-based technique that votes on multiple sets of potentially informative features is used to identify the most effective subset of genes. ESVM obtains a high accuracy of 96.88% with a small average of 10.0 selected genes using 10-fold cross-validation across the 11 datasets. The merits of ESVM are three-fold: (1) the automatic feature selection and parameter setting embedded in ESVM can improve prediction ability compared to traditional SVMs; (2) ESVM can serve not only as an accurate classifier but also as an adaptive feature extractor; (3) ESVM is developed as an efficient tool so that various SVMs can be used conveniently as its core for bioinformatics problems.
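A drastically simplified sketch of the wrapper idea: a genetic algorithm evolves bit-mask chromosomes over genes, with k-fold cross-validation accuracy (minus a size penalty) as fitness. A nearest-centroid classifier stands in for the SVM core, truncation selection for the intelligent GA, and the "microarray" is synthetic, so this illustrates only the loop, not ESVM itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "microarray": 60 samples, 20 genes; only genes 0 and 1 informative.
y = rng.integers(0, 2, size=60)
X = rng.normal(size=(60, 20))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y

def cv_accuracy(mask, k=5):
    # k-fold CV of a nearest-centroid classifier on the selected genes only.
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    idx = np.arange(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        c0 = X[np.ix_(tr[y[tr] == 0], cols)].mean(axis=0)
        c1 = X[np.ix_(tr[y[tr] == 1], cols)].mean(axis=0)
        d0 = np.linalg.norm(X[np.ix_(fold, cols)] - c0, axis=1)
        d1 = np.linalg.norm(X[np.ix_(fold, cols)] - c1, axis=1)
        correct += np.sum((d1 < d0) == y[fold])
    return correct / len(y)

def fitness(chrom):
    # Reward CV accuracy, lightly penalize the number of selected genes.
    return cv_accuracy(chrom) - 0.01 * chrom.sum()

# Tiny GA: bit-mask chromosomes, truncation selection, bit-flip mutation.
pop = rng.integers(0, 2, size=(30, X.shape[1]))
for _ in range(40):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = parents[rng.integers(0, len(parents), size=30)]
    flip = rng.random(children.shape) < 0.05
    pop = np.where(flip, 1 - children, children)
best = pop[np.argmax([fitness(c) for c in pop])]
```

ESVM additionally encodes the SVM control parameters on the same chromosome, so selection pressure tunes features and hyperparameters together rather than in two separate stages.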
- Published
- 2006
48. CL.E.KMODES: A Modified k-Modes Clustering Algorithm
- Author
-
Mastrogiannis, N., Giannikos, I., Boutsinas, B., and Antzoulatos, G.
- Published
- 2009