62 results on '"Mohamed Hamed N. Taha"'
Search Results
2. DeepDate: A deep fusion model based on whale optimization and artificial neural network for Arabian date classification
- Author
-
Nour Eldeen Mahmoud Khalifa, Jiaji Wang, Mohamed Hamed N. Taha, and Yudong Zhang
- Subjects
Medicine, Science - Published
- 2024
3. A COVID-19 Infection Prediction Model in Egypt Based on Deep Learning Using Population Mobility Reports
- Author
-
Nour Eldeen Khalifa, Ahmed A. Mawgoud, Amr Abu-Talleb, Mohamed Hamed N. Taha, and Yu-Dong Zhang
- Subjects
Deep learning, COVID-19, Prediction model, Regression model, Egypt, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Abstract The rapidly spreading COVID-19 disease has infected more than 190 countries. As a result, nations everywhere monitored confirmed cases of infection, recoveries, and fatalities and made predictions about what the future would hold. In the event of a pandemic, governments set rules to limit the spread of the virus and save lives. Multiple computational methods exist for forecasting epidemic time series, and deep learning is one of the most promising among them. In this research, we propose a model for predicting the spread of COVID-19 in Egypt based on deep learning sequence-to-sequence regression, which makes use of population mobility reports. The presented model utilizes a new combined dataset from two sources: Google population mobility reports and the number of infected cases reported daily on the “world in data” website. The suggested model can predict new cases of COVID-19 infection within 3–7 days with the least amount of prediction error, achieving 96.69% accuracy for 3-day prediction. This study is noteworthy since it is one of the first attempts to estimate the daily influx of new COVID-19 infections using population mobility data instead of daily infection rates.
- Published
- 2023
- Full Text
- View/download PDF
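The sequence-to-sequence setup described in the abstract above maps a window of past daily values to the next 3–7 days. A minimal sketch of the sliding-window dataset construction behind such a model (the window and horizon sizes here are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Build (input, target) pairs: `lookback` past days -> next `horizon` days."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X), np.array(y)

# 30 days of synthetic daily case counts (stand-in for the real combined dataset)
cases = np.arange(30, dtype=float)
X, y = make_windows(cases, lookback=14, horizon=3)
print(X.shape, y.shape)  # (14, 14) (14, 3)
```

In the paper's setting, each input row would also carry the mobility-report features alongside the case counts, and the pairs would feed a sequence-to-sequence regression network.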
4. A deep learning based steganography integration framework for ad-hoc cloud computing data security augmentation using the V-BOINC system
- Author
-
Ahmed A. Mawgoud, Mohamed Hamed N. Taha, Amr Abu-Talleb, and Amira Kotb
- Subjects
Ad-hoc system, Cloud computing, Steganography, Cloud security, Deep learning, Encryption, Computer engineering. Computer hardware, TK7885-7895, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Abstract In the early days of digital transformation, the automation, scalability, and availability of cloud computing made a big difference for business. Nonetheless, significant concerns have been raised regarding the security and privacy levels that cloud systems can provide, as enterprises have accelerated their cloud migration journeys in an effort to provide a remote working environment for their employees, primarily in light of the COVID-19 outbreak. The goal of this study is to improve steganography in ad hoc cloud systems using deep learning. The implementation is separated into two phases. In Phase 1, the “Ad-hoc Cloud System” concept and deployment plan were set up with the help of V-BOINC. In Phase 2, a modified form of steganography combined with deep learning was used to study the security of data transmission in ad-hoc cloud networks. In the majority of prior studies, attempts to employ deep learning models to augment or replace data-hiding systems did not achieve a high success rate. The implemented model embeds data images within colored cover images in the developed ad hoc cloud system. A systematic steganography model conceals messages with lower statistical detection rates, and it can embed small images beneath large cover images. The implemented ad-hoc system outperformed Amazon EC2 in terms of performance, while the proposed deep steganography approach achieved a high evaluation rate for concealing both data and images when evaluated against several attacks in an ad-hoc cloud system environment.
- Published
- 2022
- Full Text
- View/download PDF
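The deep steganography model above replaces classical data-hiding schemes. For context, a minimal sketch of the least-significant-bit (LSB) baseline that such deep models aim to improve on (this is not the paper's method, only the classical technique it is measured against):

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Hide a bit stream in the least-significant bit of each pixel."""
    flat = cover.flatten()  # flatten() returns a copy, so cover is untouched
    n = len(secret_bits)
    flat[:n] = (flat[:n] & 0xFE) | secret_bits  # clear LSB, then set it
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least-significant bits."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
stego = lsb_embed(cover, bits)
recovered = lsb_extract(stego, 8)  # recovers the embedded bits exactly
```

Each pixel changes by at most 1 intensity level, which is what makes LSB hiding visually invisible yet statistically detectable; deep steganography targets the latter weakness.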
5. Within the Protection of COVID-19 Spreading: A Face Mask Detection Model Based on the Neutrosophic RGB with Deep Transfer Learning
- Author
-
Nour Eldeen Khalifa, Mohamed Loey, Ripon K. Chakrabortty, and Mohamed Hamed N. Taha
- Subjects
neutrosophic rgb, covid-19, classical machine learning, deep learning, face mask detection, Mathematics, QA1-939, Electronic computers. Computer science, QA75.5-76.95 - Abstract
COVID-19's fast spread in 2020 compelled the World Health Organization (WHO) to declare COVID-19 a worldwide pandemic. According to the WHO, one of the preventative countermeasures against this type of virus is to use face masks in public places. This paper proposes a face mask detection model that extracts features based on the neutrosophic RGB domain with deep transfer learning. The suggested model is divided into three steps. The first step is the conversion to the neutrosophic RGB domain; this work is considered one of the first trials of applying neutrosophic RGB conversion to the image domain, as the conversion was previously used mainly for grayscale images. The second step is feature extraction using AlexNet, which has a small number of layers. The detection model is created in the third step using two traditional machine learning algorithms: the decision tree classifier and the Support Vector Machine (SVM). The Simulated Masked Face dataset (SMF) and the Real-World Mask Face dataset (RMF) are merged into a single dataset with two categories (a face with a mask, and a face without a mask). According to the experimental results, the SVM classifier with the True (T) neutrosophic domain achieved the highest testing accuracy with 98.37%.
- Published
- 2022
- Full Text
- View/download PDF
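The first step of the model above maps each channel to the neutrosophic (T, I, F) domain. A sketch of one common formulation from the neutrosophic image literature (the paper's exact per-channel mapping is not given in the abstract, so the formulas below are an assumption for illustration):

```python
import numpy as np

def neutrosophic(channel):
    """Map one image channel to (T, I, F) neutrosophic components:
    T (truth), I (indeterminacy), F (falsity), each scaled to [0, 1]."""
    g = channel.astype(float)
    T = (g - g.min()) / (g.max() - g.min() + 1e-9)
    delta = np.abs(g - g.mean())  # deviation from the channel mean
    I = 1 - (delta - delta.min()) / (delta.max() - delta.min() + 1e-9)
    F = 1 - T
    return T, I, F

rgb = np.random.randint(0, 256, size=(4, 4, 3))
T, I, F = neutrosophic(rgb[..., 0])  # applied per channel for an RGB image
```

Classification then proceeds on one of the three component images; per the abstract, the True (T) component gave the best SVM accuracy.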
6. Artificial Intelligence Technique for Gene Expression by Tumor RNA-Seq Data: A Novel Optimized Deep Learning Approach
- Author
-
Nour Eldeen M. Khalifa, Mohamed Hamed N. Taha, Dalia Ezzat Ali, Adam Slowik, and Aboul Ella Hassanien
- Subjects
Cancer, RNA sequence, deep convolutional neural network, gene expression data, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Cancer is one of the most feared and aggressive diseases in the world and is responsible for more than 9 million deaths globally. Staging cancer early increases the chances of recovery. One staging technique is RNA sequence analysis. Recent advances in the efficiency and accuracy of artificial intelligence techniques and optimization algorithms have facilitated the analysis of human genomics. This paper introduces a novel optimized deep learning approach based on binary particle swarm optimization with decision tree (BPSO-DT) and a convolutional neural network (CNN) to classify different types of cancer from tumor RNA sequence (RNA-Seq) gene expression data. The cancer types investigated in this research are kidney renal clear cell carcinoma (KIRC), breast invasive carcinoma (BRCA), lung squamous cell carcinoma (LUSC), lung adenocarcinoma (LUAD), and uterine corpus endometrial carcinoma (UCEC). The proposed approach consists of three phases. The first phase is preprocessing, which first reduces the high-dimensional RNA-Seq data to an optimal feature subset using BPSO-DT and then converts the optimized RNA-Seq to 2D images. The second phase is augmentation, which enlarges the original dataset of 2,086 samples to 5 times its size; the augmentation techniques were selected to have the least impact on the features of the images, and this phase helps to overcome the overfitting problem and trains the model to achieve better accuracy. The third phase is the deep CNN architecture, in which two main convolutional layers for feature extraction and two fully connected layers for classification are introduced to classify the 5 types of cancer. The results and performance metrics such as recall, precision, and F1 score show that the proposed approach achieved an overall testing accuracy of 96.90%. Comparative results are introduced, and the proposed method outperforms related works in terms of testing accuracy for the 5 classes of cancer. Moreover, the proposed approach is less complex and consumes less memory.
- Published
- 2020
- Full Text
- View/download PDF
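The preprocessing phase above converts a selected RNA-Seq feature vector into a 2D image for the CNN. A minimal sketch of that conversion (zero-pad to a square and rescale to pixel intensities; the vector length and image side here are hypothetical, not the paper's values):

```python
import numpy as np

def features_to_image(vec, side):
    """Zero-pad a 1-D feature vector and reshape it into a side x side 8-bit image."""
    padded = np.zeros(side * side)
    padded[:len(vec)] = vec
    # min-max scale to 8-bit pixel intensities
    scaled = 255 * (padded - padded.min()) / (np.ptp(padded) + 1e-9)
    return scaled.reshape(side, side).astype(np.uint8)

expr = np.random.rand(600)           # e.g. 600 BPSO-DT-selected gene features
img = features_to_image(expr, 25)    # 25 * 25 = 625 >= 600
print(img.shape)  # (25, 25)
```

The resulting images can then be fed to a standard 2D convolutional architecture, and image-space augmentation (phase two) becomes applicable to tabular gene expression data.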
7. Breast and Colon Cancer Classification from Gene Expression Profiles Using Data Mining Techniques
- Author
-
Mohammed Loey, Mohammed Wajeeh Jasim, Hazem M. EL-Bakry, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Subjects
machine learning, cancer diagnosis, grey wolf optimization algorithm, support vector machine, information gain, feature selection, Mathematics, QA1-939 - Abstract
Early detection of cancer increases the probability of recovery. This paper presents an intelligent decision support system (IDSS) for the early diagnosis of cancer based on gene expression profiles collected using DNA microarrays. Such datasets pose a challenge because of the small number of samples (no more than a few hundred) relative to the large number of genes (in the order of thousands). Therefore, a method of reducing the number of features (genes) that are not relevant to the disease of interest is necessary to avoid overfitting. The proposed methodology uses the information gain (IG) to select the most important features from the input patterns. Then, the selected features (genes) are reduced by applying the grey wolf optimization (GWO) algorithm. Finally, the methodology employs a support vector machine (SVM) classifier for cancer type classification. The proposed methodology was applied to two datasets (Breast and Colon) and was evaluated based on its classification accuracy, which is the most important performance measure in disease diagnosis. The experimental results indicate that the proposed methodology is able to enhance the stability of the classification accuracy as well as the feature selection.
- Published
- 2020
- Full Text
- View/download PDF
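The first stage of the methodology above ranks genes by information gain (IG) before GWO reduces them further. A self-contained sketch of IG for a discrete feature (the toy labels and features below are illustrative, not from the paper's microarray data):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """IG(Y; X) = H(Y) - H(Y|X) for a discrete feature x and labels y."""
    h = entropy(y)
    for v in np.unique(x):
        mask = x == v
        h -= mask.mean() * entropy(y[mask])
    return h

y = np.array([0, 0, 1, 1])         # tumor vs. normal labels
x_good = np.array([0, 0, 1, 1])    # perfectly predictive gene
x_bad = np.array([0, 1, 0, 1])     # uninformative gene
print(information_gain(x_good, y), information_gain(x_bad, y))  # 1.0 0.0
```

Genes with near-zero IG are discarded first, shrinking the thousands-of-genes search space before the GWO wrapper and SVM classifier run.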
8. Optimization of Task Scheduling in Cloud Computing Using the RAO-3 Algorithm.
- Author
-
Ahmed Rabie Fayed, Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, and Amira Kotb
- Published
- 2023
- Full Text
- View/download PDF
9. COVID-19 Chest X-rays Classification Through the Fusion of Deep Transfer Learning and Machine Learning Methods.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Ripon K. Chakrabortty, and Mohamed Loey
- Published
- 2022
- Full Text
- View/download PDF
10. Detection of Coronavirus (COVID-19) Associated Pneumonia Based on Generative Adversarial Networks and a Fine-Tuned Deep Transfer Learning Model Using Chest X-ray Dataset.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Aboul Ella Hassanien, and Sally M. Elghamrawy
- Published
- 2022
- Full Text
- View/download PDF
11. Statistical Insights and Association Mining for Terrorist Attacks in Egypt.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Sarah Hamed N. Taha, and Aboul Ella Hassanien
- Published
- 2019
- Full Text
- View/download PDF
12. An Optimized Deep Convolutional Neural Network to Identify Nanoscience Scanning Electron Microscope Images Using Social Ski Driver Algorithm.
- Author
-
Dalia Ezzat, Mohamed Hamed N. Taha, and Aboul Ella Hassanien
- Published
- 2019
- Full Text
- View/download PDF
13. Cyber Security Risks in MENA Region: Threats, Challenges and Countermeasures.
- Author
-
Ahmed A. Mawgoud, Mohamed Hamed N. Taha, Nour Eldeen Mahmoud Khalifa, and Mohamed Loey
- Published
- 2019
- Full Text
- View/download PDF
14. Towards Objective-Dependent Performance Analysis on Online Sentiment Review.
- Author
-
Doaa Mohey El-Din, Mohamed Hamed N. Taha, and Nour Eldeen Mahmoud Khalifa
- Published
- 2019
- Full Text
- View/download PDF
15. A deep learning semantic segmentation architecture for COVID-19 lesions discovery in limited chest CT datasets.
- Author
-
Nour Eldeen Mahmoud Khalifa, Gunasekaran Manogaran, Mohamed Hamed N. Taha, and Mohamed Loey
- Published
- 2022
- Full Text
- View/download PDF
16. Automatic Counting and Visual Multi-tracking System for Human Sperm in Microscopic Video Frames.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, and Aboul Ella Hassanien
- Published
- 2018
- Full Text
- View/download PDF
17. Aquarium Family Fish Species Identification System Using Deep Neural Networks.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, and Aboul Ella Hassanien
- Published
- 2018
- Full Text
- View/download PDF
18. Deep bacteria: robust deep learning data augmentation design for limited bacterial colony dataset.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Aboul Ella Hassanien, and Ahmed Abdelmonem Hemedan
- Published
- 2019
- Full Text
- View/download PDF
19. Detection of Coronavirus (COVID-19) Associated Pneumonia based on Generative Adversarial Networks and a Fine-Tuned Deep Transfer Learning Model using Chest X-ray Dataset.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Aboul Ella Hassanien, and Sally M. Elghamrawy
- Published
- 2020
20. QoS Provision for Controlling Energy Consumption in Ad-hoc Wireless Sensor Networks.
- Author
-
Ahmed A. Mawgoud, Mohamed Hamed N. Taha, and Nour Eldeen Mahmoud Khalifa
- Published
- 2020
21. Breast and Colon Cancer Classification from Gene Expression Profiles Using Data Mining Techniques.
- Author
-
Mohamed Loey Ramadan AbdElNabi, Mohammed Wajeeh Jasim, Hazem M. El-Bakry, Mohamed Hamed N. Taha, and Nour Eldeen Mahmoud Khalifa
- Published
- 2020
- Full Text
- View/download PDF
22. Cyber-Physical Systems for Industrial Transformation
- Author
-
Gunasekaran Manogaran, Nour Eldeen Mahmoud Khalifa, Mohamed Loey, and Mohamed Hamed N. Taha
- Published
- 2023
- Full Text
- View/download PDF
23. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks.
- Author
-
Nour Eldeen Mahmoud Khalifa, Mohamed Hamed N. Taha, Aboul Ella Hassanien, and I. M. Selim
- Published
- 2017
24. Artificial Intelligence Technique for Gene Expression by Tumor RNA-Seq Data: A Novel Optimized Deep Learning Approach
- Author
-
Aboul Ella Hassanien, Nour Eldeen M. Khalifa, Adam Slowik, Dalia Ezzat Ali, and Mohamed Hamed N. Taha
- Subjects
General Computer Science, Computer science, Decision tree, RNA sequence, Overfitting, Convolutional neural network, deep convolutional neural network, gene expression data, Carcinoma, General Materials Science, Cancer staging, Cancer, Renal clear cell carcinoma, Kidney, Invasive carcinoma, Deep learning, General Engineering, Adenocarcinoma, Artificial intelligence, F1 score, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Cancer is one of the most feared and aggressive diseases in the world and is responsible for more than 9 million deaths globally. Staging cancer early increases the chances of recovery. One staging technique is RNA sequence analysis. Recent advances in the efficiency and accuracy of artificial intelligence techniques and optimization algorithms have facilitated the analysis of human genomics. This paper introduces a novel optimized deep learning approach based on binary particle swarm optimization with decision tree (BPSO-DT) and a convolutional neural network (CNN) to classify different types of cancer from tumor RNA sequence (RNA-Seq) gene expression data. The cancer types investigated in this research are kidney renal clear cell carcinoma (KIRC), breast invasive carcinoma (BRCA), lung squamous cell carcinoma (LUSC), lung adenocarcinoma (LUAD), and uterine corpus endometrial carcinoma (UCEC). The proposed approach consists of three phases. The first phase is preprocessing, which first reduces the high-dimensional RNA-Seq data to an optimal feature subset using BPSO-DT and then converts the optimized RNA-Seq to 2D images. The second phase is augmentation, which enlarges the original dataset of 2,086 samples to 5 times its size; the augmentation techniques were selected to have the least impact on the features of the images, and this phase helps to overcome the overfitting problem and trains the model to achieve better accuracy. The third phase is the deep CNN architecture, in which two main convolutional layers for feature extraction and two fully connected layers for classification are introduced to classify the 5 types of cancer. The results and performance metrics such as recall, precision, and F1 score show that the proposed approach achieved an overall testing accuracy of 96.90%. Comparative results are introduced, and the proposed method outperforms related works in terms of testing accuracy for the 5 classes of cancer. Moreover, the proposed approach is less complex and consumes less memory.
- Published
- 2020
25. Blockchain Technology and Machine Learning for Fake News Detection
- Author
-
Mohamed Loey, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Published
- 2022
- Full Text
- View/download PDF
26. Steganography Adaptation Model for Data Security Enhancement in Ad-Hoc Cloud Based V-BOINC Through Deep Learning
- Author
-
Ahmed A. Mawgoud, Mohamed Hamed N. Taha, and Amira Kotb
- Published
- 2022
- Full Text
- View/download PDF
27. Retraction Note: A deep learning model and machine learning methods for the classification of potential coronavirus treatments on a single human cell
- Author
-
Mohamed Loey, Nour Eldeen M. Khalifa, Gunasekaran Manogaran, and Mohamed Hamed N. Taha
- Subjects
Image domain, Materials science, Treatment classification, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Deep learning, Pooling, Decision tree, Bioengineering, General Chemistry, Human cell, Condensed Matter Physics, Machine learning, Atomic and Molecular Physics, and Optics, Support vector machine, Modeling and Simulation, General Materials Science, Artificial intelligence - Abstract
The coronavirus pandemic is burdening healthcare systems around the world to full capacity. There is an overwhelming need to find a treatment for this virus as early as possible. Computer algorithms and deep learning can contribute positively by finding a potential treatment for SARS-CoV-2. In this paper, a deep learning model and machine learning methods for the classification of potential coronavirus treatments on a single human cell are presented. The dataset selected in this work is a subset of the publicly available online datasets on RxRx.ai. The objective of this research is to automatically classify a single human cell according to the treatment type and the treatment concentration level. A DCNN model and a methodology are proposed throughout this work. The central idea is to convert the numerical features from the original dataset to the image domain and then feed them into a DCNN model. The proposed DCNN model consists of three convolutional layers, three ReLU layers, three pooling layers, and two fully connected layers. The experimental results show that the proposed DCNN model for treatment classification (32 classes) achieved 98.05% testing accuracy, compared with classical machine learning methods such as support vector machines, decision trees, and ensembles. In treatment concentration level prediction, the classical machine learning (ensemble) algorithm achieved 98.5% testing accuracy, while the proposed DCNN model achieved 98.2%. The performance metrics strengthen the results obtained from the conducted experiments for both treatment classification and treatment concentration level prediction.
- Published
- 2021
- Full Text
- View/download PDF
28. A Proposed Load Balancing Algorithm Over Cloud Computing (Balanced Throttled)
- Author
-
Hesham N. Elmahdy, Shereen Yousef Mohamed, Mohamed Hamed N. Taha, Hany Harb, and Blue Eyes Intelligence Engineering and Sciences Publication(BEIESP)
- Subjects
AMLB Algorithm, Balanced Throttled Load Balancing Algorithm, Cloud Analyst Simulator, Cloud Computing, Load Balancing, Round Robin Algorithm, Throttled Load Balancing Algorithm, Computer science, Distributed computing, Management of Technology and Innovation, General Engineering, Load balancing (computing) - Abstract
Cloud computing refers to the services and applications that are accessible throughout the world from data centers, with all services and applications available online. Virtual machine migration is an important part of virtualization, which is an essential component of the cloud computing environment. Virtual machine migration means transferring a running virtual machine, with all its applications and its operating system state, to a target destination machine where it continues to run as if nothing happened. This balances the load between servers and improves performance by redistributing the workload among the available servers. Load balancing algorithms fall into two types: static and dynamic. This paper presents the Balanced Throttled Load Balancing Algorithm (BTLB) and compares its results with the round robin algorithm, the AMLB algorithm, and the throttled load balancing algorithm. The results of these four algorithms are presented in this paper. The proposed algorithm shows an improvement in response time (75 µs). The Cloud Analyst simulator is used to evaluate the results. BTLB was developed and tested using Java.
- Published
- 2021
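BTLB builds on the classical throttled policy the abstract compares against. A minimal sketch of that baseline policy, which scans a VM availability table and allocates each request to the first idle VM (the paper's Java implementation and BTLB's balancing refinement are not reproduced here):

```python
def throttled_assign(vm_busy):
    """Throttled policy: scan VMs in index order and allocate the request to
    the first idle VM, marking it busy; return None if every VM is busy
    (the request would be queued until a VM frees up)."""
    for vm, busy in enumerate(vm_busy):
        if not busy:
            vm_busy[vm] = True
            return vm
    return None

vms = [False, False, False]   # three idle virtual machines
print(throttled_assign(vms))  # 0
print(throttled_assign(vms))  # 1
print(throttled_assign([True, True, True]))  # None
```

Always restarting the scan from index 0 is what skews load toward low-index VMs; a balanced variant would instead track per-VM load when choosing among idle machines.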
29. Deep Transfer Learning Models for Medical Diabetic Retinopathy Detection
- Author
-
Hamed Nasr Eldin T. Mohamed, Mohamed Hamed N. Taha, Nour Eldeen M. Khalifa, and Mohamed Loey
- Subjects
Computer science, Convolutional Neural Network, Overfitting, Machine Learning, Robustness (computer science), Diabetic Eye Disease, Diabetic Retinopathy, Deep learning, General Medicine, Deep Transfer Learning, Artificial intelligence, Transfer of learning, F1 score - Abstract
Introduction Diabetic retinopathy (DR) is the most common diabetic eye disease worldwide and a leading cause of blindness. The number of diabetic patients will increase to 552 million by 2034, as per the International Diabetes Federation (IDF). Aim With advances in computer science techniques, such as artificial intelligence (AI) and deep learning (DL), opportunities for detecting DR at an early stage have increased, which means the chances of recovery rise and the possibility of vision loss in patients can be reduced. Methods In this paper, deep transfer learning models for medical DR detection were investigated. The DL models were trained and tested on the Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset. According to literature surveys, this research is considered one of the first studies to use the APTOS 2019 dataset, as it was freshly published in the second quarter of 2019. The selected deep transfer models in this research were AlexNet, ResNet-18, SqueezeNet, GoogleNet, VGG16, and VGG19. These models were selected because they consist of a small number of layers compared to larger models, such as DenseNet and InceptionResNet. Data augmentation techniques were used to render the models more robust and to overcome the overfitting problem. Results The testing accuracy and performance metrics, such as precision, recall, and F1 score, were calculated to prove the robustness of the selected models. The AlexNet model achieved the highest testing accuracy at 97.9%, and the achieved performance metrics strengthened this result. Moreover, AlexNet has a minimal number of layers, which decreases the training time and the computational complexity.
- Published
- 2019
30. Deep Iris: Deep Learning for Gender Classification Through Iris Patterns
- Author
-
Hamed Nasr Eldin T. Mohamed, Mohamed Hamed N. Taha, Aboul Ella Hassanien, and Nour Eldeen M. Khalifa
- Subjects
Computer science, Iris recognition, Feature extraction, Overfitting, Convolutional neural network, Deep Learning, gender-identification, Segmentation, Soft biometrics, Pattern recognition, General Medicine, Deep Convolutional Neural Network, Artificial intelligence - Abstract
Introduction: One attractive research area in the computer science field is soft biometrics. Aim: To identify a person’s gender from an iris image when such identification is related to security surveillance systems and forensics applications. Methods: In this paper, a robust iris gender-identification method based on a deep convolutional neural network is introduced. The proposed architecture segments the iris from a background image using the graph-cut segmentation technique. The proposed model contains 16 subsequent layers; three are convolutional layers for feature extraction with different convolution window sizes, followed by three fully connected layers for classification. Results: The original dataset consists of 3,000 images, 1,500 images for men and 1,500 images for women. The augmentation techniques adopted in this research overcome the overfitting problem and make the proposed architecture more robust and immune from simply memorizing the training data. In addition, the augmentation process not only increased the number of dataset images to 9,000 images for the training phase, 3,000 images for the testing phase, and 3,000 images for the verification phase, but also led to a significant improvement in testing accuracy, where the proposed architecture achieved 98.88%. A comparison is presented in which the testing accuracy of the proposed approach was compared with the testing accuracy of other related works using the same dataset. Conclusion: The proposed architecture outperformed the other related works in terms of testing accuracy.
- Published
- 2019
31. Cyber-Physical Systems for Industrial Transformation : Fundamentals, Standards, and Protocols
- Author
-
Gunasekaran Manogaran, Nour Eldeen Mahmoud Khalifa, Mohamed Loey, and Mohamed Hamed N. Taha
- Subjects
- Cooperating objects (Computer systems), Internet of things
- Abstract
This book investigates the fundamentals, standards, and protocols of Cyber-Physical Systems (CPS) in the industrial transformation environment, facilitating a fusion of both technologies in the creation of reliable and robust applications. Cyber-Physical Systems for Industrial Transformation: Fundamentals, Standards, and Protocols explores emerging technologies such as artificial intelligence, data science, blockchain, robotic process automation, virtual reality, edge computing, and 5G technology to highlight current and future opportunities to make CPS more robust and reliable. The book showcases real-time sensing, processing, and actuation software and discusses fault tolerance and cybersecurity as well. It brings together undergraduates, postgraduates, academics, researchers, and industry practitioners interested in exploring new ideas, techniques, and tools related to CPS and Industry 4.0.
- Published
- 2023
32. Convolutional Neural Network with Batch Normalization for Classification of Endoscopic Gastrointestinal Diseases
- Author
-
Aboul Ella Hassanien, Dalia Ezzat, Mohamed Hamed N. Taha, and Heba M. Afify
- Subjects
Normalization (statistics), Computational complexity theory, Computer science, Activation function, Training phase, Pattern recognition, Artificial intelligence, Convolutional neural network, Exponential linear units - Abstract
In this paper, an approach for classifying gastrointestinal (GI) diseases from endoscopic images is proposed. The proposed approach is built using a convolutional neural network (CNN) with batch normalization (BN) and an exponential linear unit (ELU) as the activation function. The proposed approach consists of eight layers (six convolutional and two fully connected layers) and is used to identify eight types of GI diseases in version two of the Kvasir dataset. The proposed approach was compared with other CNN architectures (VGG16, VGG19, and Inception-v3) on five criteria: number of convolutional layers, total number of parameters of the convolutional layers, number of epochs, validation accuracy, and test accuracy. The proposed approach achieved good results compared to these architectures: a validation accuracy of 88%, which is superior to the other architectures, and a test accuracy of 87%, which outperforms the Inception-v3 architecture. Therefore, the proposed approach requires fewer training images and has lower computational complexity in the training phase.
- Published
- 2020
- Full Text
- View/download PDF
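The ELU activation named in the abstract above is a standard, well-defined function; a short sketch of it (the network weights and Kvasir data are of course not reproduced here):

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha * (exp(x) - 1) otherwise.
    expm1 computes exp(x) - 1 with better precision for small x."""
    return np.where(x > 0, x, alpha * np.expm1(x))

x = np.array([-2.0, 0.0, 3.0])
print(elu(x))  # approximately [-0.8647, 0.0, 3.0]
```

Unlike ReLU, ELU stays smooth and saturates to -alpha for large negative inputs, which pairs well with batch normalization by keeping activations centered closer to zero.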
33. Artificial Intelligence in Potato Leaf Disease Classification: A Deep Learning Approach
- Author
-
Lobna M. Abou El-Maged, Nour Eldeen M. Khalifa, Mohamed Hamed N. Taha, and Aboul Ella Hassanien
- Subjects
Computer science, Deep learning, Leaf disease, Feature extraction, Blight, Artificial intelligence, Convolutional neural network - Abstract
Potato leaf blight is one of the most devastating global plant diseases because it affects the productivity and quality of potato crops and adversely affects both individual farmers and the agricultural industry. Advances in the early classification and detection of crop blight using artificial intelligence technologies have increased the opportunity to enhance and expand plant protection. This paper presents an architecture proposed for potato leaf blight classification, based on a deep convolutional neural network. The training dataset of potato leaves contains three categories: healthy leaves, early blight leaves, and late blight leaves. The proposed architecture consists of 14 layers, including two main convolutional layers for feature extraction with different convolution window sizes, followed by two fully connected layers for classification. In this paper, augmentation processes were applied to increase the number of dataset images from 1,722 to 9,822, which led to a significant improvement in the overall testing accuracy. The proposed architecture achieved an overall mean testing accuracy of 98%. More than six performance metrics were applied in this research to ensure the accuracy and validity of the presented results. The testing accuracy of the proposed approach was compared with that of related works, and the proposed architecture achieved improved accuracy compared to the related works.
- Published
- 2020
- Full Text
- View/download PDF
34. Fighting against COVID-19: A novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection
- Author
-
Nour Eldeen M. Khalifa, Mohamed Loey, Gunasekaran Manogaran, and Mohamed Hamed N. Taha
- Subjects
Computer science ,Feature extraction ,Geography, Planning and Development ,0211 other engineering and technologies ,Transportation ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Medical masked face ,Article ,ResNet ,Computer vision ,YOLO ,021108 energy ,0105 earth and related environmental sciences ,Civil and Structural Engineering ,business.industry ,Renewable Energy, Sustainability and the Environment ,Deep learning ,Detector ,Process (computing) ,COVID-19 ,Object (computer science) ,Object detection ,Face (geometry) ,Artificial intelligence ,business ,Transfer of learning - Abstract
Highlights • A novel deep learning model for medical face mask detection. • The model can help governments to prevent COVID-19 transmission. • Two medical face mask datasets have been tested. • The YOLO-v2 with ResNet-50 model achieves high average precision. Deep learning has shown tremendous potential in many real-life applications in different domains. One of these potentials is object detection. Recent object detectors based on deep learning models have achieved promising results in locating objects in images. The objective of this paper is to annotate and localize medical face mask objects in real-life images. Wearing a medical face mask in public areas protects people from COVID-19 transmission. The proposed model consists of two components: the first performs feature extraction based on the ResNet-50 deep transfer learning model, while the second detects medical face masks based on YOLO v2. Two medical face mask datasets were combined into one dataset for investigation in this research. To improve the object detection process, mean IoU was used to estimate the best number of anchor boxes. The results show that the Adam optimizer achieved the highest average precision as a detector, 81%. Finally, a comparative result with related work is presented at the end of the research; the proposed detector achieved higher accuracy and precision than the related work.
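The mean-IoU criterion used to choose the number of anchor boxes can be illustrated in a few lines. This sketch is not the authors' code and the box sizes are invented; it compares boxes by width and height only, with centers aligned, as is usual for YOLO-style anchor estimation:

```python
import numpy as np

def iou_wh(a, b):
    # IoU of two boxes compared by (width, height) only, centers aligned
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def mean_best_iou(gt_boxes, anchors):
    # For each ground-truth box, take the best IoU over all anchors, then average
    return float(np.mean([max(iou_wh(g, a) for a in anchors) for g in gt_boxes]))

gt = [(0.20, 0.30), (0.25, 0.35), (0.60, 0.60)]
anchors = [(0.22, 0.32), (0.60, 0.60)]
score = mean_best_iou(gt, anchors)  # compare across anchor counts; higher is better
```

Repeating this for several candidate anchor sets lets one pick the number of anchors that maximizes the mean best IoU.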
- Published
- 2020
35. RETRACTED ARTICLE: A deep learning model and machine learning methods for the classification of potential coronavirus treatments on a single human cell
- Author
-
Nour Eldeen M. Khalifa, Mohamed Hamed N. Taha, Mohamed Loey, and Gunasekaran Manogaran
- Subjects
Materials science ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,Pooling ,Decision tree ,Bioengineering ,02 engineering and technology ,010402 general chemistry ,Machine learning ,computer.software_genre ,01 natural sciences ,Modelling and Simulation ,General Materials Science ,Image domain ,Treatment classification ,business.industry ,Deep learning ,General Chemistry ,Human cell ,021001 nanoscience & nanotechnology ,Condensed Matter Physics ,Atomic and Molecular Physics, and Optics ,0104 chemical sciences ,Support vector machine ,Modeling and Simulation ,Artificial intelligence ,0210 nano-technology ,business ,computer - Abstract
The coronavirus pandemic is burdening healthcare systems around the world to full capacity. There is an overwhelming need to find a treatment for this virus as early as possible. Computer algorithms and deep learning can contribute positively by finding a potential treatment for SARS-CoV-2. In this paper, a deep learning model and machine learning methods for the classification of potential coronavirus treatments on a single human cell are presented. The dataset selected in this work is a subset of the datasets publicly available online at RxRx.ai. The objective of this research is to automatically classify a single human cell according to the treatment type and the treatment concentration level. A DCNN model and a methodology are proposed throughout this work. The methodological idea is to convert the numerical features from the original dataset to the image domain and then feed them into a DCNN model. The proposed DCNN model consists of three convolutional layers, three ReLU layers, three pooling layers, and two fully connected layers. The experimental results show that the proposed DCNN model for treatment classification (32 classes) achieved 98.05% testing accuracy, compared with classical machine learning methods such as support vector machines, decision trees, and ensembles. In treatment concentration level prediction, the classical machine learning (ensemble) algorithm achieved 98.5% testing accuracy, while the proposed DCNN model achieved 98.2%. The performance metrics strengthen the results obtained from the conducted experiments for the accuracy of treatment classification and treatment concentration level prediction.
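The feature-to-image conversion idea can be sketched as follows. This is an illustrative guess at the mechanism; the normalization scheme, grid size, and zero-padding are assumptions, not the paper's specification:

```python
import numpy as np

def features_to_image(vec, side):
    # Min-max normalize a numeric feature vector to [0, 255] and reshape it
    # into a square grayscale "image" a CNN can consume, zero-padding the tail
    v = np.asarray(vec, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12) * 255.0
    padded = np.zeros(side * side)
    padded[: v.size] = v
    return np.rint(padded.reshape(side, side)).astype(np.uint8)

img = features_to_image([0.1, 0.9, 0.4, 0.7, 0.2], side=3)  # 3x3 uint8 grid
```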
- Published
- 2020
- Full Text
- View/download PDF
36. An Artificial Intelligence Authentication Framework to Secure Internet of Educational Things
- Author
-
Nour Eldeen M. Khalifa, Mohamed Hamed N. Taha, and Ahmed A. Mawgoud
- Subjects
Security analysis ,Computer science ,business.industry ,Hash function ,Provisioning ,Computer security model ,Computer security ,computer.software_genre ,Software ,The Internet ,business ,computer ,Electrical efficiency ,Efficient energy use - Abstract
Due to the expansion of Internet of Educational Things (IoET) systems, security provisioning has become a mandatory issue for supporting software applications across connected heterogeneous devices. Traditional authentication methods are not efficient at dealing with security risks in IoT because of their dependence on both complex computations and static security mechanisms. Moreover, isolating the security mechanisms in each layer, without an integration methodology for overall system protection, raises both communication latency and security risks. In this paper, we propose an authentication model that utilizes IoT resources in an educational environment while taking energy efficiency into consideration, and we provide a mutual verification technique based on a hash function that imposes no limitations on operational performance, computation, or the network. This research verifies the security and power efficiency of the model through security analysis and performance evaluation, mainly by comparing the proposed method with existing models. The proposed model has great value in its applicability as an authentication security model for IoET environments.
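A hash-based mutual verification step of the kind described might look like the following challenge-response sketch using Python's standard library. The pre-shared key and message layout here are hypothetical, not the paper's protocol; each side can prove knowledge of the key using only hash operations, which suits constrained devices:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"device-provisioned-secret"  # hypothetical pre-shared key

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Prove knowledge of the key by keyed-hashing the peer's fresh nonce
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(respond(challenge, key), response)

nonce = os.urandom(16)  # verifier's fresh challenge
tag = respond(nonce)    # device's hash-based response
ok = verify(nonce, tag)
```

Running the same exchange in the opposite direction, with a fresh nonce from the device, makes the verification mutual.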
- Published
- 2020
- Full Text
- View/download PDF
37. Robust Deep Transfer Models for Fruit and Vegetable Classification: A Step Towards a Sustainable Dietary
- Author
-
Mohamed Hamed N. Taha, Mourad R. Mouhamed, Aboul Ella Hassanien, and Nour Eldeen M. Khalifa
- Subjects
Computational complexity theory ,business.industry ,Computer science ,Small number ,Overfitting ,Machine learning ,computer.software_genre ,Convolutional neural network ,Class (biology) ,Software ,Transfer (computing) ,Artificial intelligence ,Transfer model ,business ,computer - Abstract
Sustainable dietary practices play an essential role in protecting the environment and keeping it healthier; moreover, they protect human life and health in the widest sense. Fruits and vegetables are basic components of a sustainable diet, as they are among the main sources of healthy food for humans. The classification of fruits and vegetables is most helpful for dietary assessment and guidance, which in turn raises consumers' awareness of sustainable diets. In this chapter, a robust deep transfer model based on deep convolutional neural networks for fruit and vegetable classification is introduced. The presented model is a first step toward a useful mobile software application to help raise awareness of sustainable diets. Three deep transfer models were selected for the experiments in this research: AlexNet, SqueezeNet, and GoogLeNet. They were selected because they contain a small number of layers, which decreases computational complexity. The dataset used in this research is FruitVeg-81, which contains 15,737 images. The number of classes extracted from the dataset is 96, obtained by expanding three levels of classification from the original dataset. An augmentation technique (rotation) was adopted to reduce overfitting and to increase the number of images to 11 times that of the original dataset. The experimental results show that GoogLeNet achieves the highest testing accuracy, 99.82%; moreover, it achieved the highest precision, recall, and F1 scores compared with the other models. Finally, a comparison with related work using the same FruitVeg-81 dataset was carried out at the end of the research, and the presented work achieved superior testing accuracy to the related work.
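The transfer-learning split described above (a frozen pretrained backbone plus a small trainable classifier) can be sketched without any deep learning framework. The "backbone" below is a stand-in function, not GoogLeNet, and the softmax head is a minimal NumPy implementation; the images and labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(images):
    # Stand-in for a frozen backbone: any fixed mapping to a feature vector
    # illustrates the split. A constant 1 gives the head a bias term.
    return np.array([[img.mean(), img.max(), 1.0] for img in images])

def train_softmax_head(feats, labels, n_classes, lr=0.1, steps=300):
    # Only this small softmax head is trained; the backbone stays frozen
    W = np.zeros((feats.shape[1], n_classes))
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(labels)), labels] -= 1.0  # softmax cross-entropy gradient
        W -= lr * feats.T @ p / len(labels)
    return W

imgs = [rng.random((8, 8)) + c for c in (0.0, 5.0) for _ in range(10)]
y = np.array([0] * 10 + [1] * 10)
W = train_softmax_head(pretrained_features(imgs), y, n_classes=2)
pred = (pretrained_features(imgs) @ W).argmax(axis=1)
```

Freezing the backbone is what keeps the computational cost low, which is the same motivation the chapter gives for choosing small architectures.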
- Published
- 2020
- Full Text
- View/download PDF
38. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic
- Author
-
Gunasekaran Manogaran, Mohamed Hamed N. Taha, Mohamed Loey, and Nour Eldeen M. Khalifa
- Subjects
Coronavirus disease 2019 (COVID-19) ,Computer science ,Feature extraction ,Decision tree ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Article ,Deep transfer learning ,Masked face ,Component (UML) ,0202 electrical engineering, electronic engineering, information engineering ,Classical machine learning ,Electrical and Electronic Engineering ,Instrumentation ,business.industry ,Applied Mathematics ,020208 electrical & electronic engineering ,010401 analytical chemistry ,Process (computing) ,COVID-19 ,Condensed Matter Physics ,0104 chemical sciences ,Support vector machine ,Face (geometry) ,Artificial intelligence ,Transfer of learning ,business ,computer - Abstract
Highlights • A hybrid deep and machine learning model proposed for face mask detection. • The model can impede coronavirus transmission, especially COVID-19. • Three face mask datasets have been experimented with in this research. • The introduced model achieves high performance in the experimental study. The coronavirus COVID-19 pandemic is causing a global health crisis. One of the effective protection methods is wearing a face mask in public areas, according to the World Health Organization (WHO). In this paper, a hybrid model using deep and classical machine learning for face mask detection is presented. The proposed model consists of two components: the first performs feature extraction using ResNet50, while the second classifies face masks using decision trees, a Support Vector Machine (SVM), and an ensemble algorithm. Three masked-face datasets were selected for investigation: the Real-World Masked Face Dataset (RMFD), the Simulated Masked Face Dataset (SMFD), and the Labeled Faces in the Wild (LFW) dataset. The SVM classifier achieved 99.64% testing accuracy on RMFD, 99.49% on SMFD, and 100% on LFW.
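The hybrid design (deep features followed by a classical classifier) can be illustrated with a minimal linear SVM trained by sub-gradient descent on stand-in embeddings. The real model uses ResNet-50 features and library SVM implementations, so treat this purely as a sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=100):
    # Primal hinge-loss SVM trained by sub-gradient descent; labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:            # margin violated: pull toward sample
                w = (1 - lr * lam) * w + lr * yi * xi
                b += lr * yi
            else:                                 # only apply weight decay
                w = (1 - lr * lam) * w
    return w, b

# Stand-in for deep embeddings of masked (+1) vs unmasked (-1) faces
X = np.vstack([rng.normal(2, 1, (20, 4)), rng.normal(-2, 1, (20, 4))])
y = np.array([1] * 20 + [-1] * 20)
w, b = linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```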
- Published
- 2020
39. Empirical Study and Enhancement on Deep Transfer Learning for Skin Lesions Detection
- Author
-
Nour Eldeen M. Khalifa, Mohamed Loey, Ahmed A. Mawgoud, and Mohamed Hamed N. Taha
- Abstract
Skin cancer is the most common type of cancer; according to Skin Cancer Foundation statistics, one in every three cancers diagnosed globally is a skin cancer. Early detection of this type of cancer would help raise the chances of curing it. Advances in computer algorithms such as deep learning can help doctors detect and diagnose skin cancer automatically in its early stages. This paper introduces an empirical study and enhancement on deep transfer learning for skin lesion detection. The study selects different pre-trained deep convolutional neural network (DCNN) models, namely ResNet18, SqueezeNet, GoogLeNet, VGG16, and VGG19, and applies them to two datasets: the MODE-NODE and ISIC skin lesion datasets. Data augmentation techniques were adopted to enlarge the total number of images to 5 times that of the original datasets; the adopted techniques make the DCNN models more robust and prevent overfitting. Moreover, seven accredited deep learning performance metrics were used to select the DCNN model that best fits the nature of the skin lesion datasets. The study concludes that VGG19 is the most appropriate DCNN according to testing accuracy, achieving 98.8%, and the seven performance metrics strengthen this result. A comparative result with related works is also introduced: VGG19 outperforms the related work in terms of testing accuracy and the performance metrics on both datasets. Finally, the VGG19 model was trained on 10 times fewer images than the related work, which shows that the choice of data augmentation techniques played an important role in achieving better results, and which reduces training time, memory consumption, and computational complexity.
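Ranking candidate models by several performance metrics, as the study does, reduces to computing those metrics from a confusion matrix. A minimal sketch with made-up labels, covering four of the standard metrics:

```python
import numpy as np

def metrics(y_true, y_pred):
    # Accuracy, precision, recall, and F1 from binary predictions
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
m = metrics(y_true, y_pred)
```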
- Published
- 2020
- Full Text
- View/download PDF
40. The Detection of COVID-19 in CT Medical Images: A Deep Learning Approach
- Author
-
Nour Eldeen M. Khalifa, Sarah Hamed N. Taha, Mohamed Hamed N. Taha, and Aboul Ella Hassanien
- Subjects
Coronavirus disease 2019 (COVID-19) ,Computer science ,business.industry ,Deep learning ,Transfer (computing) ,CPU time ,Pattern recognition ,Artificial intelligence ,F1 score ,business ,Transfer of learning ,Class (biology) ,Image (mathematics) - Abstract
The COVID-19 coronavirus is one of the latest viruses to hit the earth in the new century; it was declared a pandemic by the World Health Organization in 2020. In this chapter, a model for the detection of the COVID-19 virus from CT chest medical images is presented. The proposed model is based on Generative Adversarial Networks (GAN) and a fine-tuned deep transfer learning model. GAN is used to generate more images from the available dataset, while deep transfer models are used to distinguish the COVID-19 class from the normal class. The original dataset consists of 746 images and is divided into two parts: 90% for the training and validation phases and 10% for the testing phase. After using GAN as an image augmenter, the 90% is further divided into 80% for training and 20% for validation. The proposed GAN architecture raises the number of images in the training and validation phases to 10 times that of the original dataset. The deep transfer models selected for the experimental trials are ResNet50, ShuffleNet, and MobileNet. They were selected because they include a medium number of layers compared with large deep transfer models such as DenseNet and Inception-ResNet, which benefits the proposed model in terms of reduced training time, memory, and CPU usage. The experimental trials show that ShuffleNet is the optimal deep transfer model for the proposed approach, as it achieves the highest testing accuracy and performance metrics: an overall testing accuracy of 84.9%, and 85.33% across the performance metrics, which include recall, precision, and F1 score.
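The 90/10 then 80/20 split described above can be reproduced mechanically. This helper is an assumption about how such a split might be coded, not the authors' script:

```python
import random

def split_dataset(items, test_frac=0.10, val_frac=0.20, seed=0):
    # 10% held out for testing, then the remaining 90% divided 80/20
    # into training and validation, as described in the abstract
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_frac)
    test, rest = shuffled[:n_test], shuffled[n_test:]
    n_val = round(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # train, val, test

train, val, test = split_dataset(list(range(746)))  # the 746-image dataset
```

Note that GAN augmentation is applied only after this split, so generated images never leak into the test partition.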
- Published
- 2020
- Full Text
- View/download PDF
41. Transfer Learning with a Fine-Tuned CNN Model for Classifying Augmented Natural Images
- Author
-
Siddhartha Bhattacharyya, Snasel Vaclav, Aboul Ella Hassanien, Dalia Ezzat, and Mohamed Hamed N. Taha
- Subjects
Contextual image classification ,Computer science ,business.industry ,Feature extraction ,Pattern recognition ,Artificial intelligence ,Overfitting ,Object (computer science) ,Transfer of learning ,business ,Convolutional neural network ,Dropout (neural networks) ,Test data - Abstract
Convolutional neural networks have proven to be highly efficient in image classification, but the approach has some disadvantages: notably, a large number of images is required for training, and considerable time is needed to achieve a high degree of classification accuracy. Transfer learning with a pre-trained model can be used to overcome these problems. There are two approaches to transfer learning, the feature extraction approach and the fine-tuning approach. Both approaches were evaluated on the dataset used, and both yielded high accuracy, with slightly better results for the fine-tuning approach. To overcome overfitting issues that may occur during transfer learning, data augmentation and dropout techniques were applied. The dataset studied contained 6,899 images in 8 distinct unbalanced classes, but only 5,600 images with balanced classes were analyzed. The classes are airplane, car, cat, dog, flower, fruit, motorbike, and person. We used Google's Inception-v3 model, which is trained on the ImageNet database and can classify images into 1,000 object categories, such as pencils, zebras, and many animals. We retrained the Inception model to classify the natural images used, employing the Keras library with TensorFlow as the backend, and achieved an overall accuracy of 99.7% on the test data.
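Dropout, one of the two overfitting countermeasures mentioned, is simple to sketch in NumPy. This shows inverted dropout, where training-time rescaling means inference needs no change; the rate and array sizes here are arbitrary:

```python
import numpy as np

def dropout(x, rate, rng, train=True):
    # Inverted dropout: zero a fraction `rate` of activations during training
    # and rescale the survivors, so inference needs no change
    if not train or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(42)
a = np.ones(1000)
d = dropout(a, rate=0.5, rng=rng)  # surviving entries are rescaled to 2.0
```

Because survivors are divided by the keep probability, the expected activation is unchanged between training and inference.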
- Published
- 2020
- Full Text
- View/download PDF
42. Cyber Security Risks in MENA Region: Threats, Challenges and Countermeasures
- Author
-
Nour Eldeen M. Khalifa, Ahmed A. Mawgoud, Mohamed Loey, and Mohamed Hamed N. Taha
- Subjects
Cybercrime ,Business continuity ,Information and Communications Technology ,Economic sector ,Cyberterrorism ,International security ,Legislation ,Business ,Computer security ,computer.software_genre ,computer ,Social infrastructure - Abstract
Over the last few years, the MENA region has become an attractive target for cyber-attack perpetrators. Hackers focus on high-value governmental sectors (i.e., oil and gas) alongside other critical industries. MENA nations are increasingly investing in the Information and Communication Technologies (ICT) sector; social infrastructure, the economic sector, schools, and hospitals in the area are now completely dependent on the Internet. Currently, ICT has become an essential part of the domestic and global security structure in the MENA region, emphasizing the real need for tremendous development in cybersecurity at a regional level. This environment raises questions about developments in cybersecurity and offensive cyber tactics. This paper examines and investigates (1) the essential cybersecurity threats in the MENA region, (2) the major challenges that face both governments and organizations, and (3) the main countermeasures that governments follow to achieve protection and business continuity in the region. It stresses the importance of cybercrime legislation and stronger defense techniques against cyberterrorism for MENA nations, and it argues for the promotion of cybersecurity awareness among individuals as an effective mechanism for facing the current cybersecurity risks in the MENA region.
- Published
- 2019
- Full Text
- View/download PDF
43. An Optimized Deep Convolutional Neural Network to Identify Nanoscience Scanning Electron Microscope Images Using Social Ski Driver Algorithm
- Author
-
Aboul Ella Hassanien, Dalia Ezzat, and Mohamed Hamed N. Taha
- Subjects
Hyperparameter ,Optimization algorithm ,Computer science ,Scanning electron microscope ,Transfer of learning ,Convolutional neural network ,Algorithm - Abstract
In this paper, transfer learning from a pretrained convolutional neural network (CNN) model called VGG16, in conjunction with a new evolutionary optimization algorithm called the social ski driver (SSD) algorithm, was applied to optimize some hyperparameters of the CNN model and improve the classification performance on images produced by the SEM technique. The results of the proposed approach (VGG16-SSD) are compared with a manual search method. The obtained results show that the proposed approach was able to find the best values for the CNN hyperparameters, successfully classifying around 89.37% of a test dataset consisting of SEM images.
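Population-based hyperparameter search in the spirit of SSD can be sketched as candidates scattering around the best solution found so far. This is a simplified stand-in, not the SSD update equations, and the objective is a toy surrogate for validation accuracy rather than a trained VGG16:

```python
import numpy as np

rng = np.random.default_rng(7)

def population_search(score, bounds, pop=8, iters=20):
    # Candidates scatter around the incumbent best; the best is only
    # replaced when a candidate improves the score
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    best = max(X, key=score)
    for _ in range(iters):
        X = np.clip(best + rng.normal(0.0, 0.1, X.shape) * (hi - lo), lo, hi)
        cand = max(X, key=score)
        if score(cand) > score(best):
            best = cand
    return best

def val_acc(h):
    # Toy surrogate over (log10 learning rate, dropout rate),
    # peaking at lr = 1e-3 and dropout = 0.5
    return -(h[0] + 3.0) ** 2 - (h[1] - 0.5) ** 2

best = population_search(val_acc, bounds=[(-5.0, -1.0), (0.0, 0.9)])
```

In the paper each score evaluation is a full training-and-validation run, which is why an efficient search over the hyperparameter space matters.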
- Published
- 2019
- Full Text
- View/download PDF
44. Security Threats of Social Internet of Things in the Higher Education Environment
- Author
-
Ahmed A. Mawgoud, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Subjects
Information privacy ,Higher education ,business.industry ,Computer science ,Interoperability ,Computer security ,computer.software_genre ,Field (computer science) ,Variety (cybernetics) ,Information leakage ,Identity (object-oriented programming) ,business ,Set (psychology) ,computer - Abstract
Among the efforts to use technology for society, a new wave of rapid change has begun and is anticipated to proliferate with stronger connectivity and interoperability of diverse devices, known as the Social Internet of Things (SIoT). It is an emerging paradigm of IoT in which distinct IoT devices interact and establish relationships with one another, including in the academic field today. Objects establish their social relationships in an autonomous way. The main issue is to understand how the objects in SIoT can interact within higher education systems to implement a secure system; consequently, the focus falls on trustworthiness models. Because of the billions of connected devices, there is a huge risk of identity and information leakage, device manipulation, data falsification, server/network attacks, and subsequent effects on application platforms. As the number of these interconnected devices keeps growing each day in the academic learning field, so does the number of security threats and vulnerabilities posed to these devices at universities. Security is one of the most paramount technological research problems that exists today for IoT. Security has many aspects: security built into the device, security of data transmission, and security of data storage within systems and their applications. There is an extensive amount of literature on the problem, with countless issues as well as proposed solutions; however, most of the existing work does not provide a holistic view of security and data privacy issues in the IoT. The primary aim of this research is to state the risks and threats facing SIoT by identifying (a) the essential domains in which SIoT is heavily used in higher education, (b) the security requirements and challenges that SIoT is currently dealing with, and (c) the existing security solutions that have been proposed or applied, with their limitations.
- Published
- 2019
- Full Text
- View/download PDF
45. Enabling AI Applications in Data Science
- Author
-
Aboul-Ella Hassanien, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Subjects
- Engineering—Data processing, Computational intelligence, Artificial intelligence
- Abstract
This book provides a detailed overview of the latest developments and applications in the field of artificial intelligence and data science. AI applications have achieved great accuracy and performance with the help of developments in data processing and storage. They have also gained strength through the amount and quality of data, which is the main nucleus of data science. This book aims to present the latest research findings in the field of artificial intelligence combined with data science.
- Published
- 2020
46. Toward Social Internet of Things (SIoT): Enabling Technologies, Architectures and Applications : Emerging Technologies for Connected and Smart Social Objects
- Author
-
Aboul Ella Hassanien, Roheet Bhatnagar, Nour Eldeen M. Khalifa, and Mohamed Hamed N. Taha
- Subjects
- Computational intelligence, Artificial intelligence, Application software
- Abstract
This unique book discusses a selection of highly relevant topics in the Social Internet of Things (SIoT), including blockchain, fog computing, and data fusion. It also presents numerous SIoT-related applications in fields such as agriculture, health care, education, and security, allowing researchers and industry practitioners to gain a better understanding of the Social Internet of Things.
- Published
- 2019
47. Breast and Colon Cancer Classification from Gene Expression Profiles Using Data Mining Techniques
- Author
-
Hazem M. El-Bakry, Nour Eldeen M. Khalifa, Mohammed Wajeeh Jasim, Mohamed Hamed N. Taha, and Mohamed Loey
- Subjects
Physics and Astronomy (miscellaneous) ,Colorectal cancer ,Computer science ,General Mathematics ,Feature selection ,Computational biology ,02 engineering and technology ,Overfitting ,computer.software_genre ,03 medical and health sciences ,feature selection ,0302 clinical medicine ,Text mining ,cancer diagnosis ,Gene expression ,medicine ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,artificial_intelligence_robotics ,support vector machine ,Information gain ,Optimization algorithm ,business.industry ,lcsh:Mathematics ,Small number ,Intelligent decision support system ,lcsh:QA1-939 ,medicine.disease ,Support vector machine ,machine learning ,ComputingMethodologies_PATTERNRECOGNITION ,Chemistry (miscellaneous) ,information gain ,030220 oncology & carcinogenesis ,grey wolf optimization algorithm ,020201 artificial intelligence & image processing ,Data mining ,DNA microarray ,business ,computer ,Classifier (UML) - Abstract
Early detection of cancer increases the probability of recovery. This paper presents an intelligent decision support system (IDSS) for the early diagnosis of cancer based on gene expression profiles collected using DNA microarrays. Such datasets pose a challenge because of the small number of samples (no more than a few hundred) relative to the large number of genes (on the order of thousands). Therefore, a method of reducing the number of features (genes) that are not relevant to the disease of interest is necessary to avoid overfitting. The proposed methodology uses information gain (IG) to select the most important features from the input patterns. Then, the selected features (genes) are further reduced by applying the grey wolf optimization (GWO) algorithm. Finally, the methodology employs a support vector machine (SVM) classifier for cancer type classification. The proposed methodology was applied to two datasets (breast and colon) and was evaluated based on its classification accuracy, the most important performance measure in disease diagnosis. The experimental results indicate that the proposed methodology enhances the stability of both the classification accuracy and the feature selection.
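The information gain step of the pipeline can be sketched for a single gene and a binary split. The expression values below are invented; a real run would score thousands of genes this way and then pass the survivors to GWO and an SVM:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector, in bits
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, threshold):
    # IG of splitting one gene's expression values at `threshold`
    mask = feature > threshold
    h = entropy(labels)
    for side in (mask, ~mask):
        if side.any():
            h -= side.mean() * entropy(labels[side])
    return h

y = np.array([1, 1, 1, 0, 0, 0])
informative = np.array([5.0, 6.0, 5.5, 1.0, 0.5, 1.2])  # tracks the class
noisy = np.array([3.0, 1.0, 5.0, 4.0, 2.0, 6.0])        # unrelated to the class
ig_good = information_gain(informative, y, threshold=3.0)
ig_bad = information_gain(noisy, y, threshold=3.0)
```

Genes whose splits yield high IG are kept, which is what shrinks the feature set before the GWO stage.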
- Published
- 2020
- Full Text
- View/download PDF
48. Towards Objective-Dependent Performance Analysis on Online Sentiment Review
- Author
-
Doaa Mohey El-Din, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Subjects
Measure (data warehouse) ,Computer science ,business.industry ,Sentiment analysis ,Perspective (graphical) ,English language ,Object (computer science) ,Machine learning ,computer.software_genre ,Domain (software engineering) ,Performance measurement ,Artificial intelligence ,business ,computer ,Meaning (linguistics) - Abstract
This chapter presents a new object-dependent evaluation of online reviews that improves performance assessment through a proposed performance criterion. This criterion offers an alternative solution for measuring sentiment accuracy. The problem lies in comparing accuracy measurements across different sentiment techniques and frameworks: each technique or framework targets one or more sentiment challenges. Another challenge appears in constructing the database and its characteristics, such as memorability. For example, if two sentiment techniques report equal accuracy percentages, does that mean they achieved the same classification, polarity, and score? Is 10% accuracy on a sentiment challenge necessarily bad? This study therefore compares several techniques using a proposed performance assessment that places them in the same environment with respect to three perspectives. It is a new criterion for performance measurement that aggregates two performance measurement types, F-measure and runtime, with respect to run-time speed, memorability, and the type of sentiment analysis challenge. Several sentiment techniques are compared in the movie domain in the English language, working at word-level sentiment analysis to measure the proposed performance criteria. Two experiments are applied to evaluate the performance of the different techniques in measuring sentiment.
- Published
- 2018
- Full Text
- View/download PDF
49. Aquarium Family Fish Species Identification System Using Deep Neural Networks
- Author
-
Aboul Ella Hassanien, Mohamed Hamed N. Taha, and Nour Eldeen M. Khalifa
- Subjects
Computational complexity theory ,Computer science ,business.industry ,Deep learning ,Fish species ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Convolutional neural network ,Convolution ,Identification system ,Identification (information) ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
In this paper, a system for aquarium family fish species identification is proposed. It identifies eight family fish species along with 191 sub-species. The proposed system is built using deep convolutional neural networks (CNN). It consists of four layers: two convolutional and two fully connected layers. A comparative result is presented against other CNN architectures, such as AlexNet and VggNet, according to four parameters (number of convolutional and fully connected layers, number of epochs in the training phase to achieve 100% accuracy, validation accuracy, and testing accuracy). The paper shows that the proposed system achieves competitive results against the other architectures: it achieved 85.59% testing accuracy, while AlexNet achieved 85.41%, over an untrained benchmark dataset. Moreover, the proposed system requires fewer training images, less memory, and lower computational complexity in the training, validation, and testing phases.
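The parameter-count comparison drawn between small and large CNNs follows from the standard formula for convolutional layer parameters. The filter configurations below are hypothetical, chosen only to illustrate the gap:

```python
def conv_params(in_ch, out_ch, k):
    # Parameters of one conv layer: k*k*in_ch weights per filter plus a bias,
    # times the number of filters
    return (k * k * in_ch + 1) * out_ch

# Hypothetical two-conv network of the small kind described
small_net = conv_params(3, 16, 5) + conv_params(16, 32, 5)
# AlexNet's first conv layer for scale: 96 11x11 filters over an RGB input
alexnet_conv1 = conv_params(3, 96, 11)
```

Even a single early layer of a large architecture can exceed the convolutional parameter budget of a small network, which is the computational-complexity argument the paper makes.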
- Published
- 2018
- Full Text
- View/download PDF
50. Deep Galaxy V2: Robust Deep Convolutional Neural Networks for Galaxy Morphology Classifications
- Author
-
Nour Eldeen Mahmoud Khalifa, I. M. Selim, Aboul Ella Hassanien, and Mohamed Hamed N. Taha
- Subjects
Training set ,Computer science ,business.industry ,Process (computing) ,020206 networking & telecommunications ,Pattern recognition ,Astrophysics::Cosmology and Extragalactic Astrophysics ,02 engineering and technology ,Overfitting ,01 natural sciences ,Convolutional neural network ,Galaxy ,Immune system ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Artificial intelligence ,Layer (object-oriented design) ,business ,010303 astronomy & astrophysics - Abstract
This paper is an extended version of "Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks". In this paper, a robust deep convolutional neural network architecture for galaxy morphology classification is presented. A galaxy can be classified based on its features into one of three categories (elliptical, spiral, or irregular) according to the Hubble galaxy morphology classification from 1926. The proposed convolutional neural network architecture consists of 8 layers, including one main convolutional layer for feature extraction with 96 filters and two principal fully connected layers for classification. The architecture was trained over 4,238 images and achieved a 97.772% testing accuracy. In this version, "Deep Galaxy V2", an augmentation process is applied to the training data to overcome the overfitting problem and make the proposed architecture more robust and immune to memorizing the training data. A comparative result is presented, and the testing accuracy is compared with those of other related works; the proposed architecture outperformed the related works in terms of testing accuracy.
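The shape bookkeeping behind such an architecture follows the standard convolution output-size formula. The input size, kernel size, and pooling below are assumptions for illustration, not the paper's exact configuration; only the 96-filter count comes from the abstract:

```python
def conv_output_size(n, k, stride=1, pad=0):
    # Spatial size after a convolution or pooling window:
    # floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

after_conv = conv_output_size(64, 5)            # 64x64 input, 5x5 kernel
after_pool = conv_output_size(after_conv, 2, stride=2)  # 2x2 max pool
flattened = after_pool * after_pool * 96        # inputs to the first dense layer
```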
- Published
- 2018
- Full Text
- View/download PDF