477 results
Search Results
2. Comment on papers using machine learning for significant wave height time series prediction: Complex models do not outperform auto-regression.
- Author
- Jiang, Haoyu, Zhang, Yuan, Qian, Chengcheng, and Wang, Xuan
- Subjects
- *ARTIFICIAL neural networks, *TIME series analysis, *PREDICTION models, *ARTIFICIAL intelligence, *MACHINE learning, *DECOMPOSITION method
- Abstract
• Five Machine Learning (ML) models are compared for wave height time series prediction. • Complex ML models do not outperform simple AR in wave height time series prediction. • Comment on related papers: signal decomposition of test-set series is WRONG. Significant Wave Height (SWH) is crucial in many aspects of ocean engineering. The accurate prediction of SWH is therefore of immense practical value. Recently, Artificial Intelligence (AI) time series prediction methods have been widely used for single-point short-term SWH forecasting, resulting in many AI-based models claiming to achieve good results. However, the extent to which these complex AI models can outperform traditional methods has largely been overlooked. This study compared five different models - AutoRegressive (AR), eXtreme Gradient Boosting (XGB), Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and WaveNet - for their performance on SWH time series prediction at 16 buoy locations. Surprisingly, the results suggest that the differences in performance among the models are negligible, indicating that all these AI models have only "learned" linear auto-regression from the data. Additionally, we noticed that many recent studies used signal decomposition methods for such time series prediction, and most of them decomposed the test sets, which is WRONG. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
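The comment's central baseline is worth making concrete. Below is a minimal, hypothetical sketch of a one-parameter AR(1) baseline of the kind the authors compare against, in plain Python; the synthetic series, the least-squares fit, and the split are all illustrative assumptions, not the paper's actual data or setup. Note that the model is fitted on the training split only, which is the discipline the authors argue is violated when studies decompose the test set.

```python
# Toy AR(1) baseline for one-step-ahead time series prediction.
# Illustrative only: synthetic data, not the paper's buoy measurements.

def fit_ar1(train):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} (no intercept)."""
    num = sum(x0 * x1 for x0, x1 in zip(train, train[1:]))
    den = sum(x0 * x0 for x0 in train[:-1])
    return num / den

def predict_ar1(phi, last_value):
    """One-step-ahead forecast from the most recent observation."""
    return phi * last_value

# Synthetic series that follows x_{t+1} = 0.8 * x_t exactly.
series = [1.0]
for _ in range(50):
    series.append(0.8 * series[-1])

# Fit on the training split ONLY -- applying any preprocessing (e.g. signal
# decomposition) to the test split leaks future information into the model,
# which is exactly the error the comment paper criticizes.
train, test = series[:40], series[40:]
phi = fit_ar1(train)
forecast = predict_ar1(phi, train[-1])
```

Against a baseline of this form, the comment reports that XGB, ANN, LSTM, and WaveNet gain essentially nothing.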
3. Introduction to the virtual collection of papers on Artificial neural networks: applications in X‐ray photon science and crystallography.
- Author
- Ekeberg, Tomas
- Subjects
- *ARTIFICIAL neural networks, *DEEP learning, *CRYSTALLOGRAPHY, *ARTIFICIAL intelligence, *MACHINE learning, *PHOTONS
- Abstract
Artificial intelligence is more present than ever, both in our society in general and in science. At the center of this development has been the concept of deep learning, the use of artificial neural networks that are many layers deep and can often reproduce human‐like behavior much better than other machine‐learning techniques. The articles in this collection are some recent examples of its application for X‐ray photon science and crystallography that have been published in Journal of Applied Crystallography. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. TRIBOLOGY INTERFACE OVER DIGITAL TECHNOLOGIES AND ENVISAGING TRIBOLOGY WITH PATENT LANDSCAPE — A QUEER REVIEW.
- Author
- RAJAKUMARESWARAN, V., CHINTHAMU, NARENDER, MURALI, M., and DEVARAJAN, BALAJI
- Subjects
- *DIGITAL technology, *ARTIFICIAL neural networks, *TRIBOLOGY, *MACHINE learning, *ARTIFICIAL intelligence, *SUPPORT vector machines, *DEEP learning
- Abstract
Digital technologies sustain today's world. Every sector is moving towards digital technologies, and none of us can escape them. Enormous growth has been achieved through the unexpected acceleration of digital technologies, including the Internet of Everything (IoE), Artificial Neural Networks (ANN), Machine Learning (ML), the Internet of Things (IoT), Artificial Intelligence (AI), Deep Learning (DL), and many more. These technologies have begun to occupy all engineering sectors, including manufacturing. This paper focuses on tribology analysis related to manufacturing with respect to various digital manufacturing technologies. The narration covers tribology using digital technologies, supported by journal and patent landscape analyses. Tribology today utilizes all of these technologies and envisages its growth through predominant technological invention in the broader view. The survey of the literature reveals that only three digital technologies - AI, ML, and ANN - are used by tribologists around the globe. Other technologies, such as Evolutionary Algorithms (EA), Support Vector Machines (SVM), and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are not used predominantly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Toward intelligent food drying: Integrating artificial intelligence into drying systems.
- Author
- Miraei Ashtiani, Seyed-Hassan and Martynenko, Alex
- Subjects
- *MACHINE learning, *DEEP learning, *ARTIFICIAL intelligence, *FOOD dehydration, *ARTIFICIAL neural networks, *INTELLIGENT control systems, *OPTIMIZATION algorithms
- Abstract
Artificial intelligence (AI) and its data-driven counterpart, machine learning (ML), are rapidly evolving disciplines with increasing applications in modeling, simulation, control, and optimization within the drying industry. This paper presents a comprehensive overview of progress made in ML from shallow to deep learning and its implications for food drying. Theoretical foundations, advantages, and limitations of various ML approaches employed in this domain are explored. Additionally, advancements in ML models, particularly those enhanced by optimization algorithms, are reviewed. The review underscores the role of intelligent configuration of ML models, which affects their accuracy and ability to solve problems of high energy consumption, nutrient degradation, and uneven drying. Drawing upon research achievements, the integration of AI models with real-time measuring methods is discussed, enabling dynamic determination of optimal drying conditions and parameter adjustments. This integration facilitates automated decision-making, reducing human errors and enhancing operational efficiency in food drying. Moreover, AI models demonstrate proficiency in predicting drying times and analyzing energy usage patterns, thereby enabling optimization to minimize resource consumption while preserving product quality. Finally, this paper identifies current obstacles in technology development and proposes novel research avenues for sustainable drying technologies. • The strengths and weaknesses of various AI methodologies are examined. • Artificial neural networks are extensively used for modeling drying phenomena. • Machine learning models can simulate complex processes of food drying. • Deep learning has significant potential for real-time monitoring of drying. • Intelligent control systems can optimize food drying. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Advances in artificial neural networks, machine learning and computational intelligence: Selected papers from the 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018).
- Author
- Oneto, Luca, Bunte, Kerstin, and Schleif, Frank-Michael
- Subjects
- *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *COMPUTATIONAL intelligence, *MACHINE learning, *STATISTICAL learning, *TECHNOLOGY
- Published
- 2019
- Full Text
- View/download PDF
7. Advanced Machine Learning and Deep Learning Approaches for Remote Sensing II.
- Author
- Jeon, Gwanggil
- Subjects
- *REMOTE sensing, *MACHINE learning, *ARTIFICIAL neural networks, *DEEP learning, *ARTIFICIAL intelligence, *DISTANCE education
- Abstract
This document is a summary of a special issue on advanced machine learning and deep learning techniques for remote sensing. The issue includes 16 research papers that cover a range of topics, including hyperspectral image classification, moving point target detection, radar echo extrapolation, and remote sensing object detection. Each paper introduces a novel approach or model and provides extensive testing and evaluation to demonstrate its effectiveness. The insights shared in this special issue are expected to contribute to future advancements in artificial intelligence-based remote sensing research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
8. A novel machine learning approach for rice yield estimation.
- Author
- Lingwal, Surabhi, Bhatia, Komal Kumar, and Singh, Manjeet
- Subjects
- *ARTIFICIAL neural networks, *MACHINE learning, *RICE quality, *FEEDFORWARD neural networks, *ARTIFICIAL intelligence, *RANDOM forest algorithms
- Abstract
Artificial Intelligence is quickly emerging as a technological solution for the agriculture industry to surmount its classical challenges. Artificial Intelligence helps farmers refine their products and alleviate unfavourable environmental impacts. The central concern of this paper is predictive analytics: developing a machine learning model to identify and predict crop yield based on multiple environmental factors. In this paper, a hybrid learner 'RaNN' is proposed that combines the feature sampling and majority voting techniques of Random Forest with a multilayer Feedforward Neural Network to predict crop yield. The research has also ascertained the essential features responsible for accurate yield prediction. The proposed model is applied to rice yield prediction, rice being one of the chief grains of India. The region chosen for the work is Punjab, which is among India's largest rice-producing states. The dataset consists of 15 attributes comprising weather and agriculture data collected from the Indian Meteorological Department, Pune, and the Punjab Environment Information System (ENVIS) Centre, Government of India. The study has also made a comparative assessment of 'RaNN' against machine learning methods such as Multiple Linear Regression, Random Forest, Decision Tree, Boosting Regression, Support Vector Machine Regression, Ensemble Learner, and Artificial Neural Network. Our model RaNN achieved better prediction accuracy with minimal error compared to the other techniques, providing a 98% correlation between the actual and the predicted yield.
Abbreviations: AI – Artificial Intelligence; ANN – Artificial Neural Network; BR – Boosting Regression; Chem Fert – chemical fertilisers; DT – Decision Tree; EL – Ensemble Learner; ENVIS – Punjab Environment Information System; GBM – Stochastic Gradient Boosting Method; GPS – Global Positioning System; HMAX – highest maximum temperature in degrees C; IMD – Indian Meteorological Department; L1 – Lasso regression; L2 – Ridge regression; LMIN – lowest minimum temperature; ML – Machine Learning; MAE – Mean Absolute Error; MEVP – mean evaporation in mm; MLR – Multiple Linear Regression; MMAX – mean maximum temperature in degrees C; MMIN – mean minimum temperature in degrees C; MSSH – mean sunshine duration in hours; MWS – mean wind speed in km/h; P1 – number of days with precipitation (0.1–0.2 mm); P2 – number of days with precipitation (greater than or equal to 0.3 mm); RaNN – Hybrid RF-ANN model; RMSE – Root Mean Squared Error; R^2 – coefficient of determination; RD – number of rainy days; RF – Random Forest; SVM Reg – Support Vector Machine Regression; TMRF – total rainfall per month in mm [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
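Two of the Random Forest ingredients the abstract says 'RaNN' borrows (resampling and vote-style aggregation) can be sketched generically. The snippet below is a deliberately simplified stand-in: the base learner is a trivial mean predictor rather than the authors' feedforward network, and the data are made up, so it illustrates only the bootstrap-and-aggregate pattern, not the RaNN model itself.

```python
import random

# Generic bootstrap-aggregation (bagging) sketch: resample the training data,
# fit one simple base learner per resample, then average their predictions
# (the regression analogue of majority voting).

def bootstrap(data, rng):
    """Sample len(data) points with replacement."""
    return [rng.choice(data) for _ in data]

def fit_mean_learner(sample):
    """Trivial base learner: predict the sample mean (stand-in for an ANN)."""
    return sum(sample) / len(sample)

rng = random.Random(42)
yields = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8]  # made-up yields, tonnes per hectare

models = [fit_mean_learner(bootstrap(yields, rng)) for _ in range(25)]
prediction = sum(models) / len(models)  # aggregate by averaging
```

In the actual paper, each base learner would be a multilayer feedforward network trained on a feature-sampled subset; only the resampling-and-aggregation skeleton carries over.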
9. Artificial Intelligence for Energy Theft Detection in Distribution Networks.
- Author
-
Žarković, Mileta and Dobrić, Goran
- Subjects
- *
ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *THEFT , *ELECTRIC power consumption , *K-means clustering , *SMART meters - Abstract
The digitization of distribution power systems has revolutionized the way data are collected and analyzed. In this paper, the critical task of harnessing this information to identify irregularities and anomalies in electricity consumption is tackled. The focus is on detecting non-technical losses (NTLs) and energy theft within distribution networks. A comprehensive overview of the methodologies employed to uncover NTLs and energy theft is presented, leveraging measurements of electricity consumption. The most common scenarios and prevalent cases of anomalies and theft among consumers are identified. Additionally, statistical indicators tailored to specific anomalies are proposed. In this research paper, the practical implementation of numerous artificial intelligence (AI) algorithms, including the artificial neural network (ANN), ANFIS, autoencoder neural network, and K-means clustering, is highlighted. These algorithms play a central role in our research, and our primary objective is to showcase their effectiveness in identifying NTLs. Real-world data sourced directly from distribution networks are utilized. Additionally, we carefully assess how well statistical methods perform and compare them to AI techniques by testing them with real data. The artificial neural network (ANN) accurately identifies various consumer types, exhibiting a frequency error of 7.62%. In contrast, the K-means algorithm shows a slightly higher frequency error of 9.26%, while the adaptive neuro-fuzzy inference system (ANFIS) fails to detect the initial anomaly type, resulting in a frequency error of 11.11%. Our research suggests that AI can make finding irregularities in electricity consumption even more effective. This approach, especially when using data from smart meters, can help us discover problems and safeguard distribution networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
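The "statistical indicators tailored to specific anomalies" that the abstract mentions can be as simple as a deviation score on a consumer's own history. The function below is a hypothetical illustration of that general idea (the name, data, and threshold are assumptions, not the paper's indicators): it flags a consumer whose latest reading falls far below their historical mean.

```python
import statistics

# Toy screening indicator for non-technical losses (NTLs): flag a consumer
# whose recent monthly consumption is far below their historical mean,
# measured in standard deviations. Illustrative assumption, not the
# paper's ANN/ANFIS/autoencoder/K-means pipeline.

def theft_suspect(history_kwh, recent_kwh, z_threshold=2.0):
    """Return True if recent consumption is suspiciously low vs. history."""
    mu = statistics.mean(history_kwh)
    sigma = statistics.stdev(history_kwh)  # sample standard deviation
    return (mu - recent_kwh) / sigma > z_threshold

history = [310, 295, 305, 300, 290, 315]  # made-up monthly kWh readings
normal_month = theft_suspect(history, 298)      # small dip: not flagged
suspicious_month = theft_suspect(history, 150)  # sharp drop: flagged
```

In practice such an indicator would only shortlist meters for the AI models and inspectors, since legitimate causes (vacancy, replaced appliances) also produce consumption drops.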
10. Prediction of punching shear strength of slab-column connections: A comprehensive evaluation of machine learning and deep learning based approaches.
- Author
-
Derogar, Shahram, Ince, Ceren, Yatbaz, Hakan Yekta, and Ever, Enver
- Subjects
- *
SHEAR strength , *MACHINE learning , *ARTIFICIAL neural networks , *CONCRETE slabs , *DEEP learning , *STRUCTURAL engineering , *TRANSVERSE reinforcements - Abstract
Although the complex punching shear behavior of reinforced concrete slabs has been comprehensively addressed in the literature, it remains essential to develop a universal design model that combines high accuracy with the simplicity needed for design practicability, adaptable to the diverse conditions encountered in practice. Artificial intelligence applications, artificial neural networks (ANN), and, more recently, various machine learning (ML) and deep learning (DL) techniques are opening a new direction in the structural engineering context, with improved accuracy and efficiency. The paper begins with an assessment of the capabilities of various artificial intelligence applications in predicting the punching shear strength of slab-column connections without shear reinforcement, drawing on an extensive database of 650 punching shear experiments from the literature. Critical parameters influencing the punching shear strength, as well as the precision of current code provisions in predicting it, are then thoroughly examined. The results validate the competency of artificial intelligence applications in predicting the punching shear strength of such connections with increased accuracy and improved simplicity in practical terms. The proposed models encourage ultimate rehabilitation policies to be proposed and improved code provisions to be developed for contemporary structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. FAIR AI Models in High Energy Physics.
- Author
-
Li, Haoyang, Duarte, Javier, Roy, Avik, Zhu, Ruike, Huerta, E. A., Diaz, Daniel, Harris, Philip, Kansal, Raghav, Katz, Daniel S., Kavoori, Ishaan H., Kindratenko, Volodymyr V., Mokhtar, Farouk, Neubauer, Mark S., Park, Sang Eon, Quinnan, Melissa, Rusack, Roger, and Zhao, Zhizhen
- Subjects
- *
PARTICLE physics , *ARTIFICIAL intelligence , *MACHINE learning , *DATA modeling , *ARTIFICIAL neural networks - Abstract
The findable, accessible, interoperable, and reusable (FAIR) data principles serve as a framework for examining, evaluating, and improving data sharing to advance scientific endeavors. There is an emerging trend to adapt these principles for machine learning models—algorithms that learn from data without specific coding—and, more generally, AI models, due to AI's swiftly growing impact on scientific and engineering sectors. In this paper, we propose a practical definition of the FAIR principles for AI models and provide a template program for their adoption. We exemplify this strategy with an implementation from high-energy physics, where a graph neural network is employed to detect Higgs bosons decaying into two bottom quarks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
-
Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- *
BREAST tumor risk factors , *RISK assessment , *MEDICAL protocols , *CANCER relapse , *ARTIFICIAL intelligence , *EARLY detection of cancer , *CYTOCHEMISTRY , *TUMOR markers , *DECISION making in clinical medicine , *IMMUNOHISTOCHEMISTRY , *PATIENT-centered care , *DEEP learning , *ARTIFICIAL neural networks , *MACHINE learning , *ONCOLOGISTS , *INDIVIDUALIZED medicine , *MOLECULAR pathology , *HEALTH care teams , *ALGORITHMS , *DISEASE risk factors - Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. The Role of Artificial Intelligence in the Diagnosis and Treatment of Ulcerative Colitis.
- Author
- Uchikov, Petar, Khalid, Usman, Vankov, Nikola, Kraeva, Maria, Kraev, Krasimir, Hristov, Bozhidar, Sandeva, Milena, Dragusheva, Snezhanka, Chakarov, Dzhevdet, Petrov, Petko, Dobreva-Yatseva, Bistra, and Novakov, Ivan
- Subjects
- *ULCERATIVE colitis, *ARTIFICIAL intelligence, *INFLAMMATORY bowel diseases, *MACHINE learning, *LITERATURE reviews
- Abstract
Background and objectives: This review aims to examine the role of artificial intelligence in medicine. Ulcerative colitis (UC) is a chronic inflammatory bowel disease (IBD) characterized by superficial mucosal inflammation, rectal bleeding, diarrhoea, and abdominal pain. By identifying the challenges inherent in UC diagnosis, we seek to highlight the potential impact of artificial intelligence on enhancing both diagnosis and treatment methodologies for this condition. Method: A targeted, non-systematic review of literature relating to ulcerative colitis was undertaken. The PubMed and Scopus databases were searched to build a well-rounded understanding of the field of artificial intelligence and its developing role in the diagnosis and treatment of ulcerative colitis. Articles considered relevant were included; only articles published in English were considered. Results: Artificial intelligence (AI) refers to computer algorithms capable of learning, problem solving, and decision-making. Throughout our review, we highlighted the role and importance of artificial intelligence in modern medicine, emphasizing its role in diagnosis through AI-assisted endoscopies and histology analysis and its enhancements in the treatment of ulcerative colitis. Despite these advances, AI is still held back by its current lack of adaptability to real-world scenarios and by limited data availability, which hinders the growth of AI-led data analysis. Conclusions: Considering its potential, artificial intelligence shows promise in enhancing patient care from both a diagnostic and a therapeutic perspective. For its true utilization, some roadblocks must be addressed. The datasets available to AI may not truly reflect the real world, which would limit its impact in clinical scenarios involving a spectrum of patients with different backgrounds and presenting factors.
Considering this, the shift in medical diagnostics and therapeutics is coinciding with evolving technology. With a continuous advancement in artificial intelligence programming and a perpetual surge in patient datasets, these networks can be further enhanced and supplemented with a greater cohort, enabling better outcomes and prediction models for the future of modern medicine. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
- *
ARTIFICIAL neural networks , *MONEY laundering , *MACHINE learning , *ALGORITHMS , *RANDOM forest algorithms - Abstract
This paper aims to build a machine learning model and a neural network model to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early-morning hours, followed by the afternoon intervals of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
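The abstract singles out Naïve Bayes as a top performer and notes that flagged transactions cluster in early-morning hours. As a hedged illustration of how a Naïve Bayes classifier could exploit that single time-of-day feature, here is a one-feature Gaussian Naïve Bayes sketch; the hour values, equal class priors, and two-class setup are invented for the example and are not the paper's data or full feature set.

```python
import math

# One-feature Gaussian Naive Bayes: model transaction hour per class with a
# normal distribution, then classify by comparing class-conditional
# likelihoods (equal priors assumed). All data below are made up.

def gaussian_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(hours):
    """Maximum-likelihood mean and standard deviation of a class's hours."""
    mu = sum(hours) / len(hours)
    var = sum((h - mu) ** 2 for h in hours) / len(hours)
    return mu, math.sqrt(var)

laundering_hours = [1, 2, 3, 2, 4, 1, 3]         # early-morning cluster (invented)
legitimate_hours = [10, 14, 16, 11, 15, 13, 12]  # business-day cluster (invented)

ml_mu, ml_sigma = fit_gaussian(laundering_hours)
ok_mu, ok_sigma = fit_gaussian(legitimate_hours)

def looks_like_laundering(hour):
    """True if the laundering class is more likely for this transaction hour."""
    return gaussian_pdf(hour, ml_mu, ml_sigma) > gaussian_pdf(hour, ok_mu, ok_sigma)
```

A real detector would combine many features (amount, counterparty, laundering stage) and calibrated priors; this single-feature version only shows the mechanics.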
15. AI/ML Chatbots' Souls, or Transformers: Less Than Meets the Eye.
- Author
-
Lazzari, Edmund Michael
- Subjects
- *
CHATBOTS , *ARTIFICIAL intelligence , *MACHINE learning , *LINGUISTICS , *COMPUTATIONAL linguistics , *ARTIFICIAL neural networks - Abstract
Given the peculiarly linguistic approach that contemporary philosophers use to apply St. Thomas Aquinas's arguments on the immateriality of the human soul, this paper will present a Thomistic-inspired evaluation of whether artificial intelligence/machine learning (AI/ML) chatbots' composition and linguistic performance justify the assertion that AI/ML chatbots have immaterial souls. The first section of the paper will present a strong, but ultimately crucially flawed, argument that AI/ML chatbots do have souls based on contemporary Thomistic argumentation. The second section will provide an overview of the actual computer science models that make artificial neural networks and AI/ML chatbots function, which I hope will assist other theologians and philosophers writing about technology. The third section will present some of Emily Bender's and Alexander Koller's objections, from computational linguistics, to AI/ML chatbots being able to access meaning. The final section will highlight the similarities of Bender's and Koller's argument to a fuller presentation of St. Thomas Aquinas's argument for the immateriality of the human soul, ultimately arguing that the current mechanisms and linguistic activity of AI/ML programming do not constitute activity sufficient to conclude that they have immaterial souls on the strength of St. Thomas's arguments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. THE IMPACT AND PROSPECTS OF USING ARTIFICIAL INTELLIGENCE IN THE ECONOMY.
- Author
-
Aguzarova, Larisa, Aguzarova, Fatima, and Tsallaeva, Kamilla
- Subjects
- *
ARTIFICIAL intelligence , *ECONOMIC development , *ARTIFICIAL neural networks - Abstract
This paper discusses issues related to the development of technologies and the relationship of this process with economic processes. With rapid digitalization, the importance and role of artificial intelligence are growing. At the same time, many problems arise, and they are considered in this paper: unemployment among technical workers, security problems associated with confidentiality, and the fact that existing neural networks are not suitable for use in all industries. Using the generalization method, the authors draw conclusions and offer recommendations for using Artificial Intelligence to solve the following universal tasks: automatic translation; business intelligence; recognition of visual signs; character recognition; information extraction; understanding and analyzing texts; image analysis; ensuring information security and protection against cyber-attacks; speech recognition; and robotic tools for the implementation of tasks at different levels and in different fields. [ABSTRACT FROM AUTHOR]
- Published
- 2023
17. A Method for Object-oriented Detection of Deep Convection from Geostationary Satellite Imagery Using Machine Learning.
- Author
-
Shishov, A. E.
- Subjects
- *
GEOSTATIONARY satellites , *REMOTE-sensing images , *ARTIFICIAL neural networks , *MACHINE learning , *CONVECTIVE clouds , *SEVERE storms - Abstract
Due to high spatial and temporal resolution, geostationary meteorological satellite imagery is a valuable source of information on the development of deep convective clouds and related severe weather events. Some methods for automatic deep convection detection from satellite data provide a satisfactory probability of detection for independent datasets, but are characterized by a high false alarm rate. The paper gives a description of an algorithm for automatic detection of deep convective clouds with satellite imagery using gradient boosting, logistic regression, and artificial neural network models. The results of validation of the proposed method using dependent and independent data of ground-based observations for the period 2013–2020 are presented. A low false alarm rate and high probability of detection suggest that the algorithm can be used in the operational mode. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Insider employee-led cyber fraud (IECF) in Indian banks: from identification to sustainable mitigation planning.
- Author
-
Roy, Neha Chhabra and Prabhakaran, Sreeleakha
- Subjects
- *
BANKING laws , *FRAUD prevention , *CORRUPTION , *ORGANIZATIONAL behavior , *RISK assessment , *DATA security , *RANDOM forest algorithms , *COMPUTERS , *FOCUS groups , *DATA security failures , *INTERVIEWING , *DEBT , *QUESTIONNAIRES , *ARTIFICIAL intelligence , *LOGISTIC regression analysis , *IDENTITY theft , *SECURITY systems , *FINANCIAL stress , *RESEARCH methodology , *CONCEPTUAL structures , *JOB stress , *ARTIFICIAL neural networks , *MACHINE learning , *ALGORITHMS - Abstract
This paper explores different insider employee-led cyber frauds (IECF) based on recent large-scale fraud events at prominent Indian banking institutions. Examining the different types of fraud and the appropriate control measures will help protect the banking industry from fraudsters. In this study, we identify and classify Cyber Fraud (CF), map the severity of the fraud on a scale of priority, test mitigation effectiveness, and propose optimal mitigation measures. The identification and classification of CF losses were based on a literature review and focus group discussions with risk and vigilance officers and cyber cell experts. The CF was analyzed using secondary data. We predicted and prioritized CF using the machine learning-based Random Forest (RF) algorithm. An efficient fraud mitigation model was developed based on an offender-victim-centric approach. Mitigation is advised both before and after fraud occurs. Through the findings of this research, banks and fraud investigators can prevent CF by detecting it quickly and controlling it in time. This study proposes a structured, sustainable CF mitigation plan that protects banks, employees, regulators, customers, and the economy, thus saving time, resources, and money. Further, these mitigation measures will improve the reputation of the Indian banking industry and ensure its survival. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. A Review of Artificial Intelligence Methods in Predicting Thermophysical Properties of Nanofluids for Heat Transfer Applications.
- Author
-
Basu, Ankan, Saha, Aritra, Banerjee, Sumanta, Roy, Prokash C., and Kundu, Balaram
- Subjects
- *
NANOFLUIDS , *NANOFLUIDICS , *THERMOPHYSICAL properties , *ARTIFICIAL intelligence , *HEAT transfer , *ARTIFICIAL neural networks , *SPECIFIC heat capacity - Abstract
This present review explores the application of artificial intelligence (AI) methods in analysing the prediction of thermophysical properties of nanofluids. Nanofluids, colloidal solutions comprising nanoparticles dispersed in various base fluids, have received significant attention for their enhanced thermal properties and broad application in industries ranging from electronics cooling to renewable energy systems. In particular, nanofluids' complexity and non-linear behaviour necessitate advanced predictive models in heat transfer applications. The AI techniques, which include genetic algorithms (GAs) and machine learning (ML) methods, have emerged as powerful tools to address these challenges and offer novel alternatives to traditional mathematical and physical models. Artificial Neural Networks (ANNs) and other AI algorithms are highlighted for their capacity to process large datasets and identify intricate patterns, thereby proving effective in predicting nanofluid thermophysical properties (e.g., thermal conductivity and specific heat capacity). This review paper presents a comprehensive overview of various published studies devoted to the thermal behaviour of nanofluids, where AI methods (like ANNs, support vector regression (SVR), and genetic algorithms) are employed to enhance the accuracy of predictions of their thermophysical properties. The reviewed works conclusively demonstrate the superiority of AI models over the classical approaches, emphasizing the role of AI in advancing research for nanofluids used in heat transfer applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Guest Editorial: Artificial intelligence‐empowered reliable forecasting for energy sectors.
- Author
-
Mahmoud, Karar, Guerrero, Josep M., Abdel‐Nasser, Mohamed, and Yorino, Naoto
- Subjects
- *
ENERGY industries , *ARTIFICIAL neural networks , *MACHINE learning , *FORECASTING , *QUANTILE regression , *CONVOLUTIONAL neural networks , *DEMAND forecasting - Abstract
This document is a guest editorial from the journal IET Generation, Transmission & Distribution. It discusses the use of artificial intelligence (AI) in reliable forecasting for energy sectors. The editorial highlights the challenges of integrating renewable energy sources and fluctuating electricity demand, and emphasizes the importance of accurate forecasting for system operators. The document also provides summaries of several papers included in a special issue on AI-empowered forecasting in energy sectors, covering topics such as load forecasting, wind power prediction, and control parameter optimization. The editorial concludes by recommending further research and practical implementations of AI approaches in the energy sectors. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
21. Human oocytes image classification method based on deep neural networks.
- Author
-
Targosz, Anna, Myszor, Dariusz, and Mrugacz, Grzegorz
- Subjects
- *
ARTIFICIAL neural networks , *IMAGE recognition (Computer vision) , *HUMAN in vitro fertilization , *OVUM , *INTRACYTOPLASMIC sperm injection , *FERTILIZATION in vitro - Abstract
Background: The effectiveness of in vitro fertilization depends on the assessment and selection of oocytes and embryos with the highest developmental potential. One of the tasks in the ICSI (intracytoplasmic sperm injection) procedure is the classification of oocytes according to the stages of their meiotic maturity. Oocyte classification is traditionally done manually during observation under the light microscope. This paper is part of a larger effort: the development of a system for optimal oocyte and embryo selection. In this work, we present a method for the automatic classification of oocytes based on their images that employs deep neural network (DNN) algorithms. Results: For the purpose of oocyte class determination, two structures based on deep neural networks were applied. DeepLabV3Plus was responsible for the analysis of oocyte images in order to extract specific regions of oocyte images. The extracted components were then transferred to a network, inspired by the SqueezeNet architecture, for the purpose of oocyte type classification. The structure of this network was refined by a genetic algorithm in order to improve generalization abilities as well as reduce the network's FLOPs, thus minimizing inference time. As a result, a mean accuracy of 0.964 was obtained on the validation set and 0.957 on the test set. The generated neural networks, as well as the code that runs the processing pipeline, were made publicly available. Conclusions: In this paper, a complete pipeline was proposed that is able to automatically classify human oocytes into three classes, MI, MII, and PI, based on the oocytes' microscopic image. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Artificial Intelligence in Neurosurgery: A State-of-the-Art Review from Past to Future.
- Author
-
Tangsrivimol, Jonathan A., Schonfeld, Ethan, Zhang, Michael, Veeravagu, Anand, Smith, Timothy R., Härtl, Roger, Lawton, Michael T., El-Sherbini, Adham H., Prevedello, Daniel M., Glicksberg, Benjamin S., and Krittanawong, Chayakrit
- Subjects
- *
ARTIFICIAL intelligence , *NEUROSURGERY , *SPINAL surgery , *MACHINE learning , *ARTIFICIAL neural networks , *SURGICAL complications - Abstract
In recent years, there has been a significant surge in discussions surrounding artificial intelligence (AI), along with a corresponding increase in its practical applications in various facets of everyday life, including the medical industry. Notably, even in the highly specialized realm of neurosurgery, AI has been utilized for differential diagnosis, pre-operative evaluation, and improving surgical precision. Many of these applications have begun to mitigate risks of intraoperative and postoperative complications and to improve postoperative care. This article aims to present an overview of the principal published papers on the significant themes of tumor, spine, epilepsy, and vascular issues, wherein AI has been applied to assess its potential applications within neurosurgery. The method involved identifying highly cited seminal papers using PubMed and Google Scholar, conducting a comprehensive review of various study types, and summarizing machine learning applications to enhance understanding among clinicians for future utilization. Recent studies demonstrate that machine learning (ML) holds significant potential in neuro-oncological care, spine surgery, epilepsy management, and other neurosurgical applications. ML techniques have proven effective in tumor identification, surgical outcome prediction, seizure outcome prediction, aneurysm prediction, and more, highlighting ML's broad impact and potential for improving patient management and outcomes in neurosurgery. This review encompasses the current state of research, as well as predictions for the future of AI within neurosurgery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Visual Features for Improving Endoscopic Bleeding Detection Using Convolutional Neural Networks.
- Author
-
Brzeski, Adam, Dziubich, Tomasz, and Krawczyk, Henryk
- Subjects
- *
CONVOLUTIONAL neural networks , *ARTIFICIAL neural networks , *IMAGE recognition (Computer vision) , *IMAGE processing , *COMPUTER vision - Abstract
The presented paper investigates the problem of endoscopic bleeding detection in endoscopic videos in the form of a binary image classification task. A set of definitions of high-level visual features of endoscopic bleeding is introduced, which incorporates domain knowledge from the field. The high-level features are coupled with respective feature descriptors, enabling automatic capture of the features using image processing methods. Each of the proposed feature descriptors outputs a feature activation map in the form of a grayscale image. Acquired feature maps can be appended in a straightforward way to the original color channels of the input image and passed to the input of a convolutional neural network during the training and inference steps. An experimental evaluation is conducted to compare the classification ROC AUC of feature-extended convolutional neural network models with baseline models using regular color image inputs. The advantage of feature-extended models is demonstrated for the Resnet and VGG convolutional neural network architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. A survey on machine and deep learning in semiconductor industry: methods, opportunities, and challenges.
- Author
-
Huang, An Chi, Meng, Sheng Hui, and Huang, Tian Jiun
- Subjects
- *
MACHINE learning , *SEMICONDUCTOR industry , *ARTIFICIAL neural networks , *DEEP learning , *OBJECT recognition (Computer vision) , *ARTIFICIAL intelligence - Abstract
Big data analysis and artificial intelligence (deep learning) technologies have been actively combined with various fields to amplify the impact each field achieves on its own. Precision components commonly used in electronic products exploit changes in the conductivity of semiconductors to process information. This study reviews key milestones and recent developments in the semiconductor industry that use artificial intelligence methods. For this systematic review, we searched academic networks between 2015 and 2022, including Nature, Elsevier, Springer, Taylor & Francis Online, Multidisciplinary Digital Publishing Institute, and the Institute of Electrical and Electronics Engineers. The literature reviewed is based on conference proceedings and journal articles, specifically covering each paper's key achievements, the key technologies used, experimental results, opportunities, and future research pathways. After this search, we selected six major studies. Five of these studies addressed visual object detection, surface defect detection, machine production scheduling, fault diagnosis and prediction, and monitoring of the manufacturing process using artificial neural networks, machine learning methods, and hybrid models. In addition, the studies covered independent single methods or used two or more types of technologies for performance comparison. Finally, we reviewed the strengths and weaknesses of the literature and analysed the various datasets, acquisition systems, and experimental scenarios. The review shows that as the number of studies conducted in manufacturing continues to increase, more research is needed to unearth key information that is often overlooked; these remain challenges in refining the science and addressing real-world scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Application of artificial intelligence techniques in municipal solid waste management: a systematic literature review.
- Author
-
Mounadel, Adnane, Ech-Cheikh, Hamid, Lissane Elhaq, Saâd, Rachid, Ahmed, Sadik, Mohamed, and Abdellaoui, Bilal
- Subjects
- *
SOLID waste management , *ARTIFICIAL intelligence , *ARTIFICIAL neural networks , *WASTE treatment , *MACHINE learning , *SOLID waste - Abstract
Municipal solid waste (MSW) management has become a highly challenging issue for many countries at different levels of development, since population growth and urbanisation have resulted in a large increase in MSW generation. It is difficult to forecast the solid waste generated and its characteristics due to the non-linear nature of the MSW system, hence the importance of introducing artificial intelligence (AI) techniques and machine learning (ML) methods. This paper discusses a systematic literature review (SLR) on the application of AI techniques in MSW management, including waste generation prediction, waste collection and transportation, and waste treatment and final disposal. The study reviewed and analysed research published between 2000 and 2021, and investigated the current challenges faced by researchers in implementing AI approaches in the MSW system. It was concluded that artificial neural networks are the most used approach across the various MSW-related problems. However, the scarcity and limited reliability of data are holding back the advancement of AI techniques in this field. Additionally, most studies claimed that their results are accurate and can be implemented in real-life scenarios, yet lacked a clear baseline against which to assess the performance of the adopted approaches. Detailed gaps and future suggestions for AI techniques in MSW systems are also discussed for further research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Testing and enhancing spatial transferability of artificial neural networks based travel behavior models.
- Author
-
Koushik, Anil NP, Manoj, M, Nezamuddin, N, and Prathosh, AP
- Subjects
- *
ARTIFICIAL neural networks , *SHORT-term memory , *LONG-term memory , *FEEDFORWARD neural networks , *BEHAVIORAL research , *ARTIFICIAL intelligence - Abstract
Artificial Neural Networks (ANNs) are an emerging class of AI algorithms and have recently seen numerous applications in travel behavior research. However, the transferability of ANN-based travel behavior models is seldom tested, and the few studies that test transferability use only vanilla feedforward neural networks. This paper evaluates the spatial transferability of two ANN-based models: first, a feedforward ANN-based mode choice model, and second, a Long Short-Term Memory (LSTM)-based activity generation and activity-timing model, and enhances their transferability using transfer learning (TL). Both models were found to exhibit poor transferability in the case of naïve transfer. Transfer learning resulted in significant improvements, with TL-enhanced models that utilize only 50% of local data achieving results similar to a locally developed model. Further, the ANNs performed worse than nested logit (NL) models during naïve transfer. However, the TL-enhanced ANN-based models showed significant improvement compared to transfer-scaling-enhanced NL models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Standardising Breast Radiotherapy Structure Naming Conventions: A Machine Learning Approach.
- Author
-
Haidar, Ali, Field, Matthew, Batumalai, Vikneswary, Cloak, Kirrily, Al Mouiee, Daniel, Chlap, Phillip, Huang, Xiaoshui, Chin, Vicky, Aly, Farhannah, Carolan, Martin, Sykes, Jonathan, Vinod, Shalini K., Delaney, Geoffrey P., and Holloway, Lois
- Subjects
- *
SPECIALTY hospitals , *HUMAN body , *MACHINE learning , *RETROSPECTIVE studies , *ARTIFICIAL intelligence , *CANCER treatment , *TERMS & phrases , *RESEARCH funding , *RADIOTHERAPY , *DATA analysis , *ARTIFICIAL neural networks , *RECEIVER operating characteristic curves , *THREE-dimensional printing , *BREAST tumors , *ONCOLOGY , *ALGORITHMS , *LONGITUDINAL method , *RADIATION dosimetry , *DATA mining - Abstract
Simple Summary: In radiotherapy treatment, organs at risk and target volumes are contoured by the clinicians to prepare a dosimetry plan. In retrospective data, these structures are often not standardised to universal names across patients' plans, which is required to enable data mining and analysis. In this paper, a new method was proposed and evaluated to automatically standardise radiotherapy structure names using machine learning algorithms. The proposed approach was deployed over a dataset of 1613 patients collected from Liverpool & Macarthur Cancer Therapy Centres, New South Wales, Australia. It was concluded that machine learning techniques can standardise the dosimetry plan structures, taking into consideration the integration of multiple modalities representing each structure during the training process. In progressing the use of big data in health systems, standardised nomenclature is required to enable data pooling and analyses. In many radiotherapy planning systems and their data archives, target volume (TV) and organ-at-risk (OAR) structure nomenclature has not been standardised. Machine learning (ML) has been utilised to standardise volume nomenclature in retrospective datasets. However, only subsets of the structures have been targeted. Within this paper, we propose a new approach for standardising all structure nomenclature using multi-modal artificial neural networks. A cohort consisting of 1613 breast cancer patients treated with radiotherapy was identified from Liverpool & Macarthur Cancer Therapy Centres, NSW, Australia. Four types of volume characteristics were generated to represent each target and OAR volume: textual features, geometric features, dosimetry features, and imaging data. Five datasets were created from the original cohort: the first four represented different subsets of volumes and the last one represented the whole list of volumes.
For each dataset, 15 combinations of features were generated to investigate the effect of using different characteristics on standardisation performance. The best model reported 99.416% classification accuracy over the hold-out sample when used to standardise all the nomenclatures in a breast cancer radiotherapy plan into 21 classes. Our results showed that ML-based automation methods can be used for standardising naming conventions in a radiotherapy plan, taking into consideration the inclusion of multiple modalities to better represent each volume. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review.
- Author
-
Patel, Raj H., Foltz, Emilie A., Witkowski, Alexander, and Ludzik, Joanna
- Subjects
- *
MELANOMA diagnosis , *ONLINE information services , *MEDICAL databases , *DERMATOLOGISTS , *DEEP learning , *MEDICAL information storage & retrieval systems , *IN vivo studies , *MICROSCOPY , *SYSTEMATIC reviews , *EARLY detection of cancer , *ARTIFICIAL intelligence , *MACHINE learning , *DIAGNOSTIC imaging , *OPTICAL coherence tomography , *DERMOSCOPY , *DESCRIPTIVE statistics , *MEDLINE , *SENSITIVITY & specificity (Statistics) , *ARTIFICIAL neural networks , *ALGORITHMS - Abstract
Simple Summary: Melanoma is the most dangerous type of skin cancer worldwide. Early detection of melanoma is crucial for better outcomes, but this often can be challenging. This research explores the use of artificial intelligence (AI) techniques combined with non-invasive imaging methods to improve melanoma detection. The authors aim to evaluate the current state of AI-based techniques using tools including dermoscopy, optical coherence tomography (OCT), and reflectance confocal microscopy (RCM). The findings demonstrate that several AI algorithms perform as well as or better than dermatologists in detecting melanoma, particularly in the analysis of dermoscopy images. This research highlights the potential of AI to enhance diagnostic accuracy, leading to improved patient outcomes. Further studies are needed to address limitations and ensure the reliability and effectiveness of AI-based techniques. Background: Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited due to the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. Objective: The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. Methods: A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022. 
Screening methods adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. Results: We retrieved 40 papers across the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported superior or equivalent performance of the AI-based techniques in improving the detection of melanoma. In studies directly comparing algorithm performance on dermoscopy images to dermatologists, AI-based algorithms achieved a higher area under the ROC curve (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT reported an accuracy of 95%, while studies evaluating RCM reported a mean accuracy of 82.72%. Conclusions: Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with that of board-certified dermatologists to evaluate clinical applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Pore-GNN: A graph neural network-based framework for predicting flow properties of porous media from micro-CT images.
- Author
-
Alzahrani, Mohammed K., Shapoval, Artur, Chen, Zhixi, and Rahman, Sheikh S.
- Subjects
- *
DEEP learning , *MACHINE learning , *ARTIFICIAL neural networks , *POROUS materials , *SANDSTONE , *COMPUTED tomography - Abstract
This paper presents a hybrid deep learning framework that combines graph neural networks with convolutional neural networks to predict porous media properties. The approach capitalizes on the capabilities of pre-trained convolutional neural networks to extract n-dimensional feature vectors from processed three-dimensional micro-computed tomography (micro-CT) images of porous media obtained from seven different sandstone rock samples. Subsequently, two strategies for embedding the computed feature vectors into graphs were explored: extracting a single feature vector per sample (image) and treating each sample as a node in the training graph, and representing each sample as a graph by extracting a fixed number of feature vectors, which form the nodes of each training graph. Various types of graph convolutional layers were examined to evaluate the capabilities and limitations of spectral and spatial approaches. The dataset was divided 70/20/10 into training, validation, and testing sets. The models were trained to predict the absolute permeability of porous media. Notably, the proposed architectures reduce the objective loss function to values below 35 mD, with improvements in the coefficient of determination reaching 9%. Moreover, the generalizability of the networks was evaluated by testing their performance on unseen sandstone and carbonate rock samples that were not encountered during training. Finally, a sensitivity analysis was conducted to investigate the influence of various hyperparameters on the performance of the models. The findings highlight the potential of graph neural networks as promising deep learning-based alternatives for characterizing porous media properties. The proposed architectures predict permeability more than 500 times faster than numerical solvers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
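The second embedding strategy described in entry 29 (a fixed number of CNN feature vectors forming the nodes of a per-sample graph, followed by a graph convolution and a readout that predicts a scalar permeability) can be sketched in NumPy as below. This is a minimal illustrative sketch only, not the paper's implementation: the node count, feature size, k, and the random readout weights are all assumptions.

```python
import numpy as np

def knn_adjacency(feats, k=2):
    """Build a symmetric k-nearest-neighbour adjacency matrix from node features."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-distances
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:     # k closest neighbours of node i
            adj[i, j] = adj[j, i] = 1.0
    return adj

def gcn_layer(adj, x, w):
    """One spectral-style graph convolution: symmetric normalisation + linear map + ReLU."""
    a_hat = adj + np.eye(len(adj))                       # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(1) ** -0.5)           # D^{-1/2}
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ w, 0.0)

rng = np.random.default_rng(0)
node_feats = rng.normal(size=(8, 16))  # 8 CNN feature vectors per rock sample (hypothetical)
adj = knn_adjacency(node_feats, k=3)
h = gcn_layer(adj, node_feats, rng.normal(size=(16, 4)))
perm_pred = h.mean(axis=0) @ rng.normal(size=(4,))  # mean-pool readout -> scalar permeability
```

In a real pipeline the readout weights would be trained against measured permeability, and the node features would come from a pre-trained CNN applied to micro-CT sub-volumes.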
30. Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks.
- Author
-
Kim, Jonghong, Lee, WonHee, Baek, Sungdae, Hong, Jeong-Ho, and Lee, Minho
- Subjects
- *
MACHINE learning , *ARTIFICIAL neural networks , *CONVOLUTIONAL neural networks , *ONLINE education , *FACTORIZATION , *RIGHT to be forgotten , *RECURRENT neural networks - Abstract
Catastrophic forgetting, the rapid loss of learned representations while learning new data or samples, is one of the main problems of deep neural networks. In this paper, we propose a novel incremental learning framework that addresses the forgetting problem by learning new incoming data in an online manner, allowing extra data or new classes to be learned with less catastrophic forgetting. We adapt the hippocampal memory process to deep neural networks by defining the effective maximum of neural activation and its boundary to represent a feature distribution. In addition, we incorporate incremental QR factorization into the deep neural networks to learn new data, with both existing and new labels, with less forgetting. QR factorization provides an accurate subspace prior, and its incremental form expresses the interaction between new data and both existing and new classes with less forgetting. In our framework, a set of appropriate features (i.e., nodes) provides an improved representation for each class. We apply our method to a convolutional neural network (CNN) learning the Cifar-100 and Cifar-10 datasets. The experimental results show that the proposed method efficiently alleviates the stability-plasticity dilemma in deep neural networks, providing performance stability of a trained network while effectively learning unseen data and additional new classes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
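Entry 30 builds on incremental QR factorization. The abstract does not give the paper's exact update rule, but the standard thin-QR column-append update that such incremental methods rely on looks like the sketch below (`qr_append_column` is an illustrative name, not from the paper):

```python
import numpy as np

def qr_append_column(Q, R, a):
    """Update a thin QR factorization A = Q @ R when a new column a is appended.

    One Gram-Schmidt step: O(mn) work per update instead of O(mn^2)
    for recomputing the factorization from scratch.
    """
    r = Q.T @ a                  # coordinates of a in the existing basis
    q = a - Q @ r                # residual orthogonal to span(Q)
    rho = np.linalg.norm(q)
    Q_new = np.column_stack([Q, q / rho])
    R_new = np.block([[R, r[:, None]],
                      [np.zeros((1, R.shape[1])), np.array([[rho]])]])
    return Q_new, R_new

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
Q, R = np.linalg.qr(A)           # thin factorization: Q is 6x3, R is 3x3
a = rng.normal(size=6)
Q2, R2 = qr_append_column(Q, R, a)
# Q2 @ R2 now reconstructs the augmented matrix [A | a]
```

In the incremental-learning setting, each new column would be a feature vector of incoming data, so the subspace spanned by Q grows online without refactoring everything seen so far.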
31. Impulsive Behaviour Detection System Using Machine Learning and IoT.
- Author
-
Raychaudhuri, Soumya Jyoti, Manjunath, Soumya, Srinivasan, Chithra Priya, Swathi, N., Sushma, S., Bhushan K. N., Nitin, and Narendra Babu, C.
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *BIOMETRIC identification , *RANDOM forest algorithms , *DECISION trees , *REINFORCEMENT learning - Abstract
Wearable devices equipped with a multitude of compact, lightweight biometric sensors are helpful in tracking real-time physiological data for healthcare-related analysis. However, a survey of devices in the smart-wearable market segment revealed that the contemporary focus is limited to capturing and displaying some biometrics, such as pulse rate, user movement, and calorie count, on a smart screen. Employing machine learning techniques can be particularly helpful in analyzing trends in user-specific biometric data for pre-emptive actions. This paper presents a real-time analysis that uses machine learning approaches to interpret this physiological data and alert users about their behavioral patterns through IoT. This research focuses on classifying the user's mood as agitated or non-agitated by analyzing the biometrics, helping the user draw meaningful conclusions and take suitable pre-emptive measures to refrain from unintentional impulsive outbursts. Machine learning algorithms such as polynomial regression with a threshold, decision trees, random forest ensembles, and variants of deep neural networks (DNNs) have been employed to analyse biometric patterns from experimental data acquired under different circumstances, detect the user's mood, and assign a score to the user. The proposed approach uses a reinforcement learning algorithm to calibrate the user's current temperament by taking intermediate user feedback and comparing the score with the temperament. The results reveal that the proposed system detects the user's mood fluctuations with higher accuracy and relevance than contemporary models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Explainable AI for Bioinformatics: Methods, Tools and Applications.
- Author
-
Karim, Md Rezaul, Islam, Tanhim, Shajalal, Md, Beyan, Oya, Lange, Christoph, Cochez, Michael, Rebholz-Schuhmann, Dietrich, and Decker, Stefan
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *MEDICAL informatics , *ARTIFICIAL intelligence , *COMPUTER vision , *BIOINFORMATICS - Abstract
Artificial intelligence (AI) systems utilizing deep neural networks and machine learning (ML) algorithms are widely used for solving critical problems in bioinformatics, biomedical informatics and precision medicine. However, complex ML models that are often perceived as opaque and black-box methods make it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for both end-users and decision-makers, as well as AI developers. In sensitive areas such as healthcare, explainability and accountability are not only desirable properties but also legally required for AI systems that can have a significant impact on human lives. Fairness is another growing concern, as algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable AI (XAI) aims to overcome the opaqueness of black-box models and to provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and identify factors that influence their outcomes. However, the majority of the state-of-the-art interpretable ML methods are domain-agnostic and have evolved from fields such as computer vision, automated reasoning or statistics, making direct application to bioinformatics problems challenging without customization and domain adaptation. In this paper, we discuss the importance of explainability and algorithmic transparency in the context of bioinformatics. We provide an overview of model-specific and model-agnostic interpretable ML methods and tools and outline their potential limitations. We discuss how existing interpretable ML methods can be customized and fit to bioinformatics research problems. Further, through case studies in bioimaging, cancer genomics and text mining, we demonstrate how XAI methods can improve transparency and decision fairness. 
Our review aims to provide valuable insights and serve as a starting point for researchers wanting to enhance explainability and decision transparency while solving bioinformatics problems. GitHub: https://github.com/rezacsedu/XAI-for-bioinformatics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Federated Learning Intellectual Capital Platform.
- Author
-
He, Chengying, Xiao, Bin, Chen, Xi, Xu, Qingzhen, and Lin, Jianwu
- Subjects
- *
INTELLECTUAL capital , *SUPERVISED learning , *REINFORCEMENT learning , *ARTIFICIAL neural networks , *MACHINE learning , *ARTIFICIAL intelligence , *BLOCKCHAINS - Abstract
In the era of artificial intelligence, trained neural network models have become new products of the information age. Most machine learning strategies currently used to train neural networks are supervised, so labeled training data have become new intellectual capital (IC). Due to commercial confidentiality, data cannot be shared directly among information companies, which in turn prevents them from integrating resources to train better models. We need a framework that encrypts neural network models trained on the data and provides model exchange rewards that can be used to incentivize data sharing and to protect the intellectual property (IP) and privacy of intellectual capital. Currently, federated learning provides a framework to train neural networks without compromising privacy, while blockchain-based trading systems can attract participants through a reward mechanism set by smart contracts. In this paper, we propose a blockchain-based federated learning algorithm that enables reliable data sharing while protecting data from leakage, and we design smart contracts based on a Shapley-value incentive mechanism to reward data providers. We design a platform for managing IC that combines federated learning and blockchain, called the Federated Learning Intellectual Capital Platform (FedLICP). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. What Is Machine Learning, Artificial Neural Networks and Deep Learning?—Examples of Practical Applications in Medicine.
- Author
-
Kufel, Jakub, Bargieł-Łączek, Katarzyna, Kocot, Szymon, Koźlik, Maciej, Bartnikowska, Wiktoria, Janik, Michał, Czogalik, Łukasz, Dudek, Piotr, Magiera, Mikołaj, Lis, Anna, Paszkiewicz, Iga, Nawrat, Zbigniew, Cebula, Maciej, and Gruszczyńska, Katarzyna
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *DEEP learning , *ARTIFICIAL intelligence - Abstract
Machine learning (ML), artificial neural networks (ANNs), and deep learning (DL) are all topics that fall under the heading of artificial intelligence (AI) and have gained popularity in recent years. ML involves the application of algorithms to automate decision-making processes using models that have not been manually programmed but have been trained on data. ANNs that are a part of ML aim to simulate the structure and function of the human brain. DL, on the other hand, uses multiple layers of interconnected neurons. This enables the processing and analysis of large and complex databases. In medicine, these techniques are being introduced to improve the speed and efficiency of disease diagnosis and treatment. Each of the AI techniques presented in the paper is supported with an example of a possible medical application. Given the rapid development of technology, the use of AI in medicine shows promising results in the context of patient care. It is particularly important to keep a close eye on this issue and conduct further research in order to fully explore the potential of ML, ANNs, and DL, and bring further applications into clinical use in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Using Machine Learning to Improve Cost and Duration Prediction Accuracy in Green Building Projects.
- Author
-
Darko, Amos, Glushakova, Iuliia, Boateng, Emmanuel B., and Chan, Albert P. C.
- Subjects
- *
ARTIFICIAL neural networks , *SUSTAINABLE buildings , *CONSTRUCTION projects , *COST overruns , *WEB development , *WEB-based user interfaces , *MACHINE learning , *GREEN technology - Abstract
A major source of risk in green building projects (GBPs) is inaccurate human prediction of the final project cost and duration, which in turn results in cost and schedule overruns (i.e., poor project performance). This paper presents promising new models to mitigate such risk based upon machine learning (ML). Historical data from 198 GBPs in Hong Kong were used to develop and train two fully connected deep neural networks (DNN) models to learn and predict cost and duration, respectively, based on green building rating (GBR) and other project parameters. The models can predict cost and duration with mean absolute percentage error (MAPE) values of 0.07 and 0.09, respectively. They were then integrated with support vector regression (SVR), and results indicated that the integrated DNN-SVR models improve prediction accuracy, decreasing the MAPE from 0.07 to 0.06 (cost) and 0.09 to 0.07 (duration), respectively. The validated models were for the first time deployed as a ML-based web application for automated, fast, and accurate GBP cost and duration prediction. The feature importance analysis results revealed that the most influential parameters on the GBP cost and duration are project area and weather, respectively, not the GBR. Theoretically, the outcomes of this study provide new insights into the impact of GBR on project cost and duration, which are useful for the promotion of GBPs to improve sustainability. Practically, the study provides policymakers and practitioners with novel ML-based models and a web application to improve GBP delivery performance. Green building projects help to combat climate change and improve our health, wellbeing, and quality of life, but they face two key challenges in their execution: cost overruns and schedule delays. To address these challenges, there is a need for accurate prediction of the final project cost and duration from the early stages of the project. 
The practical relevance of this study is in the development of the first data-driven and machine learning-based web application for addressing this need. New integrated optimized machine learning models are developed. For practitioners to have access to and use these models without the need to possess machine learning expertise, a corresponding easy-to-use web application is offered. From the design and construction stages of their project, practitioners only need to input the green building ratings in sustainable site, materials and waste, energy use, water use, health and wellbeing, and innovations and additions they want to achieve for the project. The project type and area (type and size), original budget, planned duration, and start month (SM) and year should also be input. Once this project data input is completed, the web application automatically and instantaneously predicts with a high level of accuracy the total costs and time needed to deliver the project successfully—on time and on budget. These cost and duration prediction outputs are a valuable tool in helping practitioners adjust green building project plans and budgets to develop more realistic and accurate project budgets and programs to avoid cost overruns and schedule delays. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
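For reference, the MAPE metric the abstract above reports (0.07 for cost, 0.09 for duration) is straightforward to compute; the cost figures below are illustrative, not the paper's data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error as a fraction (0.07 means 7%)."""
    return sum(abs((a - p) / a)
               for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical project costs (millions HKD) vs. model predictions
actual = [120.0, 95.0, 240.0]
predicted = [110.0, 100.0, 250.0]
err = mape(actual, predicted)
```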
36. Artificial intelligence model of fuel blendings as a step toward the zero emissions optimization of a 660 MWe supercritical power plant performance.
- Author
-
Amjad, Ahsan, Ashraf, Waqar Muhammad, Uddin, Ghulam Moeen, and Krzywanski, Jaroslaw
- Subjects
- *
ARTIFICIAL intelligence , *PLANT performance , *LIGNITE , *COAL mining , *BITUMINOUS coal , *POWER plants , *BIOMASS conversion , *MULTIPLE intelligences - Abstract
Accurately predicting the lower heating value (LHV) of fuel blends is crucial for optimizing a power plant. In this paper, we employ multiple artificial intelligence (AI) and machine learning-based models to predict the LHV of various fuel blends. Coal of two different ranks and two types of biomass are used in this study. One coal was imported South African bituminous coal, and the other was Thar lignite extracted from the Thar Coal Block-2 mine by the Sindh Engro Coal Mining Company, Pakistan. The two types of biomass, sugarcane bagasse and rice husk, were obtained locally from a sugar mill and a rice mill in the vicinity of Sahiwal, Punjab. Blends of bituminous coal with the other coal type and with both types of biomass were prepared at 10%, 20%, 30%, 40%, and 50% weight fractions, respectively. The calculation and model-development procedure resulted in 91 different AI-based models. The best is the Ridge Regressor, a high-level, end-to-end approach for fitting the model. The model can predict the LHV of bituminous coal blended with lignite and biomass across a wide range of fuel-blend shares. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
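The Ridge Regressor named above has a simple closed form, w = (XᵀX + αI)⁻¹Xᵀy. A minimal sketch with hypothetical blend compositions and LHV values (not the paper's measurements):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^{-1} X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

# Hypothetical blends: [bituminous, lignite, biomass] weight fractions
X = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.7, 0.0, 0.3],
              [0.5, 0.5, 0.0], [0.6, 0.2, 0.2]])
y = np.array([25.1, 24.0, 21.8, 20.5, 22.3])  # illustrative LHV, MJ/kg
w = ridge_fit(X, y, alpha=0.01)
lhv_pred = X @ w  # predicted LHV for each blend
```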
37. A 3D Structure Mapping-Based Efficient Topology Optimization Framework.
- Author
-
Kangjie Li, Wenjing Ye, and Yicong Gao
- Subjects
- *
ARTIFICIAL neural networks , *DEEP learning , *CONVOLUTIONAL neural networks , *TOPOLOGY - Abstract
It is well known that the computational cost of classic topology optimization (TO) methods increases rapidly with the size of the design problem because of the high-dimensional numerical simulation required at each iteration. Recently, the technical route of replacing the TO process with artificial neural network (ANN) models has gained popularity. These ANN models, once trained, can rapidly produce an optimized design solution for a given design specification. However, the complex mapping relationship between design specifications and corresponding optimized structures presents challenges in the construction of neural networks with good generalizability. In this paper, we propose a new design framework that uses deep learning techniques to accelerate the TO process. Specifically, we present an efficient topology optimization (ETO) framework in which structure update at each iteration is conducted on a coarse scale and a structure mapping neural network (SMapNN) is constructed to map the updated coarse structure to its corresponding fine structure. As such, fine-scale numerical simulations are replaced by coarse-scale simulations, thereby greatly reducing the computational cost. In addition, fragmentation and padding strategies are used to improve the trainability and adaptability of SMapNN, leading to a better generalizability. The efficiency and accuracy of the proposed ETO framework are verified using both benchmark and complex design tasks. It has been shown that with the SMapNN, TO designs of millions of elements can be completed within a few minutes on a personal computer. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Review on disease detection of plants using image processing and machine learning techniques.
- Author
-
Santhosh Kumar, P., Balakrishna, R., and Vinod Kumar, K.
- Subjects
- *
IMAGE processing , *ARTIFICIAL neural networks , *CROPS , *ARTIFICIAL intelligence , *BACK propagation , *DEEP learning , *MACHINE learning , *PLANT diseases - Abstract
Agriculture is essential to sustainable development, and smart farming combines image processing, artificial intelligence, deep learning, and the Internet of Things (IoT). The world population is increasing every day, and the rising demand on the agriculture industry makes it important to monitor crops collectively and improve plant growth in the field. Crops must be maintained both during their initial growth stages and at harvest time. In the approach surveyed here, image processing and artificial neural networks are used to detect diseases on leaves and to determine the correct time to harvest. Images captured by drones are segmented and converted into three disease-describing feature vectors: color, texture, and morphology. The morphology features give 95% accuracy, more than the other two feature types. This research paper proposes effective algorithms for disease detection, implemented with artificial neural networks in MATLAB. Manual detection of leaf or plant diseases requires substantial labor when maintaining a large crop farm, whereas automated detection can recognize the symptoms of different diseases at very early stages, as soon as they appear on crop leaves. The paper surveys various disease-classification techniques applicable to plant leaf disease detection, using artificial intelligence, neural network algorithms, and backpropagation for training on the datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
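The color/texture/morphology feature vectors described above can be illustrated with simplified stand-ins (mean color, gradient energy, area fraction). These are toy descriptors chosen for illustration, not the reviewed pipeline.

```python
import numpy as np

def leaf_features(rgb, mask):
    """Toy color, texture, and morphology features for a leaf image.

    rgb:  H x W x 3 float array in [0, 1]
    mask: H x W boolean mask marking leaf pixels
    """
    pixels = rgb[mask]
    color = pixels.mean(axis=0)                  # mean R, G, B over the leaf
    gray = rgb.mean(axis=2)
    texture = np.array([np.abs(np.diff(gray, axis=0)).mean(),
                        np.abs(np.diff(gray, axis=1)).mean()])  # gradient energy
    morphology = np.array([mask.mean()])         # leaf area fraction
    return color, texture, morphology

# Synthetic image with a crude "green" mask, for demonstration only
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
mask = img[:, :, 1] > 0.5
color, texture, morph = leaf_features(img, mask)
```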
39. Analysis of developments and hotspots of international research on sports AI.
- Author
-
Li, Jian, Li, Meiyue, and Lin, Hao
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *SPORTS sciences , *WEARABLE technology , *MACHINE learning - Abstract
In this paper, 1,538 papers retrieved with the keywords "sports artificial intelligence (AI)" on the Web of Science database since 2007 were taken as the data source, and the Cite Space V software was used to visualize and analyze them. A visual knowledge graph was used to streamline the countries, institutions and authors conducting sports AI research, discipline distribution, research hotspots and development trends in the past 15 years. Subsequently, its development direction and research progress were discussed. Sports AI was widely distributed, with the US, China and the UK leading the way. The most prolific authors and teams in research on sports AI were concentrated in American universities. Their main research direction is to develop and improve smart wearable devices based on machine learning and deep learning technologies for different groups of people. Research on sports AI involved multiple disciplines, which mainly applied and referred to research methodologies and theories on engineering, computer science and sports science. It could be seen from the frequency and centrality of keywords that in the current field of sports AI, machine learning is the main direction, artificial neural networks is the main algorithm, and practical and empirical research based on data mining is the focus. The research hotspots were divided into three major clusters: physical health promotion, sports injury prevention and control, and athletic performance enhancement. How to introduce intelligent technology into sports for a perfect integration still has an arduous and long way to go. Future development requires joint efforts and participation of scientific researchers, professionals and common people. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. A Review of Auto-Regressive Methods Applications to Short-Term Demand Forecasting in Power Systems.
- Author
-
Czapaj, Rafał, Kamiński, Jacek, and Sołtysik, Maciej
- Subjects
- *
DEMAND forecasting , *LOAD forecasting (Electric power systems) , *INDEPENDENT system operators , *ARTIFICIAL neural networks , *ELECTRIC power consumption , *ELECTRIC power distribution grids , *ARTIFICIAL intelligence - Abstract
The paper conducts a literature review of applications of autoregressive methods to short-term forecasting of power demand. This need is dictated by the advancement of modern forecasting methods and, in particular, by the good forecasting accuracy they achieve. The annual error of day-ahead power demand forecasts for the Polish National Power Grid is approximately 1%; therefore, the main objective of the review is to verify whether it is possible to improve this accuracy while keeping financial outlays and effort to a minimum. The methods that fulfil these conditions are autoregressive methods; the paper therefore focuses on autoregressive methods, which are less time-consuming and, as a result, cheaper to develop and apply. The review ranks the forecasting models in terms of the forecasting accuracy reported in the literature on the subject, which enables the selection of models that may improve the accuracy currently achieved by the transmission system operator. Thanks to this approach, a transparent set of forecasting methods and models was obtained, together with knowledge of their potential for short-term forecasting of electricity demand in the national power system. Articles in which the MAPE error was used to assess the quality of short-term forecasts were analyzed. The investigation included 47 articles, several dozen forecasting methods, and 264 forecasting models. The articles date from 1997 onward and, apart from the autoregressive methods, also cover methods and models that use explanatory variables (non-autoregressive ones). The input data used come from the period 1998–2014. The analysis included 25 power systems located on four continents (Asia, Europe, North America, and Australia), published by 44 different research teams.
The results of the review show that in the autoregressive methods applied to forecasting short-term power demand, there is potential to improve forecasting effectiveness in power systems. The most promising forecasting models using the autoregressive approach, based on the review, include Fuzzy Logic, Artificial Neural Networks, Wavelet Artificial Neural Networks, Adaptive Neuro-Fuzzy Inference Systems, Genetic Algorithms, Fuzzy Regression, and Data Envelopment Analysis. These methods make it possible to achieve short-term electricity demand forecasts with hourly resolution at error levels below 1%, which confirms the authors' assumption about the potential of autoregressive methods. Other forecasting models with high effectiveness may also prove useful in forecasting by electricity system operators. The paper also discusses the classical methods of Artificial Intelligence, Data Mining, and Big Data, and the state of research in short-term power demand forecasting in power systems using autoregressive and non-autoregressive methods and models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
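The autoregressive baseline the review above ranks against can be sketched as a least-squares AR(p) fit with iterated one-step forecasts. The 24-hour sinusoidal load series below is synthetic (a pure sinusoid satisfies an exact AR(2) recurrence, so the fit is near-perfect):

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p): y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    rows = len(series) - p
    # Column k holds the lag-(k+1) values aligned with targets series[p:]
    X = np.column_stack([np.ones(rows)] +
                        [series[p - k - 1:len(series) - k - 1]
                         for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

def forecast_ar(series, coef, steps):
    """Iterated one-step-ahead forecasts from the fitted coefficients."""
    hist = list(series)
    p = len(coef) - 1
    for _ in range(steps):
        hist.append(coef[0] + sum(coef[k + 1] * hist[-k - 1]
                                  for k in range(p)))
    return hist[len(series):]

# Synthetic hourly demand with a 24-hour cycle
t = np.arange(200)
load = 2.0 + np.sin(2 * np.pi * t / 24)
coef = fit_ar(load, p=2)
pred = forecast_ar(load, coef, steps=5)
```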
41. Derivatives of feed-forward neural networks and their application in real-time market risk management.
- Author
-
Ratku, Antal and Neumann, Dirk
- Subjects
- *
MARKETING management , *ARTIFICIAL neural networks , *AUTOMATIC differentiation , *DERIVATIVE securities , *PRICE sensitivity - Abstract
Market risk management of financial derivatives requires the efficient calculation of their price sensitivities with respect to changes in market factors. This paper shows how a deep feed-forward neural network which has been trained for pricing derivative instruments can be efficiently used to calculate these sensitivities as well. The proposed method is a fast and easily implementable alternative approach to automatic differentiation, and it simultaneously calculates all the first- and second-order derivatives of a multilayer feed-forward neural network with respect to its input features. The paper quantifies the performance improvement of the proposed method over a recent, publicly available implementation of automatic differentiation for a wide range of network sizes. The number of input parameters in these networks corresponds to those of commonly used financial models with stochastic volatility. The numerical accuracy of the proposed sensitivity calculations is demonstrated with a case study, calculating price sensitivities of European options under stochastic volatility. While the paper focuses on financial applications, the results presented herein are applicable to all deep feed-forward neural networks with sufficiently smooth activation functions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
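The analytic sensitivity calculation described above can be sketched for first-order derivatives: accumulate the Jacobian layer by layer with the chain rule during the forward pass, then verify it against finite differences. The network size and parameters below are random placeholders, not a pricing model.

```python
import numpy as np

def mlp_and_jacobian(x, weights, biases):
    """Forward pass of a tanh MLP plus the analytic Jacobian dy/dx.

    For each tanh layer, the chain rule gives J <- diag(1 - a^2) @ W @ J;
    the final linear layer contributes its weight matrix directly.
    """
    a, J = np.asarray(x, dtype=float), np.eye(len(x))
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)
        J = (W * (1.0 - a**2)[:, None]) @ J  # row-scale W by sech^2(z)
    y = weights[-1] @ a + biases[-1]         # linear output layer
    return y, weights[-1] @ J

# Tiny 3-4-2 network with random placeholder parameters
rng = np.random.default_rng(1)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
x = rng.standard_normal(3)
y, J = mlp_and_jacobian(x, weights, biases)
```

One forward pass yields all first-order sensitivities at once, which is the efficiency argument the paper makes against repeated automatic differentiation calls.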
42. Imperative Role of Machine Learning Algorithm for Detection of Parkinson's Disease: Review, Challenges and Recommendations.
- Author
-
Rana, Arti, Dumka, Ankur, Singh, Rajesh, Panda, Manoj Kumar, Priyadarshi, Neeraj, and Twala, Bhekisipho
- Subjects
- *
PARKINSON'S disease , *MACHINE learning , *ARTIFICIAL intelligence , *SYMPTOMS , *ARTIFICIAL neural networks , *MOVEMENT disorders - Abstract
Parkinson's disease (PD) is a neurodegenerative disease that affects the neural, behavioral, and physiological systems of the brain. This disease is also known as tremor. The common symptoms of this disease are a slowness of movement known as 'bradykinesia', loss of automatic movements, speech/writing changes, and difficulty with walking at early stages. To solve these issues and to enhance the diagnostic process of PD, machine learning (ML) algorithms have been implemented for the categorization of subjective disease and healthy controls (HC) with comparable medical appearances. To provide a far-reaching outline of data modalities and artificial intelligence techniques that have been utilized in the analysis and diagnosis of PD, we conducted a literature analysis of research papers published up until 2022. A total of 112 research papers were included in this study, with an examination of their targets, data sources and different types of datasets, ML algorithms, and associated outcomes. The results showed that ML approaches and new biomarkers have a lot of promise for being used in clinical decision-making, resulting in a more systematic and informed diagnosis of PD. In this study, some major challenges were addressed along with a future recommendation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Artificial intelligence based learning for wireless application – A survey.
- Author
-
Raja, L., Velmurugan, S., Shanthi, G., and Nirmala, S.
- Subjects
- *
ARTIFICIAL intelligence , *ARTIFICIAL neural networks , *DEEP learning , *NEXT generation networks , *MACHINE learning , *COMPUTER network traffic - Abstract
Next-generation wireless networks are evolving into complex systems due to broadening service requirements, application heterogeneity, and the networking of many devices. In recent times, the major step forward in machine learning techniques has been deep learning. However, in wireless/heterogeneous networks, the application of deep learning to network traffic control is relatively new. The key challenges in wireless backbone networks since the advancement of wireless networks are resource allocation and efficient network traffic control, such as routing. In large-scale networks and complex radio environments, Deep Learning (DL) offers a way to add intelligence to wireless networks. Complex wireless networks with many nodes and variable link quality can likewise be investigated by deep learning. This paper presents a perspective on harnessing next-generation communication networks with artificial neural networks. This work helps readers explore the unsolved issues to pursue in their research and deeply understand wireless network design with DL-based state-of-the-art facilities. In this work, we integrate deep learning and wireless networking research in a wide-ranging survey. Finally, this paper summarizes the challenges and benefits of applying ML and AI to next-generation wireless systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Asynchronous Federated Learning for Improved Cardiovascular Disease Prediction Using Artificial Intelligence.
- Author
-
Khan, Muhammad Amir, Alsulami, Musleh, Yaqoob, Muhammad Mateen, Alsadie, Deafallah, Saudagar, Abdul Khader Jilani, AlKhathami, Mohammed, and Farooq Khattak, Umar
- Subjects
- *
ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *ASYNCHRONOUS learning , *CARDIOVASCULAR diseases , *DEEP learning - Abstract
Healthcare professionals consider predicting heart disease an essential task and deep learning has proven to be a promising approach for achieving this goal. This research paper introduces a novel method called the asynchronous federated deep learning approach for cardiac prediction (AFLCP), which combines a heart disease dataset and deep neural networks (DNNs) with an asynchronous learning technique. The proposed approach employs a method for asynchronously updating the parameters of DNNs and incorporates a temporally weighted aggregation technique to enhance the accuracy and convergence of the central model. To evaluate the effectiveness of the proposed AFLCP method, two datasets with various DNN architectures are tested, and the results demonstrate that the AFLCP approach outperforms the baseline method in terms of both communication cost and model accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
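The temporally weighted asynchronous aggregation described above can be sketched as follows. This is a simplified rule (exponential staleness discount, fixed blend factor), not the paper's exact AFLCP update.

```python
import numpy as np

def temporally_weighted_aggregate(global_w, client_ws, client_rounds,
                                  current_round, decay=0.5, mix=0.5):
    """Aggregate asynchronously received client models.

    client_rounds[i] is the round at which client i last synchronized;
    staler updates receive exponentially smaller weights
    (decay ** staleness), and the weighted average is then blended
    with the current global model.
    """
    w = np.array([decay ** (current_round - r) for r in client_rounds],
                 dtype=float)
    w /= w.sum()
    agg = sum(wi * cw for wi, cw in zip(w, client_ws))
    return (1.0 - mix) * global_w + mix * agg

# Toy parameter vectors from two clients, the second one round stale
global_w = np.zeros(2)
clients = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
new_global = temporally_weighted_aggregate(global_w, clients,
                                           client_rounds=[5, 4],
                                           current_round=5)
```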
45. Recent Progresses in Machine Learning Assisted Raman Spectroscopy.
- Author
-
Qi, Yaping, Hu, Dan, Jiang, Yucheng, Wu, Zhenping, Zheng, Ming, Chen, Esther Xinyi, Liang, Yong, Sadi, Mohammad A., Zhang, Kang, and Chen, Yong P.
- Subjects
- *
DEEP learning , *MACHINE learning , *RAMAN spectroscopy , *ARTIFICIAL neural networks , *CONVOLUTIONAL neural networks , *SUPPORT vector machines - Abstract
With the development of Raman spectroscopy and the expansion of its application domains, conventional methods for spectral data analysis have manifested many limitations. Exploring new approaches to facilitate Raman spectroscopy and analysis has become an area of intensifying focus for research. It has been demonstrated that machine learning techniques can more efficiently extract valuable information from spectral data, creating unprecedented opportunities for analytical science. This paper outlines traditional and more recently developed statistical methods that are commonly used in machine learning (ML) and ML‐algorithms for different Raman spectroscopy‐based classification and recognition applications. The methods include Principal Component Analysis, K‐Nearest Neighbor, Random Forest, and Support Vector Machine, as well as neural network‐based deep learning algorithms such as Artificial Neural Networks, Convolutional Neural Networks, etc. The bulk of the review is dedicated to the research advances in machine learning applied to Raman spectroscopy from several fields, including material science, biomedical applications, food science, and others, which reached impressive levels of analytical accuracy. The combination of Raman spectroscopy and machine learning offers unprecedented opportunities to achieve high throughput and fast identification in many of these application fields. The limitations of current studies are also discussed and perspectives on future research are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
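Principal Component Analysis, the first method listed above, can be sketched for spectral data via the SVD; the two-band Gaussian spectra below are synthetic, standing in for real Raman measurements.

```python
import numpy as np

def pca(spectra, n_components):
    """PCA of a (samples x wavenumbers) spectral matrix via SVD."""
    centered = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # sample coordinates
    loadings = Vt[:n_components]                      # spectral components
    explained = S[:n_components] ** 2 / (S ** 2).sum()
    return scores, loadings, explained

# Synthetic spectra: two Gaussian bands with random intensities + noise
x = np.linspace(0, 1, 200)
rng = np.random.default_rng(2)
bands = np.vstack([np.exp(-((x - 0.3) / 0.05) ** 2),
                   np.exp(-((x - 0.7) / 0.05) ** 2)])
spectra = rng.random((30, 2)) @ bands \
          + 0.01 * rng.standard_normal((30, 200))
scores, loadings, explained = pca(spectra, 2)
```

With a two-band signal, the first two components capture nearly all the variance, which is why PCA is a common first step before classifiers such as SVM or KNN.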
46. A Comprehensive Review of Bio-Inspired Optimization Algorithms Including Applications in Microelectronics and Nanophotonics.
- Author
-
Jakšić, Zoran, Devi, Swagata, Jakšić, Olga, and Guha, Koushik
- Subjects
- *
MICROELECTRONICS , *NANOPHOTONICS , *MACHINE learning , *ARTIFICIAL neural networks , *CONVOLUTIONAL neural networks - Abstract
The application of artificial intelligence in everyday life is becoming all-pervasive and unavoidable. Within that vast field, a special place belongs to biomimetic/bio-inspired algorithms for multiparameter optimization, which find their use in a large number of areas. Novel methods and advances are being published at an accelerated pace. Because of that, in spite of the fact that there are a lot of surveys and reviews in the field, they quickly become dated. Thus, it is of importance to keep pace with the current developments. In this review, we first consider a possible classification of bio-inspired multiparameter optimization methods because papers dedicated to that area are relatively scarce and often contradictory. We proceed by describing in some detail some more prominent approaches, as well as those most recently published. Finally, we consider the use of biomimetic algorithms in two related wide fields, namely microelectronics (including circuit design optimization) and nanophotonics (including inverse design of structures such as photonic crystals, nanoplasmonic configurations and metamaterials). We attempted to keep this broad survey self-contained so it can be of use not only to scholars in the related fields, but also to all those interested in the latest developments in this attractive area. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
47. AI-Based Glioma Grading for a Trustworthy Diagnosis: An Analytical Pipeline for Improved Reliability.
- Author
-
Pitarch, Carla, Ribas, Vicent, and Vellido, Alfredo
- Subjects
- *
RELIABILITY (Personality trait) , *DIGITAL image processing , *DEEP learning , *CLINICAL decision support systems , *GLIOMAS , *MACHINE learning , *MAGNETIC resonance imaging , *ARTIFICIAL intelligence , *RESEARCH funding , *AUTOMATION , *COMPUTER-aided diagnosis , *ARTIFICIAL neural networks , *PREDICTION models , *TUMOR grading , *ALGORITHMS , *TRUST - Abstract
Simple Summary: Accurately grading gliomas, which are the most common and aggressive malignant brain tumors in adults, poses a significant challenge for radiologists. This study explores the application of Deep Learning techniques in assisting tumor grading using Magnetic Resonance Images (MRIs). By analyzing a glioma database sourced from multiple public datasets and comparing different settings, the aim of this study is to develop a robust and reliable grading system. The study demonstrates that by focusing on the tumor region of interest and augmenting the available data, there is a significant improvement in both the accuracy and confidence of tumor grade classifications. While successful in differentiating low-grade gliomas from high-grade gliomas, the accurate classification of grades 2, 3, and 4 remains challenging. The research findings have significant implications for advancing the development of a non-invasive, robust, and trustworthy data-driven system to support clinicians in the diagnosis and therapy planning of glioma patients. Glioma is the most common type of tumor in humans originating in the brain. According to the World Health Organization, gliomas can be graded on a four-stage scale, ranging from the most benign to the most malignant. The grading of these tumors from image information is a far from trivial task for radiologists and one in which they could be assisted by machine-learning-based decision support. However, the machine learning analytical pipeline is also fraught with perils stemming from different sources, such as inadvertent data leakage, adequacy of 2D image sampling, or classifier assessment biases. In this paper, we analyze a glioma database sourced from multiple datasets using a simple classifier, aiming to obtain a reliable tumor grading and, on the way, we provide a few guidelines to ensure such reliability. 
Our results reveal that by focusing on the tumor region of interest and using data augmentation techniques we significantly enhanced the accuracy and confidence in tumor classifications. Evaluation on an independent test set resulted in an AUC-ROC of 0.932 in the discrimination of low-grade gliomas from high-grade gliomas, and an AUC-ROC of 0.893 in the classification of grades 2, 3, and 4. The study also highlights the importance of providing, beyond generic classification performance, measures of how reliable and trustworthy the model's output is, thus assessing the model's certainty and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
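The AUC-ROC figures reported above (0.932 and 0.893) correspond to the rank (Mann-Whitney) formulation of the metric, sketched here with illustrative labels and scores, not the study's data.

```python
def auc_roc(labels, scores):
    """AUC-ROC via its rank formulation: the probability that a randomly
    chosen positive outscores a randomly chosen negative, ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative grading scores: 1 = high-grade glioma, 0 = low-grade
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
auc = auc_roc(labels, scores)
```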
48. Intellectual property protection of DNN models.
- Author
-
Peng, Sen, Chen, Yufei, Xu, Jie, Chen, Zizhuo, Wang, Cong, and Jia, Xiaohua
- Subjects
- *
INTELLECTUAL property , *ARTIFICIAL neural networks , *NATURAL language processing , *DEEP learning , *ARTIFICIAL intelligence , *IMAGE recognition (Computer vision) - Abstract
Deep learning has been widely applied in solving many tasks, such as image recognition, speech recognition, and natural language processing. It requires a high-quality dataset, advanced expert knowledge, and enormous computation to train a large-scale Deep Neural Network (DNN) model, which makes it valuable enough to be protected as Intellectual Property (IP). Defending DNN models against IP violations such as illegal usage, replication, and reproduction is particularly important to the healthy development of deep learning techniques. Many approaches have been developed to protect the DNN model IP, such as DNN watermarking, DNN fingerprinting, DNN authentication, and inference perturbation. Given its significant importance, DNN IP protection is still in its infancy stage. In this paper, we present a comprehensive survey of the existing DNN IP protection approaches. We first summarize the deployment mode for DNN models and describe the DNN IP protection problem. Then we categorize the existing protection approaches based on their protection strategies and introduce them in detail. Finally, we compare these approaches and discuss future research topics in DNN IP protection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Deep Neural Networks for the Estimation of Masonry Structures Failures under Rockfalls.
- Author
-
Mavrouli, Olga, Skentou, Athanasia D., Carbonell, Josep Maria, Tsoukalas, Markos Z., Núñez-Andrés, M. Amparo, and Asteris, Panagiotis G.
- Subjects
- *
ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *MASONRY , *FINITE element method , *ROCKFALL - Abstract
Although the principal aim of rockfall management is to prevent rock boulders from reaching buildings, rather than having buildings resist the boulder impacts, there usually exists a residual risk that has to be assessed, even when structural protection measures are taken. The evaluation of the expected damage of buildings due to rockfalls using empirical data from past events is not always possible, as transferring and applying damage observations from one area to another can be unrealistic. To simulate potential rockfall scenarios and their damage to buildings, numerical methods can be an alternative. However, due to their increased requirements in expertise and computational cost, their integration into risk analysis is limited, and simpler tools to assess the rockfall vulnerability of buildings are needed. This paper focuses on the application of artificial intelligence (AI) methods for predicting the expected damage of masonry walls subjected to rockfall impacts. First, a damage database with 672 datasets was created numerically using the particle finite element method and the finite element method. The input variables are the rock volume (VR), the rock velocity (RV), the masonry wall thickness (t), and the masonry tensile strength (f_m). The output variable is a damage index (DI) equal to the percentage of the damaged wall area. Different AI algorithms were investigated, and the ANN LM 4-21-1 model was selected as the optimal one for assessing the expected wall damage. The optimum model is provided here (a) as an analytical equation and (b) in the form of contour graphs mapping the DI value. Given the VR and the RV, the DI can be used directly as an input for the vulnerability of masonry walls in the quantitative rockfall risk assessment equation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. A Business Management Resource-Scheduling Method based on Deep Learning Algorithm.
- Author
-
Wang, Jing
- Subjects
- *
DEEP learning , *MACHINE learning , *ARTIFICIAL neural networks , *INDUSTRIAL management , *ARTIFICIAL intelligence , *PROJECT management - Abstract
As China's economic level and industrial volume continue to grow, traditional methods of business management and resource scheduling show significant limitations and misalignments with China's daily economic activities. Traditional industrial and commercial resource scheduling relies heavily on manual labor. When confronted with the requirements of large-scale project management, a purely manual mode cannot track the real-time progress of a project flexibly, effectively, and in a timely manner, so the corresponding resource-scheduling work cannot be completed efficiently, which in turn affects the overall progress of the project. Deep neural networks have high application potential for solving extremely complex and highly nonlinear optimization problems. Thanks to the efforts of numerous researchers, artificial intelligence and deep learning research have advanced rapidly over the past few decades. Combined with GPU technology, deep learning frameworks can provide a practical, feasible optimization scheme and corresponding solution path for an extremely complex optimization problem in a very short amount of time. This paper therefore explores the application of deep learning technology to industrial and commercial resource-scheduling management. By analyzing the benefits of deep learning technology and the bottlenecks of existing industrial and commercial resource scheduling, a real-time optimized industrial and commercial resource-scheduling model based on deep learning technology is developed and evaluated on a corresponding data set. The test results demonstrate that the proposed resource-scheduling model offers strong real-time performance and high operational efficiency and can assist engineers in completing the corresponding resource-scheduling tasks. [ABSTRACT FROM AUTHOR]
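The abstract does not specify the model's architecture or input features, so the following is purely a generic illustration of the pattern it describes (a neural network replacing manual scheduling decisions): a small network scores candidate task-resource assignments and the scheduler picks the best. The feature choices, layer sizes, and random weights are all assumptions, not the paper's method.

```python
import numpy as np

# Generic illustration: a tiny feedforward network that scores candidate
# task-resource assignments. Weights are random placeholders; a real model
# would be trained on historical scheduling outcomes.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 3))    # hidden layer: 8 neurons over 3 features
b1 = rng.normal(size=8)
W2 = rng.normal(size=8)         # scalar-score output layer
b2 = rng.normal()

def score(features):
    """Score one candidate assignment. Features here are assumed to be
    (task size, current resource load, estimated duration)."""
    h = np.maximum(0.0, W1 @ np.asarray(features) + b1)  # ReLU hidden layer
    return float(W2 @ h + b2)

# Scheduler step: evaluate all candidates and pick the highest-scoring one.
candidates = [(2.0, 0.5, 1.0), (1.0, 0.9, 3.0), (4.0, 0.1, 2.0)]
best = max(range(len(candidates)), key=lambda i: score(candidates[i]))
```

Because scoring each candidate is an independent matrix product, this step batches and parallelizes naturally on a GPU, which is the property the abstract appeals to for real-time performance.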
- Published
- 2022
- Full Text
- View/download PDF