14 results
Search Results
2. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
Priyadarshini, Ishaani
- Subjects
OPTIMIZATION algorithms, BIOLOGICALLY inspired computing, DEEP learning, MACHINE learning, METAHEURISTIC algorithms, PROBLEM solving, ALGORITHMS
- Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Improved FunkSVD Algorithm Based on RMSProp.
- Author
Yue, Xiaochen and Liu, Qicheng
- Subjects
ALGORITHMS, DEEP learning, MACHINE learning, MATHEMATICAL optimization, PROBLEM solving
- Abstract
To address the low accuracy of the traditional FunkSVD recommendation algorithm, an improved FunkSVD algorithm (RM-FS) is proposed. RM-FS improves the traditional FunkSVD algorithm by using RMSProp, a deep learning optimization algorithm. The RM-FS algorithm not only mitigates the accuracy loss that iterative oscillation causes in the traditional FunkSVD algorithm but also alleviates the impact of data sparseness on accuracy, thereby improving on the traditional algorithm. The experimental results show that the proposed RM-FS algorithm effectively improves recommendation accuracy, outperforming both the traditional FunkSVD recommendation algorithm and other improved FunkSVD variants. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
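Entry 3 combines FunkSVD-style matrix factorization with RMSProp's per-parameter adaptive step sizes. The paper's exact update rule and hyperparameters are not given in the abstract, so the following is only a minimal sketch under common conventions; the function name, learning rate, decay, and regularization values are my own choices, and zeros are assumed to mean "missing rating".

```python
import numpy as np

def funksvd_rmsprop(R, k=2, lr=0.02, decay=0.9, eps=1e-8, epochs=500, reg=0.02, seed=0):
    """Factorize R ~ P @ Q.T on observed entries only, stepping each latent
    factor with RMSProp (gradient scaled by a running RMS of past gradients)
    instead of plain SGD, which damps the iterative oscillation the
    abstract mentions."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    sP = np.zeros_like(P)  # running mean of squared gradients for P rows
    sQ = np.zeros_like(Q)  # running mean of squared gradients for Q rows
    observed = np.argwhere(R > 0)  # assumption: zero entries are unrated
    for _ in range(epochs):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]
            gP = -err * Q[i] + reg * P[u]  # gradient of squared error + L2
            gQ = -err * P[u] + reg * Q[i]
            sP[u] = decay * sP[u] + (1 - decay) * gP ** 2  # RMSProp accumulator
            sQ[i] = decay * sQ[i] + (1 - decay) * gQ ** 2
            P[u] -= lr * gP / (np.sqrt(sP[u]) + eps)
            Q[i] -= lr * gQ / (np.sqrt(sQ[i]) + eps)
    return P, Q
```

On a toy rating matrix, `P @ Q.T` should closely reproduce the observed entries after training, with the RMSProp scaling keeping the per-update step size roughly bounded by `lr`.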
4. Diversity Evolutionary Policy Deep Reinforcement Learning.
- Author
Liu, Jian and Feng, Liming
- Subjects
DEEP learning, REINFORCEMENT learning, MACHINE learning, CROSS-entropy method, ALGORITHMS, PROBLEM solving
- Abstract
Reinforcement learning algorithms based on policy gradients may fall into local optima due to gradient disappearance during the update process, which in turn limits the exploration ability of the agent. To solve this problem, this paper combines the cross-entropy method (CEM) from evolutionary policy search, the maximum mean discrepancy (MMD), and the twin delayed deep deterministic policy gradient algorithm (TD3) into a diversity evolutionary policy deep reinforcement learning (DEPRL) algorithm. Using the maximum mean discrepancy as a measure of the distance between policies, some policies in the population maximize their distance from the previous generation of policies while maximizing cumulative return during the gradient update. Furthermore, combining cumulative return and inter-policy distance into the population fitness encourages more diversity in the offspring policies, which in turn reduces the risk of falling into local optima due to gradient disappearance. Results in the MuJoCo test environment show that DEPRL achieves excellent performance on continuous control tasks; in the Ant-v2 environment in particular, DEPRL's final return improved by nearly 20% compared to TD3. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
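The key ingredient of entry 4 is using MMD as a distance between policies. A common sample-based form, which the abstract does not spell out, is the (biased) RBF-kernel estimator applied to two batches of actions that the policies produce on the same states; the function name and bandwidth below are my own choices, offered only as a sketch of the idea.

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased squared maximum mean discrepancy between sample batches X and Y
    (rows = samples) under an RBF kernel: 0 when the batches coincide, and
    larger the further apart the generating distributions are."""
    def kernel(A, B):
        # pairwise squared Euclidean distances, then RBF
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    n, m = len(X), len(Y)
    return (kernel(X, X).sum() / n**2
            + kernel(Y, Y).sum() / m**2
            - 2.0 * kernel(X, Y).sum() / (n * m))
```

A fitness of the form `return + beta * mmd_rbf(actions_new, actions_prev)` would then reward offspring policies for both cumulative return and behavioral distance from the previous generation, in the spirit the abstract describes; `beta` is a hypothetical weighting knob.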
5. Application of Deep Learning Algorithms in Determination of Trace Rare Earth Elements of Cerium Group in Rocks and Minerals.
- Author
Ma, Sumin and Huang, Wenhui
- Subjects
DEEP learning, CERIUM group, RARE earth metals, MACHINE learning, PROBLEM solving, MINERALS, ALGORITHMS
- Abstract
Since deep learning's breakthrough in object classification in 2012, extraordinary achievements have been made in the field of target detection, but the high time and space complexity of deep-learning-based detection networks has hindered the technology's application in actual products. To solve this problem, this paper first uses the MobileNet classification network to optimize the Faster R-CNN target detection network. Experimental results on the rare earth element detection data set show that the MobileNet classification network alone is not suitable for optimizing the Faster R-CNN network. The paper then proposes a classification network that combines VGG16 and MobileNet and uses this fusion network to optimize the Faster R-CNN target detection network. Experimental results on the same data set show that the Faster R-CNN network optimized by the fusion classification network combines the advantages of the VGG16-based and MobileNet-based Faster R-CNN detectors for rare earth element detection. The innovation of this article is that results on 5 time series data sets show that CDA-WR has better predictive performance than other ELM variant models. Based on deep learning, the effect of determining trace cerium-group elements in rocks and minerals is improved by more than 50%. The studied target detection and recognition methods are integrated into the intelligent robot used in this project, giving the robot the ability to accurately detect target objects in real time. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Face Mask Wearing Detection Algorithm Based on Improved YOLO-v4.
- Author
Yu, Jimin, Zhang, Wei, and Berretti, Stefano
- Subjects
PROBLEM solving, DEEP learning, FEATURE extraction, REDUNDANCY in engineering, ALGORITHMS, LEARNING ability, MACHINE learning
- Abstract
To solve the problems of low accuracy, poor real-time performance, and poor robustness caused by complex environments, this paper proposes a face mask recognition and standard-wearing detection algorithm based on an improved YOLO-v4. Firstly, an improved CSPDarkNet53 is introduced into the trunk feature extraction network, which reduces the computing cost of the network and improves the learning ability of the model. Secondly, an adaptive image scaling algorithm reduces computation and redundancy effectively. Thirdly, an improved PANet structure is introduced so that the feature layers carry more semantic information. Finally, a face mask detection data set is built according to the standard wearing of masks. Based on the deep learning object detection algorithm, a variety of evaluation indexes are compared to assess the effectiveness of the model. The comparisons show that the mAP of face mask recognition reaches 98.3% and the frame rate reaches 54.57 FPS, which is more accurate than existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Crowd counting method based on cross column fusion attention mechanism.
- Author
Cui, Xiao, Zhang, Zhi-Feng, Zheng, Qian, and Cao, Jie
- Subjects
DEEP learning, PROBLEM solving, MODULAR coordination (Architecture), FEATURE extraction, ALGORITHMS, MACHINE learning
- Abstract
Deep learning has made substantial progress in crowd density estimation, but existing methods still struggle with large population density, background interference, and scale change, all of which make counting people difficult. To solve these problems, we propose a crowd counting method based on a cross-column fusion attention mechanism. First, the first ten layers of VGG16, with their good transfer and feature extraction ability, are used as the front-end network to preliminarily extract head features. Then, a cross-column fusion attention module is designed. In this module, feature maps are fused across columns so that the network contains richer deep and shallow features. At the same time, to alleviate background interference, the attention mechanism guides the network to focus on head positions in the image: different weights are assigned to different positions according to the attention score map, highlighting the crowd and weakening the background, and finally yielding a high-quality density map. In addition, a shallow convolution module is designed as another branch; its output feature map is fused with the output of the cross-column fusion attention module to handle scale change effectively. Finally, in the last layer of the network, a 1 × 1 convolution layer replaces the fully connected layer, so that fewer network parameters are used, computation is reduced, and the crowd density map is regressed. The experimental results show that the mean absolute error and mean square error of the proposed algorithm are significantly lower than those of the comparison algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives.
- Author
Bekbolatova, Molly, Mayer, Jonathan, Ong, Chi Wei, and Toma, Milan
- Subjects
HEALTH care industry, DEEP learning, PSYCHOLOGICAL burnout, PROBLEM solving, DRUG discovery, NATURAL language processing, OPERATIVE surgery, ENDOSCOPIC ultrasonography, ARTIFICIAL intelligence, MEDICAL care, TASK performance, MACHINE learning, VACCINE development, DIAGNOSTIC imaging, PREDICTION models, POPULATION health, ARTIFICIAL neural networks, DECISION making in clinical medicine, CONTACT tracing, PUBLIC opinion, ALGORITHMS, COVID-19 pandemic, TELEMEDICINE, HEALTH care rationing
- Abstract
Artificial intelligence (AI) has emerged as a crucial tool in healthcare with the primary aim of improving patient outcomes and optimizing healthcare delivery. By harnessing machine learning algorithms, natural language processing, and computer vision, AI enables the analysis of complex medical data. The integration of AI into healthcare systems aims to support clinicians, personalize patient care, and enhance population health, all while addressing the challenges posed by rising costs and limited resources. As a subdivision of computer science, AI focuses on the development of advanced algorithms capable of performing complex tasks that were once reliant on human intelligence. The ultimate goal is to achieve human-level performance with improved efficiency and accuracy in problem-solving and task execution, thereby reducing the need for human intervention. Various industries, including engineering, media/entertainment, finance, and education, have already reaped significant benefits by incorporating AI systems into their operations. Notably, the healthcare sector has witnessed rapid growth in the utilization of AI technology. Nevertheless, there remains untapped potential for AI to truly revolutionize the industry. It is important to note that despite concerns about job displacement, AI in healthcare should not be viewed as a threat to human workers. Instead, AI systems are designed to augment and support healthcare professionals, freeing up their time to focus on more complex and critical tasks. By automating routine and repetitive tasks, AI can alleviate the burden on healthcare professionals, allowing them to dedicate more attention to patient care and meaningful interactions. However, legal and ethical challenges must be addressed when embracing AI technology in medicine, alongside comprehensive public education to ensure widespread acceptance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm.
- Author
Mohamed, Soha Abd El-Moamen, Mohamed, Marghany Hassan, Farghally, Mohammed F., and Radac, Mircea-Bogdan
- Subjects
FEEDFORWARD neural networks, DEEP learning, ARTIFICIAL neural networks, PROBLEM solving, MACHINE learning, ALGORITHMS
- Abstract
In this paper, a proposed algorithm that dynamically changes the neural network structure is presented. The structure is changed based on features of the cascade correlation algorithm, an important architecture and supervised learning algorithm in which an artificial neural network is grown to solve the problem at hand. This process optimizes the architecture of the network, which is intended to accelerate the learning process and produce better generalization performance. Many researchers have to date proposed growing algorithms to optimize feedforward neural network architectures. The proposed algorithm has been tested on various medical data sets, and the results show that it yields better accuracy and flexibility. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. 40 years of actigraphy in sleep medicine and current state of the art algorithms.
- Author
Patterson, Matthew R., Nunes, Adonay A. S., Gerstel, Dawid, Pilkar, Rakesh, Guthrie, Tyler, Neishabouri, Ali, and Guo, Christine C.
- Subjects
DEEP learning, EVALUATION of medical care, SLEEP-wake cycle, INTERNAL medicine, PROBLEM solving, RESEARCH evaluation, ACTIGRAPHY, WEARABLE technology, REGRESSION analysis, MACHINE learning, POLYSOMNOGRAPHY, ELECTRONIC equipment, DIGITAL health, RANDOM forest algorithms, RAPID eye movement sleep, SLEEP disorders, SLEEP, ACCELEROMETRY, COMPARATIVE studies, QUESTIONNAIRES, DESCRIPTIVE statistics, SENSITIVITY & specificity (Statistics), ARTIFICIAL neural networks, ALGORITHMS, EPIDEMIOLOGICAL research
- Abstract
For the last 40 years, actigraphy, or wearable accelerometry, has provided an objective, low-burden and ecologically valid approach to assess real-world sleep and circadian patterns, contributing valuable data to epidemiological and clinical insights on sleep and sleep disorders. The proper use of wearable technology in sleep research requires validated algorithms that can derive sleep outcomes from the sensor data. Since the publication of the first automated scoring algorithm by Webster in 1982, a variety of sleep algorithms have been developed and have contributed to sleep research, including many recent ones that leverage machine learning and/or deep learning approaches. However, it remains unclear how these algorithms compare to each other on the same data set and whether these modern data science approaches improve the analytical validity of sleep outcomes based on wrist-worn acceleration data. This work provides a systematic evaluation of 8 state-of-the-art sleep algorithms on a common sleep data set with polysomnography (PSG) as ground truth. Despite the inclusion of recently published complex algorithms, simple regression-based and heuristic algorithms demonstrated slightly superior performance in sleep-wake classification and sleep outcome estimation. The performance of complex machine learning and deep learning models seems to suffer from poor generalization. This independent and systematic analytical validation of sleep algorithms provides key evidence on the use of wearable digital health technologies for sleep research and care. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Deep Q-Learning Based Optimal Query Routing Approach for Unstructured P2P Network.
- Author
Shoab, Mohammad and Alotaibi, Abdullah Shawan
- Subjects
REINFORCEMENT learning, COMPUTER network architectures, PEER-to-peer architecture (Computer networks), DEEP learning, PROBLEM solving, MACHINE learning, ALGORITHMS
- Abstract
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines Deep Learning with Reinforcement Learning and provides a framework by which a system can learn from its previous actions in an environment to select its future actions efficiently. DRL has been used in many application fields, including games, robots, and networks, to create autonomous systems that improve with experience. It is well acknowledged that DRL is well suited to solving optimization problems in distributed systems in general and in network routing especially. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks, based on a Deep Q-Learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness with reduced searching cost: fewer connected peers, fewer exchanged messages, and less time. The simulation results show significantly improved resource searching compared to k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
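Entry 11 trains a deep Q-network, but the value update it approximates is the standard Q-learning rule, Q(s,a) ← Q(s,a) + α[r + γ max_b Q(s′,b) − Q(s,a)]. A tabular sketch makes the rule concrete; mapping states to "peer currently holding the query" and actions to "neighbour to forward to" is my own illustrative framing, not the paper's exact formulation.

```python
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step. Q is a dict keyed by (state, action);
    in a routing reading, state = peer holding the query, action = the
    neighbour it forwards to, reward = e.g. +1 when the resource is found."""
    # value of the best action available from the successor state
    best_next = max((Q.get((next_state, b), 0.0) for b in actions), default=0.0)
    td_target = reward + gamma * best_next
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (td_target - old)
    return Q[(state, action)]
```

The deep variant in the paper replaces the dict with a neural network that generalizes across states, which matters when the peer/query space is too large to enumerate.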
12. Using off-the-shelf data-human interface platforms: traps and tricks.
- Author
Angeli, Alessia, Marfia, Gustavo, and Riedel, Norman
- Subjects
FEATURE selection, MACHINE learning, DEEP learning, DATA science, ALGORITHMS, PROBLEM solving
- Abstract
With the development of learning algorithms, constantly increasing computing power, and the available amount of multimedia data, the adoption rate of data science techniques is steadily growing. Machine and deep learning algorithms are already used in a wide variety of ways to solve domain-specific problems. However, the potential of such methodologies will only be fulfilled when non-specialist data scientists are also empowered to use them. With this perspective in mind, this work does not deal with a classical data science problem but instead exploits existing, easy-to-use data-human interfaces. To this aim, we picked an exemplar scenario: an existing qualitative activity recognition data set that was previously analyzed using feature selection techniques and custom machine learning paradigms. We verify how it is possible today, without changing default settings or performing any feature selection, to employ the machine and deep learning algorithms provided by different publicly accessible tools (namely, Weka, Orange, Ludwig, and KNIME) to address the same problem. Nevertheless, not all of the platforms and algorithms provided satisfactory results: we finally discuss the possible issues and opportunities posed by this approach. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Robot grasping method optimization using improved deep deterministic policy gradient algorithm of deep reinforcement learning.
- Author
Zhang, Hongxu, Wang, Fei, Wang, Jianhui, and Cui, Ben
- Subjects
DEEP learning, REINFORCEMENT learning, MACHINE learning, ALGORITHMS, ROBOTS, PROBLEM solving
- Abstract
Robot grasping has become a very active research field, and the requirements for robot operation keep rising. Previous studies that use traditional target detection algorithms for grasping are often very inefficient; this article is dedicated to improving a deep reinforcement learning algorithm to raise grasping efficiency and to handle the impact of unknown disturbances on grasping. Exploiting deep reinforcement learning's active exploration of unknown environments, a Gaussian-parameter Deep Deterministic Policy Gradient (Gaussian-DDPG) algorithm based on the Importance-Weighted Autoencoder (IWAE) is proposed to realize the robot's autonomous learning of the grasping task. Traditional coordinate-positioning methods and deep learning methods grasp poorly under disturbance (such as movement of the target object). The IWAE is used to compress the high-dimensional raw visual input into a latent space, which is passed to the deep reinforcement learning network as part of the state value. Building on the classic DDPG algorithm, Gaussian parameters are smoothly added to improve the algorithm's exploration, and the robot's grasping-space parameters are set dynamically to adapt to workspaces of multiple scales, finally realizing accurate robot grasping. To handle possible deviations in the positional information from vision, the control of the grasping position via manipulator torque information is further optimized, improving the grasping efficiency for disturbed objects. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
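Entry 13 builds on DDPG's standard exploration trick: perturb the deterministic policy's action with Gaussian noise, then clip to the valid action range. A minimal sketch of that step follows; the function name, the fixed `sigma`, and the bounds are my own illustrative choices (the paper additionally adapts its Gaussian parameters, which is not reproduced here).

```python
import numpy as np

def gaussian_explore(action, sigma, low, high, rng=None):
    """Add zero-mean Gaussian noise of scale sigma to a deterministic
    policy's action vector and clip the result into [low, high], so the
    agent keeps visiting actions near, but not exactly at, the policy's
    current choice."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = np.asarray(action, dtype=float) + rng.normal(0.0, sigma, size=np.shape(action))
    return np.clip(noisy, low, high)
```

During training, `sigma` is typically annealed toward 0 so that behavior converges to the deterministic policy; at evaluation time the noise is switched off entirely.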
14. An Autoencoder-Based Deep Learning Approach for Load Identification in Structural Dynamics.
- Author
Rosafalco, Luca, Manzoni, Andrea, Mariani, Stefano, and Corigliano, Alberto
- Subjects
SENSOR networks, DEEP learning, STRUCTURAL health monitoring, CIVIL engineering, ALGORITHMS, MACHINE learning, PROBLEM solving, STRUCTURAL dynamics
- Abstract
In civil engineering, different machine learning algorithms have been adopted to process the huge amount of data continuously acquired through sensor networks and solve inverse problems. Challenging issues linked to structural health monitoring or load identification are currently related to big data, consisting of structural vibration recordings shaped as a multivariate time series. Any algorithm should therefore allow an effective dimensionality reduction, retaining the informative content of data and inferring correlations within and across the time series. Within this framework, we propose a time series AutoEncoder (AE) employing inception modules and residual learning for the encoding and the decoding parts, and an extremely reduced latent representation specifically tailored to tackle load identification tasks. We discuss the choice of the dimensionality of this latent representation, considering the sources of variability in the recordings and the inverse-forward nature of the AE. To help setting the aforementioned dimensionality, the false nearest neighbor heuristics is also exploited. The reported numerical results, related to shear buildings excited by dynamic loadings, highlight the signal reconstruction capacity of the proposed AE, and the capability to accomplish the load identification task. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library