3,015 results
Search Results
2. A novel algorithm for maximum power point tracking using computer vision (CVMPPT).
- Author
-
Ahmadi M, Abrari M, Ghanaatshoar M, and Khalafi A
- Subjects
- Reproducibility of Results, Electronics, Temperature, Algorithms, Computers
- Abstract
The behavior of an illuminated solar module can be characterized by its power-voltage curve. Tracking the peak of this curve is essential for the module to harvest the maximum power. The position of the peak varies with temperature and irradiance and needs to be traced. Under partial shading conditions, the number of peaks increases, making it more difficult to find the global maximum power point (MPP). Various iteration-based methods are used for maximum power point tracking (MPPT). These methods are time-consuming and fail to work satisfactorily under rapidly changing environmental conditions. In this paper, a novel algorithm is proposed that, for the first time, utilizes computer vision to find the global maximum power point. This algorithm, which is implemented in Matlab/Simulink, is free of voltage iterations and gives real-time data for the maximum power point. The proposed algorithm increases the speed and reliability of MPP tracking by replacing analogue electronic calculations with digital ones. The validity of the algorithm is experimentally verified., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Ahmadi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2024
- Full Text
- View/download PDF
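The distinction the abstract draws between local hill-climbing and finding the global MPP can be sketched in a few lines of Python (an illustrative sketch only; the paper's actual method uses computer vision rather than this exhaustive scan, and the sample curve below is invented):

```python
def global_mpp(voltages, powers):
    # Scan the whole sampled P-V curve: under partial shading there are
    # several local peaks, so a local hill-climber (e.g. perturb-and-observe)
    # can stop at the wrong one, while a full scan always finds the global MPP.
    best = max(range(len(powers)), key=powers.__getitem__)
    return voltages[best], powers[best]

# Two-peak curve typical of partial shading: local peak at 10 V, global at 25 V.
v = [0, 5, 10, 15, 20, 25, 30]
p = [0, 40, 60, 35, 70, 90, 0]
print(global_mpp(v, p))  # prints (25, 90)
```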
3. Optimizing FPGA implementation of high-precision chaotic systems for improved performance.
- Author
-
Damaj I, Zaher A, and Lawand W
- Subjects
- Communication, Computers, Algorithms
- Abstract
Developing chaotic systems-on-a-chip is gaining much attention due to its great potential in securing communication, encrypting data, generating random numbers, and more. The digital implementation of chaotic systems strives to achieve high performance in terms of time, speed, complexity, and precision. In this paper, the focus is on developing high-speed Field Programmable Gate Array (FPGA) cores for chaotic systems, exemplified by the Lorenz system. The developed cores correspond to numerical integration techniques that can extend to the equations of the sixth order and at high precision. The investigation comprises a thorough analysis and evaluation of the developed cores according to the algorithm complexity and the achieved precision, hardware area, throughput, power consumption, and maximum operational frequency. Validations are done through simulations and careful comparisons with outstanding closely related work from the recent literature. The results affirm the successful creation of highly efficient sixth-order Lorenz discretizations, achieving a high throughput of 3.39 Gbps with a precision of 16 bits. Additionally, an outstanding throughput of 21.17 Gbps was achieved for the first-order implementation coupled with a high precision of 64 bits. These outcomes set our work as a benchmark for high-performance characteristics, surpassing similar investigations reported in the literature., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Damaj et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2024
- Full Text
- View/download PDF
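The numerical-integration cores described above can be illustrated with the simplest such scheme, a forward-Euler step of the Lorenz system (a Python sketch at double precision; the paper's FPGA cores implement higher-order integration techniques at fixed precisions such as 16 or 64 bits):

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz system; an FPGA integration core
    # evaluates an update like this once per clock cycle.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)
for _ in range(1000):
    state = lorenz_step(*state)
# The trajectory remains on the bounded Lorenz attractor.
```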
4. Reports from Zhejiang Normal University Highlight Recent Findings in Engineering Software (Paper Intrusion Detection Approach for Cloud and Iot Environments Using Deep Learning and Capuchin Search Algorithm)
- Subjects
- Engineering -- Computer programs, Security software, Algorithms, Network security software, Algorithm, Engineering software, Computers, News, opinion and commentary
- Abstract
2023 MAR 8 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Current study results on Engineering - Engineering Software have been published. According to news [...]
- Published
- 2023
5. Research on the Fusion of Hybrid Fuzzy Clustering Algorithm and Computer Automatic Test Paper Composition Algorithm.
- Author
-
Kan, Baopeng
- Subjects
- COMPUTERS, COMPUTER algorithms, FUZZY algorithms, COMPUTER workstation clusters, ALGORITHMS, HIGHER education exams
- Abstract
To improve the effect of intelligent automatic test paper composition, this paper applies a hybrid fuzzy clustering algorithm to the computer automatic test paper composition problem. A computer automatic test paper composition system based on the hybrid fuzzy clustering algorithm is constructed, with the clustering method serving as the system's basic algorithm, improved according to the actual needs of intelligent test paper composition. In addition, an intelligent algorithm takes the relevant constraint parameters as input and, combined with the original parameters, selects the most suitable test questions from the database and combines them into test papers. Finally, the system structure is constructed based on the requirements of intelligent test paper composition. The experimental research shows that the proposed system has a good test paper composition function and can effectively promote the progress of the intelligent examination mode in colleges and universities. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Computer-aided accurate calculation of interacted volumes for 3D isosurface point clouds of molecular electrostatic potential.
- Author
-
Lv K, Zhang J, Liu X, Zhou Y, and Liu K
- Subjects
- Static Electricity, Computer Simulation, Algorithms, Computers
- Abstract
The quality of the chiral environment (i.e. catalytic pocket) is directly related to the performance of chiral catalysts. The existing methods require enormous computing power and time, making it difficult to quickly judge the interaction between chiral catalysts and substrates and thus accurately evaluate the effects of chiral catalytic pockets. In this paper, for the 3D isosurface point clouds of molecular electrostatic potential, by using computer simulations, we propose a robust method to detect interacted points and then accurately obtain the corresponding interacted volumes. First, by using the existing marching cubes algorithm, we construct 3D models with triangular surfaces for isosurface point clouds of molecular electrostatic potentials. Second, by using our improved hierarchical bounding boxes algorithm, we filter out most redundant non-collision points. Third, by using the normal vectors of the remaining points and related triangles, we robustly determine the interacted points to construct interacted sets. Finally, by combining the classical slicing with our multi-contour segmenting, we accurately calculate the interacted volumes. Over three groups of the point clouds of the chemical molecules, experimental results show that our method effectively removes the non-interacted points at average rates of 71.65%, 77.76%, and 71.82%, and calculates the interacted volumes with average relative errors of 1.7%, 1.6%, and 1.9%, respectively., Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2023 Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
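The bounding-box filtering stage described above can be illustrated with plain axis-aligned boxes (a simplified, single-level sketch with invented point data; the paper's method is an improved hierarchical bounding-box algorithm):

```python
def aabb(points):
    # Axis-aligned bounding box of a 3-D point cloud.
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlap_box(box_a, box_b):
    # Intersection of two boxes; only points inside it can possibly interact.
    lo = tuple(max(box_a[0][i], box_b[0][i]) for i in range(3))
    hi = tuple(min(box_a[1][i], box_b[1][i]) for i in range(3))
    return lo, hi

def filter_candidates(points, box):
    # Discard points outside the overlap region (redundant non-collision points).
    lo, hi = box
    return [p for p in points if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

cloud_a = [(0, 0, 0), (1, 0, 0), (5, 5, 5)]
cloud_b = [(4, 4, 4), (6, 6, 6)]
box = overlap_box(aabb(cloud_a), aabb(cloud_b))
print(filter_candidates(cloud_a, box))  # prints [(5, 5, 5)]
```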
7. Algorithm-Oriented SIMD Computer Mathematical Model and Its Application
- Author
-
Jiang, Yongfeng and Li, Yuan
- Abstract
This paper designs a professional and practical SIMD computer mathematical model based on the SIMD physical machine model combined with the variable addition method. The model is applied to image collection, processing, and display operations, and a SIMD data-parallel image processing system is established by exploiting the parallel computing advantages of the mathematical model. In addition, the data-parallel image processing algorithm is introduced and the convolutional neural network algorithm is optimized to significantly improve key performance metrics of the application system, such as accuracy. The final experimental results show that the highest accuracy of the data-parallel image processing algorithm reaches 93.3% and the lowest error rate reaches 0.11%, which demonstrates the superiority of the SIMD computer mathematical model in image processing applications.
- Published
- 2022
- Full Text
- View/download PDF
8. Scientific papers and artificial intelligence. Brave new world?
- Author
-
Nexøe, Jørgen
- Subjects
- COMPUTERS, MANUSCRIPTS, ARTIFICIAL intelligence, MACHINE learning, DATA analysis, MEDICAL literature, MEDICAL research, ALGORITHMS
- Published
- 2023
- Full Text
- View/download PDF
9. Data from Norwegian University of Science and Technology (NTNU) Provide New Insights into Information Technology (Classification of Forensic Hyperspectral Paper Data Using Hybrid Spectral Similarity Algorithms)
- Subjects
- Algorithms, Algorithm, Computers
- Abstract
2022 FEB 1 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Investigators publish new report on Information Technology. According to news reporting originating in Gjovik, [...]
- Published
- 2022
10. The SHARK integral generation and digestion system.
- Author
-
Neese F
- Subjects
- Electrons, Digestion, Algorithms, Computers
- Abstract
In this paper, the SHARK integral generation and digestion engine is described. In essence, SHARK is based on a reformulation of the popular McMurchie/Davidson approach to molecular integrals. This reformulation leads to an efficient algorithm that is driven by BLAS level 3 operations. The algorithm is particularly efficient for high angular momentum basis functions (up to L = 7 is available by default, but the algorithm is programmed for arbitrary angular momenta). SHARK features a significant number of specific programming constructs that are designed to greatly simplify the workflow in quantum chemical program development and avoid undesirable code duplication to the largest possible extent. SHARK can handle segmented, generally and partially generally contracted basis sets. It can be used to generate a host of one- and two-electron integrals over various kernels including two-, three-, and four-index repulsion integrals, integrals over Gauge Including Atomic Orbitals (GIAOs), relativistic integrals and integrals featuring a finite nucleus model. SHARK provides routines to evaluate Fock-like matrices, generate integral transformations and perform related tasks. SHARK is the essential engine inside the ORCA package that drives essentially all tasks related to integrals over basis functions in version ORCA 5.0 and higher. Since the core of SHARK is based on low-level Basic Linear Algebra Subprograms (BLAS) operations, it is expected to perform well not only on present-day but also on future hardware, provided that the hardware manufacturer provides a properly optimized BLAS library for matrix and vector operations. Representative timings and comparisons to the Libint library used by ORCA are reported for Intel i9 and Apple M1 Max processors., (© 2022 The Author. Journal of Computational Chemistry published by Wiley Periodicals LLC.)
- Published
- 2023
- Full Text
- View/download PDF
11. Video Sequence Segmentation Based on K-Means in Air-Gap Data Transmission for a Cluttered Environment.
- Author
-
Mazurek P and Bak D
- Subjects
- Communication, Information Systems, Cluster Analysis, Image Processing, Computer-Assisted methods, Algorithms, Computers
- Abstract
An air gap is a technique that increases the security of information systems. The use of unconventional communication channels allows for obtaining communication that is of interest to the attacker as well as to cybersecurity engineers. One very dangerous form of attack is the use of computer screen brightness modulation, which is not visible to the user but can be observed from a distance by the attacker. Once infected, the computer can transmit data over long distances. Even in the absence of direct screen visibility, transmission can be realized by analyzing the modulated reflection of the monitor's afterglow. The paper presents a new method for the automatic segmentation of video sequences to retrieve the transmitted data, one that avoids the drawbacks of the previously known region-growing (filling) method based on an analysis of adjacent pixels. A fast camera operating at 380 fps was used for image acquisition. The method uses the characteristics of the amplitude spectrum for individual pixels, which is specific to the light sources in the room, and clustering with the k-means algorithm to group pixels into larger areas. Then, by averaging the values for individual areas, it is possible to recover the 2-PAM (pulse-amplitude modulation) signal even when the level of interference in the area is 1000 times greater than the transmitted signal, as shown in the experiments. The method does not require high-quality lenses.
- Published
- 2023
- Full Text
- View/download PDF
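The two stages described above, a per-pixel amplitude spectrum followed by k-means grouping, can be sketched with the standard library alone (an illustrative sketch; the window length, frequency bin and brightness values below are invented, not taken from the paper):

```python
import cmath

def amplitude_spectrum(samples):
    # Naive DFT magnitudes of one pixel's brightness over time (bins 0..n/2-1).
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, s in enumerate(samples))) / n
            for k in range(n // 2)]

def kmeans_1d(values, iters=20):
    # Minimal two-cluster k-means on a scalar per-pixel spectral feature.
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

# Feature: amplitude at the modulation frequency (bin 2 of an 8-sample window).
modulated = amplitude_spectrum([1, 0, -1, 0, 1, 0, -1, 0])[2]   # ~0.5
static = amplitude_spectrum([1, 1, 1, 1, 1, 1, 1, 1])[2]        # ~0.0
```

Pixels whose feature lands near the higher center belong to the modulated area; averaging over that area then recovers the 2-PAM symbol stream.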
12. Phylogenetic tree reconstruction via graph cut presented using a quantum-inspired computer.
- Author
-
Onodera W, Hara N, Aoki S, Asahi T, and Sawamura N
- Subjects
- Phylogeny, Cluster Analysis, Databases, Protein, Algorithms, Computers
- Abstract
Phylogenetic trees are essential tools in evolutionary biology that present information on evolutionary events among organisms and molecules. From a dataset of n sequences, a phylogenetic tree of (2n-5)!! possible topologies exists, and determining the optimum topology using brute force is infeasible. Recently, a recursive graph cut on a graph-represented-similarity matrix has proven accurate in reconstructing a phylogenetic tree containing distantly related sequences. However, identifying the optimum graph cut is challenging, and approximate solutions are currently utilized. Here, a phylogenetic tree was reconstructed with an improved graph cut using a quantum-inspired computer, the Fujitsu Digital Annealer (DA), and the algorithm was named the "Normalized-Minimum cut by Digital Annealer (NMcutDA) method". First, a criterion for the graph cut, the normalized cut value, was compared with existing clustering methods. Based on the cut, we verified that the simulated phylogenetic tree could be reconstructed with the highest accuracy when sequences were diverged. Moreover, for some actual data from the structure-based protein classification database, only NMcutDA could cluster sequences into correct superfamilies. Conclusively, NMcutDA reconstructed better phylogenetic trees than those using other methods by optimizing the graph cut. We anticipate that when the diversity of sequences is sufficiently high, NMcutDA can be utilized with high efficiency., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2022 Elsevier Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
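The normalized cut criterion that NMcutDA optimizes can be evaluated directly for any candidate partition (a sketch with an invented 4x4 similarity matrix; the paper's contribution is minimizing this value on a Digital Annealer, not the evaluation itself):

```python
def ncut(sim, part_a, part_b):
    # Normalized cut value of a two-way partition of a similarity matrix:
    # cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V); lower means a better split.
    n = range(len(sim))
    cut = sum(sim[i][j] for i in part_a for j in part_b)
    assoc_a = sum(sim[i][j] for i in part_a for j in n)
    assoc_b = sum(sim[i][j] for i in part_b for j in n)
    return cut / assoc_a + cut / assoc_b

# Two tight clusters {0,1} and {2,3} with weak cross-similarity.
sim = [[0.0, 1.0, 0.1, 0.1],
       [1.0, 0.0, 0.1, 0.1],
       [0.1, 0.1, 0.0, 1.0],
       [0.1, 0.1, 1.0, 0.0]]
good = ncut(sim, [0, 1], [2, 3])   # splits along the weak edges
bad = ncut(sim, [0, 2], [1, 3])    # cuts through both clusters
```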
13. An Ensemble Learning Aided Computer Vision Method with Advanced Color Enhancement for Corroded Bolt Detection in Tunnels.
- Author
-
Tan L, Tang T, and Yuan D
- Subjects
- Machine Learning, Algorithms, Computers
- Abstract
Bolts, as the basic units of tunnel linings, are crucial to safe tunnel service. Owing to the moist and complex environment in the tunnel, corrosion becomes a significant defect of bolts. Computer vision technology is adopted because manual patrol inspection is inefficient and often misses the corroded bolts. However, most current studies are conducted in a laboratory with good lighting conditions, their effects in actual practice have yet to be considered, and the accuracy also needs to be improved. In this paper, we put forward an Ensemble Learning approach combining our Improved MultiScale Retinex with Color Restoration (IMSRCR) and You Only Look Once (YOLO), based on tunnel image data acquired in the field, to detect corroded bolts in the lining. The IMSRCR sharpens and strengthens the features of the lining pictures, weakening the adverse effect of a dim environment compared with the existing MSRCR. Furthermore, we combine models with different parameters that show different performance using the ensemble learning method, greatly improving the accuracy. Sufficient comparisons and ablation experiments based on a dataset collected from a tunnel in service are conducted to prove the superiority of our proposed algorithm.
- Published
- 2022
- Full Text
- View/download PDF
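The color-enhancement idea behind MSRCR can be illustrated in one dimension: subtract a smoothed (surround) estimate from the log image at several scales (a toy sketch only; the paper's IMSRCR operates on 2-D color images and improves on this basic scheme in ways the abstract does not detail):

```python
import math

def box_smooth(signal, radius):
    # Crude surround estimate: a box filter standing in for a Gaussian.
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / len(signal[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def multiscale_retinex(signal, radii=(1, 2, 4)):
    # Average of single-scale retinex outputs: log(I) - log(surround).
    logs = [math.log(s + 1.0) for s in signal]
    out = [0.0] * len(signal)
    for r in radii:
        surround = box_smooth(logs, r)
        out = [o + (l - s) / len(radii) for o, l, s in zip(out, logs, surround)]
    return out

# A dim edge becomes a clear positive/negative swing after enhancement.
enhanced = multiscale_retinex([10, 10, 10, 60, 60, 60])
```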
14. Digital Image Decoder for Efficient Hardware Implementation.
- Author
-
Savić G, Prokin M, Rajović V, and Prokin D
- Subjects
- Algorithms, Computers
- Abstract
Increasing the resolution of digital images and the frame rate of video sequences leads to an increase in the logical and memory resources required for digital image and video decompression. Therefore, the development of new hardware architectures for digital image decoders with a reduced amount of utilized logical and memory resources becomes a necessity. In this paper, a digital image decoder for efficient hardware implementation has been presented. Each block of the proposed digital image decoder has been described. The entropy decoder, decoding probability estimator, dequantizer and inverse subband transformer (parts of the digital image decoder) have been developed in such a way as to allow efficient hardware implementation with a reduced amount of utilized logic and memory resources. It has been shown that the proposed hardware realization of the inverse subband transformer requires 20% lower memory capacity and uses fewer logic resources compared with the best state-of-the-art realizations. The proposed digital image decoder has been implemented in a low-cost FPGA device and it has been shown that it requires at least 32% less memory than other state-of-the-art decoders that can process a high-definition frame size. The proposed solution also requires effectively less memory than state-of-the-art architectures that process a frame size or tile size smaller than high-definition. The presented digital image decoder has a maximum operating frequency comparable with the highest among the state-of-the-art solutions., Competing Interests: The authors declare no conflict of interest.
- Published
- 2022
- Full Text
- View/download PDF
15. Communication-efficient algorithms for solving pressure Poisson equation for multiphase flows using parallel computers.
- Author
-
Ghosh S, Lu J, Gupta V, and Tryggvason G
- Subjects
- Communication, Algorithms, Computers
- Abstract
Numerical solution of partial differential equations on parallel computers using domain decomposition usually requires synchronization and communication among the processors. These operations often have a significant overhead in terms of time and energy. In this paper, we propose communication-efficient parallel algorithms for solving partial differential equations that alleviate this overhead. First, we describe an asynchronous algorithm that removes the requirement of synchronization and checks for termination in a distributed fashion while maintaining the provision to restart iterations if necessary. Then, we build on the asynchronous algorithm to propose an event-triggered communication algorithm that communicates the boundary values to neighboring processors only at certain iterations, thereby reducing the number of messages while maintaining similar accuracy of solution. We demonstrate our algorithms on a successive over-relaxation solver for the pressure Poisson equation arising from variable density incompressible multiphase flows in 3-D and show that our algorithms improve time and energy efficiency., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2022 Ghosh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2022
- Full Text
- View/download PDF
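The successive over-relaxation solver the paper builds on can be sketched for a 1-D Poisson problem (a plain synchronous, serial sketch; the paper's contribution is the asynchronous and event-triggered communication wrapped around such sweeps on parallel machines):

```python
def sor_poisson_1d(f, u_left=0.0, u_right=0.0, omega=1.5, h=1.0, tol=1e-10):
    # Successive over-relaxation for u'' = f on interior grid points with
    # Dirichlet boundary values; omega = 1 reduces to Gauss-Seidel.
    u = [0.0] * len(f)
    for _ in range(100000):
        diff = 0.0
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else u_left
            right = u[i + 1] if i < len(u) - 1 else u_right
            new = (1 - omega) * u[i] + omega * 0.5 * (left + right - h * h * f[i])
            diff = max(diff, abs(new - u[i]))
            u[i] = new
        if diff < tol:
            break
    return u

# Laplace problem (f = 0) between boundary values 0 and 1: the exact
# discrete solution is linear in the grid index.
u = sor_poisson_1d([0.0, 0.0, 0.0], u_left=0.0, u_right=1.0)
```

In a domain-decomposed run, the `left`/`right` boundary reads are exactly the values neighbors must communicate, which is what the event-triggered scheme sends only when necessary.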
16. BIOS-Based Server Intelligent Optimization.
- Author
-
Qi X, Yang J, Zhang Y, and Xiao B
- Subjects
- Markov Chains, Algorithms, Computers
- Abstract
Servers are the infrastructure of enterprise applications, and improving server performance under fixed hardware resources is an important issue. Conducting performance tuning at the application layer is common, but it is not systematic and requires prior knowledge of the running application. Some works performed tuning by dynamically adjusting the hardware prefetching configuration with a predictive model. Similarly, we design a BIOS (Basic Input/Output System)-based dynamic tuning framework for a Taishan 2280 server, including dynamic identification and static optimization. We simulate five workload scenarios (CPU-instance, etc.) with benchmark tools and perform scenario recognition dynamically with performance monitor counters (PMCs). The adjustable configurations provided by the Kunpeng processor reach 2^N (N > 100). Therefore, we propose a joint BIOS optimization algorithm using a deep Q-network. Configuration optimization is modeled as a Markov decision process starting from a feasible solution and optimizing gradually. To improve the continuous optimization capabilities, the neighborhood search method of state machine control is added. To assess its performance, we compare our algorithm with the genetic algorithm and particle swarm optimization. Our algorithm improves performance by up to 1.10× compared to the experience-based configuration and performs better in reducing the probability of server downtime. The dynamic tuning framework in this paper is extensible, can be trained to adapt to different scenarios, and is more suitable for servers with many adjustable configurations. Compared with heuristic intelligent search algorithms, the proposed joint BIOS optimization algorithm generates fewer infeasible solutions and is not easily disturbed by initialization.
- Published
- 2022
- Full Text
- View/download PDF
17. Motion-Based Object Location on a Smart Image Sensor Using On-Pixel Memory.
- Author
-
Valenzuela W, Saavedra A, Zarkesh-Ha P, and Figueroa M
- Subjects
- Motion, Algorithms, Computers
- Abstract
Object location is a crucial computer vision method often used as a previous stage to object classification. Object-location algorithms require high computational and memory resources, which poses a difficult challenge for portable and low-power devices, even when the algorithm is implemented using dedicated digital hardware. Moving part of the computation to the imager may reduce the memory requirements of the digital post-processor and exploit the parallelism available in the algorithm. This paper presents the architecture of a Smart Imaging Sensor (SIS) that performs object location using pixel-level parallelism. The SIS is based on a custom smart pixel, capable of computing frame differences in the analog domain, and a digital coprocessor that performs morphological operations and connected components to determine the bounding boxes of the detected objects. The smart-pixel array implements on-pixel temporal difference computation using analog memories to detect motion between consecutive frames. Our SIS can operate in two modes: (1) as a conventional image sensor and (2) as a smart sensor which delivers a binary image that highlights the pixels in which movement is detected between consecutive frames and the object bounding boxes. In this paper, we present the design of the smart pixel and evaluate its performance using post-parasitic extraction on a 0.35 µm mixed-signal CMOS process. With a pixel-pitch of 32 µm × 32 µm, we achieved a fill factor of 28%. To evaluate the scalability of the design, we ported the layout to a 0.18 µm process, achieving a fill factor of 74%. On an array of 320×240 smart pixels, the circuit operates at a maximum frame rate of 3846 frames per second. The digital coprocessor was implemented and validated on a Xilinx Artix-7 XC7A35T field-programmable gate array that runs at 125 MHz, locates objects in a video frame in 0.614 µs, and has a power consumption of 58 mW.
- Published
- 2022
- Full Text
- View/download PDF
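The two-stage pipeline described above, temporal differencing followed by bounding-box extraction, can be sketched in Python (the sensor performs the differencing in the analog domain on-pixel; the frames and threshold below are invented for illustration):

```python
def motion_bbox(prev, curr, thresh=10):
    # Temporal difference -> binary motion mask -> bounding box of the
    # changed pixels, returned as (row_min, col_min, row_max, col_max).
    changed = [(r, c)
               for r, row in enumerate(curr)
               for c, v in enumerate(row)
               if abs(v - prev[r][c]) > thresh]
    if not changed:
        return None
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return min(rows), min(cols), max(rows), max(cols)

prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[1][1] = curr[1][2] = curr[2][1] = 200   # a small object appears
print(motion_bbox(prev, curr))  # prints (1, 1, 2, 2)
```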
18. Hardware Acceleration of the STRIKE String Kernel Algorithm for Estimating Protein to Protein Interactions.
- Author
-
Sibai FN, El-Moursy A, Asaduzzaman A, and Majzoub S
- Subjects
- Acceleration, Computational Biology methods, Proteins, Algorithms, Computers
- Abstract
Protein-protein interaction (PPI) is an important field in bioinformatics which helps in understanding diseases and devising therapy. PPI aims at estimating the similarity of protein sequences and their common regions. STRIKE was introduced as a PPI algorithm able to achieve reasonable improvement over existing PPI prediction methods. Although it consumes a lower execution time than most other state-of-the-art PPI prediction methods, its compute-intensive nature and the large volume of protein sequences in protein databases necessitate further acceleration. In this paper, we develop hardware accelerator designs for the STRIKE algorithm. Results indicate that the weighted STRIKE accelerator execution times are about 10× longer than the unweighted STRIKE accelerator execution times. To further accelerate the performance of the weighted STRIKE, a parallel module accelerator organization duplicating the weighted STRIKE modules is introduced, achieving near-linear speedups for long sequences of 100 or more characters. As demonstrated by Verilog simulations and FPGA runs, the weighted STRIKE module accelerator exhibits three orders of magnitude speed improvement over multi-core and cluster computers. Much higher speedups are possible with the parallel module accelerator.
- Published
- 2022
- Full Text
- View/download PDF
19. LIPSHOK: LIARA Portable Smart Home Kit.
- Author
-
Chapron K, Thullier F, Lapointe P, Maître J, Bouchard K, and Gaboury S
- Subjects
- Technology, Algorithms, Computers
- Abstract
Several smart home architecture implementations have been proposed in the last decade. These architectures are mostly deployed in laboratories or inside real habitations built for research purposes to enable the use of ambient intelligence with a wide variety of sensors, actuators and machine learning algorithms. However, the major issues for most related smart home architectures are their price, proprietary hardware requirements and the need for highly specialized personnel to deploy such systems. To tackle these challenges, lighter forms of smart home architectures known as smart homes in a box (SHiB) have been proposed. While SHiB remain an encouraging first step towards lightweight yet affordable solutions, they still suffer from a few drawbacks. Indeed, some of these kits lack hardware support for some technologies, and others do not include enough sensors and actuators to cover most smart homes' requirements. Thus, this paper introduces the LIARA Portable Smart Home Kit (LIPSHOK). It has been designed to provide an affordable SHiB solution that anyone is able to install in an existing home. Moreover, LIPSHOK is a generic kit that includes a total of four specialized sensor modules that were introduced independently, as our laboratory has been working on their development over the last few years. This paper first provides a summary of each of these modules and their respective benefits within a smart home context. Then, it mainly focuses on the introduction of the LIPSHOK architecture, which provides a framework to unify the use of the proposed sensors thanks to a common modular infrastructure capable of managing heterogeneous technologies. Finally, we compare our work to existing SHiB kit solutions and show that it offers a more affordable, extensible and scalable solution whose resources are distributed under an open-source license.
- Published
- 2022
- Full Text
- View/download PDF
20. Quantum Speedup for Inferring the Value of Each Bit of a Solution State in Unsorted Databases Using a Bio-Molecular Algorithm on IBM Quantum's Computers.
- Author
-
Chang WL, Chung WY, Hsiao CY, Wong R, Chen JC, Feng M, and Vasilakos AV
- Subjects
- DNA chemistry, Databases, Factual, Algorithms, Computers
- Abstract
In this paper, we propose a bio-molecular algorithm with O(n^2) biological operations, O(2^(n-1)) DNA strands, O(n) tubes and a longest DNA strand of O(n), for inferring the value of a bit from the only output satisfying any given condition in an unsorted database with 2^n items of n bits. We show that the value of each bit of the outcome is determined by executing our bio-molecular algorithm n times. Then, we show how to view a bio-molecular solution space with 2^(n-1) DNA strands as an eigenvector and how to find the corresponding unitary operator and eigenvalues for inferring the value of a bit in the output. We also show that an extension of the quantum phase estimation and quantum counting algorithms computes this unitary operator and its eigenvalues from the bio-molecular solution space with 2^(n-1) DNA strands. Next, we demonstrate that the value of each bit of the output solution can be determined by executing the proposed extended quantum algorithms n times. To verify our theorem, we find the maximum-sized clique in a graph with two vertices and one edge and the solution b that satisfies b^2 ≡ 1 (mod 15) using IBM Quantum's backend.
- Published
- 2022
- Full Text
- View/download PDF
21. Parallel Genetic Algorithms' Implementation Using a Scalable Concurrent Operation in Python.
- Author
-
Skorpil V and Oujezsky V
- Subjects
- Algorithms, Computers
- Abstract
This paper presents an implementation of the parallelization of genetic algorithms. Three models of parallelized genetic algorithms are presented, namely the Master-Slave genetic algorithm, the Coarse-Grained genetic algorithm, and the Fine-Grained genetic algorithm. Furthermore, these models are compared with the basic serial genetic algorithm model. Four modules, Multiprocessing, Celery, PyCSP, and Scalable Concurrent Operation in Python, were investigated among the many parallelization options in Python. Scalable Concurrent Operation in Python was selected as the most favorable option, so the models were implemented using the Python programming language, RabbitMQ, and SCOOP. Based on the implementation results and testing performed, a comparison of the hardware utilization of each deployed model is provided. The resulting implementation using SCOOP was investigated from three aspects. The first aspect was the parallelization and integration of the SCOOP module into the resulting Python module. The second was the communication within the genetic algorithm topology. The third aspect was the performance of the parallel genetic algorithm model depending on the hardware.
- Published
- 2022
- Full Text
- View/download PDF
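The Master-Slave model named above can be sketched with the standard library's thread pool standing in for SCOOP/RabbitMQ (a toy sketch with an invented one-max fitness function; the paper's implementation distributes evaluation across processes and machines):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(bits):
    # Toy one-max objective: maximize the number of 1 bits.
    return sum(bits)

def master_slave_ga(n_bits=16, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            # "Slaves" evaluate fitness in parallel; the master does the rest.
            scores = list(pool.map(fitness, pop))
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            pop = ranked[:pop_size // 2]          # elitist selection
            while len(pop) < pop_size:
                a, b = rng.sample(ranked[:pop_size // 2], 2)
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]         # one-point crossover
                if rng.random() < 0.2:            # bit-flip mutation
                    child[rng.randrange(n_bits)] ^= 1
                pop.append(child)
    return max(pop, key=fitness)

best = master_slave_ga()
```

Only the fitness evaluations run concurrently; selection, crossover and mutation stay in the single master loop, which is precisely what distinguishes Master-Slave from the Coarse- and Fine-Grained models.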
22. Design and Implementation of a Ball-Plate Control System and Python Script for Educational Purposes in STEM Technologies.
- Author
-
Tudić V, Kralj D, Hoster J, and Tropčić T
- Subjects
- Feedback, Algorithms, Computers
- Abstract
This paper presents the process of designing, fabricating, assembling, programming and optimizing a prototype nonlinear mechatronic Ball-Plate System (BPS) as a laboratory platform for STEM engineering education. Due to the nonlinearity and complexity of the BPS, the task presents challenges such as: (1) difficulty in stabilizing the ball at a particular position point, known as steady-state error; (2) position resolution, known as specific distance error; and (3) adverse environmental effects (light-shadow error), which are also discussed in this paper. The laboratory prototype BPS for education was designed, manufactured and installed at Karlovac University of Applied Sciences in the Department of Mechanical Engineering, Mechatronics program. The low-cost two-degree-of-freedom BPS uses a USB HD camera for computer vision as a feedback sensor and two DC servo motors as actuators. To address the control problems, an advanced block diagram of the control system is proposed and discussed. An open-source control system based on Python scripts allows ready-made library functions to be used and the ball color and PID controller parameters to be changed, simplifying the control system and performing the mathematical calculations directly. The authors will continue their research on this BPS mechatronic platform and control algorithms.
- Published
- 2022
- Full Text
- View/download PDF
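The PID loop at the heart of a ball-plate controller like the one above can be sketched in a few lines. This is a generic discrete PID acting on a simplified integrator plant, not the authors' system; the gains, time step, and plant model are illustrative assumptions:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy plant: ball position integrates the control input (a velocity command),
# standing in for the camera-feedback loop of the real system
pid = PID(kp=1.0, ki=0.05, kd=0.01, dt=0.1)
pos = 0.0
for _ in range(200):
    pos += pid.step(setpoint=1.0, measured=pos) * pid.dt
```

In the real platform the `measured` value would come from the vision pipeline (ball position in the camera frame) and the output would drive the two servo motors tilting the plate.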
23. A full-parallel implementation of Self-Organizing Maps on hardware.
- Author
-
Dias LA, Damasceno AMP, Gaura E, and Fernandes MAC
- Subjects
- Cluster Analysis, Algorithms, Computers
- Abstract
Self-Organizing Maps (SOMs) are extensively used for data clustering and dimensionality reduction. However, if applications are to fully benefit from SOM-based techniques, high-speed processing is required, given that data tends to be both highly dimensional and yet "big". Hence, a fully parallel architecture for the SOM is introduced to optimize the system's data processing time. Unlike most literature approaches, the architecture proposed here does not contain sequential steps - a common limiting factor for processing speed. The architecture was validated on FPGA and evaluated concerning hardware throughput and the use of resources. Comparisons to the state of the art show a speedup of 8.91× over a partially serial implementation, using less than 15% of the hardware resources available. Thus, the method proposed here points to a hardware architecture that will not become obsolete quickly., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2021 Elsevier Ltd. All rights reserved.)
- Published
- 2021
- Full Text
- View/download PDF
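The per-node computations that the hardware above runs fully in parallel are independent, which is easy to see in a vectorized software sketch of one SOM training step (map size, learning rate, and neighborhood width are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One SOM step on a grid of weight vectors with shape (rows, cols, dim)."""
    rows, cols, _ = weights.shape
    # distances of every node to the input are independent -> computed in one shot
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Gaussian neighborhood centered on the best-matching unit (BMU)
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[:, :, None]
    # all weight updates are likewise independent of one another
    weights += lr * h * (x - weights)
    return bmu

rng = np.random.default_rng(0)
w = rng.random((4, 4, 3))
x = np.array([1.0, 0.0, 0.0])
bmu = som_update(w, x)
```

The sequential bottleneck a serial design suffers from is the argmin over node distances; the paper's contribution is a hardware reduction that removes that step too.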
24. Fully Parallel Implementation of Otsu Automatic Image Thresholding Algorithm on FPGA.
- Author
-
Barros WKP, Dias LA, and Fernandes MAC
- Subjects
- Algorithms, Computers
- Abstract
This work proposes a high-throughput implementation of the Otsu automatic image thresholding algorithm on Field Programmable Gate Array (FPGA), aiming to process high-resolution images in real-time. The Otsu method is a widely used global thresholding algorithm to define an optimal threshold between two classes. However, this technique has a high computational cost, making it difficult to use in real-time applications. Thus, this paper proposes a hardware design exploiting parallelization to optimize the system's processing time. The implementation details and an analysis of the synthesis results concerning hardware area occupation, throughput, and dynamic power consumption are presented. Results have shown that the proposed hardware achieved a high speedup compared to similar works in the literature.
- Published
- 2021
- Full Text
- View/download PDF
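Otsu's method picks the threshold maximizing the between-class variance of background and foreground, and each candidate threshold's variance can be evaluated independently — the independence the FPGA design above exploits. A serial reference sketch over a 256-bin histogram (the two-level test image is invented):

```python
import numpy as np

def otsu_threshold(image):
    """Return the Otsu threshold of an 8-bit image as an intensity level."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = image.size
    sum_all = float(np.dot(np.arange(256), hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    # each candidate threshold's between-class variance is independent of the
    # others, which is what the hardware design evaluates in parallel
    for t in range(256):
        w_bg += hist[t]                 # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg            # background mean
        m_fg = (sum_all - sum_bg) / w_fg  # foreground mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = np.concatenate([np.full(100, 20), np.full(100, 180)])
thresh = otsu_threshold(img)
```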
25. MC-LSTM: Real-Time 3D Human Action Detection System for Intelligent Healthcare Applications.
- Author
-
Yin J, Han J, Xie R, Wang C, Duan X, Rong Y, Zeng X, and Tao J
- Subjects
- Delivery of Health Care, Humans, Movement, Algorithms, Computers
- Abstract
Due to the movement expressiveness and privacy assurance of human skeleton data, 3D skeleton-based action inference is becoming popular in healthcare applications. These scenarios call for more advanced performance in application-specific algorithms and efficient hardware support. Warnings on health emergencies that are sensitive to response speed require low-latency output and early action detection capabilities. Medical monitoring that works on an always-on edge platform needs the system processor to have extreme energy efficiency. Therefore, in this paper, we propose MC-LSTM, a functional and versatile 3D skeleton-based action detection system, for the above demands. Our system achieves state-of-the-art accuracy on trimmed and untrimmed cases of general-purpose and medical-specific datasets with early-detection features. Further, the MC-LSTM accelerator supports parallel inference on up to 64 input channels. The implementation on a Xilinx ZCU104 reaches a throughput of 18,658 Frames-Per-Second (FPS) and an inference latency of 3.5 ms with a batch size of 64. Accordingly, the power consumption is 3.6 W for the whole FPGA+ARM system, which is 37.8x and 10.4x more energy-efficient than the high-end Titan X GPU and i7-9700 CPU, respectively. Meanwhile, our accelerator also keeps a 4-5x energy efficiency advantage over the low-power high-performance Firefly-RK3399 board carrying an ARM Cortex-A72+A53 CPU. We further synthesize an 8-bit quantized version on the same hardware, providing a 48.8% increase in energy efficiency at the same throughput.
- Published
- 2021
- Full Text
- View/download PDF
26. Data on Diabetic Ketoacidosis Published by a Researcher at University of New Mexico (Evaluation of Computer-Based Insulin Infusion Algorithm Compared With a Paper-Based Protocol in the Treatment of Diabetic Ketoacidosis).
- Abstract
Keywords: Acid-Base Imbalance; Algorithms; Computers; Diabetes Complications; Diabetes Mellitus; Diabetic Ketoacidosis; Glucose Metabolism Disorders; Health and Medicine; Hospitals; Nutritional and Metabolic Diseases and Conditions; Peptide Hormones; Peptide Proteins; Proinsulin. 2023 APR 17 (NewsRx) -- By a News Reporter-Staff News Editor at Diabetes Week -- New study results on diabetic ketoacidosis have been published. [Extracted from the article]
- Published
- 2023
27. Memory hierarchy characterization of SPEC CPU2006 and SPEC CPU2017 on the Intel Xeon Skylake-SP.
- Author
-
Navarro-Torres A, Alastruey-Benedé J, Ibáñez-Marín P, and Viñals-Yúfera V
- Subjects
- Algorithms, Benchmarking, Computer Systems standards, Computers standards, Software
- Abstract
SPEC CPU is one of the most common benchmark suites used in computer architecture research. CPU2017 has recently been released to replace CPU2006. In this paper we present a detailed evaluation of the memory hierarchy performance for both the CPU2006 and single-threaded CPU2017 benchmarks. The experiments were executed on an Intel Xeon Skylake-SP, which is the first Intel processor to implement a mostly non-inclusive last-level cache (LLC). We present a classification of the benchmarks according to their memory pressure and analyze the performance impact of different LLC sizes. We also test all the hardware prefetchers showing they improve performance in most of the benchmarks. After comprehensive experimentation, we can highlight the following conclusions: i) almost half of SPEC CPU benchmarks have very low miss ratios in the second and third level caches, even with small LLC sizes and without hardware prefetching, ii) overall, the SPEC CPU2017 benchmarks demand even less memory hierarchy resources than the SPEC CPU2006 ones, iii) hardware prefetching is very effective in reducing LLC misses for most benchmarks, even with the smallest LLC size, and iv) from the memory hierarchy standpoint the methodologies commonly used to select benchmarks or simulation points do not guarantee representative workloads., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2019
- Full Text
- View/download PDF
28. Conferences Versus Journals in Computer Science.
- Author
-
Vrettas, George and Sanderson, Mark
- Subjects
ALGORITHMS ,COMPUTERS ,CONFERENCES & conventions ,SCHOLARLY method ,PUBLISHING ,SERIAL publications ,T-test (Statistics) ,CITATION analysis - Abstract
The question of which type of computer science (CS) publication (conference or journal) is likely to result in more citations for a published paper is addressed. A series of data sets are examined and joined in order to analyze the citations of over 195,000 conference papers and 108,000 journal papers. Two means of evaluating the citations of journals and conferences are explored: h5 and average citations per paper; it was found that h5 has certain biases that make it a difficult measure to use (despite it being the main measure used by Google Scholar). Results from the analysis show that CS, as a discipline, values conferences as a publication venue more highly than any other academic field of study. The analysis also shows that a small number of elite CS conferences have the highest average paper citation rate of any publication type, although overall, citation rates in conferences are no higher than in journals. It is also shown that the length of a paper is correlated with citation rate. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
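The h5 measure discussed above is simply the h-index restricted to a venue's papers from the last five years: the largest h such that h papers have at least h citations each. A minimal sketch (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h of the given papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i          # the i-th ranked paper still has >= i citations
        else:
            break
    return h

# hypothetical citation counts for one venue's last-5-years papers
h5 = h_index([10, 8, 5, 4, 3])
```

The bias the paper notes follows directly from the definition: h5 is bounded by the number of papers published, so large venues are favored independently of per-paper impact.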
29. Components of the item selection algorithm in computerized adaptive testing.
- Author
-
Han KCT
- Subjects
- Educational Measurement methods, Humans, Models, Statistical, Algorithms, Computers, Educational Measurement statistics & numerical data
- Abstract
Computerized adaptive testing (CAT) greatly improves measurement efficiency in high-stakes testing operations through the selection and administration of test items with the difficulty level that is most relevant to each individual test taker. This paper explains the 3 components of a conventional CAT item selection algorithm: test content balancing, the item selection criterion, and item exposure control. Several noteworthy methodologies underlie each component. The test script method and constrained CAT method are used for test content balancing. Item selection criteria include the maximized Fisher information criterion, the b-matching method, the a-stratification method, the weighted likelihood information criterion, the efficiency balanced information criterion, and the Kullback-Leibler information criterion. The randomesque method, the Sympson-Hetter method, the unconditional and conditional multinomial methods, and the fade-away method are used for item exposure control. Several holistic approaches to CAT use automated test assembly methods, such as the shadow test approach and the weighted deviation model. Item usage and exposure count vary depending on the item selection criterion and exposure control method. Finally, other important factors to consider when determining an appropriate CAT design are the computer resource requirements, the size of item pools, and the test length. The logic of CAT is now being adopted in the field of adaptive learning, which integrates the learning aspect and the (formative) assessment aspect of education into a continuous, individualized learning experience. Therefore, the algorithms and technologies described in this review may help medical health educators and high-stakes test developers to adopt CAT more actively and efficiently.
- Published
- 2018
- Full Text
- View/download PDF
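The maximized Fisher information criterion mentioned above selects, at each step, the unadministered item whose information function is highest at the current ability estimate. A sketch for 2PL items, where an item with discrimination a and difficulty b has information I(θ) = a²P(θ)(1-P(θ)); the item pool is invented for illustration:

```python
import math

def fisher_info_2pl(theta, a, b):
    # 2PL item response model: P(correct) = 1 / (1 + exp(-a*(theta - b)))
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    """Pick the index in pool (list of (a, b) pairs) with max information at theta."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(pool):
        if idx in administered:      # exposure: never re-administer an item
            continue
        info = fisher_info_2pl(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

pool = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]   # (a, b) per item, made up
chosen = select_item(theta=0.0, pool=pool, administered=set())
```

With equal discriminations this reduces to b-matching: the item whose difficulty is closest to the current θ estimate wins, since I(θ) peaks at θ = b.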
30. Enhancing computer image recognition with improved image algorithms.
- Author
-
Huang, Lanqing, Yao, Cheng, Zhang, Lingyan, Luo, Shijian, Ying, Fangtian, and Ying, Weiqiang
- Subjects
IMAGE recognition (Computer vision) ,IMAGE processing ,ALGORITHMS ,COMPUTERS ,COMPUTER algorithms - Abstract
Advances in computer image recognition have significantly impacted many industries, including healthcare, security and autonomous systems. This paper aims to explore the potential of improving image algorithms to enhance computer image recognition. Specifically, we focus on regression methods as a means to improve the accuracy and efficiency of identifying images. In this study, we analyze various regression techniques and their applications in computer image recognition, as well as the resulting performance improvements, through detailed examples and data analysis. This paper deals with problems related to visual image processing in outdoor unstructured environments. Finally, heterogeneous patterns are converted into a common pattern and extracted from the fused features of the data modes. The simulation results show that the perception and recognition ability of outdoor image recognition in complex environments is improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Design and Optimization of Motion Training System Assisted by Human Posture Estimation Algorithm.
- Author
-
Dang, Zijun, Dong, Huan, Li, Tong, and Kong, Kai
- Subjects
POSTURE ,FEATURE extraction ,TRAJECTORY optimization ,PHYSICAL training & conditioning ,ALGORITHMS ,COMPUTERS ,COMPUTER engineering ,MOTION - Abstract
With the rapid development of computer technology and electronic information technology, sports training systems no longer depend on traditional algorithms for operation support, and various advanced posture algorithms are emerging, further improving the intelligence and accuracy of sports training algorithms. As an advanced algorithm combined with virtual reality technology, the human posture estimation algorithm plays an obvious role in optimizing the effect of sports training. This paper designs a motion training system based on an optimized and improved human posture trajectory algorithm. It uses depth image correlation theory to solve, in principle, the problem of non-Gaussian noise crosstalk in the depth images of traditional human posture algorithms, improves the algorithm's accurate feature extraction from depth images, and resolves human feature redundancy, so as to further improve the accuracy of single-person human models. For multi-person posture estimation, this paper proposes a high-resolution, high-precision multi-person posture network model with an added focus mechanism. On this basis, the paper realizes high-precision, high-speed modeling of multi-person posture, providing an accurate model for the multi-person function of the sports training system and improving the efficiency of the algorithm. In the experimental part, this paper takes tennis as a typical case to design the sports training system and conducts experiments based on it. The experimental results show that the system under the proposed algorithm has obvious advantages in accuracy and training effect. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.
- Author
-
Hossain MS, Saeedi E, and Kong Y
- Subjects
- United States, United States Government Agencies, Algorithms, Computer Security instrumentation, Computers
- Abstract
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for key sizes of 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
- Published
- 2017
- Full Text
- View/download PDF
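Point multiplication k·P is built from exactly the two group operations the paper's PDPA unit combines: point doubling and point addition, driven by the bits of k. The textbook double-and-add loop below works over a small prime field in affine coordinates for readability (the paper's cores use Jacobian projective coordinates over NIST binary fields); the curve y² = x³ + 2x + 2 over GF(17) with generator (5, 1) is a standard toy example, not a NIST curve:

```python
P_MOD, A = 17, 2  # toy curve y^2 = x^3 + 2x + 2 over GF(17)

def ec_add(p, q):
    """Affine point addition/doubling; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # p + (-p) = infinity
    if p == q:                                        # doubling: tangent slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                             # addition: chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right double-and-add: one doubling per bit, one addition per 1-bit."""
    result = None
    for bit in bin(k)[2:]:
        result = ec_add(result, result)   # double
        if bit == "1":
            result = ec_add(result, p)    # add
    return result

G = (5, 1)  # generator of order 19 on this curve
```

The doubling and the conditional addition inside the loop are the two operations the proposed PDPA architecture fuses to cut the per-bit latency.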
33. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.
- Author
-
Milankovic IL, Mijailovic NV, Filipovic ND, and Peulic AS
- Subjects
- Breast diagnostic imaging, Female, Humans, Algorithms, Breast Neoplasms diagnostic imaging, Computers, Mammography instrumentation, Mammography methods
- Abstract
Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. Already existing image databases often contain large sets of data whose processing requires a lot of time, so accelerating each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were tested, each giving a different acceleration of the algorithm's execution. Those acceleration values are presented, and the experimental results showed good acceleration.
- Published
- 2017
- Full Text
- View/download PDF
34. Condition number estimation of preconditioned matrices.
- Author
-
Kushida N
- Subjects
- Finite Element Analysis, Models, Theoretical, Algorithms, Computer Systems, Computers
- Abstract
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. The feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated by the finite element method.
- Published
- 2015
- Full Text
- View/download PDF
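Hager's estimator, which the new method builds on, bounds the 1-norm of a matrix using only products with A and Aᵀ, never the matrix entries themselves. That is exactly what makes it usable when the preconditioned matrix is never formed explicitly: one passes closures applying M⁻¹A. A dense toy sketch of the basic 1-norm iteration (the test matrix is invented; this is the underlying estimator, not the paper's full condition-number method):

```python
import numpy as np

def hager_onenorm(matvec, rmatvec, n, max_iter=10):
    """Estimate ||A||_1 from products with A (matvec) and A^T (rmatvec)."""
    x = np.full(n, 1.0 / n)        # start at the centroid of the 1-norm ball
    est = 0.0
    for _ in range(max_iter):
        y = matvec(x)
        est = np.linalg.norm(y, 1)
        xi = np.sign(y)
        xi[xi == 0] = 1.0          # treat zeros as +1
        z = rmatvec(xi)            # gradient direction of ||Ax||_1
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:  # converged: no vertex improves the estimate
            break
        x = np.zeros(n)            # move to the best unit vector e_j
        x[j] = 1.0
    return est

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
est = hager_onenorm(lambda v: A @ v, lambda v: A.T @ v, 3)
```

The estimate is always a lower bound on the true norm (it is ‖Ax‖₁ for a unit-1-norm x); to estimate a condition number one would combine such estimates for the operator and its inverse, applied via solves rather than explicit inversion.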
35. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.
- Author
-
Khaled H, Faheem Hel D, and El Gohary R
- Subjects
- Algorithms, Computers, Models, Theoretical, Programming Languages
- Abstract
This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem, combining the Message Passing Interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGNs perform the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 Giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
- Published
- 2015
- Full Text
- View/download PDF
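The row-wise formulation the authors exploit follows directly from the Smith-Waterman recurrence: each cell H[i][j] depends only on the current and previous rows, so a score-only implementation needs just two rows of storage. A serial Python sketch of that recurrence (the scoring parameters are illustrative; the paper's contribution is distributing many such pairwise alignments across GPU nodes, not this loop):

```python
def smith_waterman_score(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score, computed row by row with only two rows."""
    prev = [0] * (len(seq_b) + 1)
    best = 0
    for a_ch in seq_a:
        curr = [0] * (len(seq_b) + 1)
        for j, b_ch in enumerate(seq_b, start=1):
            sub = match if a_ch == b_ch else mismatch
            curr[j] = max(0,                      # start a new local alignment
                          prev[j - 1] + sub,      # diagonal: (mis)match
                          prev[j] + gap,          # gap in seq_b
                          curr[j - 1] + gap)      # gap in seq_a
            best = max(best, curr[j])
        prev = curr                               # slide the two-row window
    return best

score = smith_waterman_score("AAAG", "CAAA")
```

"Cell updates per second" in the abstract counts exactly the inner `max` evaluations: one per (i, j) pair.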
36. Comparing features extractors in EEG-based cognitive fatigue detection of demanding computer tasks.
- Author
-
Rifai Chai, Smith MR, Nguyen TN, Sai Ho Ling, Coutts AJ, and Nguyen HT
- Subjects
- Adolescent, Adult, Bayes Theorem, Humans, Neural Networks, Computer, Signal Processing, Computer-Assisted, Young Adult, Algorithms, Cognition, Computers, Electroencephalography methods, Mental Fatigue diagnosis, Task Performance and Analysis
- Abstract
An electroencephalography (EEG)-based classification system could be used as a tool for detecting cognitive fatigue from demanding computer tasks. The most widely used feature extractor in EEG-based fatigue classification is power spectral density (PSD). This paper investigates PSD and three alternative feature extraction methods, in order to find the best feature extractor for the classification of cognitive fatigue during cognitively demanding tasks. The compared methods are power spectral entropy (PSE), wavelet, and autoregressive (AR). A Bayesian neural network was selected as the classifier in this study. The results showed that the PSD and PSE methods provide an average accuracy of 60% for each computer task. This finding is slightly improved by the wavelet method, which has an average accuracy of 61%. The AR method is the best feature extractor compared with PSD, PSE and wavelet in this study, with accuracies of 75.95% in the AX-continuous performance test (AX-CPT), 75.23% in the psychomotor vigilance test (PVT) and 76.02% in the Stroop task (p-value < 0.05).
- Published
- 2015
- Full Text
- View/download PDF
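The PSD features compared above typically reduce to band powers of the EEG spectrum (e.g. theta, alpha, beta). A minimal periodogram sketch using only numpy, applied to a synthetic signal; the band edges, sampling rate, and test signal are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi) Hz from a simple FFT periodogram PSD."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

fs = 256                                   # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)              # 4-second epoch
eeg = np.sin(2 * np.pi * 10 * t)           # synthetic 10 Hz "alpha" rhythm
alpha = band_power(eeg, fs, 8, 13)         # alpha band power
beta = band_power(eeg, fs, 13, 30)         # beta band power
```

A fatigue classifier would use a vector of such band powers (or their ratios) per channel and epoch as the PSD feature; PSE instead computes the entropy of the normalized `psd` array.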
37. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
- Author
-
Neylon J, Sheng K, Yu V, Chen Q, Low DA, Kupelian P, and Santhanam A
- Subjects
- Humans, Imaging, Three-Dimensional methods, Lung diagnostic imaging, Lung Neoplasms diagnostic imaging, Lung Neoplasms radiotherapy, Models, Biological, Phantoms, Imaging, Radiotherapy Dosage, Radiotherapy, Computer-Assisted instrumentation, Tomography, X-Ray Computed methods, Water, Algorithms, Computer Graphics instrumentation, Computers, Radiotherapy, Computer-Assisted methods
- Abstract
Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU., Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. 
The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm., Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method., Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
- Published
- 2014
- Full Text
- View/download PDF
38. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.
- Author
-
Zhao J, Hu L, Ding Y, Xu G, and Hu M
- Subjects
- Artificial Intelligence economics, Internet economics, Algorithms, Computers economics, Computing Methodologies
- Abstract
The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem is divided into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to deal with the data obtained from the improved PSO-based process and get the final solution. Thus the whole approach achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future problems. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random and optimal migration. As a result, the proposed PS-ES approach can make live VM migration events more effective and valuable.
- Published
- 2014
- Full Text
- View/download PDF
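The hybrid idea behind PS-ES — PSO's swarm search coupled with an SA-style probabilistic acceptance rule — can be illustrated generically. This is not the paper's placement-selection algorithm: it is a toy hybrid minimizing a sphere function, with every parameter (inertia, acceleration coefficients, cooling schedule) an invented assumption:

```python
import math
import random

def pso_sa(obj, dim=2, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=obj)[:]
    temp = 1.0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # standard PSO velocity update: inertia + cognitive + social pull
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            delta = obj(pos[i]) - obj(pbest[i])
            # SA-style acceptance: keep improvements, sometimes accept worse moves
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
            if obj(pbest[i]) < obj(gbest):
                gbest = pbest[i][:]
        temp *= 0.95  # cooling: late iterations accept fewer worse moves
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_sa(sphere)
```

In the VM-placement setting, a particle would encode a candidate assignment of migrating VMs to hosts and `obj` would be the estimated incremental energy consumption.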
39. Computational Intelligence Methods for Bioinformatics and Biostatistics : 14th International Meeting, CIBB 2017, Cagliari, Italy, September 7-9, 2017, Revised Selected Papers
- Author
-
Massimo Bartoletti, Annalisa Barla, Andrea Bracciali, Gunnar W. Klau, Leif Peterson, Alberto Policriti, Roberto Tagliaferri, Massimo Bartoletti, Annalisa Barla, Andrea Bracciali, Gunnar W. Klau, Leif Peterson, Alberto Policriti, and Roberto Tagliaferri
- Subjects
- Bioinformatics, Artificial intelligence, Machine theory, Algorithms, Computers
- Abstract
This book constitutes the thoroughly refereed post-conference proceedings of the 14th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics, CIBB 2017, held in Cagliari, Italy, in September 2017. The 19 revised full papers presented were carefully reviewed and selected from 44 submissions. The papers deal with the application of computational intelligence to open problems in bioinformatics, biostatistics, systems and synthetic biology, medical informatics, and computational approaches to the life sciences in general.
- Published
- 2019
40. Supercomputing : 5th Russian Supercomputing Days, RuSCDays 2019, Moscow, Russia, September 23–24, 2019, Revised Selected Papers
- Author
-
Vladimir Voevodin, Sergey Sobolev, Vladimir Voevodin, and Sergey Sobolev
- Subjects
- Computers, Special purpose, Computers, Computer vision, Algorithms, Computer programming, Operating systems (Computers)
- Abstract
This book constitutes the refereed post-conference proceedings of the 5th Russian Supercomputing Days, RuSCDays 2019, held in Moscow, Russia, in September 2019. The 60 revised full papers presented were carefully reviewed and selected from 127 submissions. The papers are organized in the following topical sections: parallel algorithms; supercomputer simulation; HPC, BigData, AI: architectures, technologies, tools; and distributed and cloud computing.
- Published
- 2019
41. Are All Literature Citations Equally Important? Automatic Citation Strength Estimation and Its Applications.
- Author
-
Xiaojun Wan and Fang Liu
- Subjects
ALGORITHMS, COMPUTERS, CITATION analysis
- Published
- 2014
- Full Text
- View/download PDF
42. Number-Theoretic Methods in Cryptology : First International Conference, NuTMiC 2017, Warsaw, Poland, September 11-13, 2017, Revised Selected Papers
- Author
-
Jerzy Kaczorowski, Josef Pieprzyk, and Jacek Pomykała
- Subjects
- Cryptography, Data encryption (Computer science), Computer science—Mathematics, Discrete mathematics, Algorithms, Number theory, Software engineering, Computers
- Abstract
This book constitutes the refereed post-conference proceedings of the First International Conference on Number-Theoretic Methods in Cryptology, NuTMiC 2017, held in Warsaw, Poland, in September 2017. The 15 revised full papers presented in this book together with 3 invited talks were carefully reviewed and selected from 32 initial submissions. The papers are organized in topical sections on elliptic curves in cryptography; public-key cryptography; lattices in cryptography; number theory; pseudorandomness; and algebraic structures and analysis.
- Published
- 2018
43. RESEARCH ON COMPUTER INTELLIGENT COLLABORATIVE FILTERING ALGORITHM FOR PERSONALIZED NETWORK DATA RECOMMENDATION SYSTEM.
- Author
-
YONG YU
- Subjects
RECOMMENDER systems, DATA privacy, ALGORITHMS, SOCIAL networks, COMPUTERS
- Abstract
To protect private user data in social networks, this paper proposes a computer-based intelligent collaborative filtering algorithm for personalized network data recommendation systems. The algorithm predicts a user's preference for a specific item from the user's evaluations of groups of items with similar features, thereby achieving personalized recommendation. The experimental results show that as the number of selected item features N increases, the algorithm's MAE, RMSE, and NDCG5 gradually improve. This is mainly because increasing the number of features under a fixed similarity threshold makes the data granularity finer and helps describe item features more accurately. With N fixed, the effect of the number of nearest neighbors s in similar groups on algorithm performance was studied further: as s increases, MAE, RMSE, and NDCG5 show a decreasing trend. Although the algorithm loses some recommendation accuracy, the loss remains within an acceptable range. Notably, because the system takes only generalized data as input, private user data is effectively protected. Based on the overall experimental results, this algorithm has significant value in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
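The prediction step described in the abstract above, estimating a user's preference for an item from the evaluations of similar raters, follows the usual collaborative filtering pattern. A minimal user-based sketch with cosine similarity and a similarity-weighted neighbor average (toy data; the paper's algorithm instead works on generalized item-feature groups for privacy, which this sketch does not model):

```python
import math

def cosine(u, v):
    """Cosine similarity over the co-rated items of two rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(ratings, user, item, k=2):
    """Predict `user`'s rating of `item` as a similarity-weighted average
    of the k most similar users who rated that item."""
    neighbours = [(cosine(ratings[user], ratings[v]), v)
                  for v in ratings if v != user and item in ratings[v]]
    neighbours.sort(reverse=True)
    top = neighbours[:k]
    num = sum(s * ratings[v][item] for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else 0.0
```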
44. Parallel Computational Technologies : 11th International Conference, PCT 2017, Kazan, Russia, April 3–7, 2017, Revised Selected Papers
- Author
-
Leonid Sokolinsky and Mikhail Zymbler
- Subjects
- Computer systems, Computer simulation, Numerical analysis, Computers, Computer programming, Algorithms
- Abstract
This book constitutes the refereed proceedings of the 11th International Conference on Parallel Computational Technologies, PCT 2017, held in Kazan, Russia, in April 2017. The 24 revised full papers presented were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on high performance architectures, tools and technologies; parallel numerical algorithms; supercomputer simulation.
- Published
- 2017
45. Supercomputing : Second Russian Supercomputing Days, RuSCDays 2016, Moscow, Russia, September 26–27, 2016, Revised Selected Papers
- Author
-
Vladimir Voevodin and Sergey Sobolev
- Subjects
- Computer simulation, Computer programming, Computer science—Mathematics, Computers, Numerical analysis, Algorithms
- Abstract
This book constitutes the refereed proceedings of the Second Russian Supercomputing Days, RuSCDays 2016, held in Moscow, Russia, in September 2016. The 28 revised full papers presented were carefully reviewed and selected from 94 submissions. The papers are organized in topical sections on the present of supercomputing: large tasks solving experience; the future of supercomputing: new technologies.
- Published
- 2017
46. Constructive Side-Channel Analysis and Secure Design : 7th International Workshop, COSADE 2016, Graz, Austria, April 14-15, 2016, Revised Selected Papers
- Author
-
François-Xavier Standaert and Elisabeth Oswald
- Subjects
- Cryptography, Data encryption (Computer science), Data protection, Electronic data processing—Management, Algorithms, Computer science—Mathematics, Discrete mathematics, Computers
- Abstract
This book constitutes revised selected papers from the 7th International Workshop on Constructive Side-Channel Analysis and Secure Design, COSADE 2016, held in Graz, Austria, in April 2016. The 12 papers presented in this volume were carefully reviewed and selected from 32 submissions. They were organized in topical sections named: security and physical attacks; side-channel analysis (case studies); fault analysis; and side-channel analysis (tools).
- Published
- 2016
47. The Semantic Web: ESWC 2021 Satellite Events : Virtual Event, June 6–10, 2021, Revised Selected Papers
- Author
-
Ruben Verborgh, Anastasia Dimou, Aidan Hogan, Claudia d'Amato, Ilaria Tiddi, Arne Bröring, Simon Mayer, Femke Ongenae, Riccardo Tommasini, and Mehwish Alam
- Subjects
- Information storage and retrieval systems, Computer systems, Algorithms, Machine theory, Information technology—Management, Computers
- Abstract
This book constitutes the proceedings of the satellite events held at the 18th Extended Semantic Web Conference, ESWC 2021, in June 2021. The conference was held online, due to the COVID-19 pandemic. During ESWC 2021, the following six workshops took place: 1) the Second International Workshop on Deep Learning meets Ontologies and Natural Language Processing (DeepOntoNLP 2021); 2) the Second International Workshop on Semantic Digital Twins (SeDiT 2021); 3) the Second International Workshop on Knowledge Graph Construction (KGC 2021); 5) the 6th International Workshop on eXplainable SENTIment Mining and EmotioN deTection (X-SENTIMENT 2021); 6) the 4th International Workshop on Geospatial Linked Data (GeoLD 2021).
- Published
- 2021
48. Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
- Author
-
Riccardo Iacobucci, Samar Helou, Victoria Abou-Khalil, Elie El Helou, and Ken Kiyono
- Subjects
020205 medical informatics, genetic structures, Computer science, clinic layout, Computer applications to medicine. Medical informatics, R858-859.7, Health Informatics, 02 engineering and technology, pose estimation, gaze, 03 medical and health sciences, 0302 clinical medicine, Human–computer interaction, patient-physician communication, Ethnography, 0202 electrical engineering, electronic engineering, information engineering, electronic medical records, Humans, computational ethnography, 030212 general & internal medicine, Set (psychology), Pose, Anthropology, Cultural, Physician-Patient Relations, Original Paper, dialogue, Computers, Communication, Gaze, Face (geometry), Key (cryptography), doctor-patient-computer interaction, voice activity, Public aspects of medicine, RA1-1270, Classifier (UML), Voice activity, Algorithms
- Abstract
Background: The study of doctor-patient-computer interactions is a key research area for examining doctor-patient relationships; however, studying these interactions is costly and obtrusive, as researchers usually set up complex mechanisms or intrude on consultations to collect the data, which is then analyzed manually.
Objective: We aimed to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of screen gaze and dialogue combinations in doctor-patient-computer interactions.
Methods: The classifier's input is video taken by doctors using their computers' internal camera and microphone. By estimating the key points of the doctor's face and the presence of voice activity, we estimate the type of interaction that is taking place. The classification output of each video segment is 1 of 4 interaction classes: (1) screen gaze and dialogue, wherein the doctor is gazing at the computer screen while conversing with the patient; (2) dialogue, wherein the doctor is gazing away from the computer screen while conversing with the patient; (3) screen gaze, wherein the doctor is gazing at the computer screen without conversing with the patient; and (4) other, wherein no screen gaze or dialogue is detected. We evaluated the classifier using 30 minutes of video provided by 5 doctors simulating consultations in their clinics in both semi-inclusive and fully inclusive layouts.
Results: The classifier achieved an overall accuracy of 0.83, a performance similar to that of a human coder. Like the human coder, the classifier was more accurate in fully inclusive layouts than in semi-inclusive layouts.
Conclusions: The proposed classifier can be used by researchers, care providers, designers, medical educators, and others who are interested in exploring and answering questions related to screen gaze and dialogue in doctor-patient-computer interactions.
- Published
- 2021
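The four interaction classes in the abstract above are a direct combination of the two per-segment detections (screen gaze and voice activity). A minimal sketch of that decision logic (the function name and boolean inputs are illustrative; the paper derives these signals from facial key points and audio):

```python
def classify_segment(screen_gaze: bool, voice_activity: bool) -> str:
    """Map the two per-segment detections to one of the four
    interaction classes described in the study."""
    if screen_gaze and voice_activity:
        return "screen gaze and dialogue"  # gazing at screen while talking
    if voice_activity:
        return "dialogue"                  # talking, gaze away from screen
    if screen_gaze:
        return "screen gaze"               # gazing at screen, no dialogue
    return "other"                         # neither signal detected
```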
49. PERFORMANCE MEASUREMENT WITH HIGH-PERFORMANCE COMPUTER USING HW-GA ANOMALY-DETECTION ALGORITHMS FOR STREAMING DATA.
- Author
-
FONDAJ, JAKUP, HASANI, ZIRIJE, and KRRABAJ, SAMEDIN
- Subjects
ANOMALY detection (Computer security), ALGORITHMS, COMPUTERS, GENETIC algorithms, COVID-19
- Abstract
Anomaly detection for streaming real-time data is very important; even more significant is an algorithm's performance, which must meet real-time requirements. Anomaly detection is crucial in every sector because, by knowing what is going wrong with data and digital systems, we can make better decisions. Dealing with real-time data requires speed; for this reason, the aim of this paper is to measure the performance of our proposed Holt-Winters genetic algorithm (HW-GA) against other anomaly-detection algorithms on large amounts of data, and to measure how other factors, such as visualization and the performance of the testing environment, affect the algorithm's performance. The experiments are done in R with different data sets: real Covid-19 and IoT sensor data that we collected from Smart Agriculture Libelium sensors and e-dnevnik, as well as three benchmarks from the Numenta data sets. The real data has no known anomalies, whereas the anomalies in the benchmark data are known; this was done in order to evaluate how the algorithm works in both situations. The novelty of this paper is that performance is tested on three different computers (one of them a high-performance computer), that a large amount of data is used for testing, and that the effect of the visualization phase on the algorithm's performance is examined. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
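The core of a Holt-Winters-style detector like the one benchmarked above is flagging points that deviate too far from the model's one-step forecast. A minimal sketch using Holt's level-plus-trend smoothing with a k-sigma residual threshold (no seasonal term and no genetic parameter tuning, so this is a simplification of HW-GA; all parameter values are illustrative):

```python
import statistics

def holt_anomalies(series, alpha=0.5, beta=0.3, k=3.0, warmup=10):
    """Return indices whose deviation from a Holt (level + trend)
    one-step forecast exceeds k standard deviations of past residuals."""
    level, trend = series[0], series[1] - series[0]
    residuals, anomalies = [], []
    for t in range(2, len(series)):
        forecast = level + trend          # one-step-ahead prediction
        err = series[t] - forecast
        if len(residuals) >= warmup:      # wait until sigma is meaningful
            sigma = statistics.pstdev(residuals)
            if sigma > 0 and abs(err) > k * sigma:
                anomalies.append(t)
        residuals.append(err)
        # standard Holt updates for level and trend
        new_level = alpha * series[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return anomalies
```

A streaming implementation would additionally cap the residual window and, as in HW-GA, tune alpha, beta, and k with a genetic search instead of fixing them.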
50. Automated Computer Vision Assessment of Hypomimia in Parkinson Disease: Proof-of-Principle Pilot Study
- Author
-
Bryan Ho, Camilla Kilbane, Avner Abrami, Rachel Ostrand, Guillermo A. Cecchi, and Steven A Gunzler
- Subjects
Adult, Male, Telemedicine, 2019-20 coronavirus outbreak, Hypomimia, Health Informatics, Pilot Projects, Disease, lcsh:Computer applications to medicine. Medical informatics, Convolutional neural network, 050105 experimental psychology, computer vision, 03 medical and health sciences, 0302 clinical medicine, Medicine, Humans, 0501 psychology and cognitive sciences, Computer vision, Vision, Ocular, Aged, Aged, 80 and over, Neurologic Examination, Facial expression, Original Paper, Receiver operating characteristic, business.industry, Computers, lcsh:Public aspects of medicine, 05 social sciences, lcsh:RA1-1270, Parkinson Disease, Middle Aged, Facial muscles, medicine.anatomical_structure, hypomimia, lcsh:R858-859.7, Female, Artificial intelligence, telemedicine, medicine.symptom, business, 030217 neurology & neurosurgery, Algorithms
- Abstract
Background: Facial expressions require the complex coordination of 43 different facial muscles. Parkinson disease (PD) affects facial musculature, leading to “hypomimia” or “masked facies.”
Objective: We aimed to determine whether modern computer vision techniques can be applied to detect masked facies and quantify drug states in PD.
Methods: We trained a convolutional neural network on images extracted from videos of 107 self-identified people with PD, along with 1595 videos of controls, in order to detect PD hypomimia cues. This trained model was applied to clinical interviews of 35 PD patients in their on and off drug motor states, and to seven journalist interviews of the actor Alan Alda obtained before and after he was diagnosed with PD.
Results: The algorithm achieved a test set area under the receiver operating characteristic curve of 0.71 on 54 subjects for detecting PD hypomimia, compared to a value of 0.75 for trained neurologists using the Unified Parkinson Disease Rating Scale-III Facial Expression score. Additionally, the model's accuracy in classifying the on and off drug states in the clinical samples was 63% (22/35), in contrast to an accuracy of 46% (16/35) when using clinical rater scores. Finally, each of Alan Alda's seven interviews was successfully classified as occurring before (versus after) his diagnosis, with 100% accuracy (7/7).
Conclusions: This proof-of-principle pilot study demonstrated that computer vision holds promise as a valuable tool for assessing PD hypomimia and for monitoring a patient's motor state in an objective and noninvasive way, particularly given the increasing importance of telemedicine.
- Published
- 2020
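The area under the receiver operating characteristic curve reported above can be computed directly from labels and classifier scores as a rank statistic. A generic evaluation sketch, not tied to the study's data:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive outranks a
    randomly chosen negative (Wilcoxon-Mann-Whitney); ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The quadratic pairwise loop is fine for small evaluation sets; for large ones the same statistic is usually computed from sorted ranks in O(n log n).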