4,870 results
Search Results
2. A novel algorithm for maximum power point tracking using computer vision (CVMPPT).
- Author
-
Ahmadi M, Abrari M, Ghanaatshoar M, and Khalafi A
- Subjects
- Reproducibility of Results, Electronics, Temperature, Algorithms, Computers
- Abstract
The behavior of an illuminated solar module can be characterized by its power-voltage curve. Tracking the peak of this curve is essential to harvest the maximum power from the module. The position of the peak varies with temperature and irradiance and needs to be traced. Under partial shading conditions, the number of peaks increases, making it more difficult to find the global maximum power point (MPP). Various methods used for maximum power point tracking (MPPT) are based on iterations; these methods are time-consuming and fail to work satisfactorily under rapidly changing environmental conditions. In this paper, a novel algorithm is proposed that, for the first time, utilizes computer vision to find the global maximum power point. This algorithm, which is implemented in Matlab/Simulink, is free of voltage iterations and provides real-time data on the maximum power point. The proposed algorithm increases the speed and reliability of MPP tracking by replacing analogue electronic calculations with digital means. The validity of the algorithm is experimentally verified., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Ahmadi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
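Illustrative sketch (not from the paper): the core idea of an iteration-free global MPP search is to evaluate the sampled P-V curve directly and take its global peak, which stays valid under the multiple local peaks caused by partial shading. The curve model and function names below are hypothetical.

```python
import numpy as np

def global_mpp(voltage, current):
    """Locate the global maximum power point on a sampled P-V curve.
    A direct scan finds the global peak without voltage iteration,
    even when partial shading creates several local peaks."""
    power = voltage * current
    k = int(np.argmax(power))
    return voltage[k], power[k]

# Hypothetical two-peak curve imitating a partially shaded module.
v = np.linspace(0.0, 40.0, 400)
i = np.maximum(8.0 - 0.25 * v, 0.0) + 3.0 * np.exp(-((v - 30.0) ** 2) / 8.0)
v_mpp, p_mpp = global_mpp(v, i)
print(f"Global MPP at {v_mpp:.1f} V, {p_mpp:.1f} W")
```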
- Published
- 2024
3. Optimizing FPGA implementation of high-precision chaotic systems for improved performance.
- Author
-
Damaj I, Zaher A, and Lawand W
- Subjects
- Communication, Computers, Algorithms
- Abstract
Developing chaotic systems-on-a-chip is gaining much attention due to its great potential in securing communication, encrypting data, generating random numbers, and more. The digital implementation of chaotic systems strives to achieve high performance in terms of time, speed, complexity, and precision. In this paper, the focus is on developing high-speed Field Programmable Gate Array (FPGA) cores for chaotic systems, exemplified by the Lorenz system. The developed cores correspond to numerical integration techniques that extend to equations of up to the sixth order at high precision. The investigation comprises a thorough analysis and evaluation of the developed cores according to algorithm complexity and the achieved precision, hardware area, throughput, power consumption, and maximum operational frequency. Validations are done through simulations and careful comparisons with outstanding, closely related work from the recent literature. The results affirm the successful creation of highly efficient sixth-order Lorenz discretizations, achieving a high throughput of 3.39 Gbps with a precision of 16 bits. Additionally, an outstanding throughput of 21.17 Gbps was achieved for the first-order implementation coupled with a high precision of 64 bits. These outcomes set our work as a benchmark for high-performance characteristics, surpassing similar investigations reported in the literature., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2024 Damaj et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
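Illustrative sketch (not the authors' FPGA cores): the kind of numerical integration such cores implement, shown here as a forward-Euler update of the classical Lorenz system, with textbook parameters assumed for illustration.

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system; a hardware core
    evaluates the same update in fixed- or floating-point arithmetic."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)
for _ in range(10_000):
    state = lorenz_step(*state)
print(state)
```

Higher-order integrators (e.g., Runge-Kutta) spend more arithmetic per step for better precision, which is exactly the area/precision/throughput trade-off the paper evaluates.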
- Published
- 2024
4. Reports from Zhejiang Normal University Highlight Recent Findings in Engineering Software (Paper Intrusion Detection Approach for Cloud and Iot Environments Using Deep Learning and Capuchin Search Algorithm)
- Subjects
- Engineering -- Computer programs, Security software, Algorithms, Network security software, Algorithm, Engineering software, Computers, News, opinion and commentary
- Abstract
2023 MAR 8 (VerticalNews) -- By a News Reporter-Staff News Editor at Computer Weekly News -- Current study results on Engineering - Engineering Software have been published. According to news [...]
- Published
- 2023
5. Research on the Fusion of Hybrid Fuzzy Clustering Algorithm and Computer Automatic Test Paper Composition Algorithm.
- Author
-
Kan, Baopeng
- Subjects
- COMPUTERS, COMPUTER algorithms, FUZZY algorithms, COMPUTER workstation clusters, ALGORITHMS, HIGHER education exams
- Abstract
In order to improve the effect of intelligent automatic test paper composition, this paper combines a hybrid fuzzy clustering algorithm with a computer automatic test paper composition algorithm and constructs an automatic test paper composition system on that basis. The hybrid fuzzy clustering method serves as the basic algorithm of the system and is improved according to the actual needs of intelligent paper composition. An intelligent algorithm takes the relevant constraint parameters as input and, combined with the original parameters, selects the most suitable test questions from the database and assembles them into test papers. Finally, the system structure is built around the requirements of intelligent test paper composition. The experimental research shows that the proposed system has a good test paper composition function and can effectively promote the progress of the intelligent examination mode in colleges and universities. [ABSTRACT FROM AUTHOR]
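Background sketch (the abstract does not specify its hybrid algorithm): the standard fuzzy c-means membership update that fuzzy clustering hybrids typically build on, with made-up data.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update:
    u[i, j] = d_ij^(-2/(m-1)) / sum_k d_kj^(-2/(m-1)),
    where d_ij is the distance of point j to center i."""
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)

X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0]])   # e.g., question features
C = np.array([[1.0, 1.0], [8.0, 8.0]])               # cluster centers
print(fcm_memberships(X, C))   # rows: clusters, columns: points
```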
- Published
- 2022
6. Computer-aided accurate calculation of interacted volumes for 3D isosurface point clouds of molecular electrostatic potential.
- Author
-
Lv K, Zhang J, Liu X, Zhou Y, and Liu K
- Subjects
- Static Electricity, Computer Simulation, Algorithms, Computers
- Abstract
The quality of the chiral environment (i.e., the catalytic pocket) is directly related to the performance of chiral catalysts. Existing methods demand substantial computing power and time, making it difficult to quickly judge the interaction between chiral catalysts and substrates and thus to accurately evaluate the effects of chiral catalytic pockets. In this paper, for 3D isosurface point clouds of molecular electrostatic potential, we propose a robust, simulation-based method to detect interacted points and then accurately compute the corresponding interacted volumes. First, using the existing marching cubes algorithm, we construct 3D models with triangular surfaces for the isosurface point clouds of molecular electrostatic potentials. Second, using our improved hierarchical bounding boxes algorithm, we filter out most redundant non-collision points. Third, using the normal vectors of the remaining points and related triangles, we robustly determine the interacted points to construct interacted sets. Finally, by combining classical slicing with our multi-contour segmenting, we accurately calculate the interacted volumes. Over three groups of point clouds of chemical molecules, experimental results show that our method effectively removes the non-interacted points at average rates of 71.65%, 77.76%, and 71.82%, and calculates the interacted volumes with average relative errors of 1.7%, 1.6%, and 1.9%, respectively., Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2023 Elsevier Inc. All rights reserved.)
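Toy sketch of the bounding-box pruning step (not the authors' hierarchical variant): points of one cloud lying outside the other cloud's axis-aligned bounding box cannot collide and are discarded before any fine-grained test.

```python
import numpy as np

def aabb_filter(points_a, points_b, margin=0.0):
    """Keep only the points of cloud A inside cloud B's axis-aligned
    bounding box (expanded by `margin`); the rest cannot interact."""
    lo = points_b.min(axis=0) - margin
    hi = points_b.max(axis=0) + margin
    mask = np.all((points_a >= lo) & (points_a <= hi), axis=1)
    return points_a[mask]

a = np.random.rand(1000, 3) * 4.0   # candidate isosurface points
b = np.random.rand(500, 3) + 1.0    # reference cloud inside [1, 2]^3
print(len(aabb_filter(a, b)), "of", len(a), "points survive the filter")
```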
- Published
- 2024
7. Algorithm-Oriented SIMD Computer Mathematical Model and Its Application
- Author
-
Jiang, Yongfeng and Li, Yuan
- Abstract
This paper designs a professional and practical SIMD computer mathematical model based on the SIMD physical machine model combined with the variable addition method. The model is applied in image collection, processing, and display operations, and a SIMD data-parallel image processing system is established by exploiting the parallel computing advantages of the mathematical model. In addition, the data-parallel image processing algorithm is introduced and the convolutional neural network algorithm is optimized to significantly improve key performance measures such as the accuracy of the application system. The experimental results show that the highest accuracy of the data-parallel image processing algorithm reaches 93.3% and the lowest error rate reaches 0.11%, which demonstrates the superiority of the SIMD computer mathematical model in image processing applications.
- Published
- 2022
8. Scientific papers and artificial intelligence. Brave new world?
- Author
-
Nexøe, Jørgen
- Subjects
- COMPUTERS, MANUSCRIPTS, ARTIFICIAL intelligence, MACHINE learning, DATA analysis, MEDICAL literature, MEDICAL research, ALGORITHMS
- Published
- 2023
9. Data from Norwegian University of Science and Technology (NTNU) Provide New Insights into Information Technology (Classification of Forensic Hyperspectral Paper Data Using Hybrid Spectral Similarity Algorithms)
- Subjects
- Algorithms, Algorithm, Computers
- Abstract
2022 FEB 1 (VerticalNews) -- By a News Reporter-Staff News Editor at Information Technology Newsweekly -- Investigators publish new report on Information Technology. According to news reporting originating in Gjovik, [...]
- Published
- 2022
10. The SHARK integral generation and digestion system.
- Author
-
Neese F
- Subjects
- Electrons, Digestion, Algorithms, Computers
- Abstract
In this paper, the SHARK integral generation and digestion engine is described. In essence, SHARK is based on a reformulation of the popular McMurchie/Davidson approach to molecular integrals. This reformulation leads to an efficient algorithm that is driven by BLAS level 3 operations. The algorithm is particularly efficient for high angular momentum basis functions (up to L = 7 is available by default, but the algorithm is programmed for arbitrary angular momenta). SHARK features a significant number of specific programming constructs that are designed to greatly simplify the workflow in quantum chemical program development and avoid undesirable code duplication to the largest possible extent. SHARK can handle segmented, generally and partially generally contracted basis sets. It can be used to generate a host of one- and two-electron integrals over various kernels, including two-, three-, and four-index repulsion integrals, integrals over Gauge Including Atomic Orbitals (GIAOs), relativistic integrals and integrals featuring a finite nucleus model. SHARK provides routines to evaluate Fock-like matrices, generate integral transformations, and perform related tasks. SHARK is the essential engine inside the ORCA package that drives essentially all tasks related to integrals over basis functions in version ORCA 5.0 and higher. Since the core of SHARK is based on low-level basic linear algebra subprograms (BLAS) operations, it is expected to perform well not only on present-day but also on future hardware, provided that the hardware manufacturer provides a properly optimized BLAS library for matrix and vector operations. Representative timings and comparisons to the Libint library used by ORCA are reported for Intel i9 and Apple M1 Max processors., (© 2022 The Author. Journal of Computational Chemistry published by Wiley Periodicals LLC.)
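Schematic of why a BLAS level-3 formulation pays off (hypothetical shapes, not SHARK's actual data layout): contracting a block of integrals with density-like matrices becomes a single matrix-matrix multiply.

```python
import numpy as np

nab, ncd, nvec = 256, 512, 8
integrals = np.random.rand(nab, ncd)    # block of (ab|cd) integrals
densities = np.random.rand(ncd, nvec)   # several density-matrix columns

# G_ab = sum_cd (ab|cd) D_cd for all nvec densities in one GEMM call,
# which lets the engine ride the BLAS library's peak matrix throughput.
fock_contribs = integrals @ densities   # shape (nab, nvec)
print(fock_contribs.shape)
```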
- Published
- 2023
11. Video Sequence Segmentation Based on K-Means in Air-Gap Data Transmission for a Cluttered Environment.
- Author
-
Mazurek P and Bak D
- Subjects
- Communication, Information Systems, Cluster Analysis, Image Processing, Computer-Assisted methods, Algorithms, Computers
- Abstract
An air gap is a technique that increases the security of information systems. The use of unconventional communication channels allows for obtaining communication that is of interest to the attacker as well as to cybersecurity engineers. One very dangerous form of attack is the use of computer screen brightness modulation, which is not visible to the user but can be observed from a distance by the attacker. Once infected, the computer can transmit data over long distances. Even in the absence of direct screen visibility, transmission can be realized by analyzing the modulated reflection of the monitor's afterglow. The paper presents a new method for the automatic segmentation of video sequences to retrieve the transmitted data that does not have the drawbacks of the previously known region-growing (filling) method based on analysis of adjacent pixels. A fast camera operating at 380 fps was used for image acquisition. The method uses the characteristics of the amplitude spectrum for individual pixels, which is specific to the light sources in the room, and clustering with the k-means algorithm to group pixels into larger areas. Then, by averaging the values for individual areas, it is possible to recover the 2-PAM (pulse-amplitude modulation) signal even when interference in the area is 1000 times stronger than the transmitted signal, as shown in the experiments. The method does not require high-quality lenses.
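Condensed sketch of the segmentation idea, assuming numpy and scikit-learn are available: per-pixel amplitude spectra serve as clustering features, and averaging within one cluster suppresses noise before demodulation.

```python
import numpy as np
from sklearn.cluster import KMeans

T, H, W = 256, 32, 32
frames = np.random.rand(T, H, W)     # synthetic stand-in for camera data

# Feature per pixel: amplitude spectrum of its temporal brightness signal,
# which is characteristic of the light source illuminating that pixel.
spectra = np.abs(np.fft.rfft(frames, axis=0))        # (T//2+1, H, W)
features = spectra.reshape(spectra.shape[0], -1).T   # (H*W, T//2+1)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(features).reshape(H, W)

# Averaging the raw signal over one region raises SNR before 2-PAM slicing.
recovered = frames[:, labels == 0].mean(axis=1)
print(labels.shape, recovered.shape)
```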
- Published
- 2023
12. Phylogenetic tree reconstruction via graph cut presented using a quantum-inspired computer.
- Author
-
Onodera W, Hara N, Aoki S, Asahi T, and Sawamura N
- Subjects
- Phylogeny, Cluster Analysis, Databases, Protein, Algorithms, Computers
- Abstract
Phylogenetic trees are essential tools in evolutionary biology that present information on evolutionary events among organisms and molecules. For a dataset of n sequences, (2n-5)!! possible tree topologies exist, so determining the optimum topology by brute force is infeasible. Recently, a recursive graph cut on a graph-represented similarity matrix has proven accurate in reconstructing a phylogenetic tree containing distantly related sequences. However, identifying the optimum graph cut is challenging, and approximate solutions are currently utilized. Here, a phylogenetic tree was reconstructed with an improved graph cut using a quantum-inspired computer, the Fujitsu Digital Annealer (DA), and the algorithm was named the "Normalized-Minimum cut by Digital Annealer (NMcutDA) method". First, the criterion for the graph cut, the normalized cut value, was compared with existing clustering methods. Based on the cut, we verified that the simulated phylogenetic tree could be reconstructed with the highest accuracy when sequences were divergent. Moreover, for some actual data from the structure-based protein classification database, only NMcutDA could cluster sequences into the correct superfamilies. In conclusion, NMcutDA reconstructed better phylogenetic trees than other methods by optimizing the graph cut. We anticipate that when the diversity of sequences is sufficiently high, NMcutDA can be utilized with high efficiency., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2022 Elsevier Inc. All rights reserved.)
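For reference, the normalized-cut criterion being optimized, evaluated for one bipartition of a similarity matrix W; the annealer's job is to search over bipartitions minimizing this value.

```python
import numpy as np

def ncut_value(W, mask):
    """Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)
    for the bipartition given by `mask` (A) and its complement (B)."""
    A, B = mask, ~mask
    cut = W[np.ix_(A, B)].sum()
    return cut / W[A, :].sum() + cut / W[B, :].sum()

W = np.array([[0, 5, 1, 0],
              [5, 0, 1, 0],
              [1, 1, 0, 6],
              [0, 0, 6, 0]], dtype=float)   # toy sequence similarities
mask = np.array([True, True, False, False])
print(ncut_value(W, mask))   # low values indicate a good split
```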
- Published
- 2023
13. Simulation of Neural Firing Dynamics: A Student Project.
- Author
-
Kletsky, E. J.
- Abstract
This paper describes a student project in digital simulation techniques that is part of a graduate systems analysis course entitled Biosimulation. The students chose different simulation techniques to solve a problem related to the neuron model. (MLH)
- Published
- 1975
14. An Ensemble Learning Aided Computer Vision Method with Advanced Color Enhancement for Corroded Bolt Detection in Tunnels.
- Author
-
Tan L, Tang T, and Yuan D
- Subjects
- Machine Learning, Algorithms, Computers
- Abstract
Bolts, as the basic units of tunnel linings, are crucial to safe tunnel service. Owing to the moist and complex environment in the tunnel, corrosion becomes a significant defect of bolts. Computer vision technology is adopted because manual patrol inspection is inefficient and often misses corroded bolts. However, most current studies are conducted in laboratories with good lighting conditions, their effectiveness in actual practice has yet to be considered, and their accuracy also needs improvement. In this paper, we put forward an Ensemble Learning approach combining our Improved MultiScale Retinex with Color Restoration (IMSRCR) and You Only Look Once (YOLO), based on tunnel image data acquired in the field, to detect corroded bolts in the lining. The IMSRCR sharpens and strengthens the features of the lining pictures, weakening the adverse effect of the dim environment compared with the existing MSRCR. Furthermore, we combine models with different parameters that show different performance using the ensemble learning method, greatly improving the accuracy. Sufficient comparisons and ablation experiments based on a dataset collected from a tunnel in service are conducted to prove the superiority of our proposed algorithm.
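Bare-bones illustration (plain multiscale retinex, not the authors' IMSRCR) of the enhancement step applied before detection, assuming scipy is available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Average of log(image) - log(blurred image) over several Gaussian
    scales, then rescaled for display; brightens dim tunnel imagery."""
    img = img.astype(np.float64) + 1.0   # avoid log(0)
    msr = sum(np.log(img) - np.log(gaussian_filter(img, s)) for s in sigmas)
    msr /= len(sigmas)
    out = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (out * 255).astype(np.uint8)

dim = (np.random.rand(120, 160) * 40).astype(np.uint8)   # dark test frame
print(multiscale_retinex(dim).mean())
```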
- Published
- 2022
15. Digital Image Decoder for Efficient Hardware Implementation.
- Author
-
Savić G, Prokin M, Rajović V, and Prokin D
- Subjects
- Algorithms, Computers
- Abstract
Increasing the resolution of digital images and the frame rate of video sequences increases the amount of logical and memory resources required for digital image and video decompression. Therefore, the development of new hardware architectures for digital image decoders with a reduced amount of utilized logical and memory resources becomes a necessity. In this paper, a digital image decoder for efficient hardware implementation is presented. Each block of the proposed digital image decoder is described. The entropy decoder, decoding probability estimator, dequantizer and inverse subband transformer (parts of the digital image decoder) have been developed in a way that allows efficient hardware implementation with a reduced amount of utilized logic and memory resources. It is shown that the proposed hardware realization of the inverse subband transformer requires 20% less memory capacity and uses fewer logic resources compared with the best state-of-the-art realizations. The proposed digital image decoder has been implemented in a low-cost FPGA device, and it has been shown that it requires at least 32% fewer memory resources in comparison to other state-of-the-art decoders that can process high-definition frame sizes. The proposed solution also requires effectively less memory than state-of-the-art architectures that process frame or tile sizes smaller than high definition. The presented digital image decoder has a maximum operating frequency comparable with the highest among the state-of-the-art solutions., Competing Interests: The authors declare no conflict of interest.
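Toy one-dimensional analogue of the inverse subband transform (orthonormal Haar synthesis, assumed purely for illustration; the paper's transformer is a hardware design, not this code).

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into approximation and detail subbands."""
    pairs = x.reshape(-1, 2)
    s = 2.0 ** -0.5
    return s * (pairs[:, 0] + pairs[:, 1]), s * (pairs[:, 0] - pairs[:, 1])

def haar_synthesis(a, d):
    """Inverse subband transform: perfect reconstruction from subbands."""
    s = 2.0 ** -0.5
    out = np.empty(2 * len(a))
    out[0::2] = s * (a + d)
    out[1::2] = s * (a - d)
    return out

x = np.array([4.0, 2.0, 5.0, 7.0])
a, d = haar_analysis(x)
print(haar_synthesis(a, d))   # [4. 2. 5. 7.]
```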
- Published
- 2022
16. Communication-efficient algorithms for solving pressure Poisson equation for multiphase flows using parallel computers.
- Author
-
Ghosh S, Lu J, Gupta V, and Tryggvason G
- Subjects
- Communication, Algorithms, Computers
- Abstract
Numerical solution of partial differential equations on parallel computers using domain decomposition usually requires synchronization and communication among the processors. These operations often have a significant overhead in terms of time and energy. In this paper, we propose communication-efficient parallel algorithms for solving partial differential equations that alleviate this overhead. First, we describe an asynchronous algorithm that removes the requirement of synchronization and checks for termination in a distributed fashion while maintaining the provision to restart iterations if necessary. Then, we build on the asynchronous algorithm to propose an event-triggered communication algorithm that communicates the boundary values to neighboring processors only at certain iterations, thereby reducing the number of messages while maintaining similar accuracy of solution. We demonstrate our algorithms on a successive over-relaxation solver for the pressure Poisson equation arising from variable density incompressible multiphase flows in 3-D and show that our algorithms improve time and energy efficiency., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2022 Ghosh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
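Serial reference sketch of the successive over-relaxation (SOR) iteration that the paper distributes across processors (2-D and synchronous here for brevity; the paper's solver is 3-D, asynchronous, and event-triggered).

```python
import numpy as np

def sor_poisson(f, h, omega=1.7, sweeps=200):
    """SOR for the 2-D Poisson equation (laplacian u = f) with
    homogeneous Dirichlet boundaries on a uniform grid of spacing h."""
    u = np.zeros_like(f)
    n, m = f.shape
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gauss_seidel = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                       + u[i, j - 1] + u[i, j + 1]
                                       - h * h * f[i, j])
                u[i, j] += omega * (gauss_seidel - u[i, j])
    return u

print(sor_poisson(np.ones((33, 33)), h=1.0 / 32).min())
```

In a domain-decomposed version, the sweep runs per subdomain and only boundary rows/columns are exchanged between neighbors, which is exactly where the synchronization overhead targeted by the paper arises.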
- Published
- 2022
17. BIOS-Based Server Intelligent Optimization.
- Author
-
Qi X, Yang J, Zhang Y, and Xiao B
- Subjects
- Markov Chains, Algorithms, Computers
- Abstract
Servers are the infrastructure of enterprise applications, and improving server performance under fixed hardware resources is an important issue. Conducting performance tuning at the application layer is common, but it is not systematic and requires prior knowledge of the running application. Some works performed tuning by dynamically adjusting the hardware prefetching configuration with a predictive model. Similarly, we design a BIOS (Basic Input/Output System)-based dynamic tuning framework for a Taishan 2280 server, including dynamic identification and static optimization. We simulate five workload scenarios (CPU-intensive, etc.) with benchmark tools and perform scenario recognition dynamically with performance monitor counters (PMCs). The adjustable configurations provided by Kunpeng processors reach 2^N (N > 100). Therefore, we propose a joint BIOS optimization algorithm using a deep Q-network. Configuration optimization is modeled as a Markov decision process that starts from a feasible solution and optimizes gradually. To improve the continuous optimization capability, a neighborhood search method under state machine control is added. To assess its performance, we compare our algorithm with the genetic algorithm and particle swarm optimization. Our algorithm improves performance by up to 1.10× compared with the experience-based configuration and performs better in reducing the probability of server downtime. The dynamic tuning framework in this paper is extensible, can be trained to adapt to different scenarios, and is more suitable for servers with many adjustable configurations. Compared with heuristic intelligent search algorithms, the proposed joint BIOS optimization algorithm generates fewer infeasible solutions and is not easily disturbed by initialization.
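Toy tabular stand-in for the deep Q-network (an assumed setup, not the authors' framework): four boolean BIOS knobs, state = configuration, action = flip one knob, reward = change in a made-up benchmark score.

```python
import random

def benchmark(cfg):
    """Hypothetical score with one bad knob interaction."""
    return sum(cfg) - 2.0 * (cfg[0] and cfg[3])

Q = {}
alpha, gamma, eps = 0.5, 0.9, 0.2
cfg = (0, 0, 0, 0)
for _ in range(2000):
    if random.random() < eps:
        a = random.randrange(4)                                # explore
    else:
        a = max(range(4), key=lambda k: Q.get((cfg, k), 0.0))  # exploit
    nxt = tuple(bit ^ (i == a) for i, bit in enumerate(cfg))   # flip knob a
    reward = benchmark(nxt) - benchmark(cfg)
    target = reward + gamma * max(Q.get((nxt, k), 0.0) for k in range(4))
    Q[(cfg, a)] = Q.get((cfg, a), 0.0) + alpha * (target - Q.get((cfg, a), 0.0))
    cfg = nxt
print("final configuration:", cfg, "score:", benchmark(cfg))
```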
- Published
- 2022
18. Motion-Based Object Location on a Smart Image Sensor Using On-Pixel Memory.
- Author
-
Valenzuela W, Saavedra A, Zarkesh-Ha P, and Figueroa M
- Subjects
- Motion, Algorithms, Computers
- Abstract
Object location is a crucial computer vision method often used as a stage preceding object classification. Object-location algorithms require high computational and memory resources, which poses a difficult challenge for portable and low-power devices, even when the algorithm is implemented using dedicated digital hardware. Moving part of the computation to the imager may reduce the memory requirements of the digital post-processor and exploit the parallelism available in the algorithm. This paper presents the architecture of a Smart Imaging Sensor (SIS) that performs object location using pixel-level parallelism. The SIS is based on a custom smart pixel, capable of computing frame differences in the analog domain, and a digital coprocessor that performs morphological operations and connected-components analysis to determine the bounding boxes of the detected objects. The smart-pixel array implements on-pixel temporal difference computation using analog memories to detect motion between consecutive frames. Our SIS can operate in two modes: (1) as a conventional image sensor and (2) as a smart sensor that delivers a binary image highlighting the pixels in which movement is detected between consecutive frames, together with the object bounding boxes. In this paper, we present the design of the smart pixel and evaluate its performance using post-parasitic extraction on a 0.35 µm mixed-signal CMOS process. With a pixel pitch of 32 µm × 32 µm, we achieved a fill factor of 28%. To evaluate the scalability of the design, we ported the layout to a 0.18 µm process, achieving a fill factor of 74%. On an array of 320×240 smart pixels, the circuit operates at a maximum frame rate of 3846 frames per second. The digital coprocessor was implemented and validated on a Xilinx Artix-7 XC7A35T field-programmable gate array that runs at 125 MHz, locates objects in a video frame in 0.614 µs, and has a power consumption of 58 mW.
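Software analogue of the pixel/coprocessor split, assuming numpy and scipy: temporal difference and threshold (done on-pixel in the SIS), then connected components and bounding boxes (done in the digital coprocessor).

```python
import numpy as np
from scipy import ndimage

def locate_moving_objects(prev, curr, thresh=25):
    """Frame difference -> binary motion mask -> connected components ->
    bounding boxes as (row_slice, col_slice) pairs."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    labels, _ = ndimage.label(motion)
    return ndimage.find_objects(labels)

prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[50:80, 100:140] = 200   # simulated moving object
print(locate_moving_objects(prev, curr))
```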
- Published
- 2022
19. Pictorial Solutions in Advanced Mechanics.
- Author
-
Craft, William J.
- Abstract
A visual problem-solving technique applicable to several different classes of mechanics time-dependent problems is discussed. The computer is used to solve the equations of motion of various mechanical systems by one of several standard methods, and the solutions are displayed in time increments. A specific example is provided to illustrate this technique. (MLH)
- Published
- 1975
20. Future Directions in Computational Mathematics, Algorithms, and Scientific Software. Report of the Panel.
- Author
-
Society for Industrial and Applied Mathematics, Philadelphia, PA.
- Abstract
The critical role of computers in scientific advancement is described in this panel report. With the growing range and complexity of problems that must be solved and with demands of new generations of computers and computer architecture, the importance of computational mathematics is increasing. Multidisciplinary teams are needed; these are found in most advanced and industrial laboratories, but rarely in universities. The existing educational opportunities are not producing the required personnel to meet substantial shortages. Therefore, the panel strongly recommends increased federal support for: (1) research in computational mathematics, methods, algorithms, and software for scientific computing; (2) the development of interdisciplinary research teams; (3) the establishment and continued operation of a suitable research infrastructure for the teams; (4) graduate and post-doctoral students directly involved in the research of some interdisciplinary team; and (5) young researchers and cross-disciplinary visitors. In the second section, research opportunities in a number of mathematical areas are described. New modes of research are discussed next, followed by comments on educational needs and a final section on funding considerations. Appendices contain a list of related reports, information on laboratory facilities for scientific computing, and letters and position papers. (MNS)
- Published
- 1985
21. Hardware Acceleration of the STRIKE String Kernel Algorithm for Estimating Protein to Protein Interactions.
- Author
-
Sibai FN, El-Moursy A, Asaduzzaman A, and Majzoub S
- Subjects
- Acceleration, Computational Biology methods, Proteins, Algorithms, Computers
- Abstract
Protein-protein interaction (PPI) is an important field in bioinformatics which helps in understanding diseases and devising therapy. PPI prediction aims at estimating the similarity of protein sequences and their common regions. STRIKE was introduced as a PPI algorithm able to achieve reasonable improvement over existing PPI prediction methods. Although it consumes less execution time than most other state-of-the-art PPI prediction methods, its compute-intensive nature and the large volume of protein sequences in protein databases necessitate further acceleration. In this paper, we develop hardware accelerator designs for the STRIKE algorithm. Results indicate that the weighted STRIKE accelerator execution times are about 10x longer than the unweighted STRIKE accelerator execution times. To further accelerate the weighted STRIKE, a parallel module accelerator organization duplicating the weighted STRIKE modules is introduced, achieving near-linear speedups for long sequences of 100 or more characters. As demonstrated by Verilog simulations and FPGA runs, the weighted STRIKE module accelerator exhibits three orders of magnitude speed improvement over multi-core and cluster computers. Much higher speedups are possible with the parallel module accelerator.
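Hedged illustration (a basic k-mer spectrum kernel, not STRIKE's actual weighting): string kernels score two protein sequences by their shared substrings, which is the kind of computation the accelerator parallelizes.

```python
from collections import Counter

def spectrum_kernel(seq_a, seq_b, k=3):
    """Count-weighted overlap of the length-k substrings of two sequences."""
    ka = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
    kb = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
    return sum(ka[m] * kb[m] for m in ka if m in kb)

print(spectrum_kernel("MKTAYIAKQR", "MKTAYLAKQR"))   # hypothetical sequences
```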
- Published
- 2022
22. LIPSHOK: LIARA Portable Smart Home Kit.
- Author
-
Chapron K, Thullier F, Lapointe P, Maître J, Bouchard K, and Gaboury S
- Subjects
- Technology, Algorithms, Computers
- Abstract
Several smart home architecture implementations have been proposed in the last decade. These architectures are mostly deployed in laboratories or inside real habitations built for research purposes to enable the use of ambient intelligence with a wide variety of sensors, actuators and machine learning algorithms. However, the major issues for most related smart home architectures are their price, proprietary hardware requirements and the need for highly specialized personnel to deploy such systems. To tackle these challenges, lighter forms of smart home architectures known as smart homes in a box (SHiB) have been proposed. While SHiB remain an encouraging first step towards lightweight yet affordable solutions, they still suffer from a few drawbacks. Indeed, some of these kits lack hardware support for some technologies, and others do not include enough sensors and actuators to cover most smart homes' requirements. Thus, this paper introduces the LIARA Portable Smart Home Kit (LIPSHOK). It has been designed to provide an affordable SHiB solution that anyone is able to install in an existing home. Moreover, LIPSHOK is a generic kit that includes a total of four specialized sensor modules that were introduced independently, as our laboratory has been working on their development over the last few years. This paper first provides a summary of each of these modules and their respective benefits within a smart home context. Then, it mainly focuses on the introduction of the LIPSHOK architecture, which provides a framework to unify the use of the proposed sensors thanks to a common modular infrastructure capable of managing heterogeneous technologies. Finally, we compare our work to existing SHiB kit solutions and show that it offers a more affordable, extensible and scalable solution whose resources are distributed under an open-source license.
- Published
- 2022
23. Quantum Speedup for Inferring the Value of Each Bit of a Solution State in Unsorted Databases Using a Bio-Molecular Algorithm on IBM Quantum's Computers.
- Author
-
Chang WL, Chung WY, Hsiao CY, Wong R, Chen JC, Feng M, and Vasilakos AV
- Subjects
- DNA chemistry, Databases, Factual, Algorithms, Computers
- Abstract
In this paper, we propose a bio-molecular algorithm with O(n^2) biological operations, O(2^n - 1) DNA strands, O(n) tubes and a longest DNA strand of O(n), for inferring the value of a bit from the only output satisfying any given condition in an unsorted database with 2^n items of n bits. We show that the value of each bit of the outcome is determined by executing our bio-molecular algorithm n times. Then, we show how to view a bio-molecular solution space with 2^n - 1 DNA strands as an eigenvector and how to find the corresponding unitary operator and eigenvalues for inferring the value of a bit in the output. We also show that an extension of the quantum phase estimation and quantum counting algorithms computes its unitary operator and eigenvalues from the bio-molecular solution space with 2^n - 1 DNA strands. Next, we demonstrate that the value of each bit of the output solution can be determined by executing the proposed extended quantum algorithms n times. To verify our theorem, we find the maximum-sized clique in a graph with two vertices and one edge, and the solution b that satisfies b^2 ≡ 1 (mod 15), using IBM Quantum's backend.
- Published
- 2022
24. Parallel Genetic Algorithms' Implementation Using a Scalable Concurrent Operation in Python.
- Author
-
Skorpil V and Oujezsky V
- Subjects
- Algorithms, Computers
- Abstract
This paper presents an implementation of the parallelization of genetic algorithms. Three models of parallelized genetic algorithms are presented, namely the Master-Slave genetic algorithm, the Coarse-Grained genetic algorithm, and the Fine-Grained genetic algorithm. Furthermore, these models are compared with the basic serial genetic algorithm model. Four modules, Multiprocessing, Celery, PyCSP, and Scalable Concurrent Operation in Python (SCOOP), were investigated among the many parallelization options in Python. Scalable Concurrent Operation in Python was selected as the most favorable option, so the models were implemented using the Python programming language, RabbitMQ, and SCOOP. Based on the implementation results and testing performed, a comparison of the hardware utilization of each deployed model is provided. The resulting implementation using SCOOP was investigated from three aspects. The first aspect was the parallelization and integration of the SCOOP module into the resulting Python module. The second was the communication within the genetic algorithm topology. The third aspect was the performance of the parallel genetic algorithm model depending on the hardware.
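Minimal Master-Slave sketch: the master evolves the population while fitness evaluations are farmed out to workers. It is shown here with the standard-library multiprocessing pool; under SCOOP the `pool.map` line would be replaced by SCOOP's futures-style map and the script launched through the SCOOP runtime.

```python
import random
from multiprocessing import Pool

def fitness(ind):      # slave-side work: score one genome
    return sum(ind)    # toy objective: maximize the number of 1-bits

def evolve(pop, scores):
    """Master-side step: rank, keep the best half, crossover + mutate."""
    ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
    parents = ranked[: len(pop) // 2]
    children = []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        child[random.randrange(len(child))] ^= 1   # point mutation
        children.append(child)
    return children

if __name__ == "__main__":
    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
    with Pool() as pool:
        for _ in range(20):
            scores = pool.map(fitness, pop)   # parallel evaluation
            pop = evolve(pop, scores)
    print(max(sum(ind) for ind in pop))
```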
- Published
- 2022
25. Design and Implementation of a Ball-Plate Control System and Python Script for Educational Purposes in STEM Technologies.
- Author
-
Tudić V, Kralj D, Hoster J, and Tropčić T
- Subjects
- Feedback, Algorithms, Computers
- Abstract
This paper presents the process of designing, fabricating, assembling, programming and optimizing a prototype nonlinear mechatronic Ball-Plate System (BPS) as a laboratory platform for STEM engineering education. Due to the nonlinearity and complexity of the BPS, the task presents challenges such as: (1) difficulty in controlling the stabilization of a particular position point, known as steady-state error; (2) position resolution, known as specific distance error; and (3) adverse environmental effects (light-shadow error), which are also discussed in this paper. The laboratory prototype BPS for education was designed, manufactured and installed at Karlovac University of Applied Sciences in the Department of Mechanical Engineering, Mechatronics program. The low-cost two-degree-of-freedom BPS uses a USB HD camera for computer vision as a feedback sensor and two DC servo motors as actuators. To address the control problems, an advanced block diagram of the control system is proposed and discussed. An open-source control system based on Python scripts allows the use of ready-made library functions, allows the color of the ball and the parameters of the PID controller to be changed, indirectly simplifies the control system, and performs the mathematical calculations directly. The authors will continue their research on this BPS mechatronic platform and its control algorithms.
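Generic discrete PID update of the kind such a Python control script runs once per camera frame and per tilt axis; the gains and timing below are placeholders, not the paper's tuned values.

```python
class PID:
    """Discrete PID controller for one plate axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One controller per tilt axis; the camera supplies the ball position.
pid_x = PID(kp=0.8, ki=0.05, kd=0.3, dt=1 / 30)
print(pid_x.update(setpoint=0.0, measured=12.5))   # servo tilt command
```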
- Published
- 2022
26. A full-parallel implementation of Self-Organizing Maps on hardware.
- Author
-
Dias LA, Damasceno AMP, Gaura E, and Fernandes MAC
- Subjects
- Cluster Analysis, Algorithms, Computers
- Abstract
Self-Organizing Maps (SOMs) are extensively used for data clustering and dimensionality reduction. However, if applications are to fully benefit from SOM-based techniques, high-speed processing is required, given that data tends to be both highly dimensional and "big". Hence, a fully parallel architecture for the SOM is introduced to optimize the system's data processing time. Unlike most approaches in the literature, the architecture proposed here does not contain sequential steps - a common limiting factor for processing speed. The architecture was validated on FPGA and evaluated with respect to hardware throughput and resource usage. Comparisons to the state of the art show a speedup of 8.91× over a partially serial implementation, using less than 15% of the available hardware resources. Thus, the method proposed here points to a hardware architecture that will not become obsolete quickly., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2021 Elsevier Ltd. All rights reserved.)
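Compact numpy sketch (toy dimensions assumed) of the serial SOM update whose per-neuron distance computations the hardware evaluates fully in parallel.

```python
import numpy as np

def som_step(weights, x, lr=0.1, radius=1.0):
    """One update: find the best-matching unit (BMU), then pull every
    neuron toward x, weighted by a Gaussian of grid distance to the BMU."""
    rows, cols, _ = weights.shape
    bmu = np.unravel_index(
        np.argmin(np.linalg.norm(weights - x, axis=2)), (rows, cols))
    r, c = np.indices((rows, cols))
    grid_d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * radius ** 2))[:, :, None]
    return weights + lr * h * (x - weights)

w = np.random.rand(8, 8, 3)   # 8x8 map of 3-D weight vectors
print(som_step(w, np.array([1.0, 0.0, 0.0])).shape)
```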
- Published
- 2021
27. Artificial Intelligence in ADA: Pattern-Directed Processing. Final Report.
- Author
-
Air Force Human Resources Lab., Lowry AFB, CO. and Reeker, Larry H.
- Abstract
To demonstrate to computer programmers that the programming language Ada provides superior facilities for use in artificial intelligence applications, the three papers included in this report investigate the capabilities that exist within Ada for "pattern-directed" programming. The first paper (Larry H. Reeker, Tulane University) is designed to serve as an introduction to pattern-directed programming and to the significance of the two papers that follow. It includes discussions of artificial intelligence programming and the facilities provided by the Ada language, pattern-directed computation, pattern matching, and parsing. The second paper (John Kreuter, Tulane University) describes a project which was part of an overall effort to add useful artificial intelligence tools to Ada through use of pattern-directed string processing of the sort available in the language Post-X (Bailes and Reeker, 1980). The third paper (Kenneth Wauchope, Tulane University) presents a pattern-directed list processing facility for the Ada programming language. Pattern lists for matching against source lists are constructed from a set of SNOBOL4-derived primitives which have been extended to be applicable to arbitrarily complex LISP-like data structures. A list of references completes the document. (JB)
- Published
- 1985
28. Are Mathematical Understanding and Algorithmic Performance Related?
- Author
-
Nesher, Pearla
- Abstract
An algorithm is first defined by an example of making pancakes and then through discussion of how computers operate. The understanding that human beings bring to a task is contrasted with this algorithmic processing. In the second section, the question of understanding is related to learning algorithmic performance, with counting used as the vehicle. It is noted that perhaps subskills must be mastered algorithmically before understanding can develop; perhaps the separation between algorithmic performance and understanding is impossible at any stage of the learning. In the third section, two studies dealing with the relation between algorithmic performance and understanding are described. Both fail to demonstrate that understanding improves algorithmic performance. The fourth section addresses the question of teaching for understanding and leaving algorithms to computers. Deciding what kind of division of labor we wish to establish between ourselves and our computers is the focal point. (MNS)
- Published
- 1986
29. Fully Parallel Implementation of Otsu Automatic Image Thresholding Algorithm on FPGA.
- Author
-
Barros WKP, Dias LA, and Fernandes MAC
- Subjects
- Algorithms, Computers
- Abstract
This work proposes a high-throughput implementation of the Otsu automatic image thresholding algorithm on Field Programmable Gate Array (FPGA), aiming to process high-resolution images in real-time. The Otsu method is a widely used global thresholding algorithm to define an optimal threshold between two classes. However, this technique has a high computational cost, making it difficult to use in real-time applications. Thus, this paper proposes a hardware design exploiting parallelization to optimize the system's processing time. The implementation details and an analysis of the synthesis results concerning the hardware area occupation, throughput, and dynamic power consumption, are presented. Results have shown that the proposed hardware achieved a high speedup compared to similar works in the literature.
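Reference software version of the computation the FPGA design parallelizes: Otsu's exhaustive search for the threshold maximizing between-class variance over a 256-bin histogram.

```python
import numpy as np

def otsu_threshold(image):
    """Pick the threshold maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, 0.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                  # class-0 weight so far
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 12, 5000)])
print(otsu_threshold(img.clip(0, 255).astype(np.uint8)))  # between the modes
```

A hardware design can evaluate many candidate thresholds concurrently, which is where the speedup over this serial scan comes from.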
- Published
- 2021
30. What Should Be the Role of Calculators and Computers in Mathematics Education?
- Author
-
Baggett, Patricia and Ehrenfeucht, Andrzej
- Abstract
Examines the issue of how calculators and computers can best be used in mathematics education. Contends that practicing a procedure is noncognitive and does not produce learning. Suggests utilizing customized computerized tools in schools for getting answers to algorithmic problems instantly, thus allowing teachers to explain and students to think. (Author/MDH)
- Published
- 1992
31. MC-LSTM: Real-Time 3D Human Action Detection System for Intelligent Healthcare Applications.
- Author
-
Yin J, Han J, Xie R, Wang C, Duan X, Rong Y, Zeng X, and Tao J
- Subjects
- Delivery of Health Care, Humans, Movement, Algorithms, Computers
- Abstract
Due to the movement expressiveness and privacy assurance of human skeleton data, 3D skeleton-based action inference is becoming popular in healthcare applications. These scenarios call for more advanced performance in application-specific algorithms and efficient hardware support. Warnings on health emergencies sensitive to response speed require low-latency output and early action detection capabilities. Medical monitoring that works on an always-on edge platform needs the system processor to have extreme energy efficiency. Therefore, in this paper, we propose MC-LSTM, a functional and versatile 3D skeleton-based action detection system, for the above demands. Our system achieves state-of-the-art accuracy on trimmed and untrimmed cases of general-purpose and medical-specific datasets with early-detection features. Further, the MC-LSTM accelerator supports parallel inference on up to 64 input channels. The implementation on a Xilinx ZCU104 reaches a throughput of 18,658 frames per second (FPS) and an inference latency of 3.5 ms with a batch size of 64. Accordingly, the power consumption is 3.6 W for the whole FPGA+ARM system, which is 37.8x and 10.4x more energy-efficient than the high-end Titan X GPU and i7-9700 CPU, respectively. Meanwhile, our accelerator also keeps a 4-5x energy efficiency advantage over the low-power high-performance Firefly-RK3399 board carrying an ARM Cortex-A72+A53 CPU. We further synthesize an 8-bit quantized version on the same hardware, providing a 48.8% increase in energy efficiency at the same throughput.
- Published
- 2021
32. Data on Diabetic Ketoacidosis Published by a Researcher at University of New Mexico (Evaluation of Computer-Based Insulin Infusion Algorithm Compared With a Paper-Based Protocol in the Treatment of Diabetic Ketoacidosis).
- Abstract
Keywords: Acid-Base Imbalance; Algorithms; Computers; Diabetes Complications; Diabetes Mellitus; Diabetic Ketoacidosis; Glucose Metabolism Disorders; Health and Medicine; Hospitals; Nutritional and Metabolic Diseases and Conditions; Peptide Hormones; Peptide Proteins; Proinsulin. 2023 APR 17 (NewsRx) -- By a News Reporter-Staff News Editor at Diabetes Week -- New study results on diabetic ketoacidosis have been published. [Extracted from the article]
- Published
- 2023
33. Memory hierarchy characterization of SPEC CPU2006 and SPEC CPU2017 on the Intel Xeon Skylake-SP.
- Author
-
Navarro-Torres A, Alastruey-Benedé J, Ibáñez-Marín P, and Viñals-Yúfera V
- Subjects
- Algorithms, Benchmarking, Computer Systems standards, Computers standards, Software
- Abstract
SPEC CPU is one of the most common benchmark suites used in computer architecture research. CPU2017 has recently been released to replace CPU2006. In this paper we present a detailed evaluation of the memory hierarchy performance for both the CPU2006 and single-threaded CPU2017 benchmarks. The experiments were executed on an Intel Xeon Skylake-SP, which is the first Intel processor to implement a mostly non-inclusive last-level cache (LLC). We present a classification of the benchmarks according to their memory pressure and analyze the performance impact of different LLC sizes. We also test all the hardware prefetchers showing they improve performance in most of the benchmarks. After comprehensive experimentation, we can highlight the following conclusions: i) almost half of SPEC CPU benchmarks have very low miss ratios in the second and third level caches, even with small LLC sizes and without hardware prefetching, ii) overall, the SPEC CPU2017 benchmarks demand even less memory hierarchy resources than the SPEC CPU2006 ones, iii) hardware prefetching is very effective in reducing LLC misses for most benchmarks, even with the smallest LLC size, and iv) from the memory hierarchy standpoint the methodologies commonly used to select benchmarks or simulation points do not guarantee representative workloads., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2019
34. The simultaneous evolution of author and paper networks.
- Author
-
Börner, Katy, Maru, Jeegar T., and Goldstone, Robert L.
- Subjects
- SCIENTIFIC literature, COMPUTERS, INTELLECTUAL property, STATICS, ALGORITHMS, RESEARCH
- Abstract
There has been a long history of research into the structure and evolution of mankind's scientific endeavor. However, recent progress in applying the tools of science to understand science itself has been unprecedented, because only recently has there been access to high-volume and high-quality data sets of scientific output (e.g., publications, patents, grants) and to computers and algorithms capable of handling this enormous stream of data. This article reviews major work on models that aim to capture and recreate the structure and dynamics of scientific evolution. We then introduce a general process model that simultaneously grows coauthor and paper citation networks. The statistical and dynamic properties of the networks generated by this model are validated against a 20-year data set of articles published in PNAS. Systematic deviations from a power law distribution of citations to papers are well fit by a model that incorporates a partitioning of authors and papers into topics, a bias for authors to cite recent papers, and a tendency for authors to cite papers cited by papers that they have read. In this TARL model (for topics, aging, and recursive linking), the number of topics is linearly related to the clustering coefficient of the simulated paper citation network. [ABSTRACT FROM AUTHOR]
- Published
- 2004
35. Conferences Versus Journals in Computer Science.
- Author
-
Vrettas, George and Sanderson, Mark
- Subjects
- ALGORITHMS, COMPUTERS, CONFERENCES & conventions, SCHOLARLY method, PUBLISHING, SERIAL publications, T-test (Statistics), CITATION analysis
- Abstract
The question of which type of computer science (CS) publication (conference or journal) is likely to result in more citations for a published paper is addressed. A series of data sets are examined and joined in order to analyze the citations of over 195,000 conference papers and 108,000 journal papers. Two means of evaluating the citations of journals and conferences are explored: h5 and average citations per paper; it was found that h5 has certain biases that make it a difficult measure to use (despite it being the main measure used by Google Scholar). Results from the analysis show that CS, as a discipline, values conferences as a publication venue more highly than any other academic field of study. The analysis also shows that a small number of elite CS conferences have the highest average paper citation rate of any publication type, although overall, citation rates in conferences are no higher than in journals. It is also shown that the length of a paper is correlated with citation rate. [ABSTRACT FROM AUTHOR]
- Published
- 2015
36. FTCS 16th annual international symposium on fault-tolerant computing systems (Digest of papers)
- Published
- 1986
37. Computer Security and the Data Encryption Standard. Proceedings of the Conference on Computer Security and the Data Encryption Standard.
- Author
-
National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology. and Branstad, Dennis K.
- Abstract
The 15 papers and summaries of presentations in this collection provide technical information and guidance offered by representatives from federal agencies and private industry. Topics discussed include physical security, risk assessment, software security, computer network security, and applications and implementation of the Data Encryption Standard. A list of questions submitted in writing at the conference together with responses prepared by either the speaker, the session chairman, or the editor, are appended. (CMV)
- Published
- 1978
38. Computers and Mathematical Programming. Proceedings of the Bicentennial Conference on Mathematical Programming, November 29-December 1, 1976.
- Author
-
National Bureau of Standards (DOC), Washington, DC. and White, William W.
- Abstract
The proceedings of this conference, which examined the relationship between mathematical programming and the computer, contains the texts of most of the 50 papers presented and a summary of one discussion panel addressing such facets of this theme as the design for, use of, implementation of, and implications for mathematical programming software and computations. Particular emphasis was placed on bringing out computer-oriented subject matter not ordinarily presented in a mathematical programming context. (Author/CMV)
- Published
- 1978
39. Additional paper: computational resources for metabolomics
- Author
-
Masanori, Arita
- Subjects
- Proteomics, Internet, Databases as Topic, Proteome, Computers, Animals, RNA, Messenger, Algorithms, Gas Chromatography-Mass Spectrometry, Mass Spectrometry, Software
- Abstract
Metabolomics, a comprehensive extension of traditional targeted metabolite analysis, has recently attracted much attention as the biological jigsaw puzzle's missing piece that can complement transcriptome and proteome analysis. This tutorial survey introduces practical web resources with special emphasis on the computational aspects involved in processing and navigating metabolome data. The introduced materials are also accessible from the author's web directory (Atomic Reconstruction of Metabolism or ARM).
- Published
- 2004
40. Components of the item selection algorithm in computerized adaptive testing.
- Author
-
Han KCT
- Subjects
- Educational Measurement methods, Humans, Models, Statistical, Algorithms, Computers, Educational Measurement statistics & numerical data
- Abstract
Computerized adaptive testing (CAT) greatly improves measurement efficiency in high-stakes testing operations through the selection and administration of test items with the difficulty level that is most relevant to each individual test taker. This paper explains the 3 components of a conventional CAT item selection algorithm: test content balancing, the item selection criterion, and item exposure control. Several noteworthy methodologies underlie each component. The test script method and constrained CAT method are used for test content balancing. Item selection criteria include the maximized Fisher information criterion, the b-matching method, the a-stratification method, the weighted likelihood information criterion, the efficiency balanced information criterion, and the Kullback-Leibler information criterion. The randomesque method, the Sympson-Hetter method, the unconditional and conditional multinomial methods, and the fade-away method are used for item exposure control. Several holistic approaches to CAT use automated test assembly methods, such as the shadow test approach and the weighted deviation model. Item usage and exposure count vary depending on the item selection criterion and exposure control method. Finally, other important factors to consider when determining an appropriate CAT design are the computer resources requirement, the size of item pools, and the test length. The logic of CAT is now being adopted in the field of adaptive learning, which integrates the learning aspect and the (formative) assessment aspect of education into a continuous, individualized learning experience. Therefore, the algorithms and technologies described in this review may be able to help medical health educators and high-stakes test developers to adopt CAT more actively and efficiently.
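Sketch of one criterion from the list above, the maximized Fisher information rule for 2PL items, with made-up item parameters.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Item information at ability theta for 2PL items:
    I(theta) = a^2 * p * (1 - p), p = 1 / (1 + exp(-a * (theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations (hypothetical pool)
b = np.array([-1.0, 0.0, 0.3, 1.2])   # difficulties
theta_hat = 0.25                      # current ability estimate
print("administer item", int(np.argmax(fisher_info_2pl(theta_hat, a, b))))
```

In a full CAT loop this pick would then pass through the content-balancing and exposure-control rules described above before administration.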
- Published
- 2018
41. Selecting a Population Sample from a Business Computer File.
- Author
-
Wirt, Edgar
- Abstract
In negotiating to obtain a sample of records from a computer file, it is important to be able to present a simple program that will produce a representative and valid sample. This article describes two procedures: (1) an interval selection method; and (2) a random numbers file. (JAZ)
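Minimal sketch of the interval selection method described above (a random start, then every k-th record); the record contents are hypothetical.

```python
import random

def interval_sample(records, n):
    """Systematic sampling: a random start within the first interval,
    then every k-th record, where k = len(records) // n
    (assumes len(records) >= n)."""
    k = len(records) // n
    start = random.randrange(k)
    return records[start::k][:n]

records = [f"record-{i:04d}" for i in range(1000)]
print(interval_sample(records, 10))
```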
- Published
- 1987
42. Problem Solving with Generic Algorithms and Computers.
- Author
-
Larson, Jay
- Abstract
Success in using a computer in education as a problem-solving tool requires a change in the way of thinking or of approaching a problem. An algorithm, i.e., a finite step-by-step solution to a problem, can be designed around the data processing concepts of input, processing, and output to provide a basis for classifying problems. If educators concentrate on the how to, rather than on the correctness of the results, computers in the classroom can speed up the testing of specific algorithms. Developing an adequate, workable algorithm requires the critical steps of precisely defining the problem; careful attention to clarity, detail, and sequence; and a knowledge of the problem in order to explain the phases and smaller steps needed to solve the problem. The concepts of pseudocode and flowcharting standardize, solidify, and simplify the entire process. The ability to classify problems into one of three general categories (narratives, computational/verbal, and inputs/outputs) provides researchers with the framework upon which to build new and refine existing algorithms. Suggestions for the problem definition phase and an outline for this presentation are included. (DJR)
- Published
- 1986
43. How to Do Arithmetic.
- Author
-
Robertson, Jane I.
- Abstract
Three types of arithmetic algorithms are discussed and compared. These are algorithms designed to get the right answer, computer algorithms, and algorithms designed to get the right answer and understand why. (MP)
- Published
- 1979
44. Numerical Algorithms.
- Author
-
Engel, Arthur
- Abstract
The need for incorporating algorithmics into mathematics instruction is presented. The proliferation of computers is seen to have made the designing of algorithms an essential skill. Examples are given, and the view that mathematics will lose much prestige and importance if algorithmics is not integrated into it is presented. (MP)
- Published
- 1981
45. Automatic Identification of Duplicates after Multidatabase Online Searching.
- Author
-
Onorato, Eveline and Bianchi, Gianfranco
- Abstract
Discusses the problem of duplicate citations resulting from file overlaps in multidatabase searching and shows that such duplicates could be identified automatically and eliminated by a host computer as a complementary service to online retrieval. Steps involved in the realization of this service are described, and 11 references are listed. (RBF)
- Published
- 1981
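Onorato and Bianchi's matching steps are not reproduced in the abstract, so the following Python sketch shows only one plausible way a host computer could normalize citations and drop cross-database duplicates; the choice of (title, year) as the comparison key is an assumption for illustration.

import re

def citation_key(title, year):
    """Normalize a citation to a comparison key: lowercase the title,
    strip punctuation, collapse whitespace, and pair it with the year."""
    t = re.sub(r"[^a-z0-9 ]", "", title.lower())
    t = " ".join(t.split())
    return (t, year)

def drop_duplicates(citations):
    """Keep the first occurrence of each normalized (title, year) key."""
    seen, unique = set(), []
    for c in citations:
        key = citation_key(c["title"], c["year"])
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

hits = [
    {"title": "Computers and Creativity.", "year": 1982, "db": "A"},
    {"title": "COMPUTERS AND CREATIVITY", "year": 1982, "db": "B"},
]
print(drop_duplicates(hits))  # one record survives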
46. Computers and Creativity.
- Author
-
Ten Dyke, Richard P.
- Abstract
A traditional question is whether or not computers shall ever think like humans. This question is redirected to a discussion of whether computers shall ever be truly creative. Creativity is defined and a program is described that is designed to complete creatively a series problem in mathematics. (MP)
- Published
- 1982
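The series-completion program the abstract mentions is not described in detail, so the following Python sketch is a hypothetical stand-in: it completes a series by testing two simple rules (constant difference, constant ratio), which is far more modest than the creative behavior Ten Dyke discusses.

def extend_series(seq, n=3):
    """Guess the rule of a numeric series (constant difference or constant
    ratio) and extend it by n terms; returns None when neither rule fits."""
    d = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(d)) == 1:  # arithmetic series
        return [seq[-1] + d[0] * k for k in range(1, n + 1)]
    r = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(r) == len(seq) - 1 and len(set(r)) == 1:  # geometric series
        out, last = [], seq[-1]
        for _ in range(n):
            last *= r[0]
            out.append(last)
        return out
    return None

print(extend_series([2, 4, 6, 8]))  # [10, 12, 14]
print(extend_series([3, 6, 12]))    # [24.0, 48.0, 96.0]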
47. Enhancing computer image recognition with improved image algorithms.
- Author
-
Huang, Lanqing, Yao, Cheng, Zhang, Lingyan, Luo, Shijian, Ying, Fangtian, and Ying, Weiqiang
- Subjects
- IMAGE recognition (Computer vision), IMAGE processing, ALGORITHMS, COMPUTERS, COMPUTER algorithms
- Abstract
Advances in computer image recognition have significantly impacted many industries, including healthcare, security, and autonomous systems. This paper explores the potential of improved image algorithms to enhance computer image recognition, focusing on regression methods as a means to improve the accuracy and efficiency of identifying images. The study analyzes various regression techniques, their applications in computer image recognition, and the resulting performance improvements through detailed examples and data analysis. It addresses problems of visual image processing in outdoor unstructured environments: heterogeneous patterns are converted into a common pattern and extracted from the fused features of the data modes. Simulation results show improved perception and recognition for outdoor image recognition in complex environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
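The paper's specific regression pipeline is not spelled out in the abstract, so as a hedged illustration only, here is a minimal regression-based image recognizer in Python, assuming scikit-learn and its small built-in digits dataset rather than the authors' outdoor imagery.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small built-in digit images stand in for a real recognition dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A regression-based classifier over flattened pixel features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))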
48. Design and Optimization of Motion Training System Assisted by Human Posture Estimation Algorithm.
- Author
-
Dang, Zijun, Dong, Huan, Li, Tong, and Kong, Kai
- Subjects
- POSTURE, FEATURE extraction, TRAJECTORY optimization, PHYSICAL training & conditioning, ALGORITHMS, COMPUTERS, COMPUTER engineering, MOTION
- Abstract
With the rapid development of computer technology and electronic information technology, sports training systems no longer depend on traditional algorithms for operational support, and various advanced posture algorithms are emerging, further improving the intelligence and accuracy of sports training algorithms. As an advanced algorithm combined with virtual reality technology, human posture estimation plays a clear role in improving the effectiveness of sports training. This paper designs a motion training system based on an optimized and improved human posture trajectory algorithm. It uses depth-image theory to resolve, in principle, the non-Gaussian noise crosstalk that affects depth images in traditional human posture algorithms, improves the algorithm's extraction of accurate depth-image features, and addresses human-feature redundancy, thereby improving the accuracy of single-person model construction. For multi-person posture estimation, the paper proposes a high-resolution, high-precision multi-person posture network model and adds an attention (focus) mechanism, achieving high-precision, high-speed modeling of multiple people; this provides an accurate model for the multi-person functions of the sports training system and improves the efficiency of the algorithm. In the experimental part, tennis is taken as a typical case for designing the sports training system and running experiments on it. The experimental results show that the system under the proposed algorithm has clear advantages in accuracy and training effect. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
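The authors' network model is not publicly specified in the abstract, so the sketch below illustrates just one generic step common to pose-estimation pipelines: decoding joint coordinates from heatmaps. It assumes NumPy and random data and is not the paper's algorithm.

import numpy as np

def decode_keypoints(heatmaps):
    """Decode (x, y, confidence) per joint from a stack of heatmaps of
    shape (num_joints, H, W) -- a common step in pose-estimation pipelines."""
    keypoints = []
    for hm in heatmaps:
        idx = np.argmax(hm)
        y, x = np.unravel_index(idx, hm.shape)
        keypoints.append((int(x), int(y), float(hm[y, x])))
    return keypoints

# Example: 3 joints on a 64x64 heatmap grid (random data for illustration).
hms = np.random.rand(3, 64, 64)
print(decode_keypoints(hms))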
49. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.
- Author
-
Hossain MS, Saeedi E, and Kong Y
- Subjects
- United States, United States Government Agencies, Algorithms, Computer Security instrumentation, Computers
- Abstract
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.
- Published
- 2017
- Full Text
- View/download PDF
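The paper's binary-field, Jacobian-coordinate PDPA design is hardware-specific, but the double-and-add structure that any ECPM core accelerates can be sketched in software. The toy Python example below works over a small prime field in affine coordinates purely for illustration; the curve, modulus, and points are made up, and nothing here reflects the NIST-sized parameters the paper implements.

P_MOD = 97   # toy prime modulus (illustrative, not NIST-sized)
A_COEF = 2   # curve y^2 = x^3 + 2x + 3 over GF(97)

def point_add(P, Q):
    """Add two affine points (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:  # point doubling
        lam = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def point_multiply(k, P):
    """Left-to-right double-and-add: the operation an ECPM core computes."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)       # double
        if bit == "1":
            R = point_add(R, P)   # add
    return R

print(point_multiply(13, (3, 6)))  # (3, 6) lies on the toy curve

The paper's contribution is precisely in fusing the doubling and addition steps (PDPA) in hardware and in fast binary-field arithmetic, neither of which this software sketch attempts.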
50. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.
- Author
-
Milankovic IL, Mijailovic NV, Filipovic ND, and Peulic AS
- Subjects
- Breast diagnostic imaging, Female, Humans, Algorithms, Breast Neoplasms diagnostic imaging, Computers, Mammography instrumentation, Mammography methods
- Abstract
Image segmentation is one of the most common procedures in medical imaging applications and a very important task in breast cancer detection. A mammography-based breast cancer detection procedure can be divided into several stages: extraction of the region of interest from a breast image, identification of suspicious mass regions, their classification, and comparison with an existing image database. Existing image databases often hold large data sets whose processing requires considerable time, so accelerating each processing stage in breast cancer detection is an important issue. In this paper, an implementation of an existing region-of-interest based image segmentation algorithm for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed, using Maxeler's acceleration card as the dataflow engine (DFE). Experiments examining the acceleration of the algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were tested, each yielding a different acceleration of the algorithm's execution; those acceleration values are presented, and the experimental results show good acceleration.
- Published
- 2017
- Full Text
- View/download PDF
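The paper accelerates an already existing segmentation algorithm whose details are not given in the abstract. As a stand-in, the Python sketch below shows a simple intensity-threshold, region-of-interest segmentation of the kind such a dataflow engine might accelerate; all values and names are assumed for illustration.

import numpy as np

def segment_roi(image, threshold=0.2):
    """A simple intensity-threshold segmentation: keep pixels above the
    threshold and crop to their bounding box (the region of interest)."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, mask
    roi = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return roi, mask

# Example: a synthetic "mammogram" with a bright central region.
img = np.zeros((64, 64))
img[20:40, 25:45] = 0.8
roi, mask = segment_roi(img)
print(roi.shape)  # (20, 20)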