5,171 results
Search Results
2. Researchers have finally created a tool to spot duplicated images across thousands of papers.
- Author
-
Butler D
- Subjects
- Automation, Benchmarking, Computer Graphics, Editorial Policies, Periodicals as Topic/standards, Plagiarism, Research Personnel, Retraction of Publication as Topic, Algorithms, Duplicate Publications as Topic, Photography, Research Report/standards, Scientific Misconduct/statistics & numerical data, Software
- Published
- 2018
- Full Text
- View/download PDF
3. New IOF-ESCEO position paper offers practical guidance for osteoporosis management
- Subjects
Osteoporosis -- Care and treatment, Osteoarthritis -- Care and treatment, Women's health, Postmenopausal women -- Care and treatment, Algorithms, Fractures (Injuries), Editors, Company business management, Health, Women's issues/gender studies - Abstract
2019 DEC 5 (NewsRx) -- By a News Reporter-Staff News Editor at Women's Health Weekly -- In 2018 the International Osteoporosis Foundation (IOF) and European Society for Clinical and Economic [...]
- Published
- 2019
4. Roll assortment optimization in a paper mill: An integer programming approach
- Author
-
Chauhan, S.S., Martel, Alain, and D'Amour, Sophie
- Subjects
Algorithm, Algorithms - Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cor.2006.03.026 Abstract: Fine paper mills produce a variety of paper grades to satisfy demand for a large number of sheeted products. Huge reels of different paper grades are produced on a cyclical basis on paper machines. These reels are then cut into rolls of smaller size, which are then either sold as such or sheeted into finished products in converting plants. A huge number of roll sizes would be required to cut all finished products without trim loss, and they cannot all be inventoried. An assortment of rolls is inventoried, with the implication that the sheeting operations may yield trim loss. The selection of the assortment of roll sizes to stock and the assignment of these roll sizes to finished products have a significant impact on performance. This paper presents a model to decide the parent roll assortment and assignments to finished products based on these products' demand processes, desired service levels, trim loss, and inventory holding costs. Risk-pooling economies achieved by assigning several finished products to a given roll size are a fundamental aspect of the problem. The overall model is a binary non-linear program. Two solution methods are developed: a branch-and-price algorithm based on column generation with a fast pricing heuristic, and a marginal cost heuristic. The two methods are tested on real data and also on randomly generated problem instances. The proposed approach was implemented by a large pulp and paper company. Author Affiliation: FOR@C Research Consortium, Network Organization Technology Research Center (CENTOR), Universite Laval, Sainte-Foy, Que., Canada G1K 7P4
- Published
- 2008
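The assignment side of the roll-assortment problem above can be sketched in miniature: match each finished-sheet width to the stocked roll width that wastes the least edge trim. This toy greedy ignores the paper's demand processes, service levels, and inventory costs; the function and its arguments are invented for illustration.

```python
def assign_products(product_widths, roll_widths):
    """Greedy sketch: assign each finished-sheet width to the stocked
    roll width that minimizes trim loss (the roll must be at least as
    wide as the sheet). Returns None for widths no stocked roll can cut.
    The paper's binary non-linear program also weighs inventory and
    service levels; this toy does not."""
    assignment = {}
    for p in product_widths:
        feasible = [r for r in roll_widths if r >= p]
        assignment[p] = min(feasible, key=lambda r: r - p) if feasible else None
    return assignment
```

For example, `assign_products([70, 90], [80, 100])` pairs each sheet width with the narrowest roll that can cut it.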
5. Logic via Computer Programming.
- Author
-
Wieschenberg, Agnes A.
- Abstract
This paper posed the question "How do we teach logical thinking and sophisticated mathematics to unsophisticated college students?" One answer among many is through the writing of computer programs. The writing of computer algorithms is mathematical problem solving and logic in disguise, and it may attract students who would otherwise stop taking mathematics courses after their required sequence is finished. In college classrooms in the United States, there is often an over-involvement with the calculation aspect of mathematics, especially in today's technical environment. The emphasis should fall on teachers' development of logic in students. Just like mathematical algorithms, computer algorithms, however simple, employ logical steps which will result in the desired conclusion. Mathematics teachers should take advantage of the innumerable opportunities, even in a beginner's computer programming course, to play with algorithms that may aid students in the development of logical ways to approach mathematical problems. (MA)
- Published
- 1999
6. Critical lab values: A 50-year perspective honoring the MLO anniversary of publishing the laboratory panic values paper.
- Author
-
Lundberg, George D.
- Subjects
- SERIAL publications, GENERATIVE artificial intelligence, DOCUMENTATION, LABORATORIES, MEDICARE, LEADERSHIP, DECISION making, SPECIAL days, PUBLISHING, ATTITUDES of medical personnel, COLLECTION & preservation of biological specimens, TIME, LABOR supply, ALGORITHMS - Abstract
The article focuses on the significance of critical laboratory values and their role in preventing life-threatening situations, highlighting the historical development of a systematic approach to manage these values. Topics include the implementation of the original critical value system at Los Angeles County/USC Medical Center, the contributions of Dr. Sol Bernstein to laboratory utilization, and the broader sociologic and economic factors influencing this advancement in the 1960s.
- Published
- 2024
7. FDA Releases Two Discussion Papers to Spur Conversation about Artificial Intelligence and Machine Learning in Drug Development and Manufacturing.
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,DRUG factories ,DRUG development ,RECOMBINANT proteins - Abstract
The regulatory uses are real: In 2021, more than 100 drug and biologic applications submitted to the FDA included AI/ML components. Keywords: Algorithms; Artificial Intelligence; Bioengineering; Biologics; Biotechnology; Cybersecurity; Cyborgs; Drug Development; Drug Manufacturing; Drugs and Therapies; Emerging Technologies; FDA; Genetic Engineering; Genetically-Engineered Proteins; Government Agencies Offices and Entities; Health and Medicine; Machine Learning; Office of the FDA Commissioner; Public Health; Technology; U.S. Food and Drug Administration. 2023 MAY 22 (NewsRx) -- By a News Reporter-Staff News Editor at Clinical Trials Week -- By: Patrizia Cavazzoni, M.D., Director of the Center for Drug Evaluation and Research: Artificial intelligence (AI) and machine learning (ML) are no longer futuristic concepts; they are now part of how we live and work. [Extracted from the article]
- Published
- 2023
8. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA ,POLYNOMIALS ,CIRCUIT complexity ,ALGORITHMS ,DIRECTED acyclic graphs ,LOGIC circuits - Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F₂), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Research on Intrahepatic Cholestasis Published by Researchers at Birmingham Women's and Children's NHS Foundation Trust (Opinion paper on the diagnosis and treatment of progressive familial intrahepatic cholestasis).
- Subjects
RESEARCH personnel ,CHOLESTASIS ,CONSCIOUSNESS raising ,DIGESTIVE system diseases ,BILIOUS diseases & biliousness - Abstract
A recent report from researchers at Birmingham Women's and Children's NHS Foundation Trust discusses the diagnosis and treatment of progressive familial intrahepatic cholestasis (PFIC), a rare liver disorder that primarily affects children. The researchers aimed to provide recommendations for the management of PFIC in clinical practice. They developed an algorithm for the diagnosis and treatment of children with suspected PFIC, which includes the use of licensed inhibitors of ileal bile acid transporters as the first-line treatment. The authors hope that these recommendations will help standardize the management of PFIC and raise awareness of current developments in the field. [Extracted from the article]
- Published
- 2024
10. Using Response-Time Constraints in Item Selection To Control for Differential Speededness in Computerized Adaptive Testing. Research Report 98-06.
- Author
-
Twente Univ., Enschede (Netherlands), Faculty of Educational Science and Technology, van der Linden, Wim J., Scrams, David J., and Schnipke, Deborah L.
- Abstract
An item-selection algorithm to neutralize the differential effects of time limits on scores on computerized adaptive tests is proposed. The method is based on a statistical model for the response-time distributions of the examinees on items in the pool that is updated each time a new item has been administered. Predictions from the model are used as constraints in a 0-1 linear programming (LP) model for constrained adaptive testing that maximizes the accuracy of the ability estimator. The method is demonstrated empirically using an item pool from the Armed Services Vocational Aptitude Battery and the responses of 38,357 examinees. The empirical example suggests that the algorithm is able to reduce the speededness of the test for the examinees who otherwise would have suffered from the time limit. Also, the algorithm did not seem to introduce any differential effects on the statistical properties of the theta estimator. (Contains 9 figures and 14 references.) (SLD)
- Published
- 1998
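The idea in the abstract above, that predicted response times act as a feasibility constraint while item information is maximized, can be sketched with a greedy stand-in for the report's 0-1 linear program. The item fields and the `predict_time` callback are illustrative assumptions, not the report's actual interface.

```python
def select_item(items, time_left, predict_time):
    """Pick the most informative item whose predicted response time
    fits in the examinee's remaining time; None if nothing fits.
    (The report embeds this constraint in a 0-1 LP over the whole
    remaining test; this greedy version only illustrates the
    constraint's role in item selection.)"""
    feasible = [it for it in items if predict_time(it) <= time_left]
    return max(feasible, key=lambda it: it["info"], default=None)
```

With a shrinking time budget, the same pool yields progressively faster (if less informative) items, which is exactly how the constraint curbs differential speededness.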
11. Future Directions in Computational Mathematics, Algorithms, and Scientific Software. Report of the Panel.
- Author
-
Society for Industrial and Applied Mathematics, Philadelphia, PA.
- Abstract
The critical role of computers in scientific advancement is described in this panel report. With the growing range and complexity of problems that must be solved and with demands of new generations of computers and computer architecture, the importance of computational mathematics is increasing. Multidisciplinary teams are needed; these are found in most advanced and industrial laboratories, but rarely in universities. The existing educational opportunities are not producing the required personnel to meet substantial shortages. Therefore, the panel strongly recommends increased federal support for: (1) research in computational mathematics, methods, algorithms, and software for scientific computing; (2) the development of interdisciplinary research teams; (3) the establishment and continued operation of a suitable research infrastructure for the teams; (4) graduate and post-doctoral students directly involved in the research of some interdisciplinary team; and (5) young researchers and cross-disciplinary visitors. In the second section, research opportunities in a number of mathematical areas are described. New modes of research are discussed next, followed by comments on educational needs and a final section on funding considerations. Appendices contain a list of related reports, information on laboratory facilities for scientific computing, and letters and position papers. (MNS)
- Published
- 1985
12. New tool detects fake, AI-produced scientific articles.
- Subjects
GENERATIVE artificial intelligence ,ALZHEIMER'S disease ,COMPUTATIONAL intelligence ,SYSTEMS theory ,CHATGPT - Abstract
A new machine-learning algorithm called xFakeSci has been developed by Ahmed Abdeen Hamed, a visiting research fellow at Binghamton University, to detect fake scientific articles produced by artificial intelligence. The algorithm can detect up to 94% of bogus papers, which is nearly twice as successful as other data-mining techniques. Hamed and collaborator Xindong Wu created 50 fake articles for each of three medical topics and compared them to real articles on the same topics. The algorithm analyzes the number of bigrams and how they are linked to other words and concepts in the text to identify patterns that distinguish fake articles from real ones. Hamed plans to expand the range of topics to further develop the algorithm and raise awareness about the issue of fake research papers. [Extracted from the article]
- Published
- 2024
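The abstract above says xFakeSci analyzes bigram counts and how bigrams link to other concepts. The real feature set is not reproduced here, but one crude, purely illustrative proxy is the ratio of distinct word bigrams to total words: templated AI-generated text tends to reuse the same pairings.

```python
from collections import Counter
import re

def distinct_bigram_ratio(text):
    """Count word bigrams and report distinct bigrams per word -- a
    toy proxy for the bigram-richness patterns the article describes.
    Illustrative only; not the xFakeSci algorithm."""
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = Counter(zip(words, words[1:]))
    return len(bigrams) / max(len(words), 1)
```

Repetitive text scores lower than varied text of the same length, which is the kind of signal a classifier could threshold on.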
13. A shock-fitting technique for 2D unstructured grids
- Author
-
Paciorri, Renato and Bonfiglioli, Aldo
- Subjects
- PAPER, GRIDS (Cartography), COORDINATES, ALGORITHMS - Abstract
Abstract: A new floating shock-fitting technique featuring the explicit computation of shocks by means of the Rankine–Hugoniot relations has been implemented on two-dimensional unstructured grids. This paper illustrates the algorithmic features of this original technique and the results obtained in the computation of the hypersonic flow past a circular cylinder and a steady Mach reflection. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
14. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
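The inversion described above, critical and less critical data sharing one FIFO queue, can be illustrated in a few lines: tag each frame with a criticality bit and dequeue critical work first, falling back to arrival order. This is a toy illustration of the principle, not the paper's framework.

```python
import heapq

class CriticalityQueue:
    """Toy scheduler: critical frames are dequeued ahead of FIFO
    order, avoiding the priority inversion a plain FIFO inference
    queue exhibits when it treats all inputs alike."""

    def __init__(self):
        self._heap = []
        self._seq = 0

    def push(self, frame, critical):
        # Lower tuples sort first: critical (0) beats non-critical (1);
        # ties fall back to arrival order via the sequence counter.
        heapq.heappush(self._heap, ((0 if critical else 1), self._seq, frame))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]
```

A critical frame pushed after a backlog of background frames is still served first, which is the reactivity property the paper's framework targets.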
15. Improving Refugees' Integration with Online Resource Allocation: Technical Perspective.
- Author
-
Freund, Daniel
- Subjects
REFUGEE resettlement ,RESOURCE allocation ,ALGORITHMS ,EMPLOYMENT - Abstract
The article discusses a research paper that applies online resource allocation algorithms to refugee resettlement, aiming to improve refugees' integration into local communities and employment prospects. By utilizing concepts from algorithm design, such as balancing resource utilization and maintaining capacity for future refugees, the authors were able to enhance the employability metric for resettlement agencies like the Hebrew Immigrant Aid Society (HIAS) by approximately 10%. This research not only addresses critical societal issues but also highlights the potential of algorithms to positively impact real-world outcomes for vulnerable populations, encouraging collaboration between algorithm designers and practitioners on important societal problems.
- Published
- 2024
- Full Text
- View/download PDF
16. Findings in Fibromyalgia Reported from Federal University of Rio Grande do Norte [Spectrochemical approach combined with symptoms data to diagnose fibromyalgia through paper spray ionization mass spectrometry (PSI-MS) and multivariate...].
- Subjects
FIBROMYALGIA ,MASS spectrometry ,FISHER discriminant analysis ,SYMPTOMS ,DIAGNOSIS ,NEUROMUSCULAR diseases - Abstract
Keywords: Algorithms; Diagnostics and Screening; Emerging Technologies; Fibromyalgia; Health and Medicine; Linear Discriminant Analysis; Machine Learning; Muscular Diseases and Conditions; Musculoskeletal Diseases and Conditions; Neuromuscular Diseases and Conditions; Rheumatic Diseases and Conditions. 2023 APR 13 (NewsRx) -- By a News Reporter-Staff News Editor at Hematology Week -- Research findings on fibromyalgia are discussed in a new report. [Extracted from the article]
- Published
- 2023
17. The Programmable Calculator in the Classroom.
- Author
-
Stolarz, Theodore J.
- Abstract
The uses of programmable calculators in the mathematics classroom are presented. A discussion of the "microelectronics revolution" that has brought programmable calculators into our society is also included. Pointed out is that the logical or mental processes used to program the programmable calculator are identical to those used to program any computer. A list and description of thirteen mathematical- and computer-related concepts that students can learn by working with programmable calculators is presented. The report concludes with four additional uses of these electronic devices by teachers and pupils in the classroom. (MP)
- Published
- 1979
18. The Relationship between Volume Conservation and the Learning of a Volume Algorithm for a Cuboid.
- Author
-
Feghali, Issa
- Abstract
This study investigated the relationship between the level of conservation of displaced volume and the degree to which sixth graders learn the volume algorithm of a cuboid, i.e., volume = length x width x height (v = l x w x h). The problem is a consequence of an apparent discrepancy between the present school programs and the theory of Piaget concerning the time to introduce the volume algorithm. Data showed that sixth graders could apply the algorithm to computation and comprehension questions regardless of their volume conservation level. There was also an improvement of students' conservation levels regardless of their volume achievement scores or their treatments. (Author/MK)
- Published
- 1980
19. A Development System for Augmented Transition Network Grammars and a Large Grammar for Technical Prose. Technical Report No. 25.
- Author
-
Michigan Univ., Ann Arbor., Mayer, John, and Kieras, David E.
- Abstract
Using a system based on the standard augmented transition network (ATN) parsing approach, this report describes a technique for the rapid development of natural language parsers, called High-Level Grammar Specification Language (HGSL). The first part of the report describes the syntax and semantics of HGSL and the network implementation of each of its constructs, while the second section discusses the algorithms used in the HGSL compiler and the ATN interpreter. The third section presents a large grammar for technical prose that was developed with the system and which allows parsing of technical training materials in the draft stage of writing as part of a computer-based comprehensible writing aid. The report concludes with a review of some of the results on the coverage of the grammar. The grammar for technical training materials is appended. (FL)
- Published
- 1987
20. Teaching for the Future with Algorithms, Learning Modules, and Microcomputers: An Accounting Example.
- Author
-
Dillaway, Manson P.
- Abstract
The illustrative method of teaching employed in most undergraduate accounting courses is becoming increasingly burdensome to professors and students due to the rapid proliferation of accounting and auditing professional standards and the increased complexity of the tax law. This teaching method may be near the breaking point in upper division courses in the accounting curriculum. Not only does this condition prevent professors and students from reaching teaching or learning goals, it also prevents many capable students from considering accounting as their major or minor curriculum choice. A shift in teaching approach away from examples and toward explicit algorithms would enable the current quantity of material to be maintained with less burden placed on all participants. The algorithmic teaching approach is readily adaptable to accounting courses, particularly when the latter have been broken into specific learning modules. Microcomputers may be used to simplify the teaching and learning process by allowing students to explore complex accounting and taxation algorithms even before the components of the algorithms are fully understood. Discovery learning techniques, in which students derive their own algorithms, are also facilitated by microcomputers. Some examples of an algorithm-based learning module approach in an undergraduate taxation course are provided as well as a list of 14 references. (Author/MES)
- Published
- 1986
21. Mathematical Fluency: The Nature of Practice and the Role of Subordination.
- Author
-
Hewitt, Dave
- Abstract
Considers traditional ways in which attempts have been made to help students become fluent in mathematics and offers a model for ways in which fluency can be achieved with a more economic use of students' time and effort than through traditional models of exercises based on repetition. (MKR)
- Published
- 1996
22. Language, Arithmetic, and the Negotiation of Meaning.
- Author
-
Anghileri, Julia
- Abstract
Limitations in children's understanding of the symbols of arithmetic may inhibit choice of appropriate solution procedures. The teacher's role involves negotiation of new meanings for words and symbols to match extensions to solution procedures. (MKR)
- Published
- 1995
23. Symbol Sense: Informal Sense-Making in Formal Mathematics.
- Author
-
Arcavi, Abraham
- Abstract
Attempts to describe a notion parallel to number sense, called symbol sense, incorporating the following components: making friends with symbols, reading through symbols, engineering symbolic expressions, equivalent expressions for non-equivalent meanings, choice of symbols, flexible manipulation skills, symbols in retrospect, and symbols in context. (24 references) (MKR)
- Published
- 1994
24. An Interpolation Approach to Developing Mathematical Functions for Business Simulations.
- Author
-
Goosen, Kenneth R. and Kusel, Jimie
- Abstract
Presents an interpolation methodology that duplicates and improves the results of mathematical functional relationships useful for designing business enterprise simulations and argues that interpolation is effective over the entire range of production and sales activity. (six references) (EA)
- Published
- 1993
25. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
-
Kempa, Dominik and Kociumaka, Tomasz
- Subjects
COMPUTER programming ,COMPUTERS in lexicography ,ALGORITHMS ,DATA structures ,COMPUTER science - Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index [7]), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of the Lempel-Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in the optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of the text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets [11]) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
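The quantities in the abstract above are easy to make concrete: the BWT itself can be computed naively by sorting rotations, and r is just the number of equal-letter runs in the output. A minimal sketch, assuming a '$' sentinel that is lexicographically smaller than every other symbol:

```python
def bwt(text):
    """Burrows-Wheeler Transform via sorted rotations (O(n^2 log n)
    naive construction; real tools use suffix sorting). The '$'
    sentinel makes the transform invertible."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    """Number of equal-letter runs r -- the compressibility measure
    the paper bounds by O(z log^2 n)."""
    return sum(1 for i, c in enumerate(s) if i == 0 or c != s[i - 1])
```

For instance, `bwt("banana")` yields `"annb$aa"`, which has r = 5 runs.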
26. Experiments as Research Validation: Have We Gone Too Far?
- Author
-
Ullman, Jeffrey D.
- Subjects
COMPUTER science research ,EXPERIMENTS ,ALGORITHMS ,COMPUTER scientists ,SCIENCE - Abstract
The article offers the author's comments on the role of experimental evidence in computer science research. According to the author, sorting algorithms were a major issue for computer scientists in the 1960s. He adds that experiments conducted on specialized data should not be accepted under any circumstances.
- Published
- 2015
- Full Text
- View/download PDF
27. Opening the Door to SSD Algorithmics.
- Author
-
Sitaraman, Ramesh K.
- Subjects
ALGORITHMS ,SOLID state drives ,EVOLUTIONARY computation - Abstract
The article addresses the topic of algorithm design in the solid-state drive (SSD) model of computation, noting its limitations as well as its benefits. The author references an accompanying paper which addresses one shortcoming in particular -- the phenomenon of "write amplification" -- by proposing a more accurate theoretical model of SSDs that incorporates read, write and erase operations.
- Published
- 2023
- Full Text
- View/download PDF
28. Multi-Itinerary Optimization as Cloud Service.
- Author
-
Cristian, Alexandru, Marshall, Luke, Negrea, Mihai, Stoichescu, Flavius, Cao, Peiwei, and Menache, Ishai
- Subjects
CLOUD computing ,TRAFFIC flow ,ALGORITHMS ,TRAVELING salesman problem ,TRAVEL time (Traffic engineering) - Abstract
In this paper, we describe multi-itinerary optimization (MIO)--a novel Bing Maps service that automates the process of building itineraries for multiple agents while optimizing their routes to minimize travel time or distance. MIO can be used by organizations with a fleet of vehicles and drivers, mobile salesforce, or a team of personnel in the field, to maximize workforce efficiency. It supports a variety of constraints, such as service time windows, duration, priority, pickup and delivery dependencies, and vehicle capacity. MIO also considers traffic conditions between locations, resulting in algorithmic challenges at multiple levels (e.g., calculating time-dependent travel-time distance matrices at scale and scheduling services for multiple agents). To support an end-to-end cloud service with turnaround times of a few seconds, our algorithm design targets a sweet spot between accuracy and performance. Toward that end, we build a scalable approach based on the ALNS metaheuristic. Our experiments show that accounting for traffic significantly improves solution quality: MIO finds efficient routes that avoid late arrivals, whereas traffic-agnostic approaches result in a 15% increase in the combined travel time and the lateness of an arrival. Furthermore, our approach generates itineraries with substantially higher quality than a cutting-edge heuristic (LKH), with faster running times for large instances. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
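The ALNS metaheuristic the abstract mentions alternates destroy and repair moves, keeping improvements. A bare-bones destroy/repair loop for a toy TSP, with none of the time windows, capacities, or traffic-dependent travel times the real MIO service handles, might look like this (all names and parameters are invented for illustration):

```python
import random

def alns_tsp(dist, n_iter=2000, seed=1):
    """Bare-bones ALNS-style loop for a toy TSP: destroy (remove a few
    random stops), repair (greedy cheapest reinsertion), and keep the
    candidate only if it shortens the tour. `dist` is an n x n matrix."""
    rng = random.Random(seed)
    n = len(dist)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    best = list(range(n))
    for _ in range(n_iter):
        cand = best[:]
        removed = rng.sample(cand, k=min(3, n - 1))
        for r in removed:
            cand.remove(r)
        for r in removed:
            # cheapest-insertion repair: try every cyclic position
            pos = min(
                range(len(cand) + 1),
                key=lambda i: dist[cand[i - 1]][r]
                + dist[r][cand[i % len(cand)]]
                - dist[cand[i - 1]][cand[i % len(cand)]],
            )
            cand.insert(pos, r)
        if length(cand) < length(best):
            best = cand
    return best, length(best)
```

Because only improving candidates are accepted, the returned tour is never worse than the initial ordering; production ALNS variants add acceptance criteria and adaptive weighting of destroy/repair operators.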
29. Feedforward FFT Hardware Architectures Based on Rotator Allocation.
- Author
-
Garrido, Mario, Huang, Shen-Jui, and Chen, Sau-Gee
- Subjects
FAST Fourier transforms ,DIGITAL signal processing ,ALGORITHMS ,DISCRETE Fourier transforms ,HARDWARE - Abstract
In this paper, we present new feedforward FFT hardware architectures based on rotator allocation. The rotator allocation approach consists of distributing the rotations of the FFT in such a way that the number of edges in the FFT that need rotators and the complexity of the rotators are reduced. Radix-2 and radix-2^k feedforward architectures based on rotator allocation are presented in this paper. Experimental results show that the proposed architectures reduce the hardware cost significantly with respect to previous FFT architectures. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
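For reference, the rotations being allocated are the twiddle-factor multiplications inside each FFT butterfly. A minimal software radix-2 decimation-in-time FFT makes them visible; this only illustrates the underlying algorithm, while the paper's contribution is the hardware mapping of these rotators.

```python
import cmath

def fft(x):
    """Recursive radix-2 DIT FFT; len(x) must be a power of two.
    Each butterfly multiplies by a twiddle factor exp(-2j*pi*k/n) --
    the 'rotators' whose hardware allocation the paper optimizes."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle rotation
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

`fft([1, 2, 3, 4])` returns the 4-point DFT `[10, -2+2j, -2, -2-2j]`.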
30. CORDIC-Based Architecture for Computing Nth Root and Its Implementation.
- Author
-
Luo, Yuanyong, Wang, Yuxuan, Sun, Huaqing, Zha, Yi, Wang, Zhongfeng, and Pan, Hongbing
- Subjects
DIGITAL computer simulation ,ALGORITHMS ,HARDWARE ,COMPUTER simulation ,DIGITAL signal processing - Abstract
This paper presents a COordinate Rotation Digital Computer (CORDIC)-based architecture for the computation of the Nth root and proves its feasibility by hardware implementation. The proposed architecture performs the Nth root computation simply by shift-add operations and enables an easy tradeoff between the speed (or precision) and the area. Technically, we divide the Nth root computation into three different subtasks and map them onto three different classes of the CORDIC accordingly. To overcome the drawback of the narrow convergence range of the CORDIC algorithm, we adopt several innovative methods to yield a much improved convergence range. Subsequently, in terms of convergence range and precision, a flexible architecture is developed. The architecture is validated using MATLAB with extensive vector matching. Finally, using a pipelined structure with fixed-point input data, we implement the example circuits of the proposed architecture with the radicand ranging from zero to one million, and achieve a mean relative error of approximately 10⁻⁷. The design is modeled using Verilog HDL and synthesized under the TSMC 40-nm CMOS technology. The report shows a maximum frequency of 2.083 GHz with a 197,421.00 μm² area. The area decreases to 169,689.98 μm² when the frequency lowers to 1.00 GHz. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
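The shift-add principle behind such architectures is easiest to see in basic rotation-mode CORDIC, which the paper's three subtask classes generalize; the actual Nth-root decomposition is more involved. A float-arithmetic sketch (real hardware would use fixed-point shifts and a precomputed gain constant):

```python
import math

def cordic_sincos(theta, n=32):
    """Rotation-mode CORDIC: drive the residual angle z to zero with
    micro-rotations by atan(2^-i), using only additions and scalings
    by 2^-i (floats stand in for fixed-point shift-add hardware).
    Valid for |theta| within CORDIC's convergence range (~1.74 rad)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # accumulated gain of n micro-rotations, compensated at the end
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K
```

Each iteration costs only shifts and adds, which is why CORDIC variants map so well to FPGA and ASIC datapaths.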
31. Event-Triggered Optimized Control for Nonlinear Delayed Stochastic Systems.
- Author
-
Zhang, Guoping and Zhu, Quanxin
- Subjects
STOCHASTIC systems ,ADAPTIVE fuzzy control ,FUZZY logic ,ALGORITHMS ,DYNAMIC programming ,FUZZY systems - Abstract
This paper is concerned with the problem of event-triggered optimized control for uncertain nonlinear Itô-type stochastic systems with time delay and unknown dynamics. Fuzzy logic systems are used to approximate two unknown nonlinear functions of the delayed state and the current state, respectively. An adaptive identifier is constructed to identify the stochastic system, and the optimized control is designed using the identifier and adaptive dynamic programming (ADP) with an actor-critic architecture. Almost all existing work concentrates on ADP-based optimal control that inevitably incurs computational complexity and requires a persistence-of-excitation (PE) assumption. In this paper, the ADP algorithm is obtained from the negative gradient of a simple positive function (equivalent to the HJB equation), so the proposed optimal control is simple and can relax the PE assumption. Moreover, an event-triggered control approach is proposed to reduce the computing burden and communication resources. Furthermore, we prove that the system states and the FLS parameter errors are semi-globally uniformly ultimately bounded (SGUUB) in mean square via the adaptive identifier, the Lyapunov direct method, and the identifier-actor-critic ADP algorithm. Finally, the effectiveness of the proposed method is illustrated through two numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
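The event-triggering idea in the abstract above can be illustrated on a hypothetical scalar plant x' = x + u (an illustrative stand-in, not the paper's Itô-type delayed stochastic system): the feedback law is recomputed only when the state drifts a threshold away from its value at the last trigger, so control updates are far sparser than simulation steps.

```python
def simulate(threshold: float, dt: float = 0.01, steps: int = 2000):
    """Event-triggered feedback on the toy plant x' = x + u.
    The control u = -2x is recomputed only at trigger instants."""
    x, u, x_trig, updates = 1.0, 0.0, None, 0
    for _ in range(steps):
        if x_trig is None or abs(x - x_trig) > threshold:
            u = -2.0 * x          # recompute feedback only when triggered
            x_trig = x
            updates += 1
        x += (x + u) * dt         # Euler step of x' = x + u
    return x, updates

x_final, n_updates = simulate(threshold=0.05)
```

The state settles into a small band around the origin while the controller updates only a few dozen times over 2000 steps, which is the resource saving the abstract refers to.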
32. FPGA Implementation of Reconfigurable CORDIC Algorithm and a Memristive Chaotic System With Transcendental Nonlinearities.
- Author
-
Mohamed, Sara M., Sayed, Wafaa S., Radwan, Ahmed G., and Said, Lobna A.
- Subjects
TRANSCENDENTAL functions ,MATHEMATICAL functions ,FIELD programmable gate arrays ,ALGORITHMS - Abstract
Coordinate Rotation Digital Computer (CORDIC) is a robust iterative algorithm that computes many transcendental mathematical functions. This paper proposes a reconfigurable CORDIC hardware design and FPGA realization that includes all possible configurations of the CORDIC algorithm. The proposed architecture is introduced in two approaches, multiplier-less and single-multiplier, each with its own advantages. Compared to recent related works, the proposed implementation surpasses them in the number of included configurations. Additionally, it demonstrates efficient hardware utilization and suitability for potential applications. Furthermore, the proposed design is applied to a memristive chaotic system whose different transcendental functions are computed using the proposed reconfigurable block. The memristive system design is realized on the Artix-7 FPGA board, yielding throughputs of 0.4483 and 0.3972 Gbit/s for the two approaches of reconfigurable CORDIC. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
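For orientation, the unified CORDIC iteration that such reconfigurable designs generalize uses a mode parameter to select circular, linear, or hyperbolic coordinates. Below is a floating-point sketch of the classic circular rotation mode only (textbook CORDIC, not the paper's multiplier-less hardware):

```python
import math

def cordic_circular_rotate(theta: float, iters: int = 32):
    """Rotation-mode circular CORDIC: drives the angle accumulator z
    toward 0 and returns (cos(theta), sin(theta)) after compensating
    the constant CORDIC gain."""
    x, y, z = 1.0, 0.0, theta
    gain = 1.0
    for k in range(iters):
        e = math.atan(2.0 ** -k)          # elementary rotation angle
        d = 1.0 if z >= 0 else -1.0       # rotation direction
        x, y = x - d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
        z -= d * e
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * k))
    return x / gain, y / gain
```

Each step uses only shifts, adds, and a table of atan(2^-k) constants, which is what makes the algorithm attractive for FPGA realization; convergence holds for |theta| up to about 1.74 rad.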
33. Analyzing the Impact of Memristor Variability on Crossbar Implementation of Regression Algorithms With Smart Weight Update Pulsing Techniques.
- Author
-
Afshari, Sahra, Musisi-Nkambwe, Mirembe, and Sanchez Esqueda, Ivan
- Subjects
ALGORITHMS ,MEMRISTORS ,COMPUTER architecture ,MATHEMATICAL models ,INTEGRATING circuits - Abstract
This paper presents an extensive study of linear and logistic regression algorithms implemented with 1T1R memristor crossbar arrays. Using a sophisticated simulation platform that wraps circuit-level simulations of 1T1R crossbars and physics-based models of RRAM (memristors), we elucidate the impact of device variability on algorithm accuracy, convergence rate, and precision. Moreover, a smart pulsing strategy is proposed for practical implementation of synaptic weight updates that can accelerate training in real crossbar architectures. Stochastic multi-variable linear regression shows robustness to memristor variability in terms of prediction accuracy but reveals an impact on convergence rate and precision. Similarly, the stochastic logistic regression crossbar implementation reveals immunity to memristor variability, as determined by negligible effects on image classification accuracy, but indicates an impact on training performance manifested as reduced convergence rate and degraded precision. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
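The qualitative finding above, that regression tolerates device variability in accuracy but trains less cleanly, can be mimicked in plain software. In the sketch below a random lognormal gain on every weight increment stands in for device-to-device update variability; this is an illustrative toy, not the paper's RRAM device model or crossbar circuit simulation.

```python
import random

random.seed(0)
# Synthetic task: y = 2x + 1 with small label noise.
data = [(i / 100.0, 2.0 * i / 100.0 + 1.0 + random.gauss(0.0, 0.05))
        for i in range(100)]

def train(variability: float, epochs: int = 200, lr: float = 0.1):
    """SGD linear regression in which every weight increment is scaled
    by a random positive gain, a crude stand-in for memristive
    conductance-update variability."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x * random.lognormvariate(0.0, variability)
            b -= lr * err * random.lognormvariate(0.0, variability)
    return w, b

w_ideal, b_ideal = train(0.0)   # no device variability
w_var, b_var = train(0.3)       # noisy conductance updates
```

Both runs recover roughly w = 2, b = 1; the noisy run simply converges with more scatter, mirroring the "accuracy preserved, precision degraded" observation.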
34. On Sampled Metrics for Item Recommendation.
- Author
-
Krichene, Walid and Rendle, Steffen
- Subjects
RECOMMENDER systems ,INFORMATION filtering systems ,INTERNET ,ALGORITHMS ,SOFTWARE measurement - Abstract
Recommender systems personalize content by recommending items to users. Item recommendation algorithms are evaluated by metrics that compare the positions of truly relevant items among the recommended items. To speed up the computation of metrics, recent work often uses sampled metrics, where only a smaller set of random items and the relevant items are ranked. This paper investigates such sampled metrics in more detail and shows that they are inconsistent with their exact counterpart, in the sense that they do not persist relative statements, for example, recommender A is better than B, not even in expectation. Moreover, the smaller the sample size, the less difference there is between metrics, and for very small sample sizes, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction, obtained by minimizing different criteria. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation; however, if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
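The inconsistency described above is easy to reproduce: under sampling, an item whose exact Recall@1 is 0 can still score highly, because it only has to beat the sampled negatives. A small Monte-Carlo sketch (illustrative numbers, not the paper's experiments):

```python
import random

random.seed(1)

def sampled_hit_at_1(rank: int, n_items: int, m: int, trials: int = 5000) -> float:
    """Monte-Carlo estimate of sampled Recall@1: the relevant item
    (exact rank `rank`, 1 = best) scores a hit whenever all m sampled
    negatives rank worse than it."""
    others = [r for r in range(1, n_items + 1) if r != rank]
    hits = 0
    for _ in range(trials):
        negatives = random.sample(others, m)
        if all(rank < neg for neg in negatives):
            hits += 1
    return hits / trials

# An item ranked 10th of 10000 has exact Recall@1 = 0, yet the sampled
# metric with 99 negatives gives it most of the credit.
est = sampled_hit_at_1(rank=10, n_items=10000, m=99)
```

Analytically the sampled score is ((n - r)/(n - 1))^m, about 0.91 here versus an exact score of 0, which is precisely the kind of relative-ordering distortion the paper quantifies and corrects.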
35. Dualityfree Methods for Stochastic Composition Optimization.
- Author
-
Liu, Liu, Liu, Ji, and Tao, Dacheng
- Subjects
REINFORCEMENT learning ,STATISTICAL learning ,MACHINE learning ,CONJUGATE gradient methods ,EMBEDDINGS (Mathematics) ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
In this paper, we consider the composition optimization with two expected-value functions in the form of $({1}/{n})\sum _{i = 1}^{n} F_{i}\left({({1}/{m})\sum _{j = 1}^{m} G_{j}(x)}\right)+R(x)$ , which formulates many important problems in statistical learning and machine learning such as solving Bellman equations in reinforcement learning and nonlinear embedding. Full gradient- or classical stochastic gradient descent-based optimization algorithms are unsuitable or computationally expensive to solve this problem due to the inner expectation $({1}/{m})\sum _{j = 1}^{m} G_{j}(x)$. We propose a dualityfree-based stochastic composition method that combines the variance reduction methods to address the stochastic composition problem. We apply the stochastic variance reduction gradient- and stochastic average gradient algorithm-based methods to estimate the inner function and the dualityfree method to estimate the outer function. We prove the linear convergence rate not only for the convex composition problem but also for the case that the individual outer functions are nonconvex, while the objective function is strongly convex. We also provide the results of experiments that show the effectiveness of our proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
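The difficulty the abstract above points to, the inner expectation, can be seen in a deterministic toy example. Assuming F(u) = u^3 and G_j(x) = x + c_j with centered offsets c_j (illustrative choices, not from the paper), plugging sampled inner values into F' biases the gradient by exactly 3 * Var(c):

```python
import random
import statistics

random.seed(2)
# Toy composition f(x) = F((1/m) * sum_j G_j(x)), F(u) = u**3, G_j(x) = x + c_j.
m = 1000
c = [random.gauss(0.0, 1.0) for _ in range(m)]
mean_c = statistics.mean(c)
c = [ci - mean_c for ci in c]        # center offsets so (1/m)*sum_j G_j(x) == x
var_c = statistics.pvariance(c)

x = 2.0
true_grad = 3.0 * x ** 2             # f(x) = x**3, so f'(x) = 3 x**2

# Naive estimator: evaluate F' at each sampled inner value instead of at
# their mean, as plain SGD on the composition would.
naive_grad = statistics.mean(3.0 * (x + cj) ** 2 for cj in c)
bias = naive_grad - true_grad        # analytically 3 * Var(c)
```

This systematic bias, which no amount of outer averaging removes, is why the authors estimate the inner function with variance-reduction techniques rather than single samples.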
36. APPLIED STATISTICS ALGORITHMS SECTION.
- Subjects
MATHEMATICS ,ALGORITHMS ,PAPER ,COMPUTER programming ,TECHNICAL specifications - Abstract
The article presents information on the publication of the book "Applied Statistics, Algorithms," relevant to statistics, by the Royal Statistical Society in cooperation with the Science Research Council's Working Party on Statistical Computing. A policy statement describing the editorial policy appears in "Applied Statistics," Vol. 1, No. 1 (1968). A supporting paper describing the expected contents of the external specification and making recommendations for the layout of algorithms and for programming strategy will appear in the following issue.
- Published
- 1968
37. Good Algorithms Make Good Neighbors: Many computer scientists doubted ad hoc methods would ever give way to a more general approach to finding nearest neighbors. They were wrong.
- Author
-
Klarreich, Erica
- Subjects
NEAREST neighbor analysis (Statistics) ,ALGORITHMS ,NORMED rings ,DATA analysis ,MEASUREMENT of distances ,GRAPHIC methods - Abstract
The article discusses the development of nearest-neighbor algorithms for general norms, referencing papers in the "Annual Symposium on Foundations of Computer Science" and "Proceedings of the ACM Symposium on Theory of Computing." An overview of researchers' design of normed spaces is provided. The uses of data analysis and expander graphs, including for measuring the distance between data points, are discussed.
- Published
- 2019
- Full Text
- View/download PDF
38. Helping to Shape the Future: Rama Akkiraju: As an IBM Fellow, Rama Akkiraju helps shape the company's future
- Subjects
International Business Machines Corp. ,Optimization theory ,Computer industry ,Algorithms ,Career opportunities ,Paper mills ,Outsourcing ,Production management ,Microcomputer industry ,Business, general ,Business ,Engineering and manufacturing industries - Abstract
I started my career in IBM Research in New York, after getting my master's degree in computer science. I spent the early part of my career on optimization algorithms to [...]
- Published
- 2020
- Full Text
- View/download PDF
39. Compensation Network Optimal Design Based on Evolutionary Algorithm for Inductive Power Transfer System.
- Author
-
Chen, Weiming, Lu, Weiguo, Iu, Herbert Ho-Ching, and Fernando, Tyrone
- Subjects
EVOLUTIONARY algorithms ,CURRENT fluctuations ,EVOLUTIONARY computation ,ALGORITHMS ,MATHEMATICAL models ,EXPERIMENTAL design - Abstract
Conventional design and optimization of the passive compensation network (PCN) for inductive power transfer (IPT) systems are based on specific topologies. The demerits of this design method are: i) the topology is mostly chosen by experience; ii) the design parameters are not multi-objective optimal. Aiming at these issues, this paper proposes an optimal PCN design scheme based on an evolutionary algorithm (EA) to synchronously optimize the topology and parameters of the PCN for an IPT system. Firstly, a unified mathematical model of the PCN is presented and derived via the transmission matrix. Then, according to the mathematical model, the multi-objective functions (such as output fluctuation and efficiency) as well as the constraints (such as load and coupling coefficient) for the optimal PCN design are established. The EA-based multi-objective optimal PCN design algorithm is further constructed. Six optimal results are obtained using the algorithm, and one optimized PCN having minimum output current fluctuation and high efficiency is chosen to validate the effectiveness of the proposed design scheme in experiments. For the given IPT system with the optimized PCN, the maximum fluctuation of the output current is no more than 11% over a 200% load variation and about 77% coupling variation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
40. A New Full Chaos Coupled Mapping Lattice and Its Application in Privacy Image Encryption.
- Author
-
Wang, Xingyuan and Liu, Pengbo
- Subjects
IMAGE encryption ,PRIVACY ,DYNAMICAL systems ,HEURISTIC algorithms ,CRYPTOGRAPHY ,ALGORITHMS - Abstract
Since chaotic cryptography has a long-term problem of dynamic degradation, this paper presents a proof, through theoretical analysis, that chaotic systems can resist dynamic degradation. Based on this proof, a novel one-dimensional, two-parameter, wide-range mixed coupled map lattice model (TWMCML) is given. The evaluation of TWMCML shows that the system has the characteristics of strong chaos, high sensitivity, broader parameter ranges and a wider chaos range, which helps to enhance the security of chaotic sequences. Based on the excellent performance of TWMCML, it is applied to the newly proposed encryption algorithm. The algorithm realizes double protection of private images under the premise of ensuring efficiency and safety. First, the important information of the image is extracted by edge detection. Then the important area is scrambled by the three-dimensional bit-level coupled XOR method. Finally, the global image is more fully confused by the dynamic index diffusion formula. Simulation experiments verify the effectiveness of the algorithm for grayscale and color images. Security tests show that the application of TWMCML gives the encryption algorithm a better ability to withstand conventional attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
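The keystream-XOR structure common to such chaotic image ciphers can be sketched with a plain logistic map standing in for the paper's TWMCML lattice; the map, its parameters, and the byte-extraction rule below are illustrative assumptions, not the published scheme.

```python
def logistic_keystream(x0: float, r: float, n: int, burn: int = 100):
    """Byte keystream from the logistic map x <- r*x*(1-x); a simple
    stand-in for the paper's TWMCML lattice (not its actual model)."""
    x = x0
    for _ in range(burn):          # discard transient iterates
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(data: bytes, key: tuple) -> bytes:
    """Symmetric XOR cipher: the same call encrypts and decrypts."""
    ks = logistic_keystream(*key, n=len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

img = bytes(range(64))                      # stand-in "image" pixels
enc = xor_cipher(img, (0.3141592, 3.99))
dec = xor_cipher(enc, (0.3141592, 3.99))    # identical keystream decrypts
```

The sensitivity of the map to (x0, r) plays the role of the key; the paper's contribution is a lattice whose chaos range and parameter space are much wider than this single map's.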
41. Toward Practical Code-Based Signature: Implementing Fast and Compact QC-LDGM Signature Scheme on Embedded Hardware.
- Author
-
Hu, Jingwei and Cheung, Ray C. C.
- Subjects
PUBLIC key cryptography ,CODING theory ,ALGORITHMS ,FIELD programmable gate arrays ,DATA encryption - Abstract
In this paper, fast and compact implementations of a code-based signature are presented. Existing designs either use enormous memory storage or suffer from slow signature-issuing speed. A vastly optimized new design solving these problems is proposed by exploiting quasi-cyclic low-density generator matrix codes at different levels. In particular, this paper provides a new algorithmic enhancement of signature generation and gives detailed and optimized solutions for the critical steps of this algorithm. The design presented in this paper is the fastest implementation of code-based signatures in the open literature. It is shown, for instance, that our signature generation engine can generate approximately 60 000 signatures per second on a Xilinx Virtex-6 FPGA, requiring only 5992 slices and 60 memory blocks. In addition, a very compact implementation is also provided, producing 5438 signatures per second with only 18 memory blocks. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
42. THE ALGORITHM SERIES: LIVE-EVENT SCALING.
- Author
-
Siglin, Tim
- Subjects
ALGORITHMS ,STREAMING video & television ,LOCAL area networks ,MULTICASTING (Computer networks) ,DIGITAL rights management - Abstract
The article discusses how the loss of concerts, festivals, and in-person gatherings in 2020 has built plenty of pent-up demand for large-scale events heading into 2021. Topics include the Algorithm Series' look at the math and workflow decisioning used to fine-tune live video event delivery at scale, and live-streaming solutions' focus on unicast delivery, a single connection between the video streaming server and the end user's streaming player.
- Published
- 2020
43. Finite-/Fixed-Time Synchronization of Memristor Chaotic Systems and Image Encryption Application.
- Author
-
Wang, Leimin, Jiang, Shan, Ge, Ming-Feng, Hu, Cheng, and Hu, Junhao
- Subjects
SLIDING mode control ,CHAOS synchronization ,IMAGE encryption ,IMAGING systems ,LYAPUNOV stability ,ALGORITHMS - Abstract
In this paper, a unified framework is proposed to address the synchronization problem of memristor chaotic systems (MCSs) via the sliding-mode control method. By employing the presented unified framework, the finite-time and fixed-time synchronization of MCSs can be realized simultaneously. On the one hand, based on the Lyapunov stability and sliding-mode control theories, the finite-/fixed-time synchronization results are obtained. It is proved that the trajectories of the error states reach the designed sliding-mode surface, stay on it, and approach the origin in a finite/fixed time. On the other hand, we develop an image encryption algorithm, together with its implementation process, to show an application of the synchronization. Finally, the theoretical results and the corresponding image encryption application are demonstrated by numerical simulations and statistical performance measures. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. Fault Modeling and Efficient Testing of Memristor-Based Memory.
- Author
-
Liu, Peng, You, Zhiqiang, Wu, Jigang, Liu, Bosheng, Han, Yinhe, and Chakrabarty, Krishnendu
- Subjects
BRIDGE defects ,MEMORY testing ,ALGORITHMS ,DISCRETE Fourier transforms ,OPTICAL disks ,MEMRISTORS - Abstract
Memristor-based memory is one of the emerging memory technologies and a potential candidate to replace traditional memories. Efficient test solutions are required to ensure the quality and reliability of such products. In previous works, fault models cover open, short, and bridge defects as well as parametric variations introduced during fabrication. However, those fault models cannot describe the bridge defects that drive the faulty cell to an undefined state. In this paper, we analyze the different effects of bridge defects and aggregate their faulty behavior into new fault models: the undefined coupling fault and the dynamic undefined coupling fault. In addition, an enhanced March algorithm is designed to detect all the modeled faults. In a resistor crossbar with $N$ memristors, the enhanced March algorithm requires $8N$ write and $7N$ read operations with negligible hardware overhead. To reduce the test time, a March RC algorithm is proposed based on read operations with new reference currents, which requires $4N+2$ write and $6N$ read operations. Analytical results show that the proposed test algorithms detect all the modeled faults, outperforming all previous methods. Subsequently, a Design-for-Testability scheme is proposed to implement the March RC algorithm with little area overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
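For orientation, a March-style memory test can be sketched in software. The sketch below runs the generic, textbook March C- element sequence on a toy memory model with an injected stuck-at fault; it is not the paper's enhanced March or March RC algorithm, which target the new undefined coupling faults.

```python
class FaultyMemory:
    """Bit-addressable memory with an optional stuck-at fault."""
    def __init__(self, size, stuck_at=None):   # stuck_at = (addr, value)
        self.cells = [0] * size
        self.stuck = stuck_at
    def write(self, a, v):
        self.cells[a] = v
        if self.stuck and self.stuck[0] == a:
            self.cells[a] = self.stuck[1]      # fault overrides the write
    def read(self, a):
        return self.cells[a]

def march_c_minus(mem, size):
    """March C-:  up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); (r0).
    Returns True if the memory passes; any read mismatch fails."""
    up, down = range(size), range(size - 1, -1, -1)
    for a in up:
        mem.write(a, 0)
    for order, (rv, wv) in [(up, (0, 1)), (up, (1, 0)),
                            (down, (0, 1)), (down, (1, 0))]:
        for a in order:
            if mem.read(a) != rv:
                return False
            mem.write(a, wv)
    return all(mem.read(a) == 0 for a in up)

good = march_c_minus(FaultyMemory(16), 16)           # fault-free: passes
bad = march_c_minus(FaultyMemory(16, (5, 0)), 16)    # stuck-at-0: caught
```

Each March element reads an expected value and writes its complement in a fixed address order, which is what lets the operation counts be expressed as multiples of $N$ as in the abstract.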
45. Joint Sparsity and Order Optimization Based on ADMM With Non-Uniform Group Hard Thresholding.
- Author
-
Matsuoka, Ryo, Kyochi, Seisuke, Ono, Shunsuke, and Okuda, Masahiro
- Subjects
FINITE impulse response filters ,DIGITAL signal processing ,LEAST squares ,PROGRAM transformation ,MULTIPLIERS (Mathematical analysis) ,ALGORITHMS - Abstract
This paper proposes a new optimization framework for the joint optimization of sparsity and filter order (JOSFO) in FIR filter design. Since the cost function for JOSFO involves $\ell_0$ and non-uniform overlapped group $\ell_0$ norms, which are not convex, a global optimal solution is difficult to obtain. To find an approximate solution of the non-convex problem, existing approaches repeat the following steps: 1) approximate the cost function; 2) find candidate zero coefficients by minimizing the cost function; and 3) set them to zero. In contrast, this paper directly solves the optimization problem, without any approximation of the cost function, by using the alternating direction method of multipliers with the pseudo-proximity operators of the $\ell_0$ and non-uniform non-overlapped group $\ell_0$ norms. Experimental results show that filters designed by the proposed method have sparser coefficients and lower orders, while satisfying filter specifications such as the error from a desired frequency response. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
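The hard-thresholding operator underlying such $\ell_0$ pseudo-proximity steps can be sketched for the non-overlapped case. The closed forms below are the standard ones for $\lambda\|v\|_0$ (zero an entry, or a whole group, whose magnitude is at most $\sqrt{2\lambda}$); the paper's non-uniform overlapped treatment and ADMM splitting are more involved.

```python
import math

def hard_threshold(v, lam):
    """Proximity operator of lam*||v||_0: zero every entry whose
    magnitude is at most sqrt(2*lam)."""
    t = math.sqrt(2.0 * lam)
    return [0.0 if abs(x) <= t else x for x in v]

def group_hard_threshold(v, groups, lam):
    """Non-overlapped group version: each group survives or is zeroed
    as a whole, based on its Euclidean norm (groups may differ in size)."""
    out = list(v)
    t = math.sqrt(2.0 * lam)
    for g in groups:
        if math.sqrt(sum(v[i] ** 2 for i in g)) <= t:
            for i in g:
                out[i] = 0.0
    return out

coeffs = [0.9, -0.05, 0.02, 1.2, 0.01]
sparse = group_hard_threshold(coeffs, [[0], [1, 2], [3, 4]], lam=0.02)
```

Note that the keep-or-kill decision compares the group norm, not individual entries, so a tiny coefficient survives if its group partner is large, which is exactly the structured-sparsity behavior the filter design exploits.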
46. Extended Polynomial Growth Transforms for Design and Training of Generalized Support Vector Machines.
- Author
-
Gangopadhyay, Ahana, Chatterjee, Oindrila, and Chakrabartty, Shantanu
- Subjects
SUPPORT vector machines ,MACHINE learning ,POLYNOMIALS ,NONLINEAR programming ,ALGORITHMS - Abstract
Growth transformations constitute a class of fixed-point multiplicative update algorithms that were originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables which can be applied towards optimizing the dual cost function of a generic support vector machine (SVM). The approach can, therefore, not only be used to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs but can also be used to design novel variants of SVMs with different types of convex and quasi-convex loss functions. In this paper, we propose an efficient training algorithm based on polynomial growth transforms, and compare and contrast the properties of different SVM variants using several synthetic and benchmark data sets. The preliminary experiments show that the proposed multiplicative update algorithm is more scalable and yields better convergence compared to standard quadratic and nonlinear programming solvers. While the formulation and the underlying algorithms have been validated in this paper only for SVM-based learning, the proposed approach is general and can be applied to a wide variety of optimization problems and statistical learning models. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
47. A Smoothed LASSO-Based DNN Sparsification Technique.
- Author
-
Koneru, Basava Naga Girish, Chandrachoodan, Nitin, and Vasudevan, Vinita
- Subjects
ERROR functions ,ALGORITHMS ,APPROXIMATION algorithms ,SMOOTHNESS of functions ,COST functions - Abstract
Deep Neural Networks (DNNs) are increasingly being used in a variety of applications. However, DNNs have huge computational and memory requirements. One way to reduce these requirements is to sparsify DNNs by using smoothed LASSO (Least Absolute Shrinkage and Selection Operator) functions. In this paper, we show that irrespective of error profile, the sparsity values obtained using various smoothed LASSO functions are similar, provided the maximum error of these functions with respect to the LASSO function is the same. We also propose a layer-wise DNN pruning algorithm, where the layers are pruned based on their individual allocated accuracy loss budget, determined by estimates of the reduction in number of multiply-accumulate operations (in convolutional layers) and weights (in fully connected layers). Further, the structured LASSO variants in both convolutional and fully connected layers are explored within the smoothed LASSO framework and the tradeoffs involved are discussed. The efficacy of proposed algorithm in enhancing the sparsity within the allowed degradation in DNN accuracy and results obtained on structured LASSO variants are shown on MNIST, SVHN, CIFAR-10, and Imagenette datasets and on larger networks such as ResNet-50 and Mobilenet. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
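The premise above, that sparsification behavior depends only on a smoothed LASSO function's maximum error against |x|, can be checked for one common smooth surrogate. The particular function below is an illustrative choice with worst-case error eps, not necessarily one used in the paper.

```python
import math

def smoothed_abs(x: float, eps: float) -> float:
    """A smooth surrogate for |x| whose gap to |x| never exceeds eps:
    sqrt(x*x + eps*eps) - eps  (one of many valid smoothings)."""
    return math.sqrt(x * x + eps * eps) - eps

# Verify the maximum-error bound on a grid around the kink at 0.
eps = 0.01
worst = max(abs(smoothed_abs(i / 100.0, eps) - abs(i / 100.0))
            for i in range(-500, 501))
```

The gap approaches eps for large |x| and vanishes at 0, so the surrogate is differentiable everywhere while staying within the stated error budget of the exact LASSO penalty.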
48. The Impact of Device Uniformity on Functionality of Analog Passively-Integrated Memristive Circuits.
- Author
-
Fahimi, Z., Mahmoodi, M. R., Klachko, M., Nili, H., and Strukov, D. B.
- Subjects
UNIFORMITY ,MEMRISTORS ,COMPUTER systems ,ANALOG circuits ,ALGORITHMS ,NEUROMORPHICS - Abstract
Passively-integrated memristors are the most promising candidates for designing high-speed, energy-efficient, and compact neuromorphic circuits. Despite all these promising properties, experimental demonstrations of passive memristive crossbars have so far been limited to circuits with a few thousand devices, which stems from the strict uniformity requirements on the I-V characteristics of memristors. This paper expands upon this vital challenge and investigates how uniformity impacts the computing accuracy of analog memristive circuits, focusing on neuromorphic applications. Specifically, the paper explores the tradeoffs between computing accuracy, crossbar size, switching threshold variations, and target precision. All-embracing simulations of matrix multipliers and deep neural networks on the CIFAR-10 and ImageNet datasets have been carried out to evaluate the role of uniformity in the accuracy of computing systems. Further, we study three post-fabrication methods that increase the accuracy of nonuniform 0T1R neuromorphic circuits: hardware-aware training, an improved tuning algorithm, and switching threshold modification. The application of these techniques allows us to implement advanced deep neural networks with almost no accuracy drop, using current state-of-the-art analog 0T1R technology. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. Constructing Higher-Dimensional Digital Chaotic Systems via Loop-State Contraction Algorithm.
- Author
-
Wang, Qianxue, Yu, Simin, Guyeux, Christophe, and Wang, Wei
- Subjects
PROBLEM solving ,ALGORITHMS ,TIME series analysis ,COMPACT spaces (Topology) - Abstract
This paper aims to refine and expand the theoretical and application framework of higher-dimensional digital chaotic systems (HDDCS). Topological mixing for HDDCS is first strictly proved. Topological mixing implies Devaney's definition of chaos in a compact space, but not vice versa; therefore, the proof of topological mixing advances the theoretical study of HDDCS. Then, a general design method for constructing HDDCS via a loop-state contraction algorithm is given. The construction of the iterative function uncontrolled by random sequences (hereafter called the iterative function) is the starting point of this research. On this basis, this paper puts forward a general design method to solve the construction problem of HDDCS, and several examples illustrate the effectiveness and feasibility of this method. The adjacency matrix corresponding to the designed HDDCS is used to construct a chaotic Echo State Network (ESN) for predicting the Mackey-Glass time series. Compared with other ESNs, the chaotic ESN has better prediction performance and is able to accurately predict over a much longer horizon. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
50. Efficient Row-Layered Decoder for Sparse Code Multiple Access.
- Author
-
Pang, Xu, Song, Wenqing, Shen, Yifei, You, Xiaohu, and Zhang, Chuan
- Subjects
BIT error rate ,MESSAGE passing (Computer science) ,ALGORITHMS ,WIRELESS communications ,TECHNOLOGY convergence ,BARBELLS ,VERY large scale circuit integration ,JACOBIAN matrices - Abstract
Sparse code multiple access (SCMA) is a promising technology for the development of wireless communication, which supports a large number of users through overloading and enjoys high spectral efficiency. However, conventional SCMA decoders suffer from very high implementation complexity. Changing the updating scheme is a superior approach to reducing complexity: it lets the updated information immediately join the subsequent message propagation of the current iteration and accelerates decoding convergence. In this paper, a row-layered message passing algorithm (MPA) is proposed, which offers a good trade-off between hardware complexity and bit error rate (BER) performance. Simulation results show that the proposed decoder saves 66.7% of the computational complexity compared with the original MPA, with similar BER performance. Pipelining and folding techniques are adopted in the VLSI implementation. The synthesis results with 45-nm CMOS technology show that the proposed decoder achieves higher hardware efficiency and throughput at high frequency than existing decoders, achieving 1777.78 Mb/s throughput with 1.112 mm² area consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF