85 results for "Approximate Computation"
Search Results
2. Overview of Research on Bayesian Inference and Parallel Tempering
- Author
-
ZHAN Jin, WANG Xuefei, CHENG Yurong, YUAN Ye
- Subjects
variational inference, sampling methods, parallel tempering, approximate computation, Computer software, QA76.75-76.765, Technology (General), T1-995
- Abstract
Bayesian inference is one of the main problems in statistics. It aims to update the prior knowledge of a probability distribution model based on observed data. For posterior probabilities that cannot be observed or are difficult to calculate directly, as is often the case in practice, Bayesian inference can obtain a good approximation; it is an important class of methods based on Bayes' theorem. Many machine learning problems involve simulating and approximating the target distribution of various types of feature data, such as classification models, topic modeling, and data mining, so Bayesian inference has shown important and unique research value in the field of machine learning. With the advent of the big data era, the experimental data collected by researchers has become very large, making the target distributions to be simulated and computed complex. How to perform accurate and time-efficient approximate inference on target distributions over complex data has become a major challenge in Bayesian inference today. Aiming at the inference problem under such complex distribution models, this paper systematically introduces and summarizes the two main methods for solving Bayesian inference problems in recent years: variational inference and sampling methods. Firstly, this paper gives the problem definition and theoretical background of variational inference, introduces in detail the variational inference algorithm based on coordinate ascent, and gives the existing applications and future prospects of this method. Next, it reviews the research results of existing sampling methods at home and abroad, gives the specific algorithm procedure of the main sampling methods, and summarizes and compares their characteristics, advantages and disadvantages. Finally, this paper introduces the parallel tempering technique, outlines its basic theory and methods, discusses the combination and application of parallel tempering with sampling methods, and explores new research directions for the future development of Bayesian inference.
- Published
- 2023
- Full Text
- View/download PDF
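The swap step of the parallel tempering technique surveyed in entry 2 is compact enough to state directly. A minimal sketch, assuming a user-supplied `energy` function (the negative log-density) and two chains at inverse temperatures `beta_i` and `beta_j`; this is the textbook replica-exchange rule, not code from the survey:

```python
import math
import random

def maybe_swap(x_i, x_j, beta_i, beta_j, energy):
    """One replica-exchange step between two tempered chains."""
    # Standard acceptance probability:
    # alpha = min(1, exp((beta_i - beta_j) * (E(x_i) - E(x_j))))
    log_alpha = (beta_i - beta_j) * (energy(x_i) - energy(x_j))
    if random.random() < math.exp(min(0.0, log_alpha)):
        return x_j, x_i  # accepted: the two chains trade states
    return x_i, x_j      # rejected: keep states as they were
```

Hot chains (small beta) cross energy barriers easily, and accepted swaps let those discoveries migrate to the cold chain that actually targets the posterior.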
3. Approximate Computation for Baseband Processing
- Author
-
Zhang, Chuan, Wang, Huizheng, Liu, Weiqiang, editor, and Lombardi, Fabrizio, editor
- Published
- 2022
- Full Text
- View/download PDF
4. Performance Analysis of 32-Bit DADDA Multiplier Using 15–4 Compressor
- Author
-
Gidd, Avinash, Ghasti, Shivani, Jadhav, Snehal, Sivasankaran, K., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Arunachalam, V., editor, and Sivasankaran, K., editor
- Published
- 2021
- Full Text
- View/download PDF
5. Neuron Fault Tolerance Capability Based Computation Reuse in DNNs
- Author
-
Qi, Pengnian, Wang, Jing, Zhu, Xiaoyan, Zhang, Weigong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wen, Sheng, editor, Zomaya, Albert, editor, and Yang, Laurence T., editor
- Published
- 2020
- Full Text
- View/download PDF
6. AirNN: A Featherweight Framework for Dynamic Input-Dependent Approximation of CNNs.
- Author
-
Hemmat, Maedeh, San Miguel, Joshua, and Davoodi, Azadeh
- Subjects
CONVOLUTIONAL neural networks
- Abstract
In this work, we propose AirNN, a novel framework which enables dynamic approximation of an already-trained convolutional neural network (CNN) in hardware during inference. AirNN enables input-dependent approximation of the CNN to achieve energy saving without much degradation in classification accuracy at runtime. For each input, AirNN uses only a fraction of the CNN's weights based on that input (with the rest remaining 0) to conduct the inference. Consequently, energy saving is possible due to fewer fetches from off-chip memory as well as fewer multiplications for the majority of inputs. To achieve per-input approximation, we propose a clustering algorithm that groups similar weights in the CNN based on their importance, and design an iterative framework that decides dynamically how many clusters of weights should be fetched from off-chip memory for each individual input. We also propose new hardware structures to implement our framework on top of a recently proposed FPGA-based CNN accelerator. In our experiments with popular CNNs, we show an average 49% energy saving with less than 3% degradation in classification accuracy, obtained by doing inference with only a fraction of the weights for the majority of inputs. We also propose a greedy interleaving scheme, implemented in hardware, to improve the performance of the iterative procedure and compensate for its latency overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
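The per-input loop described in entry 6 can be sketched in a few lines. Everything here is a hypothetical stand-in for AirNN's actual hardware framework: `clusters` is the importance-ordered list of weight clusters, and `predict` runs the CNN with the weights fetched so far (unfetched weights are zero):

```python
import numpy as np

def confidence_driven_inference(x, clusters, predict, threshold=0.9):
    """Fetch weight clusters one at a time until the prediction
    is confident enough, then stop (input-dependent approximation)."""
    weights = np.zeros_like(clusters[0])
    probs = None
    for cluster in clusters:
        weights = weights + cluster   # fetch one more cluster
        probs = predict(x, weights)   # inference with partial weights
        if probs.max() >= threshold:  # confident: skip remaining fetches
            break
    return int(np.argmax(probs))
```

Easy inputs terminate after a few clusters, which is where the memory-fetch and multiplication savings come from.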
7. Parallel VLSI Architecture for Approximate Computation of Discrete Hadamard Transform.
- Author
-
Mohanty, Basant Kumar
- Subjects
HADAMARD matrices, PARALLEL processing, VERY large scale circuit integration
- Abstract
Conventionally, a decimation-in-time (DIT) data-flow graph (DFG) is used to obtain computing structures for the discrete Hadamard transform (DHT). Fixed-width structures based on this conventional approach, however, offer only marginal bit savings and introduce relatively more truncation error. In this paper, we derive a decimation-in-frequency (DIF) DFG for the DHT which is more convenient for approximation using truncation than the DIT DFG. To achieve higher bit savings with relatively less truncation error, we propose an efficient truncation model comprising pre-truncation ($\delta$) and post-truncation ($\gamma$). We also present a logic-optimized adder-subtractor design customized for the DHT structure. Using the proposed truncation model, we derive three variants of proposed structures for parallel computation of the 2D DHT. The proposed structure-2 for DHT size $N=8$, word size $w=8$, and approximation $(\delta_1=2, \gamma_1=1, \delta_2=2, \gamma_2=1)$ involves nearly 38% less area-delay product (ADP) and 46% less energy per sample (EPS), and calculates outputs with almost the same accuracy as the existing fixed-width structure based on post-truncation. For the same word size and $N=16$, the proposed structure-2 for approximation $(\delta_1=2, \gamma_1=2, \delta_2=2, \gamma_2=2)$ involves nearly 36% less ADP and 44% less EPS than the existing structure and calculates outputs with the same accuracy. We have observed that reconstructed images obtained using the proposed structure-2 are lower in PSNR by 7 dB and 2 dB than those obtained using the existing fixed-width structures for $N=8$ and 16, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
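For reference, the transform that entry 7 builds structures for is multiplication by the Sylvester-ordered Hadamard matrix; the recursive block form below is the standard definition (not the paper's notation), and it is this recursion that yields the DIT and DIF butterfly data-flow graphs:

```latex
H_1 = (1), \qquad
H_{2N} = \begin{pmatrix} H_N & \phantom{-}H_N \\ H_N & -H_N \end{pmatrix}, \qquad
X = H_N\, x .
```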
8. Analog Error-Correcting Codes.
- Author
-
Roth, Ron M.
- Subjects
ERROR-correcting codes, LINEAR codes, FAULT-tolerant computing, RANDOM access memory, POLYGONS, MULTIPLICATION
- Abstract
Coding schemes are presented that provide the ability to locate computational errors above a prescribed threshold while using analog resistive devices for approximate real vector–matrix multiplication. In such devices, the matrix is programmed into the device by setting an array of resistors to have conductances proportional to the respective entries in the matrix. In the coding scheme considered in this work, redundancy columns are appended so that each row in the programmed matrix forms a codeword of a prescribed linear code $\mathcal{C}$ over the real field; the result of the multiplication of any input real row vector by the matrix is then also a codeword of $\mathcal{C}$. While error values within $\pm\delta$ in the entries of the result are tolerable (for some prescribed $\delta > 0$), outlying errors, with values outside the range $\pm\Delta$ (for a prescribed $\Delta \ge \delta$), should be located and corrected. As a design and analysis tool for such a setting, a certain functional is defined for the code $\mathcal{C}$, through which a characterization is obtained for the number of outlying errors that can be handled, as a function of the ratio $\Delta/\delta$. Several code constructions are then presented, primarily for the case of single outlying error handling. For this case, the coding problem is shown to be related to certain extremal problems on convex polygons. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
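A toy version of the error model in entry 8 can be demonstrated with two real-valued parity symbols. The construction below (all-ones and ramp parity rows, plus the detection threshold) is our own illustrative assumption, far simpler than the paper's functional-based analysis, but it shows how a single outlying error can be both detected and located over the reals:

```python
import numpy as np

def detect_single_outlier(y, delta):
    """y: result vector encoded so that, absent errors,
    s1 = sum(y_i) = 0 and s2 = sum(i * y_i) = 0 (toy code).
    Noise within +/-delta keeps the syndromes small; one large
    error e at position k gives s1 ~ e and s2 ~ k * e."""
    n = len(y)
    s1 = float(np.sum(y))
    s2 = float(np.sum(np.arange(n) * y))
    if abs(s1) <= n * delta:        # nothing beyond tolerable noise
        return None
    return int(round(s2 / s1))      # estimated outlier position
```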
9. An Efficient Parallel DA-Based Fixed-Width Design for Approximate Inner-Product Computation.
- Author
-
Mohanty, Basant Kumar and Meher, Pramod Kumar
- Subjects
DIGITAL signal processing, VERY large scale circuit integration, HARDWARE
- Abstract
Parallel distributed arithmetic (PDA)-based structures are widely used for high-speed computation of inner products in digital signal processing (DSP) applications. In this article, we propose novel PDA-based structures built on an efficient truncation model. To achieve higher bit saving with relatively less truncation error, we present a novel approach using approximate look-up tables (LUTs), adder trees (ATs), and a Wallace-like shift-AT (SAT) with truncated operands to obtain hardware-efficient fixed-width PDA-based inner-product structures. Three variants of the proposed structures are derived from the proposed truncation approach. We find that the proposed inner-product structure-1 using an approximate LUT (ALUT) and approximate AT offers nearly 20% higher bit saving and 20% saving in area-delay product (ADP), with relatively less truncation error than the existing structures. The proposed structure-2 using the ALUT, ATs, and the proposed SAT offers nearly 50% higher bit saving and 61% ADP saving with nearly the same accuracy as the existing approximate DA-based structures. The proposed structure-3 offers nearly 60% higher bit saving and calculates outputs with almost the same or marginally less accuracy than the existing structures for higher coefficient word lengths. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. Reasoning and Computation with Flexible Linguistic Rules
- Author
-
Lian, Shiyou
- Published
- 2016
- Full Text
- View/download PDF
11. ProSPECT: Proactive Storage Using Provenance for Efficient Compute and Tiering
- Author
-
Murugan, Muthukumar, Bhattacharya, Suparna, Voigt, Doug, Bharde, Madhumita, and Tom, Ancy
- Published
- 2022
- Full Text
- View/download PDF
12. Stateful adaptive streams with approximate computing and elastic scaling
- Author
-
Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya. CNDS - Xarxes de Computadors i Sistemes Distribuïts, Francisco, João, Coimbra, Miguel E., Neto, Pedro F., Freitag, Fèlix, and Veiga, Luis
- Abstract
The model of approximate computing can be used to increase performance or optimize resource usage in stream and graph processing. It can be used to satisfy performance requirements (e.g., throughput, lag) in stream processing by reducing the effort that applications need to process datasets. There are currently multiple stream processing platforms, and most of them do not natively support approximate results. A recent one, Stateful Functions, is an API that uses Flink to enable developers to easily build stream and graph processing applications. It also retains Flink's features such as stateful computations, fault tolerance, scalability, control events, and its graph processing library Gelly. Herein we present Approxate, an extension of this platform to support approximate results. It also supports more efficient stream and graph processing by allocating available resources adaptively, driven by user-defined requirements on throughput, lag, and latency. This extension enables flexibility in computational trade-offs, such as trading accuracy for performance: the user can choose which metrics should be guaranteed at the cost of others, and/or the accuracy. Approxate incorporates approximate computing (using load shedding) with adaptive accuracy and resource management in a state-of-the-art stream processing platform, which is not targeted in other relevant related work. It does not require significant modifications to application code, and it minimizes imbalance in data source representation when dropping events.
- Published
- 2023
13. GPGPU Linear Complexity t-SNE Optimization.
- Author
-
Pezzotti, Nicola, Thijssen, Julian, Mordvintsev, Alexander, Hollt, Thomas, Van Lew, Baldur, Lelieveldt, Boudewijn P.F., Eisemann, Elmar, and Vilanova, Anna
- Subjects
COMPUTATIONAL complexity, MAGNITUDE (Mathematics), VISUAL analytics, DATA analysis, KERNEL (Mathematics), C++
- Abstract
In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
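The splatting idea from entry 13 replaces the quadratic repulsive-force computation with one shared field. A rough numpy sketch of the principle (the grid size and the one-pixel "kernel" are simplifications; the paper splats a precomputed kernel texture per point on the GPU):

```python
import numpy as np

def repulsive_forces(points, grid=256):
    """Approximate t-SNE-style repulsion via a splatted density field."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = ((points - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    density = np.zeros((grid, grid))
    np.add.at(density, (idx[:, 1], idx[:, 0]), 1.0)  # splat each point
    gy, gx = np.gradient(density)                    # field gradient
    # each point is pushed down the density gradient, i.e. repelled from
    # crowded regions (one lookup per point instead of N - 1 interactions)
    return np.stack([-gx[idx[:, 1], idx[:, 0]],
                     -gy[idx[:, 1], idx[:, 0]]], axis=1)
```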
14. Learning Recursive Probability Trees from Data
- Author
-
Cano, Andrés, Gómez-Olmedo, Manuel, Moral, Serafín, Pérez-Ariza, Cora Beatriz, Salmerón, Antonio, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Bielza, Concha, editor, Salmerón, Antonio, editor, Alonso-Betanzos, Amparo, editor, Hidalgo, J. Ignacio, editor, Martínez, Luis, editor, Troncoso, Alicia, editor, Corchado, Emilio, editor, and Corchado, Juan M., editor
- Published
- 2013
- Full Text
- View/download PDF
15. Approximate Lazy Evaluation of Influence Diagrams
- Author
-
Cabañas, Rafael, Cano, Andrés, Gómez-Olmedo, Manuel, Madsen, Anders L., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Bielza, Concha, editor, Salmerón, Antonio, editor, Alonso-Betanzos, Amparo, editor, Hidalgo, J. Ignacio, editor, Martínez, Luis, editor, Troncoso, Alicia, editor, Corchado, Emilio, editor, and Corchado, Juan M., editor
- Published
- 2013
- Full Text
- View/download PDF
16. Approximate Computation
- Author
-
Sakr, Sherif, editor and Zomaya, Albert Y., editor
- Published
- 2019
- Full Text
- View/download PDF
17. Recursive Probability Trees for Bayesian Networks
- Author
-
Cano, Andrés, Gómez-Olmedo, Manuel, Moral, Serafín, Pérez-Ariza, Cora B., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Meseguer, Pedro, editor, Mandow, Lawrence, editor, and Gasca, Rafael M., editor
- Published
- 2010
- Full Text
- View/download PDF
18. An Approximate Computation of the Dominant Region Diagram for the Real-Time Analysis of Group Behaviors
- Author
-
Nakanishi, Ryota, Maeno, Junya, Murakami, Kazuhito, Naruse, Tadashi, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Baltes, Jacky, editor, Lagoudakis, Michail G., editor, Naruse, Tadashi, editor, and Ghidary, Saeed Shiry, editor
- Published
- 2010
- Full Text
- View/download PDF
19. A rough set approach for approximating differential dependencies.
- Author
-
Tran, Anh Duy, Arch-int, Somjit, and Arch-int, Ngamnij
- Subjects
ROUGH sets, APPROXIMATION algorithms, MEASUREMENT errors, DECISION making, PROBLEM solving
- Abstract
Highlights:
• A differential-relation-based rough set model on relational databases is proposed.
• Using this model, the measures for differential dependencies (DDs) are expressed.
• An efficient method to compute the approximate error measure of $g_3$ for DDs is developed.
• The rough sets on the differential decision systems (DDSs) are introduced.
• A bridge between DDs in databases and attribute dependencies in DDSs is formed.
Data dependencies in databases and attribute dependencies in decision systems are important when addressing problems concerning data quality and attribute reduction, in which measures play a significant role in approximating these dependencies to achieve better adaptation to uncertain data. This paper proposes a differential-relation-based rough set model from the perspective of relational databases to express the dependency degree, error measures, confidence, information granulation and differential class distance for differential dependencies (DDs), and the relationships among them, in a unified framework. Moreover, the error measure $g_3$ has been widely studied and applied for data dependencies. However, the computation of $g_3$ for DDs is NP-complete. Therefore, based on the proposed rough set, we introduce a new method that can compute the approximate error measure $\tilde{g}_3$ of $g_3$ in polynomial time. This study demonstrates that our approach can provide a substantially better approximation, that is, an approximation closer to the optimal solution $g_3$, compared to the existing greedy method. We also introduce the differential-relation-based rough set from the perspective of information systems and make a connection to the rough sets induced by non-equivalence relations. The two views of the differential-relation-based rough sets form an essential bridge between the DDs in databases and attribute dependencies in differential decision systems (DDSs) that allows sharing measures for approximating the dependencies. These results are meaningful for approximate computations, the development of algorithms for attribute reduction in decision systems and the discovery of approximate differential dependencies (ADDs) in databases. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
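For orientation, the measure $g_3$ referenced in entry 19 is, in the dependency literature it comes from, the minimum fraction of tuples that must be removed from the relation $r$ for the dependency $\sigma$ to hold exactly; assuming that standard definition is the one carried over to DDs:

```latex
g_3(\sigma, r) \;=\; \frac{|r| - \max\{\, |s| : s \subseteq r,\ s \models \sigma \,\}}{|r|} .
```

Computing the maximum satisfying subrelation exactly is what makes $g_3$ NP-complete for DDs, hence the paper's polynomial-time approximation $\tilde{g}_3$.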
20. Cost-Constrained QoS Optimization for Approximate Computation Real-Time Tasks in Heterogeneous MPSoCs.
- Author
-
Wei, Tongquan, Zhou, Junlong, Cao, Kun, Cong, Peijin, Chen, Mingsong, Zhang, Gongxuan, Hu, Xiaobo Sharon, and Yan, Jianming
- Subjects
INTERNET of things, REAL-time computing, MULTIPROCESSORS, GRID energy storage, QUALITY of service
- Abstract
Internet of Things devices, such as video-based detectors or road side units are being deployed in emerging applications like sustainable and intelligent transportation systems. Oftentimes, stringent operation and energy cost constraints are exerted on this type of applications, necessitating a hybrid supply of renewable and grid energy. The key issue of a cost-constrained hybrid of renewable and grid power is its uncertainty in energy availability. The characteristic of approximate computation that accepts an approximate result when energy is limited and executes more computations yielding better results if more energy is available, can be exploited to intelligently handle the uncertainty. In this paper, we first propose an energy-adaptive task allocation scheme that optimally assigns real-time approximate-computation tasks to individual processors and subsequently enables a matching of the cost-constrained hybrid supply of energy with the energy demand of the resultant task schedule. We then present a quality of service (QoS)-driven task scheduling scheme that determines the optional execution cycles of tasks on individual processors for optimization of system QoS. A dynamic task scheduling scheme is also designed to adapt at runtime the task execution to the varying amount of the available energy. Simulation results show that our schemes can reduce system energy consumption by up to 29% and improve system QoS by up to 108% as compared to benchmarking algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Exploiting Approximate Feature Extraction via Genetic Programming for Hardware Acceleration in a Heterogeneous Microprocessor.
- Author
-
Jia, Hongyang and Verma, Naveen
- Subjects
MICROPROCESSORS, FEATURE extraction, GENETIC programming
- Abstract
This paper presents a heterogeneous microprocessor for low-energy sensor-inference applications. Hardware acceleration has been shown to enable substantial energy-efficiency and throughput gains, but raises significant challenges where programmable computations are required, as in the case of feature extraction. To overcome this, a programmable feature-extraction accelerator (FEA) is presented that exploits genetic programming for automatic program synthesis. This leads to approximate, but highly structured, computations, enabling: 1) a high degree of specialization; 2) systematic mapping of programs to the accelerator; and 3) energy scalability via user-controllable approximation knobs. A microprocessor integrating a CPU with feature-extraction and classification accelerators is prototyped in 130-nm CMOS. Two medical-sensor applications (electroencephalogram-based seizure detection and electrocardiogram-based arrhythmia detection) demonstrate 325$\times$ and 156$\times$ energy reduction, respectively, for programmable feature extraction implemented on the accelerator versus a CPU-only architecture, and 7.6$\times$ and 6.5$\times$ energy reduction, respectively, versus a CPU-with-coprocessor architecture. Furthermore, 20$\times$ and 9$\times$ energy scalability, respectively, is demonstrated via the approximation knobs. The energy efficiency of the programmable FEA is 220 GOPS/W, near that of fixed-function accelerators in the same technology and exceeding that of typical programmable accelerators. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Binary Probability Trees for Bayesian Networks Inference
- Author
-
Cano, Andrés, Gómez-Olmedo, Manuel, Moral, Serafín, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Sossai, Claudio, editor, and Chemello, Gaetano, editor
- Published
- 2009
- Full Text
- View/download PDF
23. Approximate Polynomial gcd: Small Degree and Small Height Perturbations
- Author
-
von zur Gathen, Joachim, Shparlinski, Igor E., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Laber, Eduardo Sany, editor, Bornstein, Claudson, editor, Nogueira, Loana Tito, editor, and Faria, Luerbio, editor
- Published
- 2008
- Full Text
- View/download PDF
24. Conclusion
- Author
-
Ying, Mingsheng
- Published
- 2001
- Full Text
- View/download PDF
25. A Low-Error Energy-Efficient Fixed-Width Booth Multiplier With Sign-Digit-Based Conditional Probability Estimation.
- Author
-
Zhang, Ziji and He, Yajuan
- Abstract
Fixed-width multipliers are intensively used in many DSP applications whose accuracy and energy efficiency affect the whole digital system to a large extent. To improve the computation accuracy, a Booth-encoded sign-digit-based conditional probability estimation approach is proposed. A symmetric error distribution is obtained by taking the sign bit of the Booth-encoded multiplier into consideration when applying the conditional probability. In addition, a more generalized mux-based estimation method is formulated for the circuit implementation, which reduces the delay time and power dissipation. Simulation results show that the proposed multiplier exhibits the best computation accuracy with the least energy per operation. It performs even better for those operand lengths that are not multiples of 4. The maximum reduction on energy-delay-error product can reach 14.8% compared with all its contenders among various operand lengths. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
26. Approximated and User Steerable tSNE for Progressive Visual Analytics.
- Author
-
Pezzotti, Nicola, Lelieveldt, Boudewijn P. F., Maaten, Laurens van der, Hollt, Thomas, Eisemann, Elmar, and Vilanova, Anna
- Subjects
VISUAL analytics, DATA modeling, DATA visualization, COMPUTATIONAL complexity, APPROXIMATION algorithms, REAL-time computing
- Abstract
Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a well-suited technique for the visualization of high-dimensional data. tSNE can create meaningful intermediate results but suffers from a slow initialization that constrains its application in Progressive Visual Analytics. We introduce a controllable tSNE approximation (A-tSNE), which trades off speed and accuracy, to enable interactive data exploration. We offer real-time visualization techniques, including a density-based solution and a Magic Lens to inspect the degree of approximation. With this feedback, the user can decide on local refinements and steer the approximation level during the analysis. We demonstrate our technique with several datasets, in a real-world research scenario and for the real-time analysis of high-dimensional streams to illustrate its effectiveness for interactive data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. Approximate Sensory Data Collection: A Survey.
- Author
-
Siyao Cheng, Zhipeng Cai, and Jianzhong Li
- Subjects
INTERNET of things, WIRELESS sensor networks, ACQUISITION of data, BANDWIDTHS, GRID computing
- Abstract
With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs) and related techniques, the amount of sensory data is growing explosively. In some IoT and WSN applications, the size of sensory data already exceeds several petabytes annually, which poses many challenges for data collection, a primary operation in IoT systems and WSNs. Since exact data collection is not affordable for many WSN and IoT systems due to limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: model-based, compressive-sensing-based, and query-driven algorithms. For each category, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecast. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
28. Computing Sugeno integrals.
- Author
-
Chiţescu, Ion, Giurgescu, Mǎdǎlina, and Plǎviţu, Anca
- Subjects
INTEGRALS
- Abstract
The main goal of the present paper is to furnish practical procedures for the approximate computation (with preassigned precision) of the Sugeno integral of a general positive measurable function with respect to a general monotone measure. To this end, we use our two-step methods (the simple two-step method and the topological method using the iterated convergence theorem). The aforementioned methods reduce the general computation to the well-known computation in the finite discrete case, which can be done with computer-aided methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
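The finite discrete case that entry 28's two-step methods reduce everything to is a short formula: sort the function values ascending and take $\max_i \min(f(x_{(i)}), \mu(A_{(i)}))$ with $A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\}$. A minimal sketch, with `measure` any user-supplied monotone set function:

```python
def sugeno_integral(values, measure):
    """Discrete Sugeno integral of f w.r.t. a monotone measure."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    best = 0.0
    for pos, i in enumerate(order):
        upper = frozenset(order[pos:])   # A_(i) = {x_(i), ..., x_(n)}
        best = max(best, min(values[i], measure(upper)))
    return best

# usage: a possibility-style measure on three points
mu = lambda s: max((0.2, 0.5, 1.0)[i] for i in s) if s else 0.0
print(sugeno_integral([0.3, 0.9, 0.6], mu))   # -> 0.6
```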
29. Cost effective Tanh activation function circuits based on fast piecewise linear logic.
- Author
-
Liu, Kezhu, Shi, Weiwei, Huang, Canji, and Zeng, Dongdong
- Subjects
DIGITAL signal processing, COMPLEMENTARY metal oxide semiconductors, LOGIC circuits, NONLINEAR functions, LOGIC
- Abstract
In this paper, an approximate 16-bit non-linear Tanh function circuit design is proposed for activation functions in neural networks and digital signal processing, based on piecewise linear calculation. By sacrificing acceptable computational accuracy, the proposed logic circuit structure further improves the overall performance of the design, with marked reductions in delay, area and power consumption. Linear calculation segments with simple shift-and-add logic architectures are adopted for easy hardware implementation. Additionally, tailored bias tuning and an approximate accumulator design strike a fine balance between complexity and error distribution. The mean absolute error (MAE) of the entire design is 0.0049. Compared with other state-of-the-art works in a 90 nm CMOS process, the area of the circuit is 582 μm² and the energy is 0.0996 mW/GHz at 0.71 GHz. Moreover, the area-delay product is reduced by over 79.7% and the energy-delay product by over 42.4%. To evaluate its suitability in large applications, the accurate Tanh circuit and the proposed Tanh circuit are adopted in two neural networks to classify Fashion-MNIST data and emotional EEG data. When the models converge, the test-set accuracies differ by at most 0.2% and 0.3%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
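To make entry 29's shift-and-add idea concrete: a piecewise-linear tanh whose slopes are powers of two needs no multipliers in hardware. The breakpoints below are our own illustrative segmentation (not the paper's), with error of a few percent:

```python
def pwl_tanh(x: float) -> float:
    """Piecewise-linear tanh; slopes 1, 1/2, 1/4 are shift-friendly."""
    s, a = (1.0, x) if x >= 0 else (-1.0, -x)
    if a < 0.5:
        y = a                   # slope 1
    elif a < 1.0:
        y = 0.5 * a + 0.25      # slope 1/2 (one shift, one add)
    elif a < 2.0:
        y = 0.25 * a + 0.5      # slope 1/4
    else:
        y = 1.0                 # saturation
    return s * y
```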
30. A DIFFERENT LOOK AT THE RICHARDSON EXTRAPOLATION LEADING TO A NEW PROPOSITION.
- Author
-
Soroushian, A.
- Subjects
RICHARDSON extrapolation, TAYLOR'S series, ACCURACY, MATHEMATICAL sequences, EXTRAPOLATION
- Abstract
Richardson extrapolation was proposed about a century ago and is broadly accepted in the scientific community as a means to accelerate the convergence of approximate computations. Many implementations, generalizations and applications of Richardson extrapolation are reported in the literature. In this paper, considering the Taylor series expansion of approximate computations with respect to the algorithmic parameters, Richardson extrapolation is introduced as a tool towards higher accuracy, with the higher orders of accuracy achieved as by-products. On this basis, a new proposition on Richardson extrapolation is stated, explained and demonstrated in an example. [ABSTRACT FROM AUTHOR]
- Published
- 2016
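The textbook identity behind entry 30: if an approximation $A(h)$ has a leading error term of order $p$ in the algorithmic parameter $h$, evaluating at two step sizes and combining cancels that term, leaving accuracy of order $p+1$ as the "by-product" the abstract mentions:

```latex
A = A(h) + c\,h^{p} + O(h^{p+1})
\quad\Longrightarrow\quad
A = \frac{2^{p}\,A(h/2) - A(h)}{2^{p} - 1} + O(h^{p+1}).
```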
31. Algebra of Approximate Computation
- Author
-
Aberer, Karl, Book, Ronald V., editor, and Engeler, Erwin
- Published
- 1995
- Full Text
- View/download PDF
32. The complexity of counting locally maximal satisfying assignments of Boolean CSPs.
- Author
-
Goldberg, Leslie Ann and Jerrum, Mark
- Subjects
COMPUTATIONAL complexity, BOOLEAN functions, CONSTRAINT satisfaction, SET theory, APPROXIMATION theory
- Abstract
We investigate the computational complexity of the problem of counting the locally maximal satisfying assignments of a Constraint Satisfaction Problem (CSP) over the Boolean domain {0,1}. A satisfying assignment is locally maximal if any new assignment which is obtained from it by changing a 0 to a 1 is unsatisfying. For each constraint language Γ, #LocalMaxCSP(Γ) denotes the problem of counting the locally maximal satisfying assignments, given an input CSP with constraints in Γ. We give a complexity dichotomy for the problem of exactly counting the locally maximal satisfying assignments and a complexity trichotomy for the problem of approximately counting them. Relative to the problem #CSP(Γ), which is the problem of counting all satisfying assignments, the locally maximal version can sometimes be easier but never harder. This finding contrasts with the recent discovery that approximately counting locally maximal independent sets in a bipartite graph is harder (under the usual complexity-theoretic assumptions) than counting all independent sets. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
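The counting problem of entry 32 is easy to state executably. A brute-force reference counter (exponential, only for cross-checking tiny instances; the paper's contribution is classifying the problem's complexity, not this algorithm):

```python
from itertools import product

def count_locally_maximal(n, constraints):
    """Count satisfying assignments where every 0 -> 1 flip unsatisfies.
    `constraints` is a list of predicates over a full assignment tuple."""
    sat = lambda a: all(c(a) for c in constraints)
    count = 0
    for a in product((0, 1), repeat=n):
        if not sat(a):
            continue
        flips = (a[:i] + (1,) + a[i + 1:] for i in range(n) if a[i] == 0)
        if all(not sat(b) for b in flips):
            count += 1
    return count

# usage: x0 NAND x1, x1 NAND x2 on three variables -> 2 (010 and 101)
cs = [lambda a, i=i: not (a[i] and a[i + 1]) for i in range(2)]
print(count_locally_maximal(3, cs))
```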
33. Exact and approximate multiplications for signal processing applications.
- Author
-
Patali, Pramod and Kassim, Shahana Thottathikkulam
- Subjects
SIGNAL processing, FINITE impulse response filters, DELAY lines, ELECTRIC lines, NAND gates
- Abstract
Approximate computing is an emerging technique that can be used to develop power-, area- and delay-efficient circuits at the cost of some loss of accuracy. This paper investigates the design of efficient exact multipliers through the use of improved compressors and counters, and thereafter derives approximate multiplier structures to achieve a higher percentage of logic optimization at the expense of the lowest possible error. A NAND-gate-based array structure with reduced complexity is used for generating the partial products for both the exact and approximate multipliers. Two variants of the efficient exact multiplier are proposed: the proposed exact multiplier EM-1 is area- and power-efficient, whereas EM-2 is delay- and energy-efficient. Two variants of 4:2 compressors are introduced for developing approximate multipliers. The proposed multipliers outperform most state-of-the-art multipliers on the important performance metrics. The proposed exact and approximate multiplier designs are employed to develop an ECG-denoising FIR filter for the elimination of power-line interference, and the corresponding performance improvement is illustrated. Cadence RTL Compiler v11.10 with the GPDK 45 nm standard cell library is used for synthesis. All multipliers are simulated by running a test bench in ModelSim. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Approximate computation of post-synaptic spikes reduces bandwidth to synaptic storage in a model of cortex
- Author
-
Stathis, Dimitrios, Yang, Yu, Hemani, Ahmed, and Lansner, Anders
- Abstract
The Bayesian Confidence Propagation Neural Network (BCPNN) is a spiking model of the cortex. The synaptic weights of BCPNN are organized as matrices, which require substantial synaptic storage and a large bandwidth to it. The algorithm requires a dual access pattern to these matrices, both row-wise and column-wise, to access its synaptic weights. In this work, we exploit an algorithmic optimization that eliminates the column-wise accesses. The new computation model approximates the post-synaptic spike computation with the use of a predictor. We have adopted this approximate computational model to improve upon the previously reported ASIC implementation, called eBrainII. We also present an error analysis of the approximation to show that it is negligible. The reduction in storage and bandwidth to the synaptic storage results in a 48% reduction in energy compared to eBrainII. The reported approximation method also applies to other neural network models based on a Hebbian learning rule. Part of proceedings, ISBN: 978-3-9819263-5-4.
- Published
- 2021
- Full Text
- View/download PDF
35. Using Binary Trees for the Evaluation of Influence Diagrams.
- Author
-
Cabañas, Rafael, Gómez-Olmedo, Manuel, and Cano, Andrés
- Subjects
TREE graphs, APPROXIMATION theory, PROBABILITY theory, MATHEMATICAL models, LATTICE theory
- Abstract
This paper proposes the use of binary trees for representing and managing the potentials involved in Influence Diagrams. This kind of tree allows representing context-specific independencies that are finer-grained compared to those encoded using other representations. This enhanced capability can be used to improve the efficiency of the inference algorithms used for Influence Diagrams. Moreover, binary trees allow computing approximate solutions when exact inference is not feasible. In this work we describe how binary trees can be used to perform this approximate evaluation and we compare them with other structures present in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
36. Heuristic Algorithm for Approximation Betweenness Centrality Using Graph Coarsening.
- Author
-
Chernoskutov, Mikhail, Ineichen, Yves, and Bekas, Costas
- Subjects
HEURISTIC algorithms, APPROXIMATION theory, BETWEENNESS relations (Mathematics), GRAPH theory, COMPUTATIONAL biology
- Abstract
Nowadays, graph analytics is widely used in many research fields and applications. One important analytic, which measures the influence of each vertex on flows through the network, is betweenness centrality. It is used to analyze real-world networks such as social networks and networks in computational biology. Unfortunately, this centrality metric is rather expensive to compute, and a number of studies have been devoted to approximating it. Here we focus on approximating the computation of betweenness centrality for dynamically changing graphs. We present a novel approach based on graph coarsening for approximating values of betweenness centrality when new edges are inserted. Unlike other approaches, we reduce the cost (but not the complexity) of the betweenness centrality computation step by working on a coarser graph. Our approach demonstrates more than 60% speedup compared to exact recalculation of the betweenness centrality for dynamically changing graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
37. Approximate hardware architecture for the fractional pixel interpolation filter of the Versatile Video Coding standard
- Author
-
SILVA, Giovane Gomes, DINIZ, Cláudio Machado, ALMEIDA, Sérgio José Melo de, and SOARES, Leonardo Bandeira
- Subjects
video coding, Versatile Video Coding, approximate computation, fractional motion estimation, interpolation filter, COMPUTER SCIENCE [EXACT AND EARTH SCIENCES]
- Abstract
The distribution of video at very high definitions and with immersive technologies is constantly growing. This is only possible because of video encoding and decoding, whose efficient techniques make videos compact enough for devices to store, process and transmit. The most efficient coder available today is HEVC, but research groups are working to make video compression still more efficient, and a new video encoder, Versatile Video Coding (VVC), is being developed. VVC supports 3D video, 360-degree video, virtual reality (VR), augmented reality (AR) and mixed reality (MR). Better video compression comes at a higher computational cost: the most complex part of a video encoder is motion estimation (ME), and most of its time is spent in the fractional part (Fractional Motion Estimation, FME). This project develops an approximate hardware architecture for the fractional interpolation filters of the VVC encoder by applying approximate computing techniques to the filters. The approximate filters reduce the computational complexity of fractional interpolation with an insignificant loss of PSNR and a small increase in bit rate. The proposed architecture operates at a frequency of 522 MHz and processes 2560x1600-pixel video at 30 fps with an average compression-efficiency degradation of only 0.41% compared to the VTM 10.0.
- Published
- 2020
38. Approximate calculation of Nash equilibria for two-person games.
- Author
-
Nikol'skii, M.
- Abstract
An approximate method for calculating Nash equilibrium points in a two-person game is developed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. GPGPU Linear Complexity t-SNE Optimization
- Author
-
Pezzotti, N. (author), Thijssen, J.G.L. (author), Mordvinstev, Alexander (author), Höllt, T. (author), Van Lew, B. (author), Lelieveldt, B.P.F. (author), Eisemann, E. (author), and Vilanova Bartroli, A. (author)
- Abstract
In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library.
- Published
- 2020
- Full Text
- View/download PDF
40. Throughput Scaling Of Convolution For Error-Tolerant Multimedia Applications.
- Author
-
Anam, Mohammad Ashraful and Andreopoulos, Yiannis
- Abstract
Convolution and cross-correlation are the basis of filtering and pattern or template matching in multimedia signal processing. We propose two throughput scaling options for any one-dimensional convolution kernel in programmable processors by adjusting the imprecision (distortion) of computation. Our approach is based on scalar quantization, followed by two forms of tight packing in floating-point (one of which is proposed in this paper) that allow for concurrent calculation of multiple results. We illustrate how our approach can operate as an optional pre- and post-processing layer for off-the-shelf optimized convolution routines. This is useful for multimedia applications that are tolerant to processing imprecision and for cases where the input signals are inherently noisy (error tolerant multimedia applications). Indicative experimental results with a digital music matching system and an MPEG-7 audio descriptor system demonstrate that the proposed approach offers up to 175% increase in processing throughput against optimized (full-precision) convolution with virtually no effect in the accuracy of the results. Based on marginal statistics of the input data, it is also shown how the throughput and distortion can be adjusted per input block of samples under constraints on the signal-to-noise ratio against the full-precision convolution. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
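The packing trick of entry 40 can be demonstrated exactly in pure Python: two quantized samples share one floating-point word, so a single multiply-accumulate advances two inner products at once. The shift constant below is an illustrative choice; it only has to keep both partial products inside the 53-bit mantissa and the low product below half the shift:

```python
def packed_dot(w, x1, x2, shift=1 << 20):
    """Compute (w . x1, w . x2) with one multiply per coefficient."""
    acc = 0.0
    for wi, a, b in zip(w, x1, x2):
        acc += wi * (a * shift + b)   # one multiply, two products
    hi = round(acc / shift)           # recover w . x1
    lo = acc - hi * shift             # recover w . x2
    return hi, lo

# usage with small quantized integers
print(packed_dot([3, -2, 5], [10, 4, -7], [1, 2, 3]))  # -> (-13, 14)
```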
41. Approximate GCD of several univariate polynomials with small degree perturbations
- Author
-
Elkadi, Mohamed, Galligo, André, and Ba, Thang Luu
- Subjects
UNIVARIATE analysis, PERTURBATION theory, ALGORITHMS, POLYNOMIALS, SYMBOLIC computation, MULTIVARIATE analysis
- Abstract
We consider the following computational problem: given a family of generic univariate polynomials $f_1, \dots, f_s$, construct an algorithm to find polynomial perturbations with "small" degrees such that the greatest common divisor of the family of perturbed polynomials has a "large" degree. In this paper, we propose an algorithm which solves this problem in polynomial time under a generic condition generalizing the normal degree sequence for the case $s = 2$.
- Published
- 2012
- Full Text
- View/download PDF
42. Approximate polynomial gcd: Small degree and small height perturbations
- Author
-
von zur Gathen, Joachim, Mignotte, Maurice, and Shparlinski, Igor E.
- Subjects
POLYNOMIALS, APPROXIMATION theory, PERTURBATION theory, COMPUTATIONAL complexity, TOPOLOGICAL degree, MATHEMATICAL sequences
- Abstract
We consider the following computational problem: we are given two coprime univariate polynomials $f$ and $g$ over a ring $R$ and want to find whether after a small perturbation we can achieve a large gcd. We solve this problem in polynomial time for two notions of "large" (and "small"): large degree (when $R$ is an arbitrary field, in the generic case when $f$ and $g$ have a so-called normal degree sequence), and large height (when $R = \mathbb{Z}$). Our work adds to the existing notions of "approximate gcd".
- Published
- 2010
- Full Text
- View/download PDF
43. CENTRALITY ESTIMATION IN LARGE NETWORKS.
- Author
-
BRANDES, ULRIK and PICH, CHRISTIAN
- Subjects
COMPUTER networks, DIGITAL communications, ELECTRONIC data processing, CENTRALITY, OPERATIONS research
- Abstract
Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
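Entry 43's estimation strategy, running SSSP from a sample of sources instead of all n of them, is easy to sketch for closeness centrality. A minimal version with uniform source selection (the paper compares several selection strategies, and the connected-graph assumption is ours):

```python
import random
import networkx as nx

def estimate_closeness(G, k=50):
    """Closeness estimates from k sampled SSSP runs (Eppstein/Wang-style
    scaling: each node's sampled distance sum is scaled by n/k)."""
    n = G.number_of_nodes()
    dist_sum = dict.fromkeys(G.nodes, 0.0)
    for s in random.sample(list(G.nodes), k):
        for v, d in nx.single_source_shortest_path_length(G, s).items():
            dist_sum[v] += d
    return {v: (n - 1) / (n / k * t) if t else 0.0
            for v, t in dist_sum.items()}
```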
44. Computing the sign or the value of the determinant of an integer matrix, a complexity survey
- Author
-
Kaltofen, Erich and Villard, Gilles
- Subjects
MATRICES (Mathematics), ALGORITHMS, NUMERICAL analysis, ARITHMETIC
- Abstract
Computation of the sign of the determinant of a matrix and the determinant itself is a challenge for both numerical and exact methods. We survey the complexity of existing methods to solve these problems when the input is an $n \times n$ matrix $A$ with integer entries. We study the bit-complexities of the algorithms asymptotically in $n$ and the norm of $A$. Existing approaches rely on numerical approximate computations, on exact computations, or on both types of arithmetic in combination.
- Published
- 2004
- Full Text
- View/download PDF
45. GPGPU Linear Complexity t-SNE Optimization
- Author
-
Boudewijn P. F. Lelieveldt, Julian Thijssen, Elmar Eisemann, Thomas Höllt, Anna Vilanova, Alexander Mordvintsev, Nicola Pezzotti, Baldur van Lew, Algorithms, Geometry and Applications, Visualization, and EAISI Health
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Linear programming, Computational complexity theory, Computer Science - Artificial Intelligence, Computer science, Computation, Graphics hardware, Machine Learning (stat.ML), Progressive Visual Analytics, High Dimensional Data, Machine Learning (cs.LG), Kernel (linear algebra), Data visualization, Statistics - Machine Learning, Dimensionality Reduction, business.industry, GPGPU, Approximation algorithm, Computer Graphics and Computer-Aided Design, Artificial Intelligence (cs.AI), Data point, Approximate Computation, Signal Processing, Computer Vision and Pattern Recognition, General-purpose computing on graphics processing units, business, Algorithm, Software
- Abstract
In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library.
- Published
- 2020
46. Approximate inference in Bayesian networks using binary probability trees
- Author
-
Cano, Andrés, Gómez-Olmedo, Manuel, and Moral, Serafín
- Subjects
APPROXIMATION theory, BAYESIAN analysis, BINARY number system, PROBABILITY theory, TREE graphs, MATHEMATICAL variables, POTENTIAL theory (Mathematics), ALGORITHMS
- Abstract
The present paper introduces a new kind of representation for the potentials in a Bayesian network: Binary Probability Trees. They enable the representation of context-specific independences in more detail than probability trees. This enhanced capability leads to more efficient inference algorithms for some types of Bayesian networks. This paper explains the procedure for building a binary probability tree from a given potential, which is similar to the one employed for building standard probability trees. It also offers a way of pruning a binary tree in order to reduce its size. This allows us to obtain exact or approximate results in inference depending on an input threshold. This paper also provides detailed algorithms for performing the basic operations on potentials (restriction, combination and marginalization) directly on binary trees. Finally, some experiments are described where binary trees are used with the variable elimination algorithm to compare the performance with that obtained for standard probability trees. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
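The threshold-pruning step that entry 46 uses to trade exactness for size can be sketched on a toy tree type. The collapse criterion below (two leaf siblings whose probabilities differ by at most `eps` become one averaged leaf) is a minimal reading of the idea, not necessarily the paper's exact rule:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    p: float = 0.0               # probability stored at a leaf
    lo: Optional["Node"] = None  # child for variable value 0
    hi: Optional["Node"] = None  # child for variable value 1

def prune(t: Node, eps: float) -> Node:
    """Collapse near-equal leaf siblings; smaller tree, bounded error."""
    if t.lo is None:             # leaf: nothing to prune
        return t
    t.lo, t.hi = prune(t.lo, eps), prune(t.hi, eps)
    if (t.lo.lo is None and t.hi.lo is None
            and abs(t.lo.p - t.hi.p) <= eps):
        return Node(p=(t.lo.p + t.hi.p) / 2)
    return t
```

With eps = 0 the tree stays exact; raising eps shrinks the tree and coarsens the inference, which is the approximation dial the abstract describes.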
47. Combining Restoring Array and Logarithmic Dividers into an Approximate Hybrid Design
- Author
-
Fabrizio Lombardi, Tao Xu, Chenghua Wang, Paolo Montuschi, Weiqiang Liu, and Jing Li
- Subjects
Adder, Logarithm, Computer science, 020208 electrical & electronic engineering, Image processing, computer arithmetic, approximate computation, division, 02 engineering and technology, Division (mathematics), Electronic mail, 020202 computer hardware & architecture, 0202 electrical engineering, electronic engineering, information engineering, Figure of merit, Algorithm, Image restoration, Quotient
- Abstract
This paper proposes a new design of an approximate hybrid divider (AXHD), which combines the restoring array and the logarithmic dividers to achieve an excellent tradeoff between accuracy and hardware performance. Exact restoring divider cells (EXDCrs) are used to generate the MSBs of the quotient for attaining a high accuracy; the other quotient digits are processed by a logarithmic divider as inexact scheme to improve figures of merit such as power consumption, area and delay. The proposed AXHD is evaluated and analyzed using error and hardware metrics. The proposed design is also compared with the exact restoring divider (EXDr) and previous approximate restoring dividers (AXDrs). The results show that the proposed design achieves very good performance in terms of accuracy and hardware; case studies for image processing also show the validity of the proposed designs.
- Published
- 2018
- Full Text
- View/download PDF
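The logarithmic half of entry 47's hybrid divider is typically a Mitchell-style design; a sketch of that approximation for positive integers (the exact restoring cells that produce the MSBs are omitted here):

```python
import math

def mitchell_log2(x: int) -> float:
    # Mitchell's linear approximation: log2(x) ~ k + (x / 2**k - 1),
    # where k is the index of the leading one bit (x > 0).
    k = x.bit_length() - 1
    return k + x / (1 << k) - 1.0

def approx_divide(a: int, b: int) -> float:
    """q ~ antilog(log a - log b), both linearly approximated."""
    d = mitchell_log2(a) - mitchell_log2(b)
    k = math.floor(d)
    return (1.0 + (d - k)) * 2.0 ** k   # linear antilog of d

print(approx_divide(200, 9), 200 / 9)   # ~23.0 vs ~22.2
```

In hardware this costs only a leading-one detector, a subtraction and a shift, which is why it is attractive for the low-order quotient bits where accuracy matters less.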
48. Approximated and user steerable tSNE for progressive visual analytics
- Author
-
Elmar Eisemann, Thomas Höllt, Anna Vilanova, Laurens van der Maaten, Nicola Pezzotti, Boudewijn P. F. Lelieveldt, Medical Image Analysis, and Visualization
- Subjects
0301 basic medicine, FOS: Computer and information sciences, Visual analytics, Computer science, High dimensional data, media_common.quotation_subject, Computer Vision and Pattern Recognition (cs.CV), Approximate computation, Computer Science - Computer Vision and Pattern Recognition, 02 engineering and technology, Machine learning, computer.software_genre, Machine Learning (cs.LG), 03 medical and health sciences, Text mining, Data visualization, Interactive visual analysis, Progressive visual analytics, 0202 electrical engineering, electronic engineering, information engineering, media_common, Creative visualization, business.industry, Dimensionality reduction, Approximation algorithm, 020207 software engineering, Computer Graphics and Computer-Aided Design, Visualization, Computer Science - Learning, 030104 developmental biology, Analytics, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Data mining, business, computer, Software
- Abstract
Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a well-suited technique for the visualization of high-dimensional data. tSNE can create meaningful intermediate results but suffers from a slow initialization that constrains its application in Progressive Visual Analytics. We introduce a controllable tSNE approximation (A-tSNE), which trades off speed and accuracy, to enable interactive data exploration. We offer real-time visualization techniques, including a density-based solution and a Magic Lens to inspect the degree of approximation. With this feedback, the user can decide on local refinements and steer the approximation level during the analysis. We demonstrate our technique with several datasets, in a real-world research scenario and for the real-time analysis of high-dimensional streams to illustrate its effectiveness for interactive data analysis.
- Published
- 2017
49. Approximate calculation of Nash equilibria for two-person games
- Author
-
Nikol’skii, M. S.
- Published
- 2014
- Full Text
- View/download PDF
50. Approximate Computations
- Author
-
Bronshtein, I. N. and Semendyayev, K. A.
- Published
- 1973
- Full Text
- View/download PDF