Author search for "Nia, Vahid Partovi": 89 results


Search Results

1. OAC: Output-adaptive Calibration for Accurate Post-training Quantization

2. AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs

3. Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers

4. Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models

5. Mathematical Challenges in Deep Learning

6. Scaling Deep Networks with the Mesh Adaptive Direct Search algorithm

7. On the Convergence of Stochastic Gradient Descent in Low-precision Number Formats

8. EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

9. Training Integer-Only Deep Recurrent Neural Networks

10. KronA: Parameter Efficient Tuning with Kronecker Adapter

11. SeKron: A Decomposition Method Supporting Many Factorization Structures

12. Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation

13. DenseShift: Towards Accurate and Efficient Low-Bit Power-of-Two Quantization

14. Is Integer Arithmetic Enough for Deep Learning Training?

15. Rethinking Pareto Frontier for Performance Evaluation of Deep Neural Networks

16. Demystifying and Generalizing BinaryConnect

17. Kronecker Decomposition for GPT Compression

18. Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition

19. iRNN: Integer-only Recurrent Neural Network

20. KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation

21. $S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks

22. A Twin Neural Model for Uplift

23. Tensor train decompositions on recurrent networks

24. A Causal Direction Test for Heterogeneous Populations

25. Batch Normalization in Quantized Networks

26. Importance of Data Loading Pipeline in Training Deep Neural Networks

29. Qini-based Uplift Regression

30. Random Bias Initialization Improves Quantized Training

31. Adaptive Binary-Ternary Quantization

32. How Does Batch Normalization Help Binary Training?

33. Differentiable Mask for Pruning Convolutional and Recurrent Networks

34. Parallel Coordinate Order for High-Dimensional Data

35. Active Learning for High-Dimensional Binary Features

36. Uplift Regression: The R Package tools4uplift

37. Activation Adaptation in Neural Networks

38. Foothill: A Quasiconvex Regularization for Edge Computing of Deep Neural Networks

39. Regularized Binary Network Training

40. Causal Inference and Mechanism Clustering of A Mixture of Additive Noise Models

41. A Convergence Diagnostic for Bayesian Clustering

42. Multiomics modeling of the immunome, transcriptome, microbiome, proteome and metabolome adaptations during human pregnancy

46. DenseShift: Towards Accurate and Transferable Low-Bit Shift Network
