Search

Your search for "weight quantization" returned 74 results.

Search Constraints

Descriptor: "weight quantization"

Search Results

1. Study of Weight Quantization Associations over a Weight Range for Application in Memristor Devices.

2. Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization.

3. Enhancing in-situ updates of quantized memristor neural networks: a Siamese network learning approach.

4. Analysis of Computational Costs in Classification Datasets Using Neural Network Quantization Techniques.

7. High-Performance and Lightweight AI Model for Robot Vacuum Cleaners with Low Bitwidth Strong Non-Uniform Quantization

8. Towards Indoor Suctionable Object Classification and Recycling: Developing a Lightweight AI Model for Robot Vacuum Cleaners.

10. Efficient and Compact Representations of Deep Neural Networks via Entropy Coding.

11. Unified Scaling-Based Pure-Integer Quantization for Low-Power Accelerator of Complex CNNs.

12. Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification

13. Deep neural networks compression: A comparative survey and choice recommendations.

16. TP-MobNet: A Two-pass Mobile Network for Low-complexity Classification of Acoustic Scene.

17. Dynamic Rate Neural Acceleration Using Multiprocessing Mode Support.

18. Intrinsic variation effect in memristive neural network with weight quantization.

19. Effect of Program Error in Memristive Neural Network With Weight Quantization.

20. Towards Convolutional Neural Network Acceleration and Compression Based on Simon k-Means.

21. Environment-aware knowledge distillation for improved resource-constrained edge speech recognition.

23. DP-Nets: Dynamic programming assisted quantization schemes for DNN compression and acceleration.

24. Quantized rewiring: hardware-aware training of sparse deep neural networks.

25. Quantized Weight Transfer Method Using Spike-Timing-Dependent Plasticity for Hardware Spiking Neural Network.

26. Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization

27. A CNN channel pruning low-bit framework using weight quantization with sparse group lasso regularization.

28. Exponential Discretization of Weights of Neural Network Connections in Pre-Trained Neural Network. Part II: Correlation Maximization.

29. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning.

30. Deep Neural Network Compression by In-Parallel Pruning-Quantization.

31. Optimized Near-Zero Quantization Method for Flexible Memristor Based Neural Network.

33. Retrain-Less Weight Quantization for Multiplier-Less Convolutional Neural Networks.

34. Exponential Discretization of Weights of Neural Network Connections in Pre-Trained Neural Networks.

35. STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition.

37. Implementation of Convolutional Neural Networks in Memristor Crossbar Arrays with Binary Activation and Weight Quantization.

38. A TensorFlow Extension Framework for Optimized Generation of Hardware CNN Inference Engines.

40. Optimized programming algorithms for multilevel RRAM in hardware neural networks.

42. Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding.

50. 3-bit multilevel operation with accurate programming scheme in TiOx/Al2O3 memristor crossbar array for quantized neuromorphic system.
