Search

Your search for the keyword "weight quantization" returned 37 results.

Search Constraints

Descriptor: "weight quantization"
Search Limiters: Available in Library Collection

Search Results

1. Study of Weight Quantization Associations over a Weight Range for Application in Memristor Devices.

2. Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization

3. Study of Weight Quantization Associations over a Weight Range for Application in Memristor Devices

4. High-Performance and Lightweight AI Model for Robot Vacuum Cleaners with Low Bitwidth Strong Non-Uniform Quantization

5. Towards Indoor Suctionable Object Classification and Recycling: Developing a Lightweight AI Model for Robot Vacuum Cleaners.

6. High-Performance and Lightweight AI Model for Robot Vacuum Cleaners with Low Bitwidth Strong Non-Uniform Quantization.

7. Efficient and Compact Representations of Deep Neural Networks via Entropy Coding

8. Unified Scaling-Based Pure-Integer Quantization for Low-Power Accelerator of Complex CNNs.

9. Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification

10. Towards Indoor Suctionable Object Classification and Recycling: Developing a Lightweight AI Model for Robot Vacuum Cleaners

11. Effect of Program Error in Memristive Neural Network With Weight Quantization.

12. Towards Convolutional Neural Network Acceleration and Compression Based on Simon k-Means.

13. Environment-aware knowledge distillation for improved resource-constrained edge speech recognition.

14. Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification.

15. DP-Nets: Dynamic programming assisted quantization schemes for DNN compression and acceleration.

16. Quantized rewiring: hardware-aware training of sparse deep neural networks

17. Quantized Weight Transfer Method Using Spike-Timing-Dependent Plasticity for Hardware Spiking Neural Network.

18. Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization

19. Deep Neural Network Compression by In-Parallel Pruning-Quantization.

21. Optimized Near-Zero Quantization Method for Flexible Memristor Based Neural Network

22. Unified Scaling-Based Pure-Integer Quantization for Low-Power Accelerator of Complex CNNs

23. Retrain-Less Weight Quantization for Multiplier-Less Convolutional Neural Networks.

24. STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition.

25. Quantized Weight Transfer Method Using Spike-Timing-Dependent Plasticity for Hardware Spiking Neural Network

26. A TensorFlow Extension Framework for Optimized Generation of Hardware CNN Inference Engines

27. Optimized programming algorithms for multilevel RRAM in hardware neural networks

28. Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding

29. Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization

30. Optimized Near-Zero Quantization Method for Flexible Memristor Based Neural Network

31. A TensorFlow Extension Framework for Optimized Generation of Hardware CNN Inference Engines

32. Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization

33. Optimized Near-Zero Quantization Method for Flexible Memristor Based Neural Network

34. Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding.

35. A TensorFlow Extension Framework for Optimized Generation of Hardware CNN Inference Engines.

36. Compact ConvNets with Ternary Weights and Binary Activations

37. Parameter quantization effects in Gaussian potential function neural networks
