Your search for "Qin, Minghai" returned 109 results.

Search Constraints

Author: "Qin, Minghai"; Publication Year Range: Last 50 years.

Search Results

1. The Uniqueness of LLaMA3-70B with Per-Channel Quantization: An Empirical Study

2. Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design

3. Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting

4. DISCO: Distributed Inference with Sparse Communications

5. All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

6. Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

7. Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

8. Data Level Lottery Ticket Hypothesis for Vision Transformers

9. Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

10. CHEX: CHannel EXploration for CNN Model Compression

11. Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning

12. Adaptive Read Thresholds for NAND Flash

13. SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

14. Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks

15. Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting

16. MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

17. Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

18. Effective Model Sparsification by Scheduled Grow-and-Prune Methods

19. Computation on Sparse Neural Networks: an Inspiration for Future Hardware

20. A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods

21. Learning in the Frequency Domain

22. Non-Volatile Memory Array Based Quantization- and Noise-Resilient LSTM Neural Networks

23. Noisy Computations during Inference: Harmful or Helpful?

24. SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning

25. You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding

26. Training Recurrent Neural Networks against Noisy Computations during Inference

27. Robustness of Neural Networks against Storage Media Errors

30. Joint Source-Channel Decoding of Polar Codes for Language-Based Source

31. Time-Space Constrained Codes for Phase-Change Memories

37. All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

39. Hardware-efficient stochastic rounding unit design for DNN training

40. Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning

43. Constrained Codes and Signal Processing for Non-Volatile Memories
