Search

Your search for "Qin, Minghai" returned 33 results.

Search Constraints

You searched for: Author: "Qin, Minghai" · Publication Year Range: Last 3 years

Search Results

1. The Uniqueness of LLaMA3-70B Series with Per-Channel Quantization

2. Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design

3. Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting

4. DISCO: Distributed Inference with Sparse Communications

5. All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

6. Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

7. Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

8. Data Level Lottery Ticket Hypothesis for Vision Transformers

9. Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

10. CHEX: CHannel EXploration for CNN Model Compression

11. Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning

12. Adaptive Read Thresholds for NAND Flash

13. SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

14. Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks

15. Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting

16. MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

17. SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning

18. You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding

28. Hardware-efficient stochastic rounding unit design for DNN training

31. Effective Model Sparsification by Scheduled Grow-and-Prune Methods
