
Author search for "Qin, Minghai": 110 results.


Search Results

1. Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design

2. Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting

3. DISCO: Distributed Inference with Sparse Communications

4. All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

5. Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

6. Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

7. Data Level Lottery Ticket Hypothesis for Vision Transformers

8. Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

9. CHEX: CHannel EXploration for CNN Model Compression

10. Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning

11. Adaptive Read Thresholds for NAND Flash

12. SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

13. Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks

14. Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting

15. MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

16. Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

17. Effective Model Sparsification by Scheduled Grow-and-Prune Methods

18. Computation on Sparse Neural Networks: an Inspiration for Future Hardware

19. A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods

20. Learning in the Frequency Domain

21. Non-Volatile Memory Array Based Quantization- and Noise-Resilient LSTM Neural Networks

22. Noisy Computations during Inference: Harmful or Helpful?

23. SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning

24. You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding

25. Training Recurrent Neural Networks against Noisy Computations during Inference

26. Robustness of Neural Networks against Storage Media Errors

29. Joint Source-Channel Decoding of Polar Codes for Language-Based Source

31. Time-Space Constrained Codes for Phase-Change Memories

38. Hardware-efficient stochastic rounding unit design for DNN training

42. Constrained Codes and Signal Processing for Non-Volatile Memories
