Search

Your search for author "Liu, Zirui" returned 1,266 results.


Search Results

1. Confident or Seek Stronger: Exploring Uncertainty-Based On-device LLM Routing From Benchmarking to Generalization

2. Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding

3. LiNo: Advancing Recursive Residual Decomposition of Linear and Nonlinear Patterns for Robust Time Series Forecasting

4. Weighted Diversified Sampling for Efficient Data-Driven Single-Cell Gene-Gene Interaction Discovery

5. Gradient Rewiring for Editable Graph Neural Network Training

6. Taylor Unswift: Secured Weight Release for Large Language Models via Taylor Expansion

7. Robust Network Learning via Inverse Scale Variational Sparsification

8. INT-FlashAttention: Enabling Flash Attention for INT8 Quantization

9. From Commands to Prompts: LLM-based Semantic File System for AIOS

10. Assessing and Enhancing Large Language Models in Rare Disease Question-answering

11. Research on Tibetan Tourism Viewpoints information generation system based on LLM

12. KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches

13. Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity

16. Postdigital Videogames Literacies: Thinking With, Through, and Beyond James Gee’s Learning Principles

17. SimiSketch: Efficiently Estimating Similarity of Streaming Multisets

18. CuckooGraph: A Scalable and Space-Time Efficient Data Structure for Large-Scale Dynamic Graphs

19. Language Ranker: A Metric for Quantifying LLM Performance Across High and Low-Resource Languages

20. Survey of Computerized Adaptive Testing: A Machine Learning Perspective

24. LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

25. Learning to Compress Prompt in Natural Language Formats

26. KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache

27. FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency Trade-off in Language Model Inference

28. LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

29. TVE: Learning Meta-attribution for Transferable Vision Explainer

30. Chasing Fairness in Graphs: A GNN Architecture Perspective

31. CAFE: Towards Compact, Adaptive, and Fast Embedding for Large-scale Recommendation Models

45. Experimental Analysis of Large-scale Learnable Vector Storage Compression

46. Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots

47. Label-free single-vesicle based surface enhanced Raman spectroscopy: A robust approach for investigating the biomolecular composition of small extracellular vesicles

50. Efficient GNN Explanation via Learning Removal-based Attribution
