Search

Your search for Author: "Liu Ziming" with Publication Type: Reports returned 71 results (first 50 shown below).

Search Results

1. On the expressiveness and spectral bias of KANs

2. To Shelter or Not To Shelter: Exploring the Influence of Different Modalities in Virtual Reality on Individuals' Tornado Mitigation Behaviors

3. Deep extragalactic HI survey of the COSMOS field with FAST

4. How Diffusion Models Learn to Factorize and Compose

5. Epsilon: Exploring Comprehensive Visual-Semantic Projection for Multi-Label Zero-Shot Learning

6. KAN 2.0: Kolmogorov-Arnold Networks Meet Science

7. WallFacer: Harnessing Multi-dimensional Ring Parallelism for Efficient Long Sequence Model Training

8. HiFAST: An HI Data Calibration and Imaging Pipeline for FAST II. Flux Density Calibration

9. Observation of HI around three satellite galaxies of the M31 with the FAST: Andromeda II, NGC 205, and NGC 185

10. Survival of the Fittest Representation: A Case Study with Modular Addition

11. Using a Convolutional Neural Network and Explainable AI to Diagnose Dementia Based on MRI Scans

12. Exploring Nutritional Impact on Alzheimer's Mortality: An Explainable AI Approach

13. How Do Transformers 'Do' Physics? Investigating the Simple Harmonic Oscillator

14. OptPDE: Discovering Novel Integrable Systems via AI-Human Collaboration

15. KAN: Kolmogorov-Arnold Networks

16. FEASTS Combined with Interferometry (I): Overall Properties of Diffuse HI and Implications for Gas Accretion in Nearby Galaxies

17. DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers

18. HeteGen: Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices

19. GenEFT: Understanding Statics and Dynamics of Model Generalization via Effective Theory

20. A Resource Model For Neural Scaling Law

21. Opening the AI black box: program synthesis via mechanistic interpretability

22. Do Diffusion Models Learn Semantically Meaningful and Efficient Representations?

23. HiFAST: an HI data calibration and imaging pipeline for FAST

24. AutoChunk: Automated Activation Chunk for Memory-Efficient Long Sequence Inference

25. ParsNets: A Parsimonious Orthogonal and Low-Rank Linear Networks for Zero-Shot Learning

26. Generating Interpretable Networks using Hypernetworks

27. Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks

28. Grokking as Compression: A Nonlinear Complexity Perspective

29. A Neural Scaling Law from Lottery Ticket Ensembling

30. GBE-MLZSL: A Group Bi-Enhancement Framework for Multi-Label Zero-Shot Learning

31. Hanayo: Harnessing Wave-like Pipeline Parallelism for Enhanced Large Model Training Efficiency

32. The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks

33. Restart Sampling for Improving Generative Processes

34. Discovering New Interpretable Conservation Laws as Sparse Invariants

35. Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability

36. DRPT: Disentangled and Recurrent Prompt Tuning for Compositional Zero-Shot Learning

37. GenPhys: From Physical Processes to Generative Models

38. The Quantization Model of Neural Scaling

39. PFGM++: Unlocking the Potential of Physics-Inspired Generative Models

40. ATP: Adaptive Tensor Parallelism for Foundation Models

41. FEASTS: IGM cooling triggered by tidal interactions through the diffuse HI phase around NGC 4631

42. Decomposed Soft Prompt Guided Fusion Enhancing for Compositional Zero-Shot Learning

43. ProCC: Progressive Cross-primitive Compatibility for Open-World Compositional Zero-Shot Learning

44. Precision Machine Learning

45. Omnigrok: Grokking Beyond Algorithmic Data

46. Poisson Flow Generative Models

47. EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models

48. Fed-FSNet: Mitigating Non-I.I.D. Federated Learning via Fuzzy Synthesizing Network

49. Second Order Ensemble Langevin Method for Sampling and Inverse Problems

50. Anchor Sampling for Federated Learning with Partial Client Participation
