Search

Your search for Author "Shen, Wei" returned 16,373 results.


Search Results

1. Policy Filtration in RLHF to Fine-Tune LLM for Code Generation

2. Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-tuning

3. LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models

4. SG-GS: Photo-realistic Animatable Human Avatars with Semantically-Guided Gaussian Splatting

5. CHASE: 3D-Consistent Human Avatars with Sparse Inputs via Gaussian Splatting and Contrastive Learning

6. Leveraging Web-Crawled Data for High-Quality Fine-Tuning

7. UniProcessor: A Text-induced Unified Low-level Image Processor

8. PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer

9. See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition

10. Parameter-efficient Fine-tuning in Hyperspherical Space for Open-vocabulary Semantic Segmentation

11. Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs

12. Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey

13. FLoRA: Low-Rank Core Space for N-dimension

14. FecTek: Enhancing Term Weight in Lexicon-Based Retrieval with Feature Context and Term-level Knowledge

15. Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation

16. EndoGSLAM: Real-Time Dense Reconstruction and Tracking in Endoscopic Surgeries using Gaussian Splatting

17. Improving Reinforcement Learning from Human Feedback Using Contrastive Rewards

18. Overcoming Reward Overoptimization via Adversarial Policy Optimization with Lightweight Uncertainty Estimation

19. GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting

20. Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

21. StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback

22. ViTree: Single-path Neural Tree for Step-wise Interpretable Fine-grained Visual Categorization

23. Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback

24. Secrets of RLHF in Large Language Models Part II: Reward Modeling

25. Can Bell Inequalities Be Tested via Scattering Cross-Section at Colliders?

34. China's National Survey on Teaching-Research Officers and Institutions

35. Construction of a Pt-CeOx Interface for the Electrocatalytic Hydrogen Evolution Reaction

36. Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane

37. Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning

38. LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

39. Bridging Synthetic and Real Worlds for Pre-training Scene Text Detectors

40. Segment Any 3D Gaussians

41. Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model

42. All Data on the Table: Novel Dataset and Benchmark for Cross-Modality Scientific Information Extraction

43. Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization

44. Improving Generalization of Alignment with Human Preferences through Group Invariant Learning

45. Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback
