Search

Your search for author "HOOKER, SARA" returned 139 results.


Search Results

1. Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts

2. Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress

3. To Code, or Not To Code? Exploring Impact of Code in Pre-training

4. The Future of Open Human Feedback

5. Open Problems in Technical AI Governance

6. Consent in Crisis: The Rapid Decline of the AI Data Commons

7. On the Limitations of Compute Thresholds as a Governance Strategy

8. How Does Quantization Affect Multilingual LLMs?

9. RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs

10. LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives

11. The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm

12. IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models

13. Critical Learning Periods: Leveraging Early Training Dynamics for Efficient Data Pruning

14. Aya 23: Open Weight Releases to Further Multilingual Progress

15. From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models

16. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs

17. Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model

18. Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning

19. A large-scale audit of dataset licensing and attribution in AI

20. On The Fairness Impacts of Hardware Selection in Machine Learning

21. Generalisable Agents for Neural Network Optimisation

23. The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI

24. Locally Differentially Private Document Generation Using Zero Shot Prompting

25. Which Prompts Make The Difference? Data Prioritization For Efficient Human LLM Evaluation

26. Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models

27. The Grand Illusion: The Myth of Software Portability and Implications for ML Progress

28. Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning

29. When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale

30. Frontier AI Regulation: Managing Emerging Risks to Public Safety

31. Evaluating the Social Impact of Generative AI Systems in Systems and Society

32. Intriguing Properties of Quantization at Scale

33. On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

34. FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling

35. Intriguing Properties of Compression on Multilingual Models

36. The Goldilocks of Pragmatic Understanding: Fine-Tuning Strategy Matters for Implicature Resolution by LLMs

37. Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics

38. Efficient Methods for Natural Language Processing: A Survey

39. Studying the impact of magnitude pruning on contrastive learning methods

40. Robust Distillation for Worst-class Performance

41. When less is more: Simplifying inputs aids neural network understanding

42. The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation

43. A Tale Of Two Long Tails

44. When does loss-based prioritization fail?

45. Randomness In Neural Network Training: Characterizing The Impact of Tooling

46. Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization

47. Characterising Bias in Compressed Models

48. The Hardware Lottery

49. Estimating Example Difficulty Using Variance of Gradients

50. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
