Search

Search Constraints

Author: "Tramèr, Florian"
Topic: computer science - cryptography and security
65 results for "Tramèr, Florian" (first 50 shown below)

Search Results

1. Adversarial Search Engine Optimization for Large Language Models

2. Blind Baselines Beat Membership Inference Attacks for Foundation Models

3. AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents

4. Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI

5. Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition

6. Evaluations of Machine Learning Privacy Defenses are Misleading

7. Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs

8. Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

9. JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models

10. Stealing Part of a Production Language Model

11. Query-Based Adversarial Prompt Generation

12. Universal Jailbreak Backdoors from Poisoned Human Feedback

13. Privacy Side Channels in Machine Learning Systems

14. Backdoor Attacks for In-Context Learning with Language Models

15. Are aligned neural networks adversarially aligned?

16. Evaluating Superhuman Models with Consistency Checks

17. Evading Black-box Classifiers Without Breaking Eggs

18. Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators

19. Poisoning Web-Scale Training Datasets is Practical

20. Tight Auditing of Differentially Private Machine Learning

21. Extracting Training Data from Diffusion Models

22. Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining

23. Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems

24. Red-Teaming the Stable Diffusion Safety Filter

25. SNAP: Efficient Extraction of Private Properties with Poisoning

26. Increasing Confidence in Adversarial Robustness Evaluations

27. (Certified!!) Adversarial Robustness for Free!

28. The Privacy Onion Effect: Memorization is Relative

29. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

30. Membership Inference Attacks From First Principles

31. NeuraCrypt is not private

32. Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them

33. Data Poisoning Won't Save You From Facial Recognition

34. Antipodes of Label Differential Privacy: PATE and ALIBI

35. Extracting Training Data from Large Language Models

36. Differentially Private Learning Needs Better Features (or Much More Data)

37. Is Private Learning Possible with Instance Encoding?

38. Label-Only Membership Inference Attacks

39. On Adaptive Attacks to Adversarial Example Defenses

40. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations

41. Advances and Open Problems in Federated Learning

42. SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning

43. Adversarial Training and Robustness for Multiple Perturbations

44. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

45. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems

46. AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning

47. Physical Adversarial Examples for Object Detectors

48. Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

49. Note on Attacking Object Detectors with Adversarial Stickers

50. Ensemble Adversarial Training: Attacks and Defenses
