Your search for author "Lapuschkin, Sebastian" returned 323 results.

Search Results

1. Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations

2. FADE: Why Bad Descriptions Happen to Good Features

3. A Close Look at Decomposition-based XAI-Methods for Transformer Language Models

4. Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data

5. Mechanistic understanding and validation of large AI models with SemanticLens

6. Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond

7. Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization

8. PINNfluence: Influence Functions for Physics-Informed Neural Networks

9. Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers

10. CoSy: Evaluating Textual Explanations of Neurons

11. A Fresh Look at Sanity Checks for Saliency Maps

12. Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

13. Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression

14. PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

15. DualView: Data Attribution from the Dual Perspective

16. AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers

17. Explaining Predictive Uncertainty by Exposing Second-Order Effects

18. Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

19. Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations

20. Generative Fractional Diffusion Models

21. Human-Centered Evaluation of XAI Methods

22. Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation

23. From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

24. XAI-based Comparison of Input Representations for Audio Event Classification

25. Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

26. Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

27. Explainable AI for Time Series via Virtual Inspection Layers

28. The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

29. Optimizing Explanations by Network Canonization and Hyperparameter Search

30. Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

31. Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

32. Explaining machine learning models for age classification in human gait analysis

33. Explaining automated gender classification of human gait

34. From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

35. Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

36. Causes of Outcome Learning: a causal inference-inspired machine learning approach to disentangling common combinations of potential causes of a health outcome

37. But that's not why: Inference adjustment by interactive prototype revision

38. Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

39. Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

40. Measurably Stronger Explanation Reliability via Model Canonization

41. Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence

42. ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

43. Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

45. Explanation-Guided Training for Cross-Domain Few-Shot Classification

46. Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

48. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

49. Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models

50. Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
