Search

Your search for keyword '"Lapuschkin, Sebastian"' returned 194 results in total.

Search Constraints

You searched for: Author "Lapuschkin, Sebastian"
Publication Year Range: Last 3 years

Search Results

1. Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations

2. FADE: Why Bad Descriptions Happen to Good Features

3. A Close Look at Decomposition-based XAI-Methods for Transformer Language Models

4. Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data

5. Mechanistic understanding and validation of large AI models with SemanticLens

6. Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond

7. Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization

8. PINNfluence: Influence Functions for Physics-Informed Neural Networks

9. Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers

10. CoSy: Evaluating Textual Explanations of Neurons

11. A Fresh Look at Sanity Checks for Saliency Maps

12. Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

13. Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression

14. PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

15. DualView: Data Attribution from the Dual Perspective

16. AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers

17. Explaining Predictive Uncertainty by Exposing Second-Order Effects

18. Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

19. Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations

20. Generative Fractional Diffusion Models

21. Human-Centered Evaluation of XAI Methods

22. Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation

23. From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

24. XAI-based Comparison of Input Representations for Audio Event Classification

25. Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

26. Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

27. Explainable AI for Time Series via Virtual Inspection Layers

28. The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

29. Optimizing Explanations by Network Canonization and Hyperparameter Search

30. Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

31. Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

32. Explaining machine learning models for age classification in human gait analysis

33. Explaining automated gender classification of human gait

34. From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

35. Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

36. Causes of Outcome Learning: a causal inference-inspired machine learning approach to disentangling common combinations of potential causes of a health outcome

37. But that's not why: Inference adjustment by interactive prototype revision

38. Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

44. A Guide to Feature Importance Methods for Scientific Inference

45. Toward Understanding the Disagreement Problem in Neural Network Feature Attribution

46. CountARFactuals – Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests

47. On the Explainability of Financial Robo-Advice Systems
