Home · Changes

Page history
"added Adrien Bibal and Benoît Frénay" authored Dec 12, 2019 by Maximilian Idahl
Showing 1 changed file with 2 additions and 2 deletions
  • home.md +2 -2
home.md
View page @ 1e8b1c04
@@ -19,6 +19,6 @@
 |-----|------|----|
 |The Mythos of Model Interpretability|Lipton|[arxiv](https://arxiv.org/abs/1606.03490)|
 |Towards a Rigorous Science of Interpretable Machine Learning|Doshi-Velez and Kim|[arxiv](https://arxiv.org/abs/1702.08608)|
-|A Survey of Methods for Explaining Black Box Models| Guidotti et al.|[arxiv](https://arxiv.org/abs/1802.01933)|
+|A Survey of Methods for Explaining Black Box Models|Guidotti et al.|[arxiv](https://arxiv.org/abs/1802.01933)|
+|Interpretability of Machine Learning Models and Representations: an Introduction|Adrien Bibal and Benoît Frénay|[pdf](https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2016-141.pdf)|