Commit 5650038b authored by Avishek Anand


## Must-read papers on Interpretability and Explanations.
We release [InterpretMe].
### Survey papers:
1. **Representation Learning on Graphs: Methods and Applications.**
*William L. Hamilton, Rex Ying, Jure Leskovec.* 2017. [paper]
### Journal and Conference papers:
1. **DeepWalk: Online Learning of Social Representations.**
*Bryan Perozzi, Rami Al-Rfou, Steven Skiena.* KDD 2014. [paper] [code]
1. **Towards a rigorous science of interpretable machine learning.**
*Finale Doshi-Velez and Been Kim.* 2017. [paper]
1. **Streaming weak submodularity: Interpreting neural networks on the fly.**
*Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi.* 2017. [paper]
1. **Interpretable Explanations of Black Boxes by Meaningful Perturbation.**
*Ruth C. Fong, Andrea Vedaldi.* CVPR 2017.
1. **Supervised Topic Models for Clinical Interpretability.**
*Michael C. Hughes, Huseyin Melih Elibol, Thomas McCoy, Roy Perlis, Finale Doshi-Velez.* arXiv:1612.01678, 2016.
1. **A Unified Approach to Interpreting Model Predictions.**
*Scott Lundberg, Su-In Lee.* arXiv:1705.07874, 2017.
1. **A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning.**
*Sina Mohseni, Eric D. Ragan.* arXiv:1801.05075, 2018.
1. **Anchors: High-Precision Model-Agnostic Explanations.**
*Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin.* AAAI 2018.
1. **Right for the Right Reasons: Training Differentiable Models by Constraining Their Explanations.**
*Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez.* IJCAI 2017. doi:10.24963/ijcai.2017/371. Also available as arXiv:1703.03717.
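Several entries above, notably Lundberg and Lee's unified approach to interpreting model predictions, build on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution over all feature subsets. As a rough illustration of that underlying idea only, here is a minimal exact computation over a toy value function; the function names and the toy additive model are illustrative and not taken from any of the listed papers, which use far more efficient approximations.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all feature subsets.

    value_fn maps a frozenset of present features to a payoff
    (e.g. the model's prediction using only those features).
    Exponential in len(features), so only suitable for toy examples.
    """
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(rest, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to this coalition
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive "model": feature "a" contributes 2.0, "b" contributes 1.0.
weights = {"a": 2.0, "b": 1.0}
v = lambda s: sum(weights[f] for f in s)
print(shapley_values(v, ["a", "b"]))  # {'a': 2.0, 'b': 1.0}
```

For an additive value function the Shapley value of each feature recovers its individual contribution, and the attributions always sum to the payoff of the full feature set (the efficiency property that SHAP-style methods rely on).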