Commit fb68a024 authored by Avishek Anand

Update reade.md

parent 5650038b
@@ -21,13 +21,17 @@ We release [InterpretMe]
1. **Streaming weak submodularity: Interpreting neural networks on the fly.**
*Ethan R Elenberg, Alexandros G Dimakis, Moran Feldman, and Amin Karbasi*. 2017. [paper](https://arxiv.org/pdf/1703.02647).
1. **Interpretable explanations of black boxes by meaningful perturbation.**
*Ruth C Fong and Andrea Vedaldi*. CVPR 2017. [paper](https://arxiv.org/pdf/1704.03296).
1. **Supervised topic models for clinical interpretability.**
*Michael C Hughes, Huseyin Melih Elibol, Thomas McCoy, Roy Perlis, and Finale Doshi-Velez*. 2016. [paper](https://arxiv.org/pdf/1612.01678).
1. **A unified approach to interpreting model predictions.**
*Scott Lundberg and Su-In Lee*. 2017. [paper](https://arxiv.org/pdf/1705.07874).
1. **A human-grounded evaluation benchmark for local explanations of machine learning.**
*Sina Mohseni and Eric D Ragan*. 2018. [paper](https://arxiv.org/pdf/1801.05075).
1. **Anchors: High-precision model-agnostic explanations.**
*Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin*. AAAI 2018.