Commit 32655383 authored by Avishek Anand

Update README.md

parent cfd7be48
@@ -22,19 +22,19 @@ We release [InterpretMe]
 *Ruth C Fong and Andrea Vedaldi.*.CVPR 2017. [paper]
 1. **Supervised topic models for clinical interpretability.**
-*Michael C Hughes, Huseyin Melih Elibol, Thomas McCoy, Roy Perlis, and Finale Doshi-Velez.*.2016. [paper](https://arxiv.org/pdf/1612.01678)
+*Michael C Hughes, Huseyin Melih Elibol, Thomas McCoy, Roy Perlis, and Finale Doshi-Velez*.2016. [paper](https://arxiv.org/pdf/1612.01678)
 1. **A unified approach to interpreting model predictions.**
-*Scott Lundberg and Su-In Lee.*.2016. [paper](https://arxiv.org/pdf/1705.07874)
+*Scott Lundberg and Su-In Lee*.2016. [paper](https://arxiv.org/pdf/1705.07874)
 1. **A human-grounded evaluation benchmark for local explanations of machine learning.**
-*Sina Mohseni and Eric D Ragan.*.2018. [paper](https://arxiv.org/pdf/1801.05075).
+*Sina Mohseni and Eric D Ragan*.2018. [paper](https://arxiv.org/pdf/1801.05075).
 1. **Anchors: High-precision model-agnostic explanations.**
-*Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.*.AAAI 2018. [paper]
+*Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin*.AAAI 2018. [paper]
 1. **Right for the right reasons: Training differentiable models by constraining their explanations.**
-*Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez.*.IJCAI 2018. [paper](https://doi.org/10.24963/ijcai.2017/371)
+*Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez*.IJCAI 2018. [paper](https://doi.org/10.24963/ijcai.2017/371)
 1. **Sharing Deep Neural Network Models with Interpretation.**
 *Huijun Wu, Chen Wang, Jie Yin, Kai Lu and Liming Zhu*. WWW’18. [paper](https://doi.org/10.24963/ijcai.2017/371)