Commit cfd7be48 authored by Avishek Anand's avatar Avishek Anand

Update README.md

parent 805a1753

We release [InterpretMe]
1. **Right for the right reasons: Training differentiable models by constraining their explanations.**
*Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez*. IJCAI'17. [paper](https://doi.org/10.24963/ijcai.2017/371)
1. **Sharing Deep Neural Network Models with Interpretation.**
*Huijun Wu, Chen Wang, Jie Yin, Kai Lu, and Liming Zhu*. WWW'18. [paper]
1. **TEM: Tree-enhanced Embedding Model for Explainable Recommendation.**
*Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie, and Tat-Seng Chua*. WWW'18. [paper]
1. **Towards Deep Interpretability (MUS-ROVER II): Learning Hierarchical Representations of Tonal Music.**
*Haizi Yu, Lav R. Varshney*. ICLR’17. [paper]
1. **Generating Interpretable Images with Controllable Structure.**
*Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Victor Bapst, Matt Botvinick, and Nando de Freitas*. ICLR'17. [paper]
1. **Supervised topic models for clinical interpretability.**
*Hughes et al.* 2016.
1. **An Effective and Interpretable Method for Document Classification.**
*Ngo Van Linh, Nguyen Kim Anh, Khoat Than, and Chien Nguyen Dang*.
1. **Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks.**
*Anna Potapenko, Artem Popov, and Konstantin Vorontsov*.
1. **Interpretable Explanations of Black Boxes by Meaningful Perturbation.**
*Ruth C. Fong and Andrea Vedaldi*. ICCV'17.