*Huijun Wu, Chen Wang, Jie Yin, Kai Lu and Liming Zhu*. WWW’18. [paper](https://doi.org/10.24963/ijcai.2017/371)
1.**TEM: Tree-enhanced Embedding Model for Explainable Recommendation.**
*Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie and Tat-Seng Chua*. WWW’18. [paper](https://www.comp.nus.edu.sg/~xiangnan/papers/www18-tem.pdf)
1.**Towards Deep Interpretability (MUS-ROVER II): Learning Hierarchical Representations of Tonal Music.**
*Haizi Yu, Lav R. Varshney*. ICLR’17. [paper](https://openreview.net/pdf?id=ryhqQFKgl)
1.**Generating Interpretable Images with Controllable Structure.**
*Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Victor Bapst, Matt Botvinick, Nando de Freitas*. ICLR’17. [paper](http://www.scottreed.info/files/iclr2017.pdf)
1.**Supervised topic models for clinical interpretability.**
*Hughes et al.* 2016. [paper](https://arxiv.org/pdf/1612.01678.pdf)
1.**An Effective and Interpretable Method for Document Classification.**
*Ngo Van Linh, Nguyen Kim Anh, Khoat Than, Chien Nguyen Dang*.
1.**Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks.**
*Anna Potapenko, Artem Popov, and Konstantin Vorontsov*. 2017. [paper](https://arxiv.org/pdf/1711.04154.pdf)
1.**Interpretable Explanations of Black Boxes by Meaningful Perturbation.**
*Ruth C. Fong and Andrea Vedaldi*. ICCV’17. [paper](http://openaccess.thecvf.com/content_ICCV_2017/papers/Fong_Interpretable_Explanations_of_ICCV_2017_paper.pdf)