We collect papers relevant to interpretability through the multi-task learning (MTL) framework here.
## Multi-Task Learning
1. **An Overview of Multi-Task Learning in Deep Neural Networks** *Sebastian Ruder* 2017. [paper](https://arxiv.org/abs/1706.05098) [blog](http://ruder.io/multi-task/)
## Everything else about interpretability
1. **Interpretation of Neural Networks is Fragile** *Amirata Ghorbani, Abubakar Abid, James Zou* 2018. [paper](https://arxiv.org/abs/1710.10547)
## Attention models
1. **An Attentive Survey of Attention Models** *Sneha Chaudhari, Gungor Polatkan, Rohan Ramanath, Varun Mithal* 2019. [paper](https://arxiv.org/abs/1904.02874)
1. **Hierarchical Attention Networks for Document Classification** *Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, Eduard Hovy* NAACL2016. [paper](https://www.aclweb.org/anthology/N16-1174)
## Interpretability based on attention
1. **Attention is not Explanation** *Sarthak Jain, Byron C. Wallace* 2019. [paper](https://arxiv.org/abs/1902.10186)
1. **On the Validity of Self-Attention as Explanation in Transformer Models** *Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Roger Wattenhofer* 2019. [paper](https://arxiv.org/abs/1908.04211)
## Co-attention based Methods
1. **Hierarchical Question-Image Co-Attention for Visual Question Answering** *Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh* NIPS2016. [paper](http://papers.nips.cc/paper/6202-hierarchical-question-image-co-attention-for-visual-question-answering)
1. **Mind Your Neighbours: Image Annotation With Metadata Neighbourhood Graph Co-Attention Networks** *Junjie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu* CVPR2019. [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Mind_Your_Neighbours_Image_Annotation_With_Metadata_Neighbourhood_Graph_Co-Attention_CVPR_2019_paper.pdf)
1. **Multi-Pointer Co-Attention Networks for Recommendation** *Yi Tay, Anh Tuan Luu, Siu Cheung Hui* 2018. [paper](https://dl.acm.org/citation.cfm?id=3220086)
## Interpretation by design
1. **Designing and Interpreting Probes with Control Tasks** *John Hewitt, Percy Liang* EMNLP2019. [paper](https://arxiv.org/abs/1909.03368) [github](https://github.com/john-hewitt/control-tasks) [related git repo](https://github.com/john-hewitt/structural-probes)
## Sentiment analysis with attention models
### Aspect based sentiment classification
1. **Attention-based LSTM for Aspect-level Sentiment Classification** *Yequan Wang, Minlie Huang, Li Zhao, Xiaoyan Zhu* EMNLP2016. [paper](https://aclweb.org/anthology/D16-1058)
1. **Targeted Aspect-Based Sentiment Analysis via Embedding Commonsense Knowledge into an Attentive LSTM** *Yukun Ma, Haiyun Peng, Erik Cambria* AAAI2018. [paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16541)
### Memory networks
1. **Aspect Level Sentiment Classification with Deep Memory Network** *Duyu Tang, Bing Qin, Ting Liu* 2016. [paper](https://arxiv.org/abs/1605.08900)
### Transformer based sentiment analysis
1. **Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers** *Artaches Ambartsoumian, Fred Popowich* WASSA2018. [paper](https://aclweb.org/anthology/W18-6219)
1. **Attentional Encoder Network for Targeted Sentiment Classification** *Youwei Song, Jiahai Wang, Tao Jiang, Zhiyue Liu, Yanghui Rao* 2019. [paper](https://arxiv.org/abs/1902.09314)
## Datasets