|-----|------|-----|----|-------|
|GNNExplainer: Generating Explanations for Graph Neural Networks|Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, Jure Leskovec|?|[link](https://papers.nips.cc/paper/9123-gnnexplainer-generating-explanations-for-graph-neural-networks)|ToDo (AA)|
|A Benchmark for Interpretability Methods in Deep Neural Networks|Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim|?|[link](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks)|[ROAR](a-benchmark-for-interpretability-methods-in-deep-neural-networks)|
|Fooling Neural Network Interpretations via Adversarial Model Manipulation|Juyeon Heo, Sunghwan Joo, Taesup Moon|?|[link](https://papers.nips.cc/paper/8558-fooling-neural-network-interpretations-via-adversarial-model-manipulation)|ToDo (FK & JS)|
|Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning|Wonjae Kim, Yoonho Lee|?|[link](https://papers.nips.cc/paper/8835-learning-dynamics-of-attention-human-prior-for-interpretable-machine-reasoning)|ToDo (AA)|
|Solving Interpretable Kernel Dimensionality Reduction|Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy|?|[link](https://papers.nips.cc/paper/9005-solving-interpretable-kernel-dimensionality-reduction)|`#F00`ToDo|
|This Looks Like That: Deep Learning for Interpretable Image Recognition|Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan K. Su|?|[link](https://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition)|[link](https://docs.google.com/document/d/17zUBN_WtTL89wc-guP1kK4mr6GfwqgX5QqLOl8MKkxs/edit?usp=sharing)|
|Towards Interpretable Reinforcement Learning Using Attention Augmented Agents|Alexander Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo Jimenez Rezende|?|[link](http://papers.neurips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents)|`#F00`ToDo|
|Accurate Layerwise Interpretable Competence Estimation|Vickram Rajendran, William LeVine|?|[link](https://papers.nips.cc/paper/9548-accurate-layerwise-interpretable-competence-estimation)|`#F00`ToDo|
|Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)|Mariya Toneva, Leila Wehbe|?|[link](https://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain)|`#F00`ToDo|
|Towards Automatic Concept-based Explanations|Amirata Ghorbani, James Wexler, James Y. Zou, Been Kim|?|[link](https://papers.nips.cc/paper/9126-towards-automatic-concept-based-explanations)|see [Concept-based Explanations](Concept-based-Explanations)|
|Visualizing and Measuring the Geometry of BERT|Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, Been Kim|?|[link](https://papers.nips.cc/paper/9065-visualizing-and-measuring-the-geometry-of-bert)|`#F00`ToDo|
|Deep Model Transferability from Attribution Maps|Jie Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Mingli Song|?|[link](https://papers.nips.cc/paper/8849-deep-model-transferability-from-attribution-maps)|ToDo (JS)|