|Title|Authors|Track|Link|Summary|
|-----|-------|-----|----|-------|
|GNNExplainer: Generating Explanations for Graph Neural Networks|Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, Jure Leskovec|?|[link](https://papers.nips.cc/paper/9123-gnnexplainer-generating-explanations-for-graph-neural-networks)|ToDo (AA)|
|A Benchmark for Interpretability Methods in Deep Neural Networks|Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim|?|[link](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks)|In Progress (Max)|
|Fooling Neural Network Interpretations via Adversarial Model Manipulation|Juyeon Heo, Sunghwan Joo, Taesup Moon|?|[link](https://papers.nips.cc/paper/8558-fooling-neural-network-interpretations-via-adversarial-model-manipulation)|ToDo (AA)|
|Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning|Wonjae Kim, Yoonho Lee|?|[link](https://papers.nips.cc/paper/8835-learning-dynamics-of-attention-human-prior-for-interpretable-machine-reasoning)|ToDo (AA)|
|Solving Interpretable Kernel Dimensionality Reduction|Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy|?|[link](https://papers.nips.cc/paper/9005-solving-interpretable-kernel-dimensionality-reduction)|ToDo|
|This Looks Like That: Deep Learning for Interpretable Image Recognition|Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan K. Su|?|[link](https://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition)|ToDo (Ghost)|
|CXPlain: Causal Explanations for Model Interpretation under Uncertainty|Patrick Schwab, Walter Karlen|?|[link](https://papers.nips.cc/paper/9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty)|ToDo (Ghost)|
|Towards Interpretable Reinforcement Learning Using Attention Augmented Agents|Alexander Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo Jimenez Rezende|?|[link](https://papers.nips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents)|ToDo|
|Accurate Layerwise Interpretable Competence Estimation|Vickram Rajendran, William LeVine|?|[link](https://papers.nips.cc/paper/9548-accurate-layerwise-interpretable-competence-estimation)|ToDo|
|Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)|Mariya Toneva, Leila Wehbe|?|[link](https://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain)|ToDo|