[Link to Vis. and Interpret. Track](https://nips.cc/Conferences/2019/Schedule?showParentSession=15517)
|Title|Authors|Track|Link|Summary|
|-----|-------|-----|----|-------|
|GNNExplainer: Generating Explanations for Graph Neural Networks|Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, Jure Leskovec|?|[link](https://papers.nips.cc/paper/9123-gnnexplainer-generating-explanations-for-graph-neural-networks)|ToDo (AA)|
|A Benchmark for Interpretability Methods in Deep Neural Networks|Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim|?|[link](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks)|[ROAR](a-benchmark-for-interpretability-methods-in-deep-neural-networks)|
|Fooling Neural Network Interpretations via Adversarial Model Manipulation|Juyeon Heo, Sunghwan Joo, Taesup Moon|?|[link](https://papers.nips.cc/paper/8558-fooling-neural-network-interpretations-via-adversarial-model-manipulation)|ToDo (FK & JS)|
|Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning|Wonjae Kim, Yoonho Lee|?|[link](https://papers.nips.cc/paper/8835-learning-dynamics-of-attention-human-prior-for-interpretable-machine-reasoning)|ToDo (AA)|
|Solving Interpretable Kernel Dimensionality Reduction|Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy|?|[link](https://papers.nips.cc/paper/9005-solving-interpretable-kernel-dimensionality-reduction)|`#F00`ToDo|
|This Looks Like That: Deep Learning for Interpretable Image Recognition|Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan K. Su|?|[link](https://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition)|[link](https://docs.google.com/document/d/17zUBN_WtTL89wc-guP1kK4mr6GfwqgX5QqLOl8MKkxs/edit?usp=sharing)|
|CXPlain: Causal Explanations for Model Interpretation under Uncertainty|Patrick Schwab, Walter Karlen|?|[link](https://papers.nips.cc/paper/9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty)|[link](https://docs.google.com/document/d/17zUBN_WtTL89wc-guP1kK4mr6GfwqgX5QqLOl8MKkxs/edit?usp=sharing)|
|Towards Interpretable Reinforcement Learning Using Attention Augmented Agents|Alexander Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo Jimenez Rezende|?|[link](http://papers.neurips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents)|`#F00`ToDo|
|Accurate Layerwise Interpretable Competence Estimation|Vickram Rajendran, William LeVine|?|[link](https://papers.nips.cc/paper/9548-accurate-layerwise-interpretable-competence-estimation)|`#F00`ToDo|
|Interpreting and Improving Natural-Language Processing (in Machines) with Natural Language-Processing (in the Brain)|Mariya Toneva, Leila Wehbe|?|[link](https://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain)|`#F00`ToDo|
|Towards Automatic Concept-based Explanations|Amirata Ghorbani, James Wexler, James Y. Zou, Been Kim|?|[link](https://papers.nips.cc/paper/9126-towards-automatic-concept-based-explanations)|see [Concept-based Explanations](Concept-based-Explanations)|
|Visualizing and Measuring the Geometry of BERT|Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, Been Kim|?|[link](https://papers.nips.cc/paper/9065-visualizing-and-measuring-the-geometry-of-bert)|see [Visualizing and Measuring the Geometry of BERT](Visualizing-and-Measuring-the-Geometry-of-BERT)|
|Deep Model Transferability from Attribution Maps|Jie Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Mingli Song|?|[link](https://papers.nips.cc/paper/8849-deep-model-transferability-from-attribution-maps)|ToDo (JS)|
|Robust Attribution Regularization|Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha|?|[link](https://papers.nips.cc/paper/9577-robust-attribution-regularization)|`#F00`ToDo|
|Demystifying Black-box Models with Symbolic Metamodels|Ahmed M. Alaa, Mihaela van der Schaar|?|[link](https://papers.nips.cc/paper/9308-demystifying-black-box-models-with-symbolic-metamodels)|`#F00`ToDo|
|Deliberative Explanations: Visualizing Network Insecurities|Pei Wang, Nuno Vasconcelos|?|[link](https://papers.nips.cc/paper/8418-deliberative-explanations-visualizing-network-insecurities)|`#F00`ToDo|
|Grid Saliency for Context Explanations of Semantic Segmentation|Lukas Hoyer, Mauricio Munoz, Prateek Katiyar, Anna Khoreva, Volker Fischer|?|[link](https://papers.nips.cc/paper/8874-grid-saliency-for-context-explanations-of-semantic-segmentation)|`#F00`ToDo|
|On the (In)fidelity and Sensitivity of Explanations|Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar|?|[link](https://papers.nips.cc/paper/9278-on-the-infidelity-and-sensitivity-of-explanations)|ToDo (FK)|
|Explanations Can Be Manipulated and Geometry Is to Blame|Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, Pan Kessel|?|[link](https://papers.nips.cc/paper/9511-explanations-can-be-manipulated-and-geometry-is-to-blame)|[link](explanations-can-be-manipulated-and-geometry-is-to-blame)|
|On Relating Explanations and Adversarial Examples|Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva|?|[link](https://papers.nips.cc/paper/9717-on-relating-explanations-and-adversarial-examples)|ToDo (JS)|
|Benchmarking Attribution Methods with Ground Truth|Mengjiao Yang, Been Kim|HCML workshop|[short](https://drive.google.com/file/d/1w1P0UB3bBVZ82g6OblxM6mh6C3nxNyeh/view?usp=sharing) / [arXiv](https://arxiv.org/abs/1907.09701)|ToDo (Max)|