This is a list of papers relating to the interpretability of learning systems accepted at NeurIPS 2019.

[Link to pre-proceedings](https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019)

[Link to Visualization and Interpretability Track](https://nips.cc/Conferences/2019/Schedule?showParentSession=15517)
|Title|Authors|Track|Link|Summary|
|-----|-------|-----|----|-------|
|GNNExplainer: Generating Explanations for Graph Neural Networks|Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, Jure Leskovec|?|[link](https://papers.nips.cc/paper/9123-gnnexplainer-generating-explanations-for-graph-neural-networks)|ToDo|
|A Benchmark for Interpretability Methods in Deep Neural Networks|Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim|?|[link](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks)|ToDo|
|Fooling Neural Network Interpretations via Adversarial Model Manipulation|Juyeon Heo, Sunghwan Joo, Taesup Moon|?|[link](https://papers.nips.cc/paper/8558-fooling-neural-network-interpretations-via-adversarial-model-manipulation)|ToDo|
|Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning|Wonjae Kim, Yoonho Lee|?|[link](https://papers.nips.cc/paper/8835-learning-dynamics-of-attention-human-prior-for-interpretable-machine-reasoning)|ToDo|
|Solving Interpretable Kernel Dimensionality Reduction|Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy|?|[link](https://papers.nips.cc/paper/9005-solving-interpretable-kernel-dimensionality-reduction)|ToDo|
|This Looks Like That: Deep Learning for Interpretable Image Recognition|Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan K. Su|?|[link](https://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition)|ToDo|
|CXPlain: Causal Explanations for Model Interpretation under Uncertainty|Patrick Schwab, Walter Karlen|?|[link](https://papers.nips.cc/paper/9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty)|ToDo|
|Towards Interpretable Reinforcement Learning Using Attention Augmented Agents|Alexander Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo Jimenez Rezende|?|[link](https://papers.nips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents)|ToDo|
|Accurate Layerwise Interpretable Competence Estimation|Vickram Rajendran, William LeVine|?|[link](https://papers.nips.cc/paper/9548-accurate-layerwise-interpretable-competence-estimation)|ToDo|
|Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)|Mariya Toneva, Leila Wehbe|?|[link](https://papers.nips.cc/paper/9633-interpreting-and-improving-natural-language-processing-in-machines-with-natural-language-processing-in-the-brain)|ToDo|
|Towards Automatic Concept-based Explanations|Amirata Ghorbani, James Wexler, James Y. Zou, Been Kim|?|[link](https://papers.nips.cc/paper/9126-towards-automatic-concept-based-explanations)|see [Concept-based Explanations](Concept-based-Explanations)|
|Visualizing and Measuring the Geometry of BERT|Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, Been Kim|?|[link](https://papers.nips.cc/paper/9065-visualizing-and-measuring-the-geometry-of-bert)|ToDo|
|Deep Model Transferability from Attribution Maps|Jie Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Mingli Song|?|[link](https://papers.nips.cc/paper/8849-deep-model-transferability-from-attribution-maps)|ToDo|
|Robust Attribution Regularization|Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha|?|[link](https://papers.nips.cc/paper/9577-robust-attribution-regularization)|ToDo|
|Demystifying Black-box Models with Symbolic Metamodels|Ahmed M. Alaa, Mihaela van der Schaar|?|[link](https://papers.nips.cc/paper/9308-demystifying-black-box-models-with-symbolic-metamodels)|ToDo|
|Deliberative Explanations: visualizing network insecurities|Pei Wang, Nuno Vasconcelos|?|[link](https://papers.nips.cc/paper/8418-deliberative-explanations-visualizing-network-insecurities)|ToDo|
|Grid Saliency for Context Explanations of Semantic Segmentation|Lukas Hoyer, Mauricio Munoz, Prateek Katiyar, Anna Khoreva, Volker Fischer|?|[link](https://papers.nips.cc/paper/8874-grid-saliency-for-context-explanations-of-semantic-segmentation)|ToDo|
|On the (In)fidelity and Sensitivity of Explanations|Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I. Inouye, Pradeep K. Ravikumar|?|[link](https://papers.nips.cc/paper/9278-on-the-infidelity-and-sensitivity-of-explanations)|ToDo|
|Explanations can be manipulated and geometry is to blame|Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, Pan Kessel|?|[link](https://papers.nips.cc/paper/9511-explanations-can-be-manipulated-and-geometry-is-to-blame)|ToDo|
|On Relating Explanations and Adversarial Examples|Alexey Ignatiev, Nina Narodytska, Joao Marques-Silva|?|[link](https://papers.nips.cc/paper/9717-on-relating-explanations-and-adversarial-examples)|ToDo|

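A list like the one above can be bootstrapped by scanning proceedings titles for interpretability-related keywords before curating by hand. A minimal sketch of such a filter follows; the keyword list and the helper name `is_interpretability_paper` are assumptions for illustration, not part of this page:

```python
# Keywords that commonly signal interpretability work (an assumed, incomplete list).
KEYWORDS = ("interpret", "explanation", "explain", "saliency", "attribution")

def is_interpretability_paper(title: str) -> bool:
    """Return True if the title mentions any interpretability-related keyword."""
    lower = title.lower()
    return any(keyword in lower for keyword in KEYWORDS)

# Example: filter a few NeurIPS 2019 titles.
titles = [
    "GNNExplainer: Generating Explanations for Graph Neural Networks",
    "Robust Attribution Regularization",
    "PyTorch: An Imperative Style, High-Performance Deep Learning Library",
]
matches = [t for t in titles if is_interpretability_paper(t)]
print(matches)  # the third title is filtered out
```

Keyword matching over-selects (e.g. "self-explanatory" phrasing in unrelated titles) and under-selects (papers like "This Looks Like That" carry no keyword), so a manual pass over the schedule is still needed.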
## Visualization and Interpretability Track:

https://nips.cc/Conferences/2019/Schedule?showParentSession=15517