https://nips.cc/Conferences/2019/Schedule?showParentSession=15517

## Other tracks
- [GNNExplainer: Generating Explanations for Graph Neural Networks](https://papers.nips.cc/paper/9123-gnnexplainer-generating-explanations-for-graph-neural-networks) [Zhitao Ying](https://papers.nips.cc/author/zhitao-ying-9531), [Dylan Bourgeois](https://papers.nips.cc/author/dylan-bourgeois-14057), [Jiaxuan You](https://papers.nips.cc/author/jiaxuan-you-11387), [Marinka Zitnik](https://papers.nips.cc/author/marinka-zitnik-10897), [Jure Leskovec](https://papers.nips.cc/author/jure-leskovec-4767) (AA)
- [A Benchmark for Interpretability Methods in Deep Neural Networks](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks) [Sara Hooker](https://papers.nips.cc/author/sara-hooker-14137), [Dumitru Erhan](https://papers.nips.cc/author/dumitru-erhan-6793), [Pieter-Jan Kindermans](https://papers.nips.cc/author/pieter-jan-kindermans-9525), [Been Kim](https://papers.nips.cc/author/been-kim-7252) (AA)
- [Fooling Neural Network Interpretations via Adversarial Model Manipulation](https://papers.nips.cc/paper/8558-fooling-neural-network-interpretations-via-adversarial-model-manipulation) [Juyeon Heo](https://papers.nips.cc/author/juyeon-heo-12966), [Sunghwan Joo](https://papers.nips.cc/author/sunghwan-joo-12967), [Taesup Moon](https://papers.nips.cc/author/taesup-moon-9152) (AA)
- [Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning](https://papers.nips.cc/paper/8835-learning-dynamics-of-attention-human-prior-for-interpretable-machine-reasoning) [Wonjae Kim](https://papers.nips.cc/author/wonjae-kim-13513), [Yoonho Lee](https://papers.nips.cc/author/yoonho-lee-13514) (AA)
- [Solving Interpretable Kernel Dimensionality Reduction](https://papers.nips.cc/paper/9005-solving-interpretable-kernel-dimensionality-reduction) [Chieh Wu](https://papers.nips.cc/author/chieh-wu-13837), [Jared Miller](https://papers.nips.cc/author/jared-miller-13838), [Yale Chang](https://papers.nips.cc/author/yale-chang-13839), [Mario Sznaier](https://papers.nips.cc/author/mario-sznaier-13840), [Jennifer Dy](https://papers.nips.cc/author/jennifer-dy-13841) (AA)
- [This Looks Like That: Deep Learning for Interpretable Image Recognition](https://papers.nips.cc/paper/9095-this-looks-like-that-deep-learning-for-interpretable-image-recognition) [Chaofan Chen](https://papers.nips.cc/author/chaofan-chen-14016), [Oscar Li](https://papers.nips.cc/author/oscar-li-14017), [Daniel Tao](https://papers.nips.cc/author/daniel-tao-14018), [Alina Barnett](https://papers.nips.cc/author/alina-barnett-14019), [Cynthia Rudin](https://papers.nips.cc/author/cynthia-rudin-2772), [Jonathan K. Su](https://papers.nips.cc/author/jonathan-k-su-14020)
- [CXPlain: Causal Explanations for Model Interpretation under Uncertainty](https://papers.nips.cc/paper/9211-cxplain-causal-explanations-for-model-interpretation-under-uncertainty) [Patrick Schwab](https://papers.nips.cc/author/patrick-schwab-14203), [Walter Karlen](https://papers.nips.cc/author/walter-karlen-14204)
- [Towards Interpretable Reinforcement Learning Using Attention Augmented Agents](https://papers.nips.cc/paper/9400-towards-interpretable-reinforcement-learning-using-attention-augmented-agents) [Alexander Mott](https://papers.nips.cc/author/alexander-mott-14506), [Daniel Zoran](https://papers.nips.cc/author/daniel-zoran-6768), [Mike Chrzanowski](https://papers.nips.cc/author/mike-chrzanowski-11807), [Daan Wierstra](https://papers.nips.cc/author/daan-wierstra-5118), [Danilo Jimenez Rezende](https://papers.nips.cc/author/danilo-jimenez-rezende-7298)