|
|
|
|
[The Challenge of Crafting Intelligible Intelligence](https://arxiv.org/pdf/1803.04263.pdf): In this paper Bansal and Weld argue that, to build trust, an interpretability system should have an interactive, almost conversational, relationship with its stakeholders. They state:
|
|
|
|
|
|
"The key challenge for designing intelligible AI is communicating a complex computational process to a human. This requires interdisciplinary skills, including HCI as well as AI and machine learning expertise."
|
|
|
|
|
|
Some notions addressed in the paper:
|
|
|
|
|
|
* One suggested criterion is human simulatability (Lipton '16): can a human user easily predict the model's output for a given input? By this definition, sparse linear models are more interpretable than dense or non-linear ones (see the sketch below).
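
A minimal sketch of what simulatability means in practice, assuming scikit-learn's Lasso; the dataset, `alpha`, and variable names are illustrative, not from the paper:

```python
# Human simulatability with a sparse linear model: L1 regularization
# drives most coefficients to exactly zero, leaving a model a person
# can evaluate in their head.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only two features actually matter; the rest are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_))

# "Simulating" the model: a human can reproduce its prediction by
# multiplying the few surviving weights and adding the intercept.
x_new = rng.normal(size=10)
manual = model.intercept_ + model.coef_ @ x_new
assert np.isclose(manual, model.predict(x_new.reshape(1, -1))[0])
```

With only a couple of nonzero weights, a user can anticipate how the output changes as an input varies, which is exactly the property a dense or non-linear model lacks.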
|
|
|
|
|
|
|
|
|
# What errors do ML systems show?
|
|