The Challenge of Crafting Intelligible Intelligence: In this paper, Bansal and Weld argue that, to build trust, interpretability systems should interact with their stakeholders in an almost conversational style. They state:
"The key challenge for designing intelligible AI is communicating a complex computational process to a human. This requires interdisciplinary skills, including HCI as well as AI and machine learning expertise."
Some notions addressed in the paper:
- One suggested criterion is human simulatability (Lipton '16): can a human user easily predict the model's output for a given input? By this definition, sparse linear models are more interpretable than dense or non-linear ones.
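To make the simulatability contrast concrete, here is a minimal sketch (the weights and feature names are hypothetical, not from the paper): with a sparse linear model, a human can reproduce the score by hand with two multiplications, while the dense model forces them to track every feature.

```python
import numpy as np

# One example input with 5 features (hypothetical values).
features = np.array([3.0, 1.0, 0.0, 2.0, 5.0])

# Sparse model: only 2 nonzero weights, so a human can simulate it:
# 0.5 * 3.0 + (-0.2) * 5.0 = 0.5
sparse_w = np.array([0.5, 0.0, 0.0, 0.0, -0.2])

# Dense model: every feature contributes, so hand-simulation means
# tracking 5 products and their signs.
dense_w = np.array([0.13, -0.41, 0.07, 0.29, -0.18])

sparse_score = sparse_w @ features
dense_score = dense_w @ features

print(np.count_nonzero(sparse_w), "terms to simulate; score =", sparse_score)
print(np.count_nonzero(dense_w), "terms to simulate; score =", round(dense_score, 2))
```

The point is not accuracy but cognitive load: fewer nonzero terms means a cheaper mental simulation, which is exactly what Lipton's criterion rewards.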
What errors do ML systems exhibit?
- AI may have the Wrong Objective
- AI may be Using Inadequate Features: e.g., relying on features that are merely correlated with the target in the training data
- Distributional Drift
- Facilitating User Control: Many AI systems induce user preferences from their actions. For example, adaptive news feeds predict which stories are likely most interesting to a user. As robots become more common and enter the home, preference learning will become ever more common. If users understand why the AI performed an undesired action, they can better issue instructions that will lead to improved future behavior.
- User Acceptance: Even if they don't seek to change system behavior, users have been shown to be happier with, and more likely to accept, algorithmic decisions when they are accompanied by an explanation. After being told that they should have their kidney removed, it's natural for a patient to ask the doctor why, even if they don't fully understand the answer.
- Improving Human Insights:
- Legal Imperatives
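Of the error modes above, distributional drift is the easiest to illustrate with a sketch. The scenario, thresholds, and data below are all hypothetical: a feature's deployment-time distribution shifts away from what the model saw in training, and a crude mean-shift test flags it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature values: training data vs. what arrives after deployment.
train = rng.normal(loc=0.0, scale=1.0, size=10_000)   # distribution the model learned on
deploy = rng.normal(loc=1.5, scale=1.0, size=10_000)  # shifted distribution in production

# A crude drift check (assumed threshold, not from the paper): flag drift
# if the deployment mean moves more than 3 training standard errors away.
std_err = train.std(ddof=1) / np.sqrt(train.size)
drift_detected = abs(deploy.mean() - train.mean()) > 3 * std_err

print("drift detected:", drift_detected)
```

Real monitoring would use stronger two-sample tests (e.g., Kolmogorov-Smirnov) per feature, but even this toy check shows why drift is detectable without labels: the inputs alone give it away.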
Tutorial at AAAI 19
Explanation and Persuasion Theory
Taken from: Progressive Disclosure: Empirically Motivated Approaches to Designing Effective Transparency
People interact with computers and intelligent systems in ways that mirror how they interact with other people [62,70]. Given that transparency is essentially an explanation of why a model made a given prediction, we can turn to fields such as psychology and sociology, which have a long history of studying explanation, for guidance about operationalizing explanations. One approach is to model causal explanation as a form of conversation governed by common-sense conversational rules such as Grice's maxims. In addition, when an explanation is needed and a communication breakdown occurs, it is remedied by a phenomenon known as conversational repair. Conversational repair is interactional: participants in the conversation collaborate to achieve mutual understanding, often in a turn-by-turn structure with repeated questions and clarifications. These theories indicate that we should operationalize transparency in ways that fit human communication and repair strategies.
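One way the turn-by-turn repair structure might be operationalized is progressive disclosure of an explanation: give the single most influential factor first, and reveal more only when the user asks a follow-up. This is a minimal sketch under assumed data; the feature names and contribution scores are invented for illustration, not from either paper.

```python
# Hypothetical per-feature contributions to one model prediction
# (e.g., signed attribution scores for a loan decision).
contributions = {"income": 0.42, "age": -0.17, "zip_code": 0.05}

def explain(turn: int) -> list[str]:
    """Return the `turn` most influential contributions, largest magnitude first.

    Each conversational turn reveals one more layer of detail, mimicking
    turn-by-turn conversational repair.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name}: {weight:+.2f}" for name, weight in ranked[:turn]]

print(explain(1))  # opening turn: a one-factor explanation
print(explain(3))  # after follow-up questions: the full breakdown
```

The design choice is that the system never dumps the full attribution table up front; depth is driven by the user's questions, which is the repair-style interaction the theories above recommend.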