|
|
|
|
|
|
|
|
# What Errors Do ML Systems Show?
|
|
|
|
|
|
* AI may have the Wrong Objective: the system faithfully optimizes a proxy metric that diverges from what the designer actually intended
|
|
|
* AI may be Using Inadequate Features: correlated features in the data
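A minimal sketch of the correlated-features failure mode, on synthetic data (all names and the correlation rates are hypothetical): a `background` feature happens to track the label almost perfectly at training time, so a model leaning on it looks accurate, yet it collapses once deployment data breaks that correlation, while the genuine `signal` feature keeps working.

```python
import random

random.seed(0)

def make_data(n, spurious_corr):
    """Generate (signal, background, label) triples.
    `signal` truly determines the label; `background` merely
    agrees with the label at rate `spurious_corr` (a synthetic knob)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label if random.random() < 0.9 else 1 - label
        background = label if random.random() < spurious_corr else 1 - label
        data.append((signal, background, label))
    return data

def accuracy(data, feature_index):
    """Score a degenerate 'model' that just copies one feature."""
    return sum(row[feature_index] == row[2] for row in data) / len(data)

train = make_data(10_000, spurious_corr=0.99)  # background tracks the label
test  = make_data(10_000, spurious_corr=0.50)  # the correlation breaks in deployment

# The background-based model looks excellent in training...
print(accuracy(train, 1))  # high (~0.99)
# ...but drops to roughly chance once the spurious correlation vanishes,
print(accuracy(test, 1))
# while the genuine signal still predicts well (~0.9).
print(accuracy(test, 0))
```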
|
|
|
* Distributional Drift: the data seen at deployment gradually stops resembling the data the model was trained on, degrading predictions
|
|
|
* Facilitating User Control: Many AI systems induce user preferences from their actions. For example, adaptive news feeds predict which stories are likely most interesting to a user. As robots become more common and enter the home, preference learning will become ever more common. If users understand why the AI performed an undesired action, they can better issue instructions that will lead to improved future behavior.
|
|
|
* User Acceptance: Even if they don’t seek to change system behavior, users have been shown to be happier with and more likely to accept algorithmic decisions if they are accompanied by an explanation [18]. After being told that they should have their kidney removed, it’s natural for a patient to ask the doctor why — even if they don’t fully understand the answer.
|
|
|
* Improving Human Insights:
|
|
|
* Legal Imperatives
|
|
|
|