Intelligibility

Supporting Intelligibility in Context-Aware Applications

Context-aware applications employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. Such a lack of system intelligibility can lead to a loss of user trust in, satisfaction with, and acceptance of these systems. Fortunately, automatically providing explanations about a system’s decision process can help mitigate this problem. However, users may not be interested in all the information and explanations that the applications can produce. This project aims to improve the usability of and trust in context-aware applications by gaining an understanding of how mental models are formed about these systems, and by designing interaction techniques and programming tools that help application designers make their systems intelligible.

Assessing the Impact of Intelligibility on Understanding Context-Aware Applications

We sought to explore how much better participants could understand intelligent, decision-based applications when provided with explanations. In particular, we investigated differences in understanding and resulting trust when participants were provided with one of four types of explanations, compared to receiving no explanations (None). The four explanation types correspond to answers to the following question types (a toy sketch of how such explanations could be generated follows the list):

  1. Why did the application do X?
  2. Why did it not do Y?
  3. How (under what condition) does it do Y?
  4. What would happen if there is a change W?
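
To make these question types concrete, here is a minimal, hypothetical sketch of how a simple rule-based context-aware application could generate answers to each of the four questions. This is not the application used in the study (whose inputs and outputs were anonymous); all rule names, inputs, and outputs below are invented for illustration.

    # Toy rule-based app: each output action fires when all of its
    # input conditions hold. Inputs A, B, C and actions X, Y are hypothetical.
    RULES = {
        "X": {"A": True, "B": True},
        "Y": {"A": True, "C": True},
    }

    def decide(inputs):
        """Return the first action whose conditions are all satisfied."""
        for action, conds in RULES.items():
            if all(inputs.get(k) == v for k, v in conds.items()):
                return action
        return None

    def explain_why(inputs, action):
        # Why did the application do X?
        met = [f"{k}={inputs[k]}" for k in RULES[action]]
        return f"Did {action} because " + " and ".join(met)

    def explain_why_not(inputs, action):
        # Why did it not do Y?
        unmet = [f"{k} is {inputs.get(k)} (needs {v})"
                 for k, v in RULES[action].items() if inputs.get(k) != v]
        return f"Did not do {action} because " + "; ".join(unmet)

    def explain_how_to(action):
        # How (under what condition) does it do Y?
        conds = [f"{k}={v}" for k, v in RULES[action].items()]
        return f"{action} happens when " + " and ".join(conds)

    def explain_what_if(inputs, change):
        # What would happen if there is a change W?
        hypothetical = {**inputs, **change}
        return f"If {change}, the application would do {decide(hypothetical)}"

    if __name__ == "__main__":
        inputs = {"A": True, "B": True, "C": False}
        print(explain_why(inputs, decide(inputs)))    # Why X?
        print(explain_why_not(inputs, "Y"))           # Why not Y?
        print(explain_how_to("Y"))                    # How to get Y?
        print(explain_what_if(inputs, {"C": True}))   # What if C changes?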

We showed participants an online abstracted application with anonymous inputs and outputs and asked them to learn how the application makes decisions after viewing 24 examples of its behavior. The 158 recruited participants were evenly divided into five groups: four groups each received one of the four types of explanations, and one group received no explanations. We then measured their understanding by testing whether they could predict missing inputs and outputs in 15 test cases and by asking them to explain how they think the application reasons. We also measured their level of trust in the application’s output.

We found that participants who received Why and Why Not explanations understood and trusted the application better than those who received How To and What If explanations.

