Intelligibility

Supporting Intelligibility in Context-Aware Applications

Context-aware applications employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. This lack of system intelligibility can lead to a loss of user trust in, satisfaction with, and acceptance of these systems. Fortunately, automatically providing explanations about a system’s decision process can help mitigate this problem. However, users may not be interested in all the information and explanations that an application can produce. This project aims to improve the usability of, and trust in, context-aware applications by gaining an understanding of how users form mental models of these systems, and by designing interaction techniques and programming tools that help application designers make their systems intelligible.

Thesis Defense: Improving Understanding and Trust with Intelligibility in Context-Aware Applications

In late April 2012, I will be defending my thesis on providing intelligibility in context-aware applications.

When:   April 23rd, Monday @ 9.30am
Where:  Newell Simon Hall 1507

THESIS DEFENSE
Improving Understanding and Trust with Intelligibility in Context-Aware Applications

COMMITTEE
Anind K. Dey (Chair), Carnegie Mellon University, Human-Computer Interaction Institute
Scott E. Hudson, Carnegie Mellon University, Human-Computer Interaction Institute
Aniket Kittur, Carnegie Mellon University, Human-Computer Interaction Institute
Margaret M. Burnett, Oregon State University

DOCUMENTS
Flyer
Dissertation

ABSTRACT
To facilitate everyday activities, context-aware applications use sensors to detect what is happening, and use increasingly complex mechanisms (e.g., large rule sets or machine learning) to infer the user’s context and intent. For example, a mobile application can recognize that the user is in a conversation and suppress any incoming calls. When the application works well, this implicit sensing and complex inference remain invisible. However, when it behaves inappropriately or unexpectedly, users may not understand its behavior, and this can lead them to mistrust, misuse, or even abandon it. To counter this lack of understanding and loss of trust, context-aware applications should be intelligible, i.e., capable of explaining their behavior.

We investigate providing intelligibility in context-aware applications and evaluate its usefulness for improving user understanding and trust. Specifically, this thesis supports intelligibility in context-aware applications through the provision of explanations that answer different question types, such as: Why did it do X? Why did it not do Y? What would it do if I did W? How can I get the application to do Y? And so on.

This thesis takes a three-pronged approach to investigating intelligibility by (i) eliciting user requirements for intelligibility, to identify which explanation types end-users are interested in asking of context-aware applications; (ii) supporting the development of intelligible context-aware applications with a software toolkit, and the design of these applications with design and usability recommendations; and (iii) evaluating the impact of intelligibility on user understanding and trust under various situations and levels of application reliability, and measuring how users use an interactive intelligible prototype. We show that users are willing to use well-designed intelligibility features, and that these can improve user understanding and trust in the adaptive behavior of context-aware applications.

Page last modified: July 6, 2012

Second Workshop on Intelligibility and Control in Pervasive Computing

I am co-organizing a Pervasive 2012 workshop on Intelligibility and Control in Pervasive Computing with Jo Vermeulen and Fahim Kawsar to be held on June 18. This is the second year of the workshop. The Call for Papers is out and more information on the workshop can be found at the workshop website.

Page last modified: April 9, 2012

Thesis Proposal: Improving Understanding, Trust, and Control with Intelligibility in Context-Aware Applications

In early May 2011, I will be presenting my thesis proposal on providing intelligibility in context-aware applications.

When:   May 2nd, Monday @ 1.30pm
Where:  Gates-Hillman Center 6115

THESIS PROPOSAL
Improving Understanding, Trust, and Control with Intelligibility in Context-Aware Applications

COMMITTEE
Anind K. Dey (Chair), Carnegie Mellon University, Human-Computer Interaction Institute
Scott E. Hudson, Carnegie Mellon University, Human-Computer Interaction Institute
Aniket Kittur, Carnegie Mellon University, Human-Computer Interaction Institute
Margaret M. Burnett, Oregon State University

DOCUMENTS
Flyer
Proposal

ABSTRACT
To facilitate everyday activities, context-aware applications use sensors to detect what is happening, and use increasingly complex mechanisms (e.g., by using machine learning) to infer the user’s context. For example, a mobile application can recognize that you are in a conversation, and suppress any incoming messages. When the application works well, this implicit sensing and complex inference remain invisible. However, when it behaves inappropriately or unexpectedly, users may not understand its behavior, and this can lead users to mistrust, misuse, or abandon it. To counter this, context-aware applications should be intelligible, capable of generating explanations of their behavior.

My thesis investigates providing intelligibility in context-aware applications, and evaluates its usefulness for improving user understanding, trust, and control. I explored what explanation types users want when using context-aware applications in various circumstances. I provided explanations in terms of questions that users would ask, such as: Why did it do X? What would it do if I did W? Early evaluation found that Why and Why Not explanations can improve understanding and trust. I next developed a toolkit that helps developers implement intelligibility in their context-aware applications, so that the applications can automatically generate explanations. Following that, I conducted a usability study to derive design recommendations for presenting usable intelligibility interfaces in a mobile application. In the remaining work, I will evaluate intelligibility in more realistic settings. First, I shall explore how helpful or harmful intelligibility is for applications with high and low certainty. Finally, I shall investigate how intelligibility, by improving user understanding, can help users more effectively control a context-aware application.

Page last modified: April 10, 2012

ContextToolkit.org

I’ve recently launched a website for the new Context Toolkit, which I adapted from the original one built by my advisor, Anind, years ago. Visit www.contexttoolkit.org to learn more. There you can download v2.0 of the toolkit and learn how to use it from the tutorials. The Intelligibility Toolkit is also now available for download as part of the Context Toolkit, and tutorials for its various components are on the website as well.

Page last modified: January 17, 2011

Toolkit to Support Intelligibility in Context-Aware Applications.

With a design framework in place from [Lim & Dey 2009], this work makes a technical contribution by facilitating the provision of 8 explanation types (Input, Output, What, Why, Why Not, How To, What If, Certainty) generated from decision models commonly used in context-aware applications (rules, decision trees, naïve Bayes, hidden Markov models). The Intelligibility Toolkit extends the Enactor framework [Dey & Newberger] by providing more types of explanations and by supporting machine learning classifiers in addition to rules. We validate the toolkit with three demonstration applications showing how the explanations can be generated from the various decision models.
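As a concrete illustration of the kind of explanation generation involved, here is a minimal, self-contained sketch in Java. It is hypothetical: the class and method names are invented for this example and are not the Intelligibility Toolkit’s actual API (see the tutorials at contexttoolkit.org for that). It shows how Why, Why Not, and What If explanations can be derived from a simple rule-based decision model by tracing which rule conditions held:

import java.util.*;

// Hypothetical sketch: a rule-based decision model that can explain itself.
// The real Intelligibility Toolkit attaches explainers to Enactors and also
// supports decision trees, naive Bayes, and HMMs; this only shows the rule case.
public class RuleExplainerSketch {

    // A condition on a named context input, e.g. "speechLevel > 0.7".
    record Condition(String input, double threshold) {
        boolean holds(Map<String, Double> context) {
            return context.getOrDefault(input, 0.0) > threshold;
        }
        @Override public String toString() { return input + " > " + threshold; }
    }

    // A rule: if all conditions hold, produce the given output.
    record Rule(String output, List<Condition> conditions) {
        boolean fires(Map<String, Double> context) {
            return conditions.stream().allMatch(c -> c.holds(context));
        }
    }

    private final List<Rule> rules;
    RuleExplainerSketch(List<Rule> rules) { this.rules = rules; }

    String decide(Map<String, Double> context) {
        return rules.stream().filter(r -> r.fires(context))
                .map(Rule::output).findFirst().orElse("no action");
    }

    // Why did it do X? -> the conditions of the rule that fired.
    String why(Map<String, Double> context) {
        return rules.stream().filter(r -> r.fires(context)).findFirst()
                .map(r -> "Because " + r.conditions()).orElse("No rule fired.");
    }

    // Why did it not do Y? -> the conditions of Y's rule that did not hold.
    String whyNot(String output, Map<String, Double> context) {
        for (Rule r : rules) {
            if (!r.output().equals(output)) continue;
            List<Condition> failed = r.conditions().stream()
                    .filter(c -> !c.holds(context)).toList();
            return "Did not do " + output + " because these conditions failed: " + failed;
        }
        return "No rule produces " + output + ".";
    }

    // What if input W had a different value? -> re-evaluate with the modified context.
    String whatIf(Map<String, Double> context, String input, double newValue) {
        Map<String, Double> modified = new HashMap<>(context);
        modified.put(input, newValue);
        return "It would do: " + decide(modified);
    }

    public static void main(String[] args) {
        RuleExplainerSketch model = new RuleExplainerSketch(List.of(
                new Rule("suppress calls", List.of(new Condition("speechLevel", 0.7))),
                new Rule("ring normally", List.of(new Condition("ambientNoise", 0.9)))));
        Map<String, Double> context = Map.of("speechLevel", 0.8, "ambientNoise", 0.2);
        System.out.println(model.decide(context));                      // current decision
        System.out.println(model.why(context));                         // Why did it do X?
        System.out.println(model.whyNot("ring normally", context));     // Why did it not do Y?
        System.out.println(model.whatIf(context, "speechLevel", 0.1));  // What if W changed?
    }
}

The same pattern generalizes to the other supported models: for a decision tree, a Why explanation can be the path of tests taken to reach the output, and for naïve Bayes it can be the evidence that weighed in favor of the chosen output.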


Page last modified: June 21, 2010

Assessing Demand for Intelligibility in Context-Aware Applications.

This study investigates which explanations users of context-aware applications want to know, so that these explanations can be targeted to maximize user satisfaction. We presented 860 online participants with video scenarios of four prototypical context-aware applications under various circumstances, varied along the dimensions of application behavior appropriateness, situation criticality, goal-supportiveness, recommendation, and number of external dependencies. We elicited, and subsequently solicited (for validation), what information participants wanted to know under the various circumstances, and extracted 11 types of explanations of interest. We also found that demand for these explanations varied with circumstance (e.g., explanations of all types are highly desired in critical situations, and Why Not explanations are highly desired for goal-supportive applications such as reminders). We present our results as design recommendations for when context-aware applications should provide certain explanations.

Intelligibility Design Recommendations

We provide a table of recommendations for designers and developers of context-aware applications, derived from survey data of participant responses and the resulting analysis [Lim & Dey 2009]. They can use this table to determine which types of intelligibility explanations to include in their applications, depending on the circumstances their applications would encounter. For example, if an application is not very accurate, it would have low Appropriateness, and we would recommend the explanation types Why, Why Not, How, What If, and Control.

Instructions on usage

Select the checkboxes or radio buttons according to how your candidate context-aware application is defined (e.g., whether it has high criticality). This will highlight the explanation types recommended for your application. You can mouse over the keywords in the table to see their definitions.

[Interactive recommendation table: explanation types × circumstances. Rows list the explanation types, grouped as Application (Inputs, Outputs), Model (Why, Why Not, How, What If, What Else, Certainty, Control), and Situation. Columns list the circumstances: General, and Low/High levels of Appropriateness, Criticality, Goal-Supportive, Recommendation, and Externalities. A “+” marks an explanation type recommended for that circumstance; in the General case, the recommended types are Why, How, Certainty, and Control.]

Circumstance definitions:

General: recommendations for context-aware applications in general.
Appropriateness: whether the application tends to be accurate, or behaves appropriately. E.g., an accuracy of <80% for recognizing falls may be considered low Appropriateness.
Criticality: whether the situation presented is critical. Situations involving accidents, medical concerns, or work-related urgency can be considered highly critical.
Goal-Supportive: whether the situation is motivated by a goal the user has.
Recommendation: whether the application is recommending information for the user to follow or ignore.
Externalities: whether the application is perceived to have high external dependencies (e.g., getting weather information from a weather radio station) vs. being perceived as “self-contained.”

Explanation type definitions:

Application: explanations about the application, what it does, how it works, etc.
Inputs: what sensors or input sources the application uses/used and what their values are/were.
Outputs: what outputs, options, or alternative actions the application can produce. E.g., What accidents can the system sense?
Model: explanations about the conceptual model of the application.
Why: why the system behaved the way it did for a specific event/action. E.g., Why did the system report a fall?
Why Not: why the system did not behave another way for a specific event/action; normally asked when the user's expectation does not match the system behavior. E.g., Why did the system not report a fire?
How: how the application achieves a decision or output action; more general than the Why question. E.g., How does the system distinguish between a falling object and a person?
What If: what would happen if an alternative circumstance or input values were present. E.g., If an object falls, would the system report a fall?
What Else: what else the application has done or is doing, other than what has been told. E.g., Did the system alert emergency services of the accident?
Certainty: how confident the application is of its decision (recognition, interpretation, etc.), or how accurate it is for an action.
Control: how the user can change parameters for more appropriate application behavior, override it, etc. E.g., How can I change settings to control the sensitivity of reports?
Situation: explanations that provide users with more situational awareness, giving more information about the situation, environment, or people, rather than about the application. E.g., What was the family member doing before the accident?
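For illustration only (the class and enum names below are hypothetical and not part of the Context Toolkit), a designer could encode these recommendations as a simple lookup from circumstance to recommended explanation types. This sketch fills in just the two cases spelled out in this entry, the General case and low Appropriateness:

import java.util.*;

// Hypothetical sketch: encoding intelligibility design recommendations as a lookup.
// Only the two cases stated in the text are filled in; the full table in
// [Lim & Dey 2009] covers all circumstances.
public class IntelligibilityRecommendations {

    enum ExplanationType {
        INPUTS, OUTPUTS, WHY, WHY_NOT, HOW, WHAT_IF, WHAT_ELSE, CERTAINTY, CONTROL, SITUATION
    }

    enum Circumstance { GENERAL, LOW_APPROPRIATENESS }

    private static final Map<Circumstance, Set<ExplanationType>> RECOMMENDED =
            Map.<Circumstance, Set<ExplanationType>>of(
                // General case: Why, How, Certainty, and Control are recommended.
                Circumstance.GENERAL,
                EnumSet.of(ExplanationType.WHY, ExplanationType.HOW,
                           ExplanationType.CERTAINTY, ExplanationType.CONTROL),
                // Low Appropriateness (application is not very accurate):
                // Why, Why Not, How, What If, and Control are recommended.
                Circumstance.LOW_APPROPRIATENESS,
                EnumSet.of(ExplanationType.WHY, ExplanationType.WHY_NOT, ExplanationType.HOW,
                           ExplanationType.WHAT_IF, ExplanationType.CONTROL));

    static Set<ExplanationType> recommend(Circumstance c) {
        return RECOMMENDED.getOrDefault(c, EnumSet.noneOf(ExplanationType.class));
    }

    public static void main(String[] args) {
        // E.g., a fall-detection app with <80% accuracy would be low Appropriateness.
        System.out.println(recommend(Circumstance.LOW_APPROPRIATENESS));
    }
}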


Page last modified: February 21, 2012

Assessing Impact of Intelligibility on Understanding Context-Aware Applications.

We sought to explore how much better participants could understand intelligent, decision-based applications when provided with explanations. In particular, we investigated differences in understanding, and in resulting trust, when participants were provided with one of four types of explanations compared to receiving no explanations (None). The four explanation types answer the following question types:

  1. Why did the application do X?
  2. Why did it not do Y?
  3. How (under what condition) does it do Y?
  4. What if there is a change W, what would happen?

We showed participants an online abstracted application with anonymized inputs and outputs, and asked them to learn how the application makes decisions by viewing 24 examples of its performance. The 158 recruited participants were evenly divided into groups, each receiving one of the four types of explanations or no explanation. We subsequently measured their understanding by testing whether they could predict missing inputs and outputs in 15 test cases, and by asking them to explain how they thought the application reasons. We also measured their level of trust in the application’s output.

We found that participants who received Why and Why Not explanations understood and trusted the application better than those who received How To and What If explanations.


Page last modified: June 21, 2010