Toolkit to Support Intelligibility in Context-Aware Applications.

June 21st, 2010

Building on the design framework from [Lim & Dey 2009], this work makes a technical contribution: it supports the generation of eight explanation types (Input, Output, What, Why, Why Not, How To, What If, Certainty) from decision models commonly used in context-aware applications (rules, decision trees, naïve Bayes, hidden Markov models). The Intelligibility Toolkit extends the Enactor framework [Dey & Newberger] by providing more types of explanations and by supporting machine-learning classifiers beyond rules. We validate the toolkit with three demonstration applications that show how the explanations can be generated from the various decision models.
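To illustrate the idea behind the Why and Why Not explanation types, here is a minimal sketch of how such explanations can be traced out of a simple rule-based decision model. This is an illustrative example only, not the Intelligibility Toolkit's actual API; all class and function names (`Rule`, `classify`, `why_not`) and the availability scenario are assumptions made for the example.

```python
# Illustrative sketch only -- NOT the Intelligibility Toolkit's API.
# Shows how "Why" / "Why Not" explanations can be derived from a
# rule-based decision model, one of the model types the toolkit supports.

class Rule:
    def __init__(self, label, conditions):
        self.label = label            # output value this rule predicts
        self.conditions = conditions  # dict: input name -> predicate

    def failed(self, inputs):
        """Return the names of conditions that do not hold for these inputs."""
        return [name for name, pred in self.conditions.items()
                if not pred(inputs[name])]

def classify(rules, inputs):
    """Return (label, why) for the first rule whose conditions all hold."""
    for rule in rules:
        if not rule.failed(inputs):
            why = [f"{name} satisfied its condition" for name in rule.conditions]
            return rule.label, why
    return None, []

def why_not(rules, inputs, label):
    """Explain why `label` was not output: list its unmet conditions."""
    for rule in rules:
        if rule.label == label:
            return [f"{name} did not satisfy its condition"
                    for name in rule.failed(inputs)]
    return []

# Hypothetical availability scenario: two rules over sensed context.
rules = [
    Rule("available", {"motion": lambda m: m == "still",
                       "sound":  lambda s: s < 40}),
    Rule("busy",      {"motion": lambda m: m == "moving"}),
]

inputs = {"motion": "moving", "sound": 55}
label, why = classify(rules, inputs)           # Why was "busy" output?
reasons = why_not(rules, inputs, "available")  # Why not "available"?
```

The same trace-based approach generalizes to the other model types the paper covers, e.g. following the decision path of a decision tree or inspecting per-feature evidence in naïve Bayes.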

[Figure: Intelligibility Toolkit architecture]


Lim, B. Y. and Dey, A. K. 2010.
Toolkit to Support Intelligibility in Context-Aware Applications.
In Proceedings of the 12th ACM International Conference on Ubiquitous Computing (Copenhagen, Denmark, September 26–29, 2010). UbiComp '10. ACM, New York, NY, 13–22. DOI=10.1145/1864349.1864353.
Poster for demo.
