Supporting Intelligibility in Context-Aware Applications

Context-aware applications employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. This lack of system intelligibility can erode users' trust in, satisfaction with, and acceptance of these systems. Fortunately, automatically providing explanations of a system's decision process can help mitigate this problem. However, users may not be interested in all the information and explanations that such applications can produce. This project aims to improve the usability of and trust in context-aware applications by investigating how users form mental models of these systems, and by designing interaction techniques and programming tools that help application designers make their systems intelligible.
