With a design framework in place from [Lim & Dey 2009], this work makes a technical contribution by supporting the generation of eight explanation types (Input, Output, What, Why, Why Not, How To, What If, Certainty) from decision models commonly used in context-aware applications (rules, decision trees, naïve Bayes, and hidden Markov models). The Intelligibility Toolkit extends the Enactor framework [Dey & Newberger 2009] by providing more types of explanations and by supporting machine learning classifiers beyond rules. We validate the toolkit with three demonstration applications that show how the explanations can be generated from the various decision models.
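To make the generation idea concrete, below is a minimal sketch, written in Java since the Enactor framework is Java-based, of how Why and Why Not explanations can be traced from a rule-based decision model: because a rule exposes its conditions, the explanation reduces to reporting which conditions held (or failed to hold) for the current context inputs. All names here (Condition, Rule, why, whyNot) are hypothetical illustrations, not the toolkit's actual API.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/**
 * Hypothetical sketch (not the Intelligibility Toolkit's actual API) of
 * tracing Why / Why Not explanations from a rule-based decision model.
 */
public class RuleExplanationSketch {

    /** A named condition over the context inputs, e.g. "sound level > 60 dB". */
    record Condition(String description, Predicate<Map<String, Object>> check) {}

    /** A rule maps a conjunction of conditions to an output value. */
    record Rule(String output, List<Condition> conditions) {
        boolean fires(Map<String, Object> inputs) {
            return conditions.stream().allMatch(c -> c.check().test(inputs));
        }
    }

    /** Why: report the conditions of the rule that produced the output. */
    static List<String> why(Rule firedRule) {
        return firedRule.conditions().stream()
                .map(Condition::description)
                .toList();
    }

    /** Why Not: report the unsatisfied conditions of the queried output's rule. */
    static List<String> whyNot(Rule queriedRule, Map<String, Object> inputs) {
        return queriedRule.conditions().stream()
                .filter(c -> !c.check().test(inputs))
                .map(Condition::description)
                .toList();
    }

    public static void main(String[] args) {
        // Hypothetical availability model: "busy" if it is loud and a meeting is scheduled.
        Rule busy = new Rule("busy", List.of(
                new Condition("sound level > 60 dB",
                        in -> (int) in.get("soundDb") > 60),
                new Condition("calendar shows a meeting",
                        in -> (boolean) in.get("inMeeting"))));

        Map<String, Object> inputs = Map.of("soundDb", 72, "inMeeting", false);

        if (busy.fires(inputs)) {
            System.out.println("Why " + busy.output() + "? " + why(busy));
        } else {
            System.out.println("Why not " + busy.output() + "? " + whyNot(busy, inputs));
        }
    }
}
```

For the inputs above the rule does not fire, so the sketch would print the one failed condition ("calendar shows a meeting") as the Why Not explanation. The same tracing idea presumably generalizes to the other supported models; for a probabilistic classifier such as naïve Bayes, the analogous explanation would report which input evidence weighed for or against the inferred output.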