Anatomy of an HCI Researcher

January 25th, 2010

A fun poster design I made for a design course at CMU. It abstractly charts my different academic interests from high school to grad school, and how I became interested in HCI. You may browse the process booklet for an explanation.




Water Gonna Do?

January 21st, 2010

51-711 Graduate Design Studio 1

Granny Home Activity Monitor Simulator

January 21st, 2010

05-344 Applied Machine Learning

Project for the course Applied Machine Learning. I used the dataset from Kasteren et al. (2008) on activity recognition in a smart home to train a classifier using a decision tree. To test the classifier, I developed this simulator using a Java game engine (Golden T Game Engine).
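
A minimal sketch of the kind of decision-tree inference that drives such a simulator is shown below, written in Java like the simulator itself. The sensor names, the hour-of-day feature, and the tree are hypothetical stand-ins for illustration, not the tree actually learned from the Kasteren et al. dataset.

// Minimal sketch (not the actual course code): classifying a smart-home activity
// from a few sensor readings with a hand-coded decision tree. The sensors and
// splits are hypothetical stand-ins for a tree induced from training data.
public class ActivityClassifierSketch {

    enum Activity { SLEEPING, SHOWERING, PREPARING_FOOD, IDLE }

    static class SensorState {
        boolean bedroomDoorOpen;
        boolean bathroomDoorOpen;
        boolean kitchenCupboardOpen;
        int hourOfDay; // 0-23
    }

    // Hard-coded tree standing in for one learned from labeled sensor data.
    static Activity classify(SensorState s) {
        if (s.hourOfDay >= 23 || s.hourOfDay < 7) {
            return s.bedroomDoorOpen ? Activity.IDLE : Activity.SLEEPING;
        }
        if (s.bathroomDoorOpen) {
            return Activity.SHOWERING;
        }
        return s.kitchenCupboardOpen ? Activity.PREPARING_FOOD : Activity.IDLE;
    }

    public static void main(String[] args) {
        SensorState s = new SensorState();
        s.hourOfDay = 8;
        s.kitchenCupboardOpen = true;
        System.out.println("Predicted activity: " + classify(s)); // PREPARING_FOOD
    }
}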


Game simulator for smart home activity recognition

References

  • Golden T Game Engine. www.goldenstudios.or.id. Retrieved 16 December 2008.
  • Kasteren, T.L.M., Noulas, A.K., Englebienne, G., and Kröse, B.J.A. Accurate Activity Recognition in a Home Setting. In Proc. UbiComp '08, Seoul, South Korea, 2008.


Digital Stress Bell

January 21st, 2010

05-833 Applied Gadgets, Sensors and Activity Recognition in HCI
For a course, I built a soft, squeezable “stress ball” with physiological sensors, a PSoC
microcontroller, and a Bluetooth chip to measure various physiological signals and
communicate the readings to a computer. Development went from breadboard prototyping to a custom PCB implementation.

Digital Stressbell

Gadget project developed for a course. It combines the concepts of a stress ball, a dumbbell (for its shape), and a sound-emitting "bell". It is meant to sense various physiological signals to determine whether the user is stressed.
Processor:
- Programmable System-on-Chip (PSoC)
Sensors:
- Flex sensors to detect squeezing
- Thermistor to detect palm temperature
- Electrical contacts to measure galvanic skin response (GSR), a measure of arousal
- Heart rate (oximeter), which never worked reliably
Outputs:
- LEDs
- LCD display
- Bluetooth output to a computer running a custom oscilloscope program and data logger (a minimal logging sketch follows below)
- Audio
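
As a rough illustration of the computer-side logging, here is a minimal Java sketch that reads comma-separated samples from a serial stream (a Bluetooth SPP link typically appears to the host as a serial device) and appends them to a CSV file. The device path and the flex/thermistor/GSR packet format are assumptions for illustration, not the project's actual protocol.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.io.PrintWriter;

// Minimal data-logger sketch. Assumes the Bluetooth SPP link is exposed as a
// serial device (e.g. /dev/rfcomm0, an assumption) and that the PSoC sends one
// comma-separated sample per line: flex,thermistor,gsr (also an assumption).
public class StressBellLogger {
    public static void main(String[] args) throws Exception {
        String device = args.length > 0 ? args[0] : "/dev/rfcomm0";
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(new FileInputStream(device)));
             PrintWriter log = new PrintWriter(new FileWriter("stressbell_log.csv", true))) {
            log.println("timestamp_ms,flex,thermistor,gsr");
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.trim().split(",");
                if (fields.length != 3) continue; // skip malformed packets
                log.println(System.currentTimeMillis() + "," + String.join(",", fields));
                log.flush();
            }
        }
    }
}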

Digital Stressbell v2

Digital Stressbell v1

Digital Stressbell


Assessing Demand for Intelligibility in Context-Aware Applications.

January 18th, 2010

This study investigates which explanations users of context-aware applications want to receive, so that applications can provide these explanations to maximize user satisfaction. We presented 860 online participants with video scenarios of four prototypical context-aware applications under various circumstances along the dimensions of application behavior appropriateness, situation criticality, goal-supportiveness, recommendation, and number of external dependencies. We elicited, and subsequently solicited (for validation), what information participants wanted to know under the various circumstances, and extracted 11 types of explanations of interest. We also found how the demand for explanations varied with circumstance (e.g., explanations of all types are highly desired in critical situations, and Why Not explanations are highly desired for goal-supportive applications such as reminders). We present our results as design recommendations for when context-aware applications should provide certain explanations.

Intelligibility Design Recommendations

We provide a table of recommendations to designers and developers of context-aware applications, derived from survey data of participant responses and the resulting analysis [Lim & Dey 2009]. They can use this table to determine which types of intelligibility explanations to include in their applications, depending on the circumstances the applications will encounter. For example, if the application is not very accurate, it would have low Appropriateness, and we would recommend the explanation types Why, Why Not, How, What If, and Control.
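
To make the lookup concrete, here is a small Java sketch of how a designer might encode the table as a map from circumstance to recommended explanation types. Only the single mapping spelled out above (low Appropriateness) is filled in; the enum names are illustrative, and the remaining entries would be populated from the full table.

import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of encoding the recommendation table as a lookup. Only the mapping
// stated in the text is filled in (low Appropriateness -> Why, Why Not, How,
// What If, Control); other circumstances would be added from the full table.
public class IntelligibilityRecommendations {

    enum Explanation { INPUTS, OUTPUTS, WHY, WHY_NOT, HOW, WHAT_IF, WHAT_ELSE, CERTAINTY, CONTROL, SITUATION }
    enum Circumstance { LOW_APPROPRIATENESS /* , HIGH_CRITICALITY, ... */ }

    static final Map<Circumstance, Set<Explanation>> TABLE = new HashMap<>();
    static {
        TABLE.put(Circumstance.LOW_APPROPRIATENESS,
                  EnumSet.of(Explanation.WHY, Explanation.WHY_NOT, Explanation.HOW,
                             Explanation.WHAT_IF, Explanation.CONTROL));
    }

    public static void main(String[] args) {
        System.out.println("Low Appropriateness -> " + TABLE.get(Circumstance.LOW_APPROPRIATENESS));
    }
}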

Instructions on usage

Select the checkboxes or radio buttons according to how your candidate context-aware application is defined (e.g., whether it has high criticality). This highlights the explanation types recommended for your application. You can mouse over the keywords in the table to see their definitions.

[Interactive recommendation table. Rows (explanation types): Application (Inputs, Outputs); Model (Why, Why Not, How, What If, What Else, Certainty, Control); Situation. Columns (circumstances): General, plus Appropriateness, Criticality, Goal-Supportive, Recommendation, and Externalities, each split into Low and High. Under the General column, the Why, How, Certainty, and Control explanations are marked as recommended.]
Circumstance definitions

  • General: recommendations for context-aware applications in general.
  • Appropriateness: whether the application tends to be accurate, or behaves appropriately. E.g., an accuracy of <80% for recognizing falls may be considered low Appropriateness.
  • Criticality: whether the situation presented is critical. Situations involving accidents, medical concerns, or work-related urgency can be considered highly critical.
  • Goal-Supportive: whether the situation is motivated by a goal the user has.
  • Recommendation: whether the application is recommending information for the user to follow or ignore.
  • Externalities: whether the application is perceived to have high external dependencies (e.g., getting weather information from a weather radio station) vs. being perceived as “self-contained.”

Explanation type definitions

  • Application: explanations about the application, what it does, how it works, etc.
  • Inputs: what sensors or input sources the application uses or used, and what their values are or were.
  • Outputs: what outputs, options, or alternative actions the application can produce. E.g., what accidents can the system sense?
  • Model: explanations about the conceptual model of the application.
  • Why: why the system behaved the way it did for a specific event or action. E.g., why did the system report a fall?
  • Why Not: why the system did not behave another way for a specific event or action; normally asked when the user's expectation does not match the system behavior. E.g., why did the system not report a fire?
  • How: how the application arrives at a decision or output action; more general than the Why question. E.g., how does the system distinguish between a falling object and a person?
  • What If: what would happen if an alternative circumstance or alternative input values were present. E.g., if an object falls, would the system report a fall?
  • What Else: what else the application has done or is doing other than what it has told the user. E.g., did the system alert emergency services of the accident?
  • Certainty: how confident the application is of its decision (recognition, interpretation, etc.), and how accurate it is for an action.
  • Control: how the user can change parameters for more appropriate application behavior, override it, etc. E.g., how can I change settings to control the sensitivity of reports?
  • Situation: explanations that provide users with more situational awareness: information about the situation, environment, or people, rather than about the application. E.g., what was the family member doing before the accident?


Assessing Impact of Intelligibility on Understanding Context-Aware Applications.

January 18th, 2010

We sought to explore how much better participants could understand intelligent, decision-based applications when provided with explanations. In particular, we investigated differences in understanding, and in resulting trust, when participants were provided with one of four types of explanations compared to receiving no explanations (None). The four types of explanations are answers to the following question types:

  1. Why did the application do X?
  2. Why did it not do Y?
  3. How (under what condition) does it do Y?
  4. What If: if there were a change W, what would happen?
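
To make the question types concrete, the toy Java sketch below generates Why and Why Not style answers for a single threshold rule of a hypothetical decision-based application. The rule and the wording are illustrative only; they are not the study's actual stimuli.

// Toy sketch (not the study's stimuli): Why and Why Not answers for a
// hypothetical rule "output is ON when inputA > 5 and inputB > 3".
public class ExplanationSketch {

    static boolean decide(int inputA, int inputB) {
        return inputA > 5 && inputB > 3;
    }

    // Why did the application do X? Report which conditions held.
    static String why(int inputA, int inputB) {
        return "Output is " + (decide(inputA, inputB) ? "ON" : "OFF")
             + " because inputA=" + inputA + (inputA > 5 ? " > 5" : " <= 5")
             + " and inputB=" + inputB + (inputB > 3 ? " > 3" : " <= 3") + ".";
    }

    // Why did it not do Y? Report the unmet condition(s) for the other outcome.
    static String whyNot(int inputA, int inputB) {
        if (decide(inputA, inputB)) {
            return "It did turn ON.";
        }
        StringBuilder sb = new StringBuilder("Output is not ON because");
        if (inputA <= 5) sb.append(" inputA=").append(inputA).append(" is not above 5;");
        if (inputB <= 3) sb.append(" inputB=").append(inputB).append(" is not above 3;");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(why(7, 2));     // explains the OFF decision
        System.out.println(whyNot(7, 2));  // explains why it was not ON
    }
}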

We showed participants an online, abstracted application with anonymous inputs and outputs and asked them to learn how the application makes decisions after viewing 24 examples of its behavior. The 158 recruited participants were evenly divided into five groups: four groups each received one of the four types of explanations, and one group received no explanations. We subsequently measured their understanding by testing whether they could predict missing inputs and outputs in 15 test cases and by asking them to explain how they thought the application reasons. We also measured their level of trust in the application's output.

We found that participants who received Why and Why Not explanations understood and trusted the application better than those who received How To and What If explanations.



Firefly

January 17th, 2010

We investigated people’s reaction time to visual stimuli at seven placements on the body (wrist, upper arm, shoulder, brooch, waist, thigh, and foot). We found that people reacted fastest to stimuli at the wrist and slowest to stimuli at the foot. Our findings can inform others who want to deploy wearable displays at various body locations.
