
Overview of the Affective User Modeling Architecture

The general idea behind the model we are using is that AR agents have relatively reusable structures for appraising the world. The same structures that give an agent its own dispositions can also be built, and maintained, for other agents. Our vehicle for modeling a rudimentary form of the user's affective state is based on this idea:

  1. AR agents have a dispositional component that determines how they appraise the world. This frame-based structure allows them to interpret the situations that arise in ways that may give rise to emotion responses.
  2. Because agents have emotions about the fortunes of other agents, they must also maintain similar structures for those other agents. For example, if the experiencing agent's team wins, the agent will be happy for itself, but might gloat over an agent rooting for the other team. To effect this, the agent's own appraisal structure must yield an achieved-goal appraisal of the situation, while the agent's structure of the second agent's presumed goals must yield a blocked-goal appraisal of that same situation.
  3. Agents, which already keep these concerns-of-others structures for other agents, can maintain them for users as well (a minimal Python sketch of this shared appraisal machinery follows this list).
  4. A perfect structure of each individual user's goals, principles, and preferences (i.e., a perfect affective user model, setting aside the question of updating it correctly) would allow a great many correct inferences to be made about the user's emotional responses to the situations that arise while using the system. Since no such perfect model will be available, it is necessary for us to use multiple types of inference (combined into a fallback chain in the sketch at the end of this section):

    1. Ask the user. In our work with the AR, users appear to be motivated to express themselves to a computer agent that seems to have some understanding of how they feel.
    2. Use other known information to make assumptions about user types. Some users like to win, some like to have fun, some prefer to follow the rules, and some are impatient. These qualities tend to remain constant across tasks and domains.
    3. Use context information. For example, a user who has just repeatedly failed is likely to feel bad, whereas one who has been successful is likely to feel good.
    4. How would most users feel? The more user models we have on hand, the stronger our prototype of a typical user.
    5. If all else fails, what would the agent itself feel? Agents have affective lives too. One can always ask how the agent would feel and assume that the user would feel the same way (i.e., the agent filters the situation through its own appraisal mechanism and examines which emotions do, or do not, arise).
    6. Although not intended to be part of the preliminary tutoring-system work, AR agents also have access to a case-based reasoning component that allows them to reason abductively, back from actions assumed to be manifestations of an affective state. In this way agents collect case information and are trained to heuristically classify sets of tokens present in the environment as representing different emotion expressions.
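
To make items 1-3 concrete, here is a minimal Python sketch of the shared appraisal machinery: one frame-based dispositional structure, instantiated once for the agent itself and once for each other agent (or user) it models. The class names, the desirability encoding, and the emotion labels are illustrative assumptions, not the AR's actual implementation.

    # Hypothetical sketch of items 1-3: the same appraisal frame is kept
    # for the agent itself and for each other agent it models.
    from dataclasses import dataclass, field

    @dataclass
    class AppraisalFrame:
        """Dispositional structure: goals an agent is presumed to hold."""
        owner: str
        goals: dict = field(default_factory=dict)  # event -> desirability

        def appraise(self, event):
            """Map an event to a rudimentary appraisal category."""
            desirability = self.goals.get(event)
            if desirability is None:
                return None  # event is irrelevant to this frame
            return "goal-achieved" if desirability > 0 else "goal-blocked"

    @dataclass
    class Agent:
        name: str
        self_frame: AppraisalFrame
        # Concerns-of-others: the same structure, kept per other agent/user.
        other_frames: dict = field(default_factory=dict)

        def react(self, event):
            """Appraise an event through the agent's own frame and through
            the presumed frames of every other modeled agent."""
            own = self.self_frame.appraise(event)
            reactions = {self.name: own}
            for other, frame in self.other_frames.items():
                others = frame.appraise(event)
                # Fortunes-of-others: my achieved goal plus your blocked
                # goal yields gloating; the reverse yields resentment.
                if own == "goal-achieved" and others == "goal-blocked":
                    reactions[other] = "gloating-over"
                elif own == "goal-blocked" and others == "goal-achieved":
                    reactions[other] = "resentment-toward"
                else:
                    reactions[other] = others
            return reactions

    # The team example from item 2: one event, two frames, two appraisals.
    me = Agent(
        "fan_a",
        AppraisalFrame("fan_a", {"team-a-wins": +1}),
        {"fan_b": AppraisalFrame("fan_b", {"team-a-wins": -1})},
    )
    print(me.react("team-a-wins"))
    # {'fan_a': 'goal-achieved', 'fan_b': 'gloating-over'}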

Our hypothesis is that we can be at least minimally effective at filling in missing information when working from a structure that (a) specifies what is likely to be true of the antecedents of user emotion, and (b) gives us a high-level understanding of the different plausible affective user models, within a relatively comprehensive (albeit purely descriptive) model of human emotion. In other words, having a rigorously defined model of the user affect we are looking for helps us to map system events, and to direct queries to the user, effectively.
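
Read as an algorithm, the inference types in item 4 form a fallback chain: try the most specific source first (the user's own report) and fall back, in order, toward the most generic (the agent's own appraisal). The sketch below, which reuses the hypothetical Agent class above, is again an illustration under assumed names and encodings, not the AR's actual API; the case-based strategy (item 6) is omitted since it is not part of the preliminary tutoring-system work.

    # Hypothetical fallback chain over the inference strategies in item 4.
    def ask_user(situation):
        """Strategy 1: direct query; None if the user does not answer."""
        return situation.get("self_report")

    def from_user_type(situation):
        """Strategy 2: stable traits (likes to win, impatient, ...)."""
        if situation.get("user_type") == "competitive" and situation.get("lost"):
            return "distress"
        return None

    def from_context(situation):
        """Strategy 3: recent history of success or failure."""
        if situation.get("recent_failures", 0) >= 3:
            return "frustration"
        return None

    def from_prototype(situation):
        """Strategy 4: what most users felt here, if we have a prototype."""
        return situation.get("typical_reaction")

    def infer_user_emotion(situation, agent):
        """Try each strategy in order; strategy 5 is the last resort:
        filter the situation through the agent's own appraisal frame."""
        for strategy in (ask_user, from_user_type, from_context, from_prototype):
            guess = strategy(situation)
            if guess is not None:
                return guess
        return agent.self_frame.appraise(situation.get("event", ""))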


