
Affective Reasoning is effective reasoning

In our own work in the Affective Reasoning (AR) paradigm we have used a descriptive model of emotion based on the seminal work of Ortony et al. [Ortony, Clore, & Collins1988]. The AR has been agent-centric since its inception in 1992. The model is manifested in, albeit entirely independent of, network-efficient multimedia agents that have highly expressive schematic faces, speak with somewhat emotionally-inflected voices, listen to users through speech recognition, and use a rich set of musical selections to help reflect their current states. The agents have twenty-six emotion types, along with a rich set of variables for controlling intensity. A key element of the AR agents, which run on a PC platform, is that they respond in real time to input, or lack thereof.
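
To make this concrete, the following minimal sketch (in Python, and not drawn from the AR source code) suggests how such an emotion state might be represented; the emotion-type names shown, the 0-1 intensity scale, and the field names are illustrative assumptions, and the AR agents' actual set of intensity variables is considerably richer.

    # Illustrative sketch only: the emotion-type names, intensity scale, and
    # structure below are assumptions, not the AR agents' actual code.
    from dataclasses import dataclass, field

    # A few example types; the AR agents distinguish twenty-six in all.
    EMOTION_TYPES = {"joy", "distress", "hope", "fear", "pride", "shame",
                     "gratitude", "anger"}

    @dataclass
    class EmotionInstance:
        emotion_type: str   # one of EMOTION_TYPES
        intensity: float    # 0.0 (barely felt) .. 1.0 (overwhelming)
        cause: str          # the appraised event, act, or object

    @dataclass
    class AgentAffectState:
        emotions: list = field(default_factory=list)

        def feel(self, emotion_type, intensity, cause):
            assert emotion_type in EMOTION_TYPES
            self.emotions.append(EmotionInstance(emotion_type, intensity, cause))

        def dominant(self):
            # The strongest current emotion could drive the choice of facial
            # schematic, vocal inflection, and musical selection.
            return max(self.emotions, key=lambda e: e.intensity, default=None)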

Recent work in the project has several branches. In one study it was shown that computer-realized agents can do as well as a human actor at using non-verbal emotion cues to convey meaningful background information about complex situations [Elliott1997b]. In another branch, the AR agents were able to generate and present a large number of stories based on a single external plot sequence [Elliott et al. 1998]. The stories varied according to the dispositions and attendant appraisals of the agent-actors presenting the story, so that the themes and characters in each differed significantly. In short, what happened stayed much the same, but how the characters felt about it, and why they felt that way, varied from story to story, under control of the program.
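
As a rough illustration of the underlying idea (and emphatically not the AR story generator itself), the sketch below shows one external plot event being appraised differently by two agent-actors with different dispositions; the goal structures and appraisal rule are assumptions made for the example.

    # Illustrative sketch only: the goal structures and appraisal rule are
    # assumptions made for this example, not the AR story generator.
    def appraise(goals, event):
        """Map one external plot event to an emotion, relative to an
        agent-actor's own desires and fears."""
        if event in goals["desires"]:
            return "joy", "because that is what I wanted"
        if event in goals["fears"]:
            return "distress", "because that is what I dreaded"
        return "indifference", "because it did not matter to me"

    plot = ["the race was lost"]                      # one plot sequence
    cast = {
        "hero":  {"desires": {"the race was won"},  "fears": {"the race was lost"}},
        "rival": {"desires": {"the race was lost"}, "fears": set()},
    }

    for event in plot:                                # same events...
        for name, goals in cast.items():
            emotion, why = appraise(goals, event)
            print(f"{name} felt {emotion} {why}")     # ...different stories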

Work is also under way to incorporate the AR's model of emotion and personality into agent-based tutoring systems, with the idea of making the automated tutoring personalities more engaging, more motivating, and more expressive. Preliminary designs have been outlined for integrating a subset of the AR's models into the Steve project at USC-ISI and the COSMO project at Intellimedia-NCSU [Elliott, Rickel, & Lester1997, Elliott1997a] (cf. the work of Rickel and Lester below).

One of the lines of research in the AR project which has yet to be developed to any depth, but which is promising enough to mention, is the idea of affective user modeling. In this paradigm the hard problems of general user modeling are left alone, with the focus being placed not on what the user knows, but on how they feel [Elliott, Rickel, & Lester1997]. Since AR agents, and other emotionally-intelligent systems, necessarily keep at least implicit internal models of how others see the world (for how else can one, e.g., feel sorry for someone if not by knowing that they are sad about something?), it is not a big step to then model a user's general emotion state. In the AR agents this internal model is explicit ([Elliott & Ortony1992]), and it is only a minor theoretical leap to use this as a basis for tutoring and other goals. Bolstering this approach is something we have observed ad hoc in the relationship between users and AR agents, but which is also commonsensical: people are socially motivated to express their emotion states (e.g., I am frustrated, I am angry, I admire the way you...) even to a computer agent, as long as the agent has some way to respond appropriately. This research is in contrast to, but can work in concert with, those developing real-time sensing mechanisms for detecting human emotion, such as those in the emotion sensing subgroup of the Affective Computing project at MIT (cf. http://vismod.www.media.mit.edu/vismod/demos/affect/AC_research/sensing.html).
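
The sketch below, again only an assumption-laden illustration rather than the AR implementation, suggests the flavor of such an affective user model: explicit statements of emotion are recorded as evidence about the user's state, and the agent uses that record to respond appropriately. The cue table, update rule, and response text are all hypothetical.

    # Illustrative sketch only: the cue table, update rule, and response text
    # are hypothetical, not the AR agents' user-modeling machinery.
    USER_EMOTION_CUES = {
        "frustrated": "frustration",
        "angry": "anger",
        "admire": "admiration",
    }

    def update_user_model(user_model, utterance):
        """Record explicit statements of emotion ('I am frustrated', ...) as
        evidence about the user's current affective state."""
        for cue, emotion in USER_EMOTION_CUES.items():
            if cue in utterance.lower():
                user_model[emotion] = user_model.get(emotion, 0) + 1
        return user_model

    def respond(user_model):
        """Acknowledge the most-evidenced user emotion, so that expressing a
        feeling to the agent earns an appropriate response."""
        if not user_model:
            return "Tell me how you are doing."
        emotion = max(user_model, key=user_model.get)
        return f"I believe you are feeling {emotion}."

    model = update_user_model({}, "I am frustrated with this exercise")
    print(respond(model))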

Just as Fogg and Nass [Fogg & Nass1997] have shown that users prefer flattery (at least in the short term) even when they know it for what it is, it might not be so far-fetched to find that users can accept a computer agent that says, e.g., ``I am just a simple computer program. Still, I consider you to be my friend. I believe that you are unhappy about [fill in the blank]. In my own small way, I am sorry that it happened.''


