
A sympathetic computer

Sidestepping the issue of whether a user's engagement with a computer system will be increased by dealing with an emotionally believable agent, we might first ask what is required to achieve believability. How much does it matter if the computer is transparent in its capabilities and motivations? What level of sophistication is required? How intelligent does the system have to be to achieve a minimal level of believability?

Consider the following: A computer system detects, or is told by the user (as discussed above), that the user is fearful (anxious, worried, scared -- each of these would be categorized as a different intensity of the emotion category fear). The system is able to respond by asking what might be considered reasonable questions: ``What is the name of the event over which you are fearful?'' ``How far is this event in the future?'' (According to the underlying theory, as embodied in the Affective Reasoner, fear is a prospect-based emotion resulting from the likely blocking of a future event.) ``What is the likelihood of this coming about?'' (Likelihood is an intensity variable.) It is then able to comment on relationships between the various intensity variables, and then to recall cases from a library in which either the automated agent itself (``I was afraid of getting turned off!'' [Frijda & Swagerman1987]), or some other agent (``I was previously told the following story by another user...''), was fearful with similar intensities. Lastly, after verifying that the retrieved case involves an emotion similar to that of the user, the computer agent responds by saying, ``I am only a stupid computer program. Nonetheless, in my own simple way, the following is true: I consider you my friend (see models of relationship below). I pity you for your fearful state. I hope to become aware in the future that you are relieved about the outcome of situation <input-situation>, and that your preservation goals with respect to this are not blocked after all.''
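The case-retrieval step in this scenario can be sketched in a few lines of code. The sketch below is purely illustrative: the names (`Case`, `retrieve_similar_case`, `sympathize`) and the single intensity variable (likelihood) are assumptions for the example, not the Affective Reasoner's actual representation, which involves many more intensity variables and emotion categories.

```python
from dataclasses import dataclass

@dataclass
class Case:
    agent: str          # who felt the emotion: "self" or another user
    emotion: str        # emotion category, e.g. "fear"
    likelihood: float   # intensity variable: likelihood of the prospective event
    situation: str      # the prospective event itself

# A toy case library; the first entry echoes the Frijda & Swagerman example.
CASE_LIBRARY = [
    Case("self", "fear", 0.7, "getting turned off"),
    Case("another user", "fear", 0.3, "missing a deadline"),
    Case("another user", "joy", 0.9, "passing an exam"),
]

def retrieve_similar_case(emotion, likelihood):
    """Return the case with the same emotion category and the closest
    intensity (likelihood), or None if no category match exists."""
    matches = [c for c in CASE_LIBRARY if c.emotion == emotion]
    if not matches:
        return None
    return min(matches, key=lambda c: abs(c.likelihood - likelihood))

def sympathize(user_situation, emotion, likelihood):
    """Compose the sympathetic response described in the text,
    grounded in a retrieved case of similar intensity."""
    case = retrieve_similar_case(emotion, likelihood)
    if case is None:
        return "I have no similar case to recall."
    return (f"I recall that {case.agent} once felt {emotion} "
            f"about {case.situation}. I am only a simple program, but "
            f"in my own simple way I pity you, and I hope your goals "
            f"regarding {user_situation} are not blocked after all.")
```

For example, a user who reports fear about a job interview with likelihood 0.6 would retrieve the ``getting turned off'' case, since its intensity is closest among the fear cases.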

Such a system is within the range of our current technology and representational capabilities (and see the reference to Scherer's work below). At what level of sophistication are users willing to accept that, in its own (remarkably) simple way, the computer does nonetheless feel pity for them? Given the tendency of users to anthropomorphize even the simplest video game characters, this would seem to be an important question to answer.






Clark Elliott
Thu May 2 01:02:59 CDT 1996