Emotion representation on the computer falls into at least four categories of pursuit: (1) the testing of design issues raised by theories of emotion [Colby1981, Toda1982, Frijda & Swagerman1987, Pfeifer & Nicholas1985, Sloman1987]; (2) the placing of neuro-motor control in an environmental or social context [Gray1993, Rolls1993]; (3) the use of a ``folk'' representation of emotion to control the behavior of automated agents in social situations, and to predict, or attempt to understand, the behavior of other agents in such situations [Elliott1993, Reilly1993, Bates, A. Bryan Loyall, & Reilly1992]; and (4) the use of emotions for process control [Birnbaum & Collins1984].
At the recent Workshop on Architectures Underlying Motivation
and Emotion it became clear
that the issue of which approach is most promising is far from settled. Even
commonly held beliefs about emotions, such as their role as a
reactive-planning mechanism, were questioned (e.g., Jeffrey Gray asked
why, if this were true, all the manifestations of fear would arise
many seconds after one slams on the brakes to avoid an auto
accident). Perhaps what we consider to be emotions arise only as a
byproduct of more essential mechanisms? Nonetheless, it seems that emotions
are ubiquitous in human society and an integral part of its social
fabric. Until it is shown to be wrong, we will continue to make the following
argument: even if we were to completely understand, and be able to recreate,
the neural-procedural architecture of the part of the brain where
emotions reside, we still would need to have an understanding of the
software that was to run on the machine. (To wit: consider that
deservingness is a somewhat universal concept, and its effects on our
emotions are also common fare (e.g., our pity for people suffering the
effects of war increases when those people are innocent children).
How is this represented in the biological hardware?) Additionally, unless we
can ``download'' human reasoning into a machine, it is necessary to specify
the rules underlying the tasks and responses we wish our agents to be
capable of within the various domains. In the short term at least, this
will require a scientifically based, analytical understanding of how
personality and emotion affect the interaction and motivation of human
agents in social situations. Note that we are not so pure in our arguments:
such bottom-up approaches will yield tremendous insight into design, and
will most certainly constrain the top-down approach. Likewise, our top-down
approach will yield insights into the most salient emotion
issues in developing smooth social interaction with computers. We see the
use of intelligent computer systems, especially those that interact with
human agents, as the most promising path of study.