The Role of Elegance in Emotion and Personality Reasoning for Believable Agents.

Clark Elliott
Institute for Applied Artificial Intelligence
School of Computer Science, Telecommunications, and Information Systems
DePaul University, 243 South Wabash Ave., Chicago, IL 60604

email: elliott AT cs depaul edu, Web: http://condor.depaul.edu/elliott


Formal Citation:

Clark Elliott (2002), ``The Role of Elegance in Emotion and Personality Reasoning for Believable Agents,'' in Robert Trappl, Paolo Petta & Sabine Payr (Eds.), Emotions in Humans and Artifacts. MIT Press, Cambridge, MA.

[This paper was presented at a conference on AI and Emotion, Vienna, August 1999]



-- Position Paper --

Introduction:

In this short position paper we will suggest that elegance and artistic cohesion are important factors in the emotion and personality models used to simulate anthropomorphic creatures on computers. For such models to be useful in either real-time or delayed interaction with users, they must create a social fabric whose material comes primarily from the user's imagination. It is the user's ability to continually generate the spark of inspiration that makes such interaction work. When the rhythm of such interaction locks in, a spell is cast in which inspired beings can exist where previously there were only amusing, but disconnected, techniques. Without elegance both in the underlying theories that drive the interaction and in the exposition of those theories, the spell will be broken.

Elegance leads to generalization and adaptability.

Emotion and personality models for the computer will tend to fall into two categories, albeit with some overlap. The first, with which we are less concerned here, comprises theories that seek to illuminate human systems, both psychological and physiological. The second comprises theories that forsake human systems and processes and instead attempt to describe, symbolically, details of human-like emotion and personality in ways that are, ultimately, useful for computer applications of many types. These two categories are analogous to what has been termed *Human AI* and *Alien AI*.

With the first category, primarily concerned with illuminating real human systems, granularity is not an issue: the systems exist in the physical, temporal world. The level at which translation into symbolic form takes place depends on the particular data being collected. With the second category, however, the granularity with which emotion and personality are described is of paramount importance. We will suggest that for symbolically descriptive systems (i.e., *Alien AI* emotion and personality systems), selecting a level of granularity that allows us to reasonably support a wide range of social scenarios may be critical. Doing this correctly provides a solid foundation for the elegance that gives rhythm and flow to the imaginative processes of users. Additionally, we can argue that this elegance, giving rise to a natural understanding and acceptance on the part of the user, will also allow our theories to work in a variety of different contexts: for generating personalities, for describing social processes, for understanding users, for tweaking tutoring systems, and so on.

A multi-context example.

For example, consider that in all cases, emotion and personality models for computer applications must make the translation from the analog processes of the real world to the rough-hewn set of discrete states used in computer simulations. For a socially rich system these discrete states will, currently, be high-level and complex, and necessarily cut from relatively large chunks of time.

If the theory is not elegant, one aspect of the model, within which the representational problems are less difficult, will be richer and constructed of finer-grained components, while another aspect of the model will be sparse. For the study of human-like systems this may be fine; for artistic balance, and for the creation of flow in interaction with users, it may be fatal.

We are not championing any particular believable-agent emotion and personality model in this exposition, but rather a *style* of model. Our point, independent of the particular theory, is that the strength of whichever (necessarily impoverished) representation is used must be exploited in such a way that it can act as glue holding together other ideas. Whatever the theory, it should have enough elegance that its internal consistency will allow it to work in a number of contexts.

Purely for the purpose of discussion we will use an example from the work of Ortony, Clore, and Collins (1988) (hereafter OCC), as embodied in the Affective Reasoner (hereafter AR) work (Elliott, 1992). As used in the AR, the OCC theory provides relatively high-level symbolism for driving personality and emotion. It is also broad in coverage and uses a similar level of granularity for its varied aspects. We will argue, through example, that it is because of these traits, and its inner consistency, that such a representation is useful in a number of different believable-agent contexts.
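
To fix ideas before the list that follows, here is a minimal sketch (in Python, purely for illustration) of what a high-level *hope* template at this grain might look like. The field names, types, and defaults are hypothetical and are not the AR's actual data structures.

    # Hypothetical sketch of a coarse-grained, OCC-style "hope" template.
    # Field names and defaults are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class HopeTemplate:
        emotion: str = "hope"
        valence: str = "positive"        # pleased about a prospective outcome
        prospect_based: bool = True      # concerns an as-yet-unrealized goal
        # Features to be checked against future states of the simulation;
        # a later match yields satisfaction, a mismatch disappointment.
        pending_features: dict = field(default_factory=dict)
        desirability: float = 0.0        # importance of the goal to the agent
        likelihood: float = 0.0          # believed likelihood the goal obtains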

In the following list, we briefly examine the OCC representation of *hope* (being pleased about the prospect of a favorable outcome with respect to one's goals), as used in the AR, in seven different contexts. Some subset of twenty-four such emotion templates, each with considerably more detail than presented here, has been used in these seven contexts as well as others. We suggest, as a position, that it is the balance and internal consistency of the OCC/AR representation, rather than its *rightness or wrongness*, that allow us to use it, fluidly, in so many different ways.

Personality context: Agent personality properties affected include (a) the ability to reason about the future. This can be as simple as holding a discrete representation of features to be checked against future states of the system; when the features map to an unfolding event, the outcomes can be confirmation or disconfirmation. (b) Character traits: a tendency toward experiencing hope (an optimistic personality), or the reverse; the *strength* of hopefulness; the *duration* of hopefulness without reinforcement; the *strength* of ultimate satisfaction or disappointment; and so on. (c) Agent mood changes, through dynamic tweaking of the degree of hopefulness ``experienced'' by the agent.
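
A sketch of how such dispositional parameters might be grouped follows; the names, ranges, and combination rule are hypothetical, offered only to show the grain of the representation.

    # Hypothetical dispositional parameters governing hopefulness.
    from dataclasses import dataclass

    @dataclass
    class HopeDisposition:
        optimism: float = 0.5      # tendency toward experiencing hope at all
        strength: float = 0.5      # typical intensity of hope when generated
        persistence: float = 0.5   # duration of hope without reinforcement
        reactivity: float = 0.5    # strength of eventual satisfaction/disappointment
        mood_bias: float = 0.0     # dynamic tweak reflecting current mood

    def hopefulness(d: HopeDisposition, base_intensity: float) -> float:
        """Degree of hope 'experienced', after disposition and mood are applied."""
        return max(0.0, base_intensity * d.strength * d.optimism + d.mood_bias)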

Emotion generation: When an agent ``observes'' elements suggesting that a stored goal may obtain in the future, then, depending on current constraints in both the external state of the world and the internal state of the agent, possibly generate an instance of hope and queue up a set of features to be compared against future events for the possible generation of satisfaction or disappointment. In observing agents, make the following changes of state: (a) possibly generate an instance of a fortunes-of-others emotion if the same set of observed world elements, also observed by a second agent, leads that second agent to believe that the first agent will experience an emotion, and the second agent is in a relationship with the first agent (e.g., is happy-for the first agent because of the first agent's hopefulness); or (b) possibly generate an emotion sequence wherein the expressed emotion of the first agent triggers an emotion in the second agent (e.g., resentment over the perceived hopefulness of the first agent).
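
A schematic, self-contained sketch of this generation step follows; all names, structures, and the relationship test are hypothetical simplifications of what the paragraph describes.

    # Hypothetical sketch of prospect-based emotion generation, and of
    # fortunes-of-others reactions in an observing agent.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        importance: float           # how much the agent wants the outcome
        believed_likelihood: float  # how likely the agent believes it to be
        cue: str                    # observed feature suggesting it may obtain

    @dataclass
    class Agent:
        name: str
        goals: list = field(default_factory=list)
        emotions: list = field(default_factory=list)
        pending: list = field(default_factory=list)  # awaiting (dis)confirmation
        friends: set = field(default_factory=set)

    def appraise_prospect(agent: Agent, observed: set) -> None:
        """Generate hope when observed features suggest a stored goal may obtain."""
        for goal in agent.goals:
            if goal.cue in observed:
                intensity = goal.importance * goal.believed_likelihood
                agent.emotions.append(("hope", goal.name, intensity))
                # Later: confirmation -> satisfaction, disconfirmation -> disappointment.
                agent.pending.append((goal.name, "satisfaction", "disappointment"))

    def observe_expressed_hope(observer: Agent, other: Agent) -> None:
        """Fortunes-of-others: happy-for a friend's hope; resentment otherwise."""
        kind = "happy-for" if other.name in observer.friends else "resentment"
        observer.emotions.append((kind, other.name))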

Emotion manipulation: If the system (e.g., a tutoring system) knows that a user has a certain goal, lead the user to believe that the goal is likely to obtain in the future. Then tweak subsequent actions to take advantage of the likelihood that the user is experiencing hope.
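
A deliberately tiny sketch of this manipulation, with hypothetical phrasing and action names:

    # Hypothetical sketch: foster hope about a known user goal, then exploit it.
    def encourage_hope(user_goal: str) -> str:
        # Present evidence that the goal is likely to obtain soon.
        return f"You are very close to {user_goal}; one more step should get you there."

    def next_tutoring_action(user_probably_hopeful: bool) -> str:
        # Take advantage of the likelihood that the user is experiencing hope.
        return "offer a harder exercise" if user_probably_hopeful else "offer an easier review"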

Emotion understanding context: If an agent is hopeful, look for uncertainty with respect to as-yet-unrealized goals. What are these goals? Should we ask? What personality traits does this suggest in the individual: a positive personality? Look for the features that determine the strength of the hope. How far in the future is the outcome? How strong is the hope? How important is the goal?
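
The same template can be read in reverse, as a checklist of questions; a hypothetical sketch:

    # Hypothetical sketch: the hope template read "backwards" as a set of
    # questions for understanding an agent (or user) who appears hopeful.
    def questions_about_hope(name: str) -> list:
        return [
            f"Which as-yet-unrealized goal is {name} hopeful about?",
            f"How important is that goal to {name}?",
            f"How likely does {name} believe the outcome to be?",
            f"How far in the future is the outcome expected?",
            f"Does {name} tend toward hopefulness in general (a positive personality)?",
        ]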

Story generation: If an agent has a goal, make it currently unrealized (or, if a preservation goal, realized but possibly threatened in the future), yet believed by the agent to be realizable in the near future, so that the agent experiences hope. Have other agents *presume* this desired goal of the agent. In a fixed plot, alter the story by having an agent *desire* a future event rather than desire that the same future event *not* occur. Alter the personality of the agent from one that tends to believe in the negative outcome of the fixed future event to one that tends to believe in the positive outcome.
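
A hypothetical sketch of the underlying move: hold the plot event fixed and flip the agent's desire for it, or the agent's belief about its outcome, to change which prospect-based emotion is generated.

    # Hypothetical sketch: same fixed future event, different stance toward it.
    def prospect_emotion(desires_event: bool, believes_likely: bool) -> str:
        """Coarse mapping from an agent's stance to a prospect-based emotion."""
        if believes_likely:
            return "hope" if desires_event else "fear"
        return "none"   # at this grain, no prospect-based emotion is generated

    # Flipping the agent's desire for the fixed event turns hope into fear.
    assert prospect_emotion(True, True) == "hope"
    assert prospect_emotion(False, True) == "fear"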

User models: Does this user tend to experience hope, or fear? Are they motivated more by hope or by fear? If they are hopeful, find out which goal they value. How strong is the desire for this outcome? How confident are they that the goal will obtain? Users are motivated to answer such questions when they feel that they are being understood.
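
A hypothetical sketch of the user-model fields the same template suggests, and of how a framing choice might depend on them:

    # Hypothetical user-model fields suggested by the hope template.
    from dataclasses import dataclass

    @dataclass
    class UserModel:
        motivated_by_hope: bool = True   # hope-framed vs. fear-framed motivation
        valued_goal: str = ""            # the goal the user is hopeful about
        goal_importance: float = 0.0     # strength of the desire for the outcome
        confidence: float = 0.0          # believed likelihood the goal will obtain

    def motivating_frame(m: UserModel, task: str) -> str:
        if m.motivated_by_hope:
            return f"Finishing {task} brings you closer to {m.valued_goal}."
        return f"Skipping {task} puts {m.valued_goal} at risk."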

Humor: Consider this scenario: we, as the observing audience, know that an event will not obtain (e.g., the protagonist wants the girl to go out with him, and does not realize that her mammoth boyfriend has just returned with their drinks). In fact, it may already have been disconfirmed. The agent, with whom we (a) may be in an adversarial relationship (possibly temporary), and (b) are able to identify (forming a cognitive unit), plays out the elements of classic comedy wherein we observe the agent's increasing hope, knowing that it can only, ultimately, end in disappointment. To be funny, the scenario has to be tweaked according to the represented features of our emotion and personality theory: if the cognitive bond is too strong, the scenario will not be funny, it will be sad; if the cognitive bond is too weak, we may not feel anything, because it is not *personal* enough. The degree of funniness may depend on our own emotional involvement in the emotions of the central characters.
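
A hypothetical sketch of the tuning described above, with illustrative thresholds only:

    # Hypothetical sketch: the same doomed-hope scenario reads as funny, sad,
    # or flat, depending on the strength of the cognitive bond with the agent.
    def audience_response(cognitive_bond: float, adversarial: bool) -> str:
        """cognitive_bond in [0, 1]; thresholds are illustrative, not empirical."""
        if cognitive_bond > 0.8:
            return "sad"          # we identify too strongly with the victim
        if cognitive_bond < 0.2:
            return "indifferent"  # not personal enough to feel anything
        return "funny" if adversarial else "mildly amusing"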


Supporting the user, rather than the empirical emotion theory.

The apex of interaction with believable agents is located firmly within the user. Because of this, it is not so much a matter of "rightness" and "wrongness," but rather of "goodness" (elegance, flow) and "badness" (lack of rhythm, unintuitiveness), that will determine the effectiveness of the underlying emotion and personality theory used. We use the following argument to illustrate that users are not really interested in computerized beings, but rather only in their projections of such beings.

Consider pain and pleasure, two very basic, elemental components of somatic emotion experience, and certainly states represented in some form in all believable agents. At what point have we *really* represented these in the computer? Choosing pain, the more dramatic of the two, for this exposition, let us work through a small hypothetical example. Suppose that we build a computer system, "Samantha" (Sam), that incorporates state-of-the-art danger-avoidance sensors, stimulus-response learning, and so on. Now we tell Sam-the-computer-system that we are going to kill her, and begin slowly to dismantle her hardware and software. In other words, we are "torturing" Sam to death. The question arises: how much do we, as users, care? If Sam-the-computer-system is our friend, maintains her state, has a long history with us, and provides interesting company, we may be sad that Sam is dying. Furthermore, if Sam is able to clearly express her distress (in the case of the AR agents this would be through the use of arresting music, facial expressions, and spoken, somewhat inflected, text), we might even experience a sympathetic somatic response of remarkable strength. However, this response on our part is internally fabricated on the basis of our social nature. WE provide all of the "juice" that makes this relationship work.

We are now told, "I have placed your cat in the oven and turned on the heat. To the extent that you increase Sam's pain, I will reduce the cat's pain." Most of us (depending, of course, on how we feel about the cat!) would immediately say, "Well, sorry Sam, it was nice knowing you," and do whatever we can to help the cat. It is always in the back of our minds that, underneath the illusion, somewhere in Sam there is a register labeled "pain" with a number in it. Sam does not *really* feel pain.

We can make the example even more striking by changing two of the details. First, when we dismantle Sam, she believes that this is permanent, but it is not: we have a full backup. Second, instead of threatening to cook the cat, we merely threaten to dip it, repeatedly, in cold water. Under such circumstances we can argue that even though the cat will not be harmed, few (one might hope none!) of us would be willing to make the cat suffer this discomfort, even though refusing would mean that Sam undergoes some terrific martyrdom: once we turned off the monitor and the speakers, we would not give Sam a second thought. Sam, independently of our experience of Sam, does not matter, but *the cat does*. The fact that we can fully restore Sam makes Sam's suffering of no account whatsoever.

It can be argued that this clearly delineates the internal/external distinction in our subjective experience with respect to believable computer agents. We do not really believe that, in the ``real'' world, such agents have either the experience or the subjective qualities that we are willing to attribute to them. Nonetheless, we are quite willing to interact with them in a way that acknowledges their social qualities, so long as they can skillfully, and elegantly, support the illusion WE create to sustain this interaction.


Elegance in communication:

Another important component of elegance in believable agents is capitalizing on the communicative abilities, and especially the communicative adaptability, of humans. If humans are placed together they will find a way to communicate, even in a group comprising widely different cultures and languages. Additionally, humans find ways to communicate with dogs, birds, and other animals. They are *robust* in their social craft. By contrast, even the very best software agents are exceedingly brittle in their communicative capabilities. Moreover, humans are strongly *motivated* to communicate with other beings: it is in our nature. If we hold the belief that someone can understand us, we will "talk" to them.

Thus, given that humans are both motivated to communicate and quite adaptable in the ways they are willing to effect this, whereas computer agents have almost no adaptability in this area, it follows that a system supporting believable agents should place the burden of communication on the human participants. However, here is where a touch of elegance is essential: to capitalize on this particular ``hook,'' the computer system must be designed with the theme of not mimicking human communication, but rather *supporting* it. It may not be necessary, or possibly even desirable, to have such components as fully realized, real-time morphing human faces; fully inflected, phoneme-tweaked, phrase-generated speech; or a complete understanding of the human world. If our goal is to *support* human skill at communication rather than fully realize it independently, then our focus might rather be on creating new expressive attributes and a repertory that are both consistent with, and maximally suited to, the true nature of the believable agent.

For example, suppose an agent can generate only simple, albeit not always correct, speech patterns, such as ``Andrew want end conversation. I not want end conversation. Andrew conflict I. Andrew fight I or Andrew give up?'' We can argue that although the syntax is poor, a human can easily learn to understand such expressions. Despite its impoverished delivery, for a believable agent the content conveyed would be of interest. An elegant design might then acknowledge that the particular syntax, or lack thereof, is merely a little noise in the communication channel. Accordingly, it would specify a graphical interface that does not, from an artistic perspective, look unnatural speaking broken English. After all, if our dog could generate such broken English he would soon be making millions for us on the Tonight Show. No one would care that his pronouns are not correct: he is just a dog!
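
A minimal sketch of such deliberately simple ``pidgin'' generation follows; the function name and phrasing are hypothetical. The point is only that the agent conveys the conflict content and leaves the interpretive work to the robust human participant.

    # Hypothetical sketch: generate deliberately simple "pidgin" expressions of
    # a conflict, and leave interpretation to the adaptable human participant.
    def pidgin_conflict(self_name: str, other: str, contested_act: str) -> str:
        return (f"{other} want {contested_act}. "
                f"{self_name} not want {contested_act}. "
                f"{other} conflict {self_name}. "
                f"{other} fight {self_name} or {other} give up?")

    print(pidgin_conflict("I", "Andrew", "end conversation"))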

Closing:

We have taken the following position. Emotion and personality models for believable agents should have an internal consistency and elegance that comes through in many different contexts. This elegance allows the user to create, through rhythmic interaction, the illusion of inspired life in a number of different contexts. Above all, the believable-agents researcher should recognize that the biggest win is in supporting the user's ability to inspire computer agents with simulated life, rather than in attempting to create that simulated life unilaterally.