Story-morphing in the Affective Reasoning paradigm: Generating stories semi-automatically for use with ``emotionally intelligent'' multimedia agents.

Clark Elliott
Jacek Brzezinski
Sanjay Sheth
Robert Salvatoriello
Institute for Applied Artificial Intelligence
School of Computer Science, Telecommunications, and Information Systems
DePaul University, 243 South Wabash Ave., Chicago, IL 60604

email: elliott@cs.depaul.edu, Web: http://www.depaul.edu/~elliott


Formal Citation

Clark Elliott, Jacek Brzezinski, Sanjay Sheth, and Robert Salvatoriello (1998), "Story-morphing in the Affective Reasoning Paradigm: Generating Stories Semi-automatically for Use with 'Emotionally Intelligent' Multimedia Agents," Proceedings of the Second International Conference on Autonomous Agents, Minneapolis, MN, May 1998, pages 188-191.

Abstract:

In our work on the Affective Reasoning project we have established a strong relationship between reasoning about human emotion and reasoning about the stories people tell. In a recent set of exercises we demonstrated that (a) subjects do project attributions of human emotional interaction into multimedia multi-agent presentations, (b) we can use Affective Reasoner mechanisms to tag basic plot steps in stories with varied, but plausible, interpretations on behalf of each participating agent, (c) we can combine the various interpretations of these plot steps into narratives with relatively little concern for constraints, and (d) when shown presentations generated by the computer using such tagged plot steps, subjects form explanations for cohesive, distinct stories and rate those stories as highly plausible. Furthermore, for certain types of scenarios, a single base story and a readily created set of interpretation tags allow the generation of many hundreds of distinct, plausible, and cohesive story-morphs, where the external plot steps remain similar, but the thematic material, and the internal lives of the characters, vary greatly. The exercises provide evidence that this mechanism may be applicable to a broad range of base stories.





1 Introduction

In earlier work we introduced the idea that what makes many stories interesting is not what happens, but how characters in the story feel about what happens [EM95]. We showed how the thematic core of a story could be greatly transformed by changing the appraisals that characters make of a static sequence of external events [Ell92b]. In this current set of exercises we formalized this process. First we studied narratives based on real life, and analyzed them using the Affective Reasoner paradigm. Next we created large numbers of similar, albeit different, stories (hereafter referred to as story-morphs) which shared the same external plot steps, but which contained widely varied emotion responses, on the parts of the characters, to these external events. Lastly, we used Affective Reasoner display characters to present realizations of randomly chosen story-morphs for subjects, and did a preliminary analysis on the results.

Because of the large number of plausible story-morphs that can, in theory, be generated automatically from a single narrative, this mechanism has promise for use with interactive fiction, and interactive characters.

The background context in which the Affective Reasoner character agents operate is covered elsewhere and will not be discussed here (but cf. [Ell92a] - overall architecture of the AR, [Ell97b] - multimedia presentation agents for human emotion, [Ell93] - supporting social simulation with the AR, [Ell94] - research problems in the area of emotionally intelligent agents, [ES93] - emotion intensity in the AR). For works on related topics see [Kod97,Rei96,RB92,LB95,Ree91,NS94,NT94,Sim67]. The twenty-six emotion types used in this work are shown in Figure 1. Each of the emotion types has some subset of about twenty intensity variables associated with it, and agents are also subject to moods.



Figure 1: Emotion categories (Clark Elliott, 1998; after Ortony, et al., 1988). Each group is given with its specification, followed by its category labels and emotion types.

Well-Being (appraisal of a situation as an event):
  joy: pleased about an event
  distress: displeased about an event

Fortunes-of-Others (presumed value of a situation as an event affecting another):
  happy-for: pleased about an event desirable for another
  gloating: pleased about an event undesirable for another
  resentment: displeased about an event desirable for another
  jealousy*: resentment over a desired mutually exclusive goal
  envy*: resentment over a desired non-exclusive goal
  sorry-for: displeased about an event undesirable for another

Prospect-based (appraisal of a situation as a prospective event):
  hope: pleased about a prospective desirable event
  fear: displeased about a prospective undesirable event

Confirmation (appraisal of a situation as confirming or disconfirming an expectation):
  satisfaction: pleased about a confirmed desirable event
  relief: pleased about a disconfirmed undesirable event
  fears-confirmed: displeased about a confirmed undesirable event
  disappointment: displeased about a disconfirmed desirable event

Attribution (appraisal of a situation as an accountable act of some agent):
  pride: approving of one's own act
  admiration: approving of another's act
  shame: disapproving of one's own act
  reproach: disapproving of another's act

Attraction (appraisal of a situation as containing an attractive or unattractive object):
  liking: finding an object appealing
  disliking: finding an object unappealing

Well-Being/Attribution (compound emotions):
  gratitude: admiration + joy
  anger: reproach + distress
  gratification: pride + joy
  remorse: shame + distress

Attraction/Attribution (compound emotion extensions):
  love: admiration + liking
  hate: reproach + disliking

*Non-symmetric additions necessary for some stories.
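The category structure in Figure 1 lends itself to a compact data representation. The following sketch is our illustration only, written in Python rather than the AR's own implementation language, with names of our choosing; it encodes the twenty-six emotion types by group and records how the compound emotions are composed from simpler appraisals:

```python
# Sketch: the twenty-six emotion types of Figure 1, keyed by group.
# Hypothetical representation for illustration; the actual AR data
# structures are not shown in this paper.
EMOTION_TYPES = {
    "Well-Being": ["joy", "distress"],
    "Fortunes-of-Others": ["happy-for", "gloating", "resentment",
                           "jealousy", "envy", "sorry-for"],
    "Prospect-based": ["hope", "fear"],
    "Confirmation": ["satisfaction", "relief",
                     "fears-confirmed", "disappointment"],
    "Attribution": ["pride", "admiration", "shame", "reproach"],
    "Attraction": ["liking", "disliking"],
    "Well-Being/Attribution": ["gratitude", "anger",
                               "gratification", "remorse"],
    "Attraction/Attribution": ["love", "hate"],
}

# Compound emotions are composed of two simpler appraisals.
COMPOUNDS = {
    "gratitude": ("admiration", "joy"),
    "anger": ("reproach", "distress"),
    "gratification": ("pride", "joy"),
    "remorse": ("shame", "distress"),
    "love": ("admiration", "liking"),
    "hate": ("reproach", "disliking"),
}

ALL_TYPES = [e for group in EMOTION_TYPES.values() for e in group]
assert len(ALL_TYPES) == 26  # matches the count given in the text
```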



2 Story-morphing

The automatic generation of narrative plot is extremely difficult. As illustrated at the 1995 AAAI Spring Symposium on Interactive Story Systems: Plot and Character, the managing of constraints is burdensome (e.g., [RM95,Ort95,Mur95,Kni95]). Guaranteeing a consistent, plausible plot can easily degenerate into the classic AI search problem. Our approach to story generation is entirely different: the external plot steps remain constant, and the overall narrative sequence in the external world is static, but the meaning of the unfolding scenarios for the participating characters varies greatly. In addition, narrative explanations, such as might be provided by a narrator in a play, also vary greatly with the different interpretations.

In our current work we are primarily interested in presenting interesting, novel, scenarios that at least maintain believability. That there are other aspects that might be useful down the road, such as selecting particular story-morphs to promote a desired point of view, and integrating this work with other story-generation technologies, goes without saying. We do not discuss such applications in this paper.

In the following sections we show how one uses the emotion theory to analyze a story, and tag it for story-morphing. The focus of the paper is on this analysis, and showing that it can be used to produce a computationally feasible way of automatically generating a large number of different stories in real time -- stories which are well understood with respect to the emotions that arise in them.

2.1 Leverage

Our approach uses several points of leverage. This leverage is based on the ideas that human emotion can be faithfully represented at a descriptive level suitable for story representation; that the communication of human emotion is often indirect, and inaccurate; and that humans have a natural tendency to project plausible interpretations onto social interactions and observed emotion expressions. Following are the most salient points of this leverage on which we hope to capitalize.

First, the underlying descriptive emotion theory that we have used in the Affective Reasoning project, based largely on the Ortony, Clore, and Collins model [OCC88], has been exercised, and extended, over the years chiefly on the strength of its usefulness in describing human emotion narratives. That is, where the theory has not been sufficient to describe some scenario it has had to be extended; where it has had a component that has not been necessary in describing scenarios it has been reduced. In this ongoing process we have described portions of about six hundred different scenarios. Through this transcription process we have developed a somewhat canonical form for the representation of the human emotion component of stories.

Second, in previous work we have given many examples illustrating that humans can have multiple, even conflicting, emotions stemming from the same emotion-eliciting events (e.g., as in [Ell92a]). For example, suppose that one unexpectedly comes by desperately needed money, but it is inherited because a much admired, and loved, favorite uncle dies. News of the uncle's death might simultaneously lead to distress over the loss of a loved one, joy over newly-inherited wealth, relief over escaping from creditors, anger that death came to the family, guilt over feeling happy about the benefits derived from the uncle's death, remorse over not having spent more time with the uncle recently, etc. What this means for story-morphing is that, with respect to interpretations of emotion eliciting situations, we are not so burdened with the ``foolish consistency'' that is the hobgoblin of plot structures.
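A minimal sketch may help make the multiple-appraisal point concrete. Everything here (the event representation, the concern functions, the names) is a hypothetical illustration, not AR internals: a single event is matched against several concerns at once, and each matching concern contributes its own emotion.

```python
# Sketch: one event, many simultaneous appraisals.  Concern and emotion
# names are illustrative; the AR appraisal machinery is not shown here.
def appraise(event, concerns):
    """Return one emotion per concern the event touches."""
    emotions = []
    for concern in concerns:
        emotion = concern(event)
        if emotion is not None:
            emotions.append(emotion)
    return emotions

# Hypothetical concerns for the uncle's-death example in the text.
concerns = [
    lambda e: "distress" if "loved one dies" in e else None,
    lambda e: "joy" if "inherit money" in e else None,
    lambda e: "relief" if "escape creditors" in e else None,
    lambda e: "remorse" if "neglected uncle" in e else None,
]

event = {"loved one dies", "inherit money", "escape creditors",
         "neglected uncle"}
# appraise(event, concerns) yields four co-occurring emotions:
# ["distress", "joy", "relief", "remorse"]
```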

Third, humans seem inclined to project not only anthropomorphic qualities to most everything, but also social qualities to anything that has agent-like attributes and interacts in ways perceivable as supporting a community. Given a set of agents engaged in dialog, or characters interacting in a script, people naturally attribute human-like motivations to the agents/characters for their actions. We humorously refer to scripted presentations of our personality-rich agents as a multimedia Rorschach test - but there is a strong element of truth in this line of thinking.

Fourth, facial expressions, music, and inflections in spoken text seem to trigger explanatory mechanisms: there appears to be some sort of cognitive dissonance that is quelled only when observers abductively attribute narrative qualities to sequenced expressions, and other social cues, on which they can hang the complex fabric of the interaction.

2.2 An example story description

From the hundreds of different stories collected at DePaul we selected one ad hoc for use in these exercises. The story has no special qualities. It is a casually written narrative illustrating a morning in the life of two boys. Following is a sample portion of the original text (and note that Elliot in the story has no relation to C. Elliott the researcher):

Elliot and I had probably circled the block at least five times that day, making the point where we pass the tavern on the corner the checkered flag. (We did not keep count of how many races we won, only that we won.) Elliot was in the lead until I bumped his bike just before a hump. He didn't lose control long and soon we were side by side kicking at each other. Then my chain slipped and Elliot had taken the lead. He rode down to the corner before he had noticed that I had not caught up with him. He rode back toward me smiling, but he did not stop to help. Elliot passed me by and rode around the block before he stopped along the side of my bike. He still had that smile, and I had lost the humor in my situation. I told him to put my chain back on and, instead, he made some remark about his bike being better than mine. At this point I was ready to beat him up. [...]

In analyzing the original, full, story we extracted the following elements relative to the Ortony, et al. theory, and the Affective Reasoning paradigm that extends it. This is only a representative sample of the sorts of detail reasoning about the emotion fabric that can be performed on such narratives, and was in fact performed on this particular narrative. Italicized words in the analyses represent concepts with extensive roots in the theory:

This initial analysis gives us the plot steps (external events), which comprise the base story sequence used in the next section. These external events lead to appraisals on the part of the agents. By capturing these appraisals in story-morph tags (below) we have what might be thought of as one of many story-morphs (albeit the original from which others morph). The rich, structured set of emotion-relevant appraisals, intensity factors, and so forth, shows the wealth of factors that can be varied to produce a large number of variations of the original story.

2.3 A base story sequence

There was nothing in the original story that would not have been suitable for use with story-morphing. However, because of the extensive coding intended for the sample stories, we altered and simplified the story as follows, with plot sequence steps illustrated primarily by spoken dialog:




Elliot and Rick raced bikes around their block. Rick won as usual.

Rick: ``Hey. I won the race again.''

Rick bullies Elliot as usual.

Rick: ``Hey Elliot. I can make you do anything I want.''

Elliot: ``O.K. Let's race again.''

They raced again but this time Rick's chain came off and he lost.

Rick: ``My chain came off!''

Elliot: ``That is too bad!''

Rick: ``Come back here and help me put it back on before I pound you.''

Elliot: ``I won't do it. You cannot make me. Fix it yourself.''

Rick: ``Darn! I am going to get you.''

Elliot: ``I don't care. I'll get you too.''

Rick: ``I am going home.''

Rick goes home.

Rick thinks: ``Imagine that. He fought back for once.''

2.4 The story-morph tags

In this section we present a portion of the set of story-morph tags we used in the Bike Race example. Tags with the same letter codes (e.g., A1, A2, ...) represent exclusive disjunctive choices. Tag sets, each with its own identifying letter, supply optional conjuncts, and may appear, or not appear, in the full set of interpretations chosen for the narrative.

2.4.1 The Bike Race example

Global intensity variables contributing to the emotion states of the characters are: physical state of Elliot, and physical state of Rick. (Values: negative - hungry, tired, hot; positive - well-being enhanced by endorphins from physical exercise.)


Tag A1: Attraction to racing as an object. Rick likes racing. Intensities: attraction (weak / strong).

Tag A2: Rick is bored with bike racing and so dislikes it (even though he still does it for other reasons). Intensities: aversion (weak / strong).


Tag B1, B2: Elliot likes racing / is bored with racing, as above for A1, A2.




Tags A1-B2 placed in sequence before the plot step, ``Hey. I won the race again.''




Tag C1: Elliot is ashamed of losing. Intensities: Importance of principle that he should win (weak / strong). Effort (low / high) - depending on assumed history of attempting to win. Degree that his ``should win'' principle is violated by a single loss (low / medium / high).

Tag C2: Elliot is angry-at himself for losing. (He considers having a lack of racing talent to be a blameworthy act, causing his goal of winning to be blocked.)

Tag C3: Elliot is distressed about losing. Intensities: Importance of goal of not losing (low, medium, high).

Tag C4: Elliot has had fears-confirmed about losing. Intensities: Importance of goal of not losing, in concert with prior expectation of losing, depending on the implied negative consequences in this social fabric (low / medium / high); certainty (low / high) depending on degree of confirmation that a single loss instance supports.

[Tag detail is abbreviated hereafter]


Tag D1: Elliot is happy-for Rick for winning.

Tag D2: Elliot is angry-at Rick for winning.

Tag D3: Elliot admires Rick for winning.

Tag D4: Elliot is jealous of Rick for winning.




Tag H1: Rick is proud of winning.

Tag H2: Rick is happy about winning.


Tag I1: Rick feels sorry for Elliot.

Tag I2: Rick gloats over Elliot.


Tags C1-I2 placed in story sequence before plot step: ``Hey Elliot. I can make you do anything I want.''


A member of each of the tag sets (one from each lettered group) can optionally be selected for inclusion in the narrative structure. Even for the very early part of the Bike Race story, then, we can generate story-morphs embodying many different combinations of these interpretations.
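The combinatorics of tag selection can be sketched as follows. The representation is our own illustration (the AR implementation is not shown in this paper): each lettered group contributes at most one tag, so the number of candidate morphs grows multiplicatively with the number of groups.

```python
import random

# Sketch of tag selection for story-morphing.  Each lettered group is an
# exclusive disjunction; the whole group is optional.  Tag strings are
# abbreviations of the tags in the text, not AR data structures.
TAG_GROUPS = {
    "A": ["A1: Rick likes racing", "A2: Rick is bored with racing"],
    "B": ["B1: Elliot likes racing", "B2: Elliot is bored with racing"],
    "C": ["C1: Elliot ashamed of losing", "C2: Elliot angry at himself",
          "C3: Elliot distressed about losing", "C4: Elliot fears-confirmed"],
    "D": ["D1: Elliot happy-for Rick", "D2: Elliot angry-at Rick",
          "D3: Elliot admires Rick", "D4: Elliot jealous of Rick"],
    "H": ["H1: Rick proud of winning", "H2: Rick happy about winning"],
    "I": ["I1: Rick sorry-for Elliot", "I2: Rick gloats over Elliot"],
}

def generate_morph(groups, rng=random):
    """Pick at most one tag per group: None (omit the group) or one disjunct."""
    morph = []
    for letter, tags in sorted(groups.items()):
        choice = rng.choice([None] + tags)
        if choice is not None:
            morph.append(choice)
    return morph

# Upper bound on distinct tag combinations: product of (len(group) + 1),
# since each group may also be omitted.  Here: 3*3*5*5*3*3 = 2025.
count = 1
for tags in TAG_GROUPS.values():
    count *= len(tags) + 1
```

Even before constraints prune implausible combinations, six small groups covering only the opening of the story yield thousands of candidate interpretations, which is consistent with the "many hundreds of distinct story-morphs" figure cited earlier.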

2.4.2 Constraints

In some cases constraints have to be tightened. For example, a (somewhat perverse) Tag I4 might be added wherein Rick resents winning because the real competition is in ``who suffers the most and gets the most sympathy for it.'' This would place plausibility constraints on future reactions when a character actually succeeds in getting sympathy, because this goal has now become part of the plot structure.

In other cases constraints can be loosened. For example, it is possible for Elliot to be both admiring of Rick (D3) and jealous of him (D4) at the same time.

These problems are readily addressed for most cases (and it should be obvious that there is no shortage of story-morphs possible from a single story base) using simple set-membership techniques wherein constraints such as ``must not occur with...'' and ``must only occur in the presence of...'' can be made explicit.

The more that the base stories include components requiring the integration of agents' internal appraisals with the external plot, the more constraints there are that will have to be enforced. In practice there are many base stories suitable for story-morphing that do not require more than a straightforward constraint mechanism.
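A straightforward constraint mechanism of the kind just described might be sketched as follows. The two constraint forms (``must not occur with'' and ``must only occur in the presence of'') follow the text, but the constraint tables themselves are hypothetical illustrations; the D4-requires-C3 entry, in particular, is invented for the example.

```python
# Sketch of a simple set-membership constraint check over chosen tags.
# Constraint tables are hypothetical illustrations, not from the AR.
EXCLUDES = {           # tag -> tags it must not occur with
    "I4": {"I1", "I2"},
}
REQUIRES = {           # tag -> tags that must also be present
    "D4": {"C3"},      # invented example: jealousy only alongside distress
}

def is_consistent(morph):
    """True if a chosen set of tags violates no constraint."""
    chosen = set(morph)
    for tag in chosen:
        # "must not occur with": any overlap with the excluded set fails.
        if chosen & EXCLUDES.get(tag, set()):
            return False
        # "must only occur in the presence of": all required tags present.
        required = REQUIRES.get(tag, set())
        if required and not required <= chosen:
            return False
    return True
```

A generator would simply discard (or re-draw) any candidate tag set for which `is_consistent` returns False, which is why, as noted above, most base stories need nothing more elaborate.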

In addition, it should be noted that there is nothing that would keep story-morphing from being useful with other narrative-generation techniques. Quite the contrary is true: once the tags are created, story-morphing can run in real time; it can generate explanations; it is highly suitable for use with natural-language generation systems (that is, the internal structure of the narratives is rich, and well understood, and the presentation of the internal structure is somewhat formulaic); it can be used piecemeal to fill in portions of bigger stories; and it is responsive to characterizations of personality, including those made during real-time interaction with users (e.g., an agent can be biased toward negative emotions, and story-morphs with those characteristics for that agent biased for selection).

3 The questions

Two story-morphs were presented by three Affective Reasoner agents -- a Rick agent, an Elliot agent, and a narrator agent. For each of these, subjects wrote long-hand responses describing what they felt might be true of each of the two characters in the story, gave a general description of the scenarios, and rated the quality of the story-morph as a narrative. We analyzed these free-form answers using three coders to assess the data. Some of the coded data is presented here.

The current round of observations should be categorized as formalized exercises. Although we are reporting results, the interpretation of these results is necessarily limited. For example, on the one hand, because the coding process was rather extensive, we were only able to select two story-morphs from the many possible versions created from the Bike Race story. In fact, we randomly selected ten and then ad hoc chose the two least likely to seem plausible as the basis for our presentation. Nonetheless, since we only fully looked at two of the many story-morphs possible, we cannot reliably claim that these results will hold for all of the Bike Race morphs, and even less can we claim that such results will hold for other story-morph sets.

On the other hand, partially based on the work done here, and also on less formal trials with a number of other sets of story-morphs, we are comfortable saying that preliminary indications are that this mechanism is widely applicable, and that it will likely show similar results for other story bases treated in the same manner.

Additionally, we considered using a control story-morph that was explicitly not consistent with a theoretically-based set of emotions. Although this would have been useful, we did not devote resources to this for the following reasons. First, our less formal experience has suggested that such presentations do in fact appear somewhat nonsensical. Second, and most importantly, if such non-theory-based presentations were still strongly perceived as stories, this would not diminish the usefulness of story-morphing. On the contrary it would suggest an even greater likelihood of success in many applications.

In informal pilot exercises we found that the overwhelming majority of presentations based on stories generated using the story-morphing process were seen as plausible, whereas those presentations generated without theory-based morphing were seen as implausible, and often nonsensical. Over time, certain minimal constraints were developed in choosing both the base stories and the types of interpretations on the parts of the characters that were used as the basis for story morphing.

The data presented in this paper is based on coding the observations of about 70 subjects. The subjects all viewed presentations of two story-morphs, chosen ad hoc (although with a strong random component) from a collection of more than 300 such variations of a single base plot sequence. The spoken dialog remained constant for both story-morphs. Emotions were automatically scripted for the computer agents, based on appraisals of the action sequence according to different sets of selected goals, principles, preferences, and emotion intensity-relevant values (e.g., the tags and their variations). These emotions were then expressed during a presentation of the script by the multimedia Affective Reasoner agents, using minimally-inflected spoken dialog, music, and facial expressions. Subjects viewed the story-morph presentations and wrote long-hand descriptions of what they felt had taken place.

The two story-morphs differed in the appraisals the agents made of the otherwise constant events. For example, at the beginning of the first story-morph Elliot initially did not care enough about riding his bike with Rick to cross over the threshold into an emotion state. His facial expression was neutral, and he used no musical selections to indicate his emotion state. Rick was quite happy to be riding bikes, had a smiling face, and played a happy folk song tune in the background. After they raced around the block Elliot was mildly embarrassed about losing, and showed mild embarrassment (shame) on his face. Rick was very proud of his victory. He had a smug expression on his face, played a selection from Handel's Water Music, and his voice inflection was slightly faster and higher than normal. When Rick reflected on having won the race again, he gloated over his rival Elliot, had a sneer on his face, and played a selection from Stravinsky's Symphony in Three Movements.

By contrast, at the beginning of the second story-morph, while Rick continued to be initially quite happy to be riding bikes, Elliot was also mildly pleased to be riding bikes, and had a slightly pleased look on his face. After riding bikes around the block, neither felt strongly about the race itself, although the decay on Rick's stronger initial emotion was slower than the decay on Elliot's weaker one. When Rick reflected on having won the race again, he was sorry for Elliot, the loser. His face showed compassion, and he played sad music from Mendelssohn's Italian Symphony.




3.1 Are these stories?

The first question we wanted to answer was whether or not the story-morphs presented were seen as cohesive, sensible narratives, or as implausible. Coders were given the following question to assess for each of the subjects:




Which of the following is true for this respondent?

Three coders were used to interpret the free-form responses. The coders worked closely together on consistently evaluating the data with respect to the three labeling choices.

As predicted, it was clear that choice (f) did not apply to any of the response sheets. The remaining distinction between (s) and (i) had to be handled carefully. Unless some explicit indication was present that inferences made by the subject were linked together (e.g., drawn from some projected story) the response was coded as (i). For example the coders agreed that one subject made many inferences, but did not make any that conclusively came from some internal narrative. Each of the subject's inferences could be made, ad hoc, from disconnected impressions of the presentation. Along with other inferences, the subject reported that Elliot might, ``be very upset, have a nicer personality than Rick, have facial expressions that did not reflect his inner feelings,'' and, ``defend his personality with comments like, `I won't do it.' '' While significant inferences were made (and this is only a partial list), there is nothing explicit to suggest cohesion between the inferences, and hence the response was coded as (i).




For the 72 long-hand responses we found:




Additionally, on the response sheet, along with the free-form response, we asked subjects to rate the quality of each of the stories presented. They were given the following scale:

   
      Could never |   Might   |  Seems    |   Makes   
         happen   |  happen   | plausible |   sense   
                  |           |           |          
          1   2   |   3   4   |  5    6   |   7   8

Our preliminary data show that on the average subjects evaluated the stories in the interval 6 to 8. This result is confirmed by the t-test (N = 72, t = 0.028). [Footnote 1: It is possible to argue that the intermediate labels on the range 1 to 8 add complexities to the analysis. We do not address this issue here.] Furthermore, responses in the 1 - 2 range, which would suggest that the story had enough flaws to threaten believability, were statistically insignificant.

The evidence seems clear that subjects formed internal, highly plausible, narratives based on inferences not explicitly stated in the presentations. This is consistent with, but goes beyond, the idea that computer users tend to anthropomorphize computer displays. In the case of play-acting Affective Reasoner agents they appear inclined to make projections about social interaction between the agents as well.

3.2 Are the stories different?

We asked the coders to assess whether a subject's free-form response indicated that the two stories were the same except for a few unimportant details, or whether they were different. Difference was liberally assessed as being different in at least one significant way. However, the difference had to be in content, not in notation, ordering, or choice of words. That is, if Elliot were described as ``courageous, retiring, and diminutive'' in story-morph one, this would not be considered different from a description of ``brave and shy'' in story-morph two, since the content is the same except for the relatively insignificant description of the character's size.




With (N = 35 pairs) we found:

In other sets of data produced by the coding, now being analyzed, we are beginning to find additional qualitative differences between the stories. For example, it appears that comments on the first story-morph cite references to Elliot being ``brave'' three times as often as do comments on the second story-morph -- although bravery is not something directly represented in the computer model.

3.3 Conclusions

What the data suggest is that, as presented by the computer through the AR agents, both of the automatically-generated, more-or-less randomly-selected story-morphs were interpreted as stories by the subjects, and in at least some small way, as different stories.

4 Explanation generation

One of the powerful features of a system like this, one which we have not yet formally exploited, is its power to generate explanations. In this way stories which might on the surface appear to be similar can, through interaction with the character agents, or through generated narrative, be characterized as stemming from greatly different thematic material.

For example, drawing from the example above, in responding to an interactive query, Elliot himself might explain, ``I was really into racing bikes. I loved the sensations. When I lost I was very ashamed, because it is a very blameworthy thing to be so weak that you always lose at sports and neighborhood activities. It was very important to me to beat Rick. I put a lot of effort into winning. This made losing worse. It didn't matter that I only lost once. You either win or lose and that is the end of it.''

Or, the narrator can fill in this information by reporting the same information, but from the third person perspective, ``Elliot was really into racing. He loved the sensations...''

In each case, because the underlying structure of the emotion content is so well understood, and the domain is limited, it should not be an insurmountable problem to use natural language generation techniques to turn the structured emotion information, and any extra tagged information (such as why Rick would race bikes even though he did not find the activity attractive), into text which could then be spoken, in real time, by the AR agents. Given the success of explanation generation systems such as Lester's KNIGHT system on harder problems [Les94], successful natural-language generation of explanations for agents' emotion states seems highly plausible.

5 Story-morphing as appropriate for applications

Creating tags which can be applied to base stories requires detailed analysis, and, although we have not covered it here, some degree of constraint structure for more complex scenarios. While requiring some thought, the analysis of stories has become more-or-less formulaic, and only seldom presents new intellectual challenges. Once the analysis is done the story-morph creation is relatively trivial, and can be done by the computer in real time, suitable for delivery through multimedia agents. Because of the large numbers of story-morphs that can be generated from each analysis, it is possible that a set of five or six such base stories would allow for a formidable arsenal of stories for an interactive system. This would be especially true if trivially-realized variations in names, locations, and other minor details were used as well. (That is, an auto race where everyone is happy might not be recognizable as deriving from a bike race where everyone is sad.)

Each of these stories can be indexed (loosely, or in a fine-grained manner) according to the underlying thematic patterns in the emotion fabric. This gives the system control over which story-morph to play at any given time. For example stories where the agents start off with negative goal-based emotions (e.g., sadness, fear), but end with positive goal-based emotions (e.g., joy, satisfaction) may be appropriate in some application contexts, whereas stories that emphasize social values and correctness of principled behavior (e.g., through attribution emotions such as pride and reproach) may be appropriate in others.
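Such thematic indexing might be sketched as follows. The index vocabulary (``redemptive,'' ``principled'') and the morph representation are our hypothetical illustrations, though the two example patterns follow the text: a negative-to-positive goal-based trajectory, and the presence of attribution emotions.

```python
# Sketch of thematic indexing over generated story-morphs.  A morph is
# represented as a sequence of steps, each step a list of emotion-type
# names.  Index labels and representation are hypothetical illustrations.
NEGATIVE_GOAL = {"distress", "fear", "fears-confirmed", "disappointment"}
POSITIVE_GOAL = {"joy", "hope", "satisfaction", "relief"}
ATTRIBUTION = {"pride", "admiration", "shame", "reproach"}

def themes(morph_emotions):
    """Derive coarse thematic labels from a morph's emotion sequence."""
    first, last = set(morph_emotions[0]), set(morph_emotions[-1])
    labels = set()
    if first & NEGATIVE_GOAL and last & POSITIVE_GOAL:
        labels.add("redemptive")   # starts negative, ends positive
    if any(set(step) & ATTRIBUTION for step in morph_emotions):
        labels.add("principled")   # social values / accountability
    return labels

def select(morphs, wanted):
    """Return the morphs whose themes include every wanted label."""
    return [m for m in morphs if wanted <= themes(m)]
```

An application would then call something like `select(morphs, {"redemptive"})` when the context calls for stories that begin with negative goal-based emotions and end with positive ones.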

Additionally, the system has control over the personality makeup of agents - and thus over the component that leads them to make idiosyncratic interpretations of events. An agent may be tweaked, at runtime, to be principled, depressed, sanguine, and so forth, leading to differing interpretations of the static, unfolding, external events in the base story, and hence to different story-morphs.

Although for the purpose of study we kept the text static in the exercises reported here, in practice we envision much of the narrative in such systems coming from descriptions of how the agents felt about what was taking place, and their reasons for feeling this way. For example, rather than simply saying, ``Imagine that, he fought back for once,'' Rick might say, ``Imagine that, he fought back for once and I hate him for it!'' or ``... and I admire him for it!''

AR story-morphing allows us to build story-telling agents, and actors, that can, in principle, be configured ad hoc, either by users who can then watch the results of their creative efforts, or by a system that interacts with a user.

For example, in the context of a system like Extempo's virtual tavern, Erin the bartender could introduce a wide variety of story-morphs into the dialog by using the thematic indices as transition points from ongoing dialog (cf. [RHR97a,RHR97b]). If someone in the electronic meeting place introduces an admiration theme, Erin might be able to relate any of a hundred different variations on a few base narratives that contain admiration components. In addition, she would have access to a great deal of depth information about the state of the characters in the narrative. To wit:

Customer: ``I really admire the way Boulez conducts.''

Erin: ``That reminds me of a pretty interesting story I heard about Rick last week. Would you like to hear it?''

Customer: ``What is it about?''

Erin: ``It is about a bike race. When you were talking about admiration I thought of it. At the end of this story Rick admired his friend Elliot.''

Customer: ``Why?''

Erin: ``Well, Elliot fought back for once. Rick had mixed emotions about this. On the one hand, he was angry at Elliot because Rick wanted Elliot to do what he told him to do. On the other hand, Rick admires people who stand up for themselves. This fighting-back principle is very important to Rick, and he was surprised by Elliot's actions. So, in the end, although he was a little angry, he mostly felt admiration.''

Customer: ``What happened?''

Erin: ``Rick and Elliot were racing bikes...''

In systems where multimedia presentation is used to convey social knowledge, such as with the AR agents, there is less burden on the system to manage discourse bridges between the individual events in the plot. As shown in the exercises discussed here, users seem inclined to provide this on their own.

6 Closing

We believe that the story-morphing techniques illustrated here have wide applicability for agents that make use of narrative structure in their tasks. Applications where this might be useful include story-telling agents, agents for education [ERL97,Ell97a,LCK$^+$97], computer fantasy games, and intelligent multi-participant computer games with non-player characters. While limited in scope, our results at least suggest that this approach will hold up under more rigorous testing.

7 Bibliography

Ell92a
Clark Elliott.
The Affective Reasoner: A Process Model of Emotions in a Multi-agent System.
PhD thesis, Northwestern University, May 1992.
The Institute for the Learning Sciences, Technical Report No. 32.

Ell92b
Clark Elliott.
The gift of the Magi revisited, and revisited...
Unpublished Manuscript, 1992.

Ell93
Clark Elliott.
Using the affective reasoner to support social simulations.
In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pages 194-200, Chambery, France, August 1993. Morgan Kaufmann.

Ell94
Clark Elliott.
Research problems in the use of a shallow artificial intelligence model of personality and emotion.
In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 9-15, Seattle, WA, August 1994. AAAI, American Association for Artificial Intelligence.

Ell97a
Clark Elliott.
Affective reasoner personality models for automated tutoring systems.
In Proceedings of the Workshop on Pedagogical Agents, pages 33-39, Kobe, Japan, August 1997. Eighth World Conference on Artificial Intelligence in Education.

Ell97b
Clark Elliott.
I picked up Catapia and other stories: A multimodal approach to expressivity for ``emotionally intelligent'' agents.
In Proceedings of the First International Conference on Autonomous Agents, pages 451-457, 1997.

EM95
Clark Elliott and Ernst Melchior.
Getting to the point: Emotion as a necessary and sufficient element of story construction.
In AAAI Technical Report for the Spring Symposium on Interactive Story Systems, pages 37-40, Stanford University, March 1995. AAAI, American Association for Artificial Intelligence.

ERL97
Clark Elliott, Jeff Rickel, and James Lester.
Integrating affective computing into animated tutoring agents.
In Proceedings of the IJCAI97 workshop, Animated Interface Agents: Making them Intelligent, pages 113-121, 1997.

ES93
Clark Elliott and Greg Siegle.
Variables influencing the intensity of simulated affective states.
In AAAI technical report SS-93-05 for the Spring Symposium on Reasoning about Mental States: Formal Theories and Applications, pages 58-67. American Association for Artificial Intelligence, 1993.
Stanford University, March 23-25, Palo Alto, CA.

Kni95
Jonathan Knight.
Interactive story structure: Stanislavsky, meet Pavlov.
In AAAI Technical Report for the Spring Symposium on Interactive Story Systems, pages 73-75, Stanford University, March 1995. AAAI, American Association for Artificial Intelligence.

Kod97
Tomoko Koda.
Agents with faces: A study on the effects of personification of software agents.
Master's thesis, MIT, 1997.

LB95
A. Bryan Loyall and Joseph Bates.
Personality-rich believable agents that use language.
Technical Report CMU-CS-95-139, CMU, 1995.

LCK$^+$97
James Lester, Sharolyn Converse, Susan Kahler, Todd Barlow, Brian Stone, and Ravinder Bhogal.
The persona effect: Affective impact of animated pedagogical agents.
In Proceedings of CHI '97, pages 359-366, Atlanta, March 1997.

Les94
James C. Lester.
Generating Natural Language Explanations from Large-Scale Knowledge Bases.
PhD thesis, The University of Texas at Austin, Austin, TX, 1994.

Mur95
Janet H. Murray.
Dr. Quinn on the Holodeck, or blueprint for an electronic storyland.
In AAAI Technical Report for the Spring Symposium on Interactive Story Systems, pages 78-81, Stanford University, March 1995. AAAI, American Association for Artificial Intelligence.

NS94
Clifford Nass and S. Shyam Sundar.
Is human-computer interaction social or parasocial?
Stanford University. Submitted to Human Communication Research, 1994.

NT94
Katashi Nagao and Akikazu Takeuchi.
Social interaction: Multimodal conversation with social agents.
In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 9-15, Seattle, WA, August 1994. AAAI, American Association for Artificial Intelligence.

OCC88
Andrew Ortony, Gerald L. Clore, and Allan Collins.
The Cognitive Structure of Emotions.
Cambridge University Press, 1988.

Ort95
Peter Orton.
Interactivity and narrative: Friends or foes?
In AAAI Technical Report for the Spring Symposium on Interactive Story Systems, pages 82-85, Stanford University, March 1995. AAAI, American Association for Artificial Intelligence.

RB92
W. Scott Reilly and Joseph Bates.
Building emotional agents.
School of Computer Science Technical Report CS-92-143, Carnegie Mellon University, 1992.

Ree91
John F. Reeves.
Computational morality: A process model of belief conflict and resolution for story understanding.
Technical Report UCLA-AI-91-05, UCLA Artificial Intelligence Laboratory, 1991.

Rei96
W. Scott Reilly.
Believable Social and Emotional Agents.
PhD thesis, CMU, 1996.

RHR97a
Daniel Rousseau and Barbara Hayes-Roth.
Interacting with personality-rich characters.
Technical Report KSL-97-06, Knowledge Systems Laboratory, Stanford University, 1997.

RHR97b
Daniel Rousseau and Barbara Hayes-Roth.
A social-psychological model for synthetic actors.
Technical report, Knowledge Systems Laboratory, Stanford University, September 1997.

RM95
Brad Rhodes and Pattie Maes.
The stage as a character: Automatic creation of acts of god for dramatic effect.
In AAAI Technical Report for the Spring Symposium on Interactive Story Systems, pages 97-99, Stanford University, March 1995. AAAI, American Association for Artificial Intelligence.

Sim67
Herbert A. Simon.
Motivational and emotional controls of cognition.
Psychological Review, 74:29-39, 1967.
