Over the past five years, one aspect of this research has been to test the representational coverage of the Ortony theory, among others, for the purpose of codifying a comprehensive description mechanism for building computable systems. In service of this goal we have analyzed approximately 600 different emotion scenarios. The Ortony model has proven remarkably robust in this paradigm, requiring only the addition of (admittedly less theoretically pure) specific categories for love (admiration plus liking), hate (reproach plus disliking), jealousy (resentment where the goal is an exclusive resource also desired by the appraising agent), and envy (resentment where the agent desires a similar, but non-exclusive, goal). We felt these additions were required to represent the corpus of collected situations adequately, at a suitable level of granularity.
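The four added categories can be sketched as combinations of basic Ortony categories plus a qualifying appraisal condition. The following is an illustrative sketch only; the names and structure are ours for exposition, not the actual system's data model.

```python
# Illustrative only: the four compound categories added to the Ortony base
# set, expressed as (constituent basic categories, qualifying condition).
COMPOUND_EMOTIONS = {
    "love":     (("admiration", "liking"), None),
    "hate":     (("reproach", "disliking"), None),
    "jealousy": (("resentment",),
                 "goal is an exclusive resource also desired by the appraiser"),
    "envy":     (("resentment",),
                 "appraiser desires a similar, but non-exclusive, goal"),
}

def decompose(compound):
    """Return (basic categories, qualifying condition) for a compound emotion."""
    return COMPOUND_EMOTIONS[compound]
```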
One outcome of this research has been the insight that many of the emotion scenarios reviewed make very good stories. In fact, a case can be made that every scenario that fulfills the minimal requirements for the presence of emotion, as computed by our system, also meets the minimal requirements for ``story-hood.'' For example, ``the boy sits in the chair'' is not a story, but ``the boy sits in the chair, but knows that he should not'' (containing the theoretical antecedent for shame) may very well contain an essential element that does make it a story. Indeed, if we say, ``the boy really wanted to sit in the chair, and did, even though he knew he should not,'' we can make the case that we have the core elements of one of the great themes of literature, wherein mixed and conflicting emotions (shame alongside joy over an achieved, desired goal) yield classic thematic tension within a character.
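The story-hood test just described can be caricatured in a few lines. This is a minimal, hypothetical sketch under our own simplifications: only the shame antecedent (an agent deliberately acting against its own standards) is modeled, and the event encoding is invented for illustration.

```python
# Hypothetical sketch of the "story-hood" test: a scene qualifies if some
# event carries a theoretical emotion antecedent. Only the shame antecedent
# is modeled here, as a deliberate simplification.

def has_shame_antecedent(event):
    # shame antecedent: a deliberate action the agent believes it should not take
    return event.get("action") is not None and event.get("believes_forbidden", False)

def is_minimal_story(scene):
    return any(has_shame_antecedent(event) for event in scene)

plain  = [{"action": "sits in the chair", "believes_forbidden": False}]
shamed = [{"action": "sits in the chair", "believes_forbidden": True}]
```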
Extending this emotion representation exercise, we formally analyzed real stories for their emotion content according to our computable theory (work mostly yet to be presented in the academic literature). AR agents then acted out the parts of the characters in each story according to the structural descriptions of the emotions present. Users were able to understand the stories in this context, largely as they are commonly understood by those simply reading an account of them.
Subsequently, without varying the plot (i.e., what happened), we had the computer select varying configurations of alternate appraisals of the static, unfolding events for the different agents, giving the agents different emotion responses to what took place. Users were again able to agree consistently on what happened in the stories, and rated them similarly in quality, even though the computer-modified story differed significantly from the original. In one early exercise, for example, we took the O. Henry story The Gift of the Magi and, without varying the external events (roughly, Della sells her prized hair to buy Jim a gold watch chain, and Jim sacrifices his prized watch to buy Della a set of combs), altered the story from one embodying ``the joy of sacrifice for true love'' to one of ``the one who suffers the most wins.''
What this suggests is that by representing stories in this manner, based on their emotion content, we may have a great deal of flexibility in altering the ``story'' portrayed by the external events, as long as we have a reasoning system that understands the relationships among the aspects of this representation. To effect this, we simply change the personalities of the agents in the story, and thus their subsequent internal responses to the events that arise. Since the AR tracks roughly twenty-four categories of emotion, and up to ten intensity variables for each, along with numerous aspects of mood, relationship, and the like, a strong case can be made for using this as the basis for an interactive, dynamic, story-telling system that has great flexibility in the stories it relates, yet still works under the constraint of maintaining ``story-hood'' in everything it produces. While this clearly falls short of the larger goal of true story generation (which would require functional emotions on the part of the agents), it does open the door for significant progress in this area.
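The mechanism above (fixed plot, varying appraisal dispositions) can be sketched as follows. The disposition and emotion labels are illustrative placeholders, not the AR system's own vocabulary, and the appraisal rules are deliberately toy-sized.

```python
# Hedged sketch: the plot (external events) stays fixed while each agent's
# appraisal disposition varies, so the same events yield different internal
# emotion responses -- and thereby a different story.

EVENTS = ("Della sells her prized hair", "Jim sacrifices his prized watch")

def appraise(event, disposition):
    """Map an event to an emotion label given a personality disposition."""
    if disposition == "selfless":   # sacrifice for love is itself a desired goal
        return "joy"
    if disposition == "martyr":     # suffering the most becomes the goal
        return "gloating"
    return "neutral"

original = [(e, appraise(e, "selfless")) for e in EVENTS]
variant  = [(e, appraise(e, "martyr"))  for e in EVENTS]
```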
Lastly, we would like to suggest that limited forms of humor might also be effected in a similar manner, and that even in this constrained form such humor could be useful for dynamic story-telling. While crucial issues such as timing, surprisingness, and creativity are clearly beyond a system like the one described here, certain aspects of humorous situations may nonetheless prove amenable to computable modeling, since emotions and humor often appear to be closely tied together.
For example, a certain class of humor seems to revolve around situations wherein a comedian describes (or experiences) a negatively valenced state (e.g., distress, remorse) for which an audience member feels pity, while also fearing that a similar situation might apply to them. The relief they feel that it is happening to ``someone else'' is stronger than the pity they feel for the comedian. Funniness then depends on (a) the importance of avoiding having the relevant goal blocked (an intensity variable) for both the comedian and the audience member, contrasted with (b) the reduced sense of reality of the situation (another intensity variable) and the cognitive unit (a relationship factor) formed by the audience member with the comedian. An example might be a portrayal of a speaker giving his Nobel prize acceptance speech, without realizing he has the remains of some lentil bean soup stuck on his front tooth the whole time. Furthermore, if an audience member were to consider the parodied researcher ``an arrogant SOB'' (affecting deservingness, another intensity variable), the situation might be funnier still for them.
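The funniness relation just described can be given a toy formalization: relief that the misfortune befalls someone else is scaled by goal importance, discounted by the sense of reality, and amplified by the cognitive unit and perceived deservingness. The functional form and weights below are entirely hypothetical, intended only to show that the variables named above compose into a computable score.

```python
# Speculative, toy formalization of the funniness relation. All inputs are
# assumed to lie in [0, 1]; the functional form is illustrative only.

def funniness(importance, sense_of_reality, cognitive_unit, deservingness):
    # relief is stronger when the goal matters and the situation feels less real
    relief = importance * (1.0 - sense_of_reality)
    # the cognitive unit with the comedian, and perceived deservingness of the
    # victim, amplify the effect
    return relief * cognitive_unit * (1.0 + deservingness)
```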