Patrick Doyle's work at Stanford, like that of the DFKI group (below), applies independently developed techniques, both old and new, in a novel application. Doyle's system is a work in progress, but one that, if successful, will benefit a community likely to expand significantly in the near future. In his research Doyle is attempting to build what we might call an infrastructure for intelligent virtual environments [Doyle & Hayes-Roth1998]. That he is working in MUDs (virtual interactive playgrounds in which users can participate (cf. [Curtis1992])) is less important than that he is addressing the problem of generalizing the relationship between an agent and the virtual environment in which the agent exists. The central thesis of this work is that the knowledge required for ``agenthood'' (the agent's own personality and, more importantly, its own history of interaction with users, e.g., as a mentor-guide for children) is internal to the agent, whereas domain knowledge (e.g., how to play chess in one ``room'' of a MUD and how to buy bread in another) is internal to the environment. Environment knowledge is annotated so that agents, through an established protocol requiring minimally a meta-level understanding of a domain, can supplement their own beliefs and actions.
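The knowledge split described above can be sketched in code. This is a minimal, entirely hypothetical illustration, not Doyle's actual protocol: every class, method, and annotation name here is an assumption introduced for exposition.

```python
# Hypothetical sketch of the agent/environment split: domain knowledge
# lives in the environment as annotated "affordances"; the agent holds
# only its own personality and interaction history. All names are
# illustrative assumptions, not Doyle's actual protocol.

class Environment:
    """A MUD 'room' carrying its own annotated domain knowledge."""

    def __init__(self, name, affordances):
        self.name = name
        # Maps a meta-level concept (an abstract query) to a domain answer.
        self.affordances = affordances

    def query(self, concept, *args):
        """Protocol entry point: agents ask about domain concepts here."""
        handler = self.affordances.get(concept)
        return handler(*args) if handler else None


class Agent:
    """Carries only agent-internal knowledge: identity and history."""

    def __init__(self, name):
        self.name = name
        self.history = []  # interaction record, internal to the agent

    def advise(self, env, concept, *args):
        # Domain knowledge is supplied by the environment, not the agent.
        answer = env.query(concept, *args)
        self.history.append((env.name, concept, answer))
        return answer


# A chess room annotates move advice and event valence; the agent
# itself never holds any chess knowledge.
chess_room = Environment("chess", {
    "suggested_move": lambda skill: "Nf3" if skill == "medium" else "e4",
    "event_valence": lambda ev: "bad" if ev == "lost-queen" else "good",
})

marvin = Agent("Marvin")
print(marvin.advise(chess_room, "suggested_move", "medium"))     # Nf3
print(marvin.advise(chess_room, "event_valence", "lost-queen"))  # bad
```

The point of the sketch is that `Agent.advise` works identically in a bakery room with different affordances: only the environment's annotations change, while the agent's personality and history persist.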
To old hands at AI and HCI this may sound like a rehash of various ideas that have come before. To an extent this is true: the work traces its roots back to J.J. Gibson's affordances [Gibson1977], Don Norman's knowledge in the world [Norman1990], and even Roger Schank's scripts [Schank & Abelson1977]. What makes the work appealing as a modern research paradigm is that it integrates this previous AI and HCI work with work on creating believable agents.
As an illustration, consider the entirely hypothetical example of an agent, Marvin, endowed with some limited ability to feel compassion for a user, Sarah, whom he guides through both rooms mentioned above. In the first scenario Marvin might ask of the virtual chess world, ``Suppose that I were a medium-skilled chess player. What sort of move might I advise Sarah to make?'' and ``If I understood chess, would I believe that Sarah's having just lost her queen is a good thing or a bad thing?'' In the second scenario Marvin might ask of the virtual bakery world, ``If Sarah does not have enough money to buy bread, will she be happy about this?'' In each of these cases the knowledge needed to traffic in the concepts of the respective virtual world (e.g., winning/losing, purchasing desired items with money) comes from the world itself. In both worlds, however, Marvin might, from internal knowledge, feel sympathetic sorrow for Sarah's failed goals, and moreover might be capable of such believable utterances as ``What a disappointment this failure over buying bread must be. I feel like I did when you lost your queen in the previous room.''
There are some problems to be considered with this approach, of course. Most importantly, success in developing authoring mechanisms for similarly complex protocols has so far come only through painstaking work (cf. the work of Lester on effecting believable interaction with the world, and the serious attention paid to similar problems at DFKI, both below). One key to success will be developing techniques for reducing potentially complex affordances to some reasonably canonical meta-rule representation, in such a way that agent choices can be made according to the persistent nature of the agent: that is, to retain a consistent personality in its interactions with the user, the agent must have meta-knowledge about the social implications of different domain choices, without having any semantic knowledge of the domain content itself.
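The meta-rule idea can itself be sketched. In this hypothetical fragment (the annotation tags, rule tables, and function names are all illustrative assumptions, not a proposed canonical representation), each environment reduces its domain events to a shared meta-level tag, and the agent's personality-consistent response depends only on that tag:

```python
# Hypothetical sketch: environments annotate domain events with canonical
# meta-level tags; the agent's meta-rules map tags to socially consistent
# responses, with no semantic knowledge of chess or bread-buying.
# All tags and rule names are illustrative assumptions.

# Environment-side annotations: domain event -> canonical meta-level tag.
CHESS_ANNOTATIONS = {
    "lost-queen": "goal-failed",
    "won-game": "goal-achieved",
}
BAKERY_ANNOTATIONS = {
    "insufficient-funds": "goal-failed",
    "bought-bread": "goal-achieved",
}

# Agent-internal meta-rules: one personality, consistent across domains.
SYMPATHETIC_RULES = {
    "goal-failed": "What a disappointment this must be.",
    "goal-achieved": "I'm so pleased for you!",
}

def respond(annotations, event, meta_rules):
    """Choose an utterance from the event's meta-level tag alone."""
    tag = annotations.get(event, "unknown")
    return meta_rules.get(tag, "I'm not sure how to feel about that.")

# The same meta-rule fires for semantically unrelated domain events:
respond(CHESS_ANNOTATIONS, "lost-queen", SYMPATHETIC_RULES)
respond(BAKERY_ANNOTATIONS, "insufficient-funds", SYMPATHETIC_RULES)
```

Because both the lost queen and the failed bread purchase reduce to the same `goal-failed` tag, a sympathetic agent reacts consistently in both rooms, which is precisely the personality persistence the passage above calls for.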
Virtual worlds of various types, with believable agents of all sorts populating them, are here to stay. Any leverage to be gained in making agent intelligence portable from one world to another is well worth seeking, and this is a serious, novel attempt at such an effort.