There are a number of ongoing projects relevant to lifelike agents at MIT, many influenced by Pattie Maes. In one such project, Tomoko Koda's E-poker, Koda studied how users interact with computer poker players that display emotions through a variety of characters, each with a number of different expressions. As part of her master's work, ``Agents with Faces: A Study on the Effects of Personification of Software Agents,'' Koda built agents capable of playing poker and designed several user studies. Her emotion model was an augmented subset of the one originally specified by Ortony et al. [Ortony, Clore, & Collins1988]. Among her findings were that personified interfaces helped users become engaged in tasks, that faces were perceived differently in isolation than as part of a task, and that the perceived intelligence of a face depends more on the underlying competence of the agent than on its graphical presentation (see figure 9) (http://tomoko.www.media.mit.edu/people/tomoko/).
In the Media Lab's ALIVE work (Artificial Life Interactive Video Environment) ([Maes et al. 1994, Blumberg1995]; http://lcs.www.media.mit.edu/projects/alive/), users can interact with virtual creatures without being constrained by the usual trappings of virtual reality systems. Instead, as long as users stay within a large rectangle of floor space, their image appears, via video tracking, on a large-screen television showing the virtual world. Autonomous animated characters, such as Bruce Blumberg's well-known Silas T. Dog, cohabit the space with the user's image, and users can interact with them [Blumberg1996, Blumberg, Todd, & Maes1996] (see figure 10). Rather than relying on author input to tweak their behavior, these creatures operate truly autonomously, driven by a set of motivations and goals based originally on ideas from animal ethology. Silas, for example, renders scenes from his own viewpoint as he navigates the virtual space, then uses the processed image as sensory input that affects his behavior. At conference demonstrations the creatures exhibited engaging and humorous high-level behavior as users waved their arms, embraced them, and walked around the space.
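The ethology-inspired arbitration described above can be illustrated with a minimal sketch: each candidate behavior derives a value from an internal motivation (drive) and an external stimulus, and the highest-valued behavior is selected. This is a toy illustration of the general idea only, not the actual ALIVE or Silas implementation; all names and numbers here are hypothetical.

```python
# Hedged sketch of ethology-inspired action selection, loosely in the spirit
# of motivation/goal-driven creatures like Silas. Behavior names, motivation
# variables, and the multiplicative combination rule are all illustrative
# assumptions, not taken from the ALIVE system.
from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    motivation: str  # the internal drive this behavior serves

def select_behavior(behaviors, motivations, stimuli):
    """Pick the behavior with the highest combined internal/external value."""
    def value(b):
        # Simple product of drive level and stimulus strength; real
        # ethology-based models use richer arbitration (inhibition,
        # fatigue, level-of-interest, etc.).
        return motivations.get(b.motivation, 0.0) * stimuli.get(b.name, 0.0)
    return max(behaviors, key=value)

behaviors = [
    Behavior("approach_user", "curiosity"),
    Behavior("fetch_ball", "playfulness"),
    Behavior("lie_down", "fatigue"),
]
# Internal state and processed sensory input (e.g. from synthetic vision).
motivations = {"curiosity": 0.8, "playfulness": 0.3, "fatigue": 0.1}
stimuli = {"approach_user": 0.9, "fetch_ball": 0.7, "lie_down": 1.0}

chosen = select_behavior(behaviors, motivations, stimuli)
# curiosity (0.8 * 0.9 = 0.72) wins over playfulness (0.21) and fatigue (0.10)
```

The key design point this sketch captures is that no author script chooses the action: behavior emerges from the interaction of internal drives with what the creature currently perceives.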
Most recently, leading the Synthesizing Emotions group at the MIT Media Lab, Blumberg is pursuing research that focuses not only on the ability of machines to reason about which emotions are appropriate in a given situation, but also on building machines capable of ``having'' emotions, such that emotional processes form an integral part of decision making (http://bruce.www.media.mit.edu/people/bruce/).