
Don't blow up that H-Pack!

Lewis Johnson, in his capacity as conference chair, was a major force behind the success of the Autonomous Agents 97 conference hosted by USC's Information Sciences Institute in Marina del Rey, California (http://www.isi.edu/isd/Agents97/info.html). Johnson's educational technology group at ISI has a pair of agents under development that we discuss here. The newest addition, Adele (Agent for Distance Learning Environments), is a 2D/3D pedagogical agent for presenting Web-based course materials. A copy of Adele runs locally, monitors student actions, and reports end-of-session information to a central server. The project is tackling both case-based courses and the more difficult problem-based learning environments wherein teachers act as guides for student discovery of relevant information. Authoring tools are also being developed for Adele's troika of instructional narrative, problem-solving simulation, and reference materials. Students are exposed to the instructional narrative, take part in discovery through interactive problem solving, and may refer to the on-line instructional materials. Additionally, on-line discussion with other students and instructors may take place. Adele plays the part of the tutor-guide during problem solving and in the after-exercise summary. An interesting aspect of Adele is its emphasis on client-side intelligence, with the user and agent operating as a somewhat autonomous social unit.
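The client-side design just described can be sketched in a few lines: the agent logs student actions locally as they occur, and only an end-of-session summary travels to the central server. This is a minimal illustrative sketch; the class and message names are our own invention, not ISI's API.

```python
# Hypothetical sketch of Adele's client-side pattern: a locally running
# agent monitors student actions and reports a summary at session end.
# All names here (AdeleClient, end_of_session_report) are illustrative.

class AdeleClient:
    def __init__(self, student_id):
        self.student_id = student_id
        self.actions = []          # local log; nothing is sent mid-session

    def observe(self, action, correct):
        """Record one student action locally (hypothetical event shape)."""
        self.actions.append({"action": action, "correct": correct})

    def end_of_session_report(self):
        """The summary the client would ship to the central server."""
        n = len(self.actions)
        right = sum(1 for a in self.actions if a["correct"])
        return {"student": self.student_id,
                "actions": n,
                "score": right / n if n else 0.0}

client = AdeleClient("s42")
client.observe("select-case", True)
client.observe("order-lab-test", False)
print(client.end_of_session_report())
```

Keeping the full action log on the client and shipping only the summary is one way to realize the "somewhat autonomous social unit" of user and agent: the tutoring interaction never depends on a round trip to the server.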

A longer-running project, embodied as the pedagogical agent Steve (the Soar Training Expert for Virtual Environments), is coordinated under Johnson at ISI by Jeff Rickel. Steve, embodied in various hand/head incarnations [Rickel & Johnson1997], lives in an immersive virtual 3D environment and helps students learn physical procedural tasks such as operating or repairing complex equipment (e.g., a naval H-pack compressor). As Steve observes the dynamic state of the world, and a student's interactions with it, he may choose to intervene by giving demonstrations, by coaching the student, or by directly manipulating objects in the world. Steve ``talks'' with students via text-to-speech software and uses gestures to point things out in the virtual world. Steve helps students in various ways: he can demonstrate tasks, answer questions about the rationale behind task steps, and monitor students while they practice tasks, providing help when requested. (See figure 12.)
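The tutoring behaviors described above suggest a simple dispatch: given what Steve can see of the world and the student, he either demonstrates, coaches, or keeps monitoring. The function below is a deliberately small sketch of that choice, not ISI's actual Soar-based decision machinery, and its inputs are assumed, simplified signals.

```python
# Illustrative dispatch over the tutoring acts described in the text.
# This is a toy stand-in for Steve's decision-making, not the real system.

def choose_intervention(student_requested_help, student_action_ok, demo_mode):
    """Pick a tutoring act from Steve's repertoire (simplified)."""
    if demo_mode:
        return "demonstrate"   # perform the task step himself
    if student_requested_help:
        return "coach"         # verbal hint via text-to-speech plus gesture
    if not student_action_ok:
        return "coach"         # intervene on an incorrect step
    return "monitor"           # let the student keep practicing

print(choose_intervention(False, True, False))   # -> monitor
```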

Additionally, there are plans to extend Steve to fill the role of missing team members during team training. This type of interactive simulation-based training is not possible without an embodied tutor that takes part in exercises within the virtual environment.

The system has three parts: (1) Steve, the pedagogical agent; (2) the virtual reality software, which handles the interface between students and the virtual world, updating a head-mounted display and detecting user interactions with the objects in the virtual world; and (3) a symbolic world simulator, which maintains the state of the virtual landscape. Steve gets messages from the VR software about user actions, and also from the simulator about, e.g., the resulting new state of the world.
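The message flow among the three components can be sketched as follows. This is a minimal sketch under assumed message shapes; the class names, the toy compressor state, and the message tuples are our own, not ISI's interfaces.

```python
# Sketch of the three-part architecture described above: the VR layer
# reports student actions, the symbolic simulator reports the resulting
# world state, and Steve consumes both message streams.

class Simulator:
    """Symbolic world simulator: maintains the virtual world's state."""
    def __init__(self):
        self.state = {"compressor": "off"}

    def apply(self, action):
        if action == "press-power-button":
            self.state["compressor"] = "on"
        return dict(self.state)        # message to Steve: new world state

class Steve:
    """Pedagogical agent: receives messages from both components."""
    def __init__(self):
        self.inbox = []

    def on_vr_message(self, action):   # from the VR software
        self.inbox.append(("action", action))

    def on_sim_message(self, state):   # from the world simulator
        self.inbox.append(("state", state))

sim, steve = Simulator(), Steve()
action = "press-power-button"          # detected by the VR layer
steve.on_vr_message(action)            # VR software -> Steve
steve.on_sim_message(sim.apply(action))  # simulator -> Steve
print(steve.inbox)
```

Separating the simulator from the VR interface, as the text describes, lets Steve reason over a symbolic world state rather than raw display geometry.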

The two projects have roots in, and are responsible to, both the AI community and the education and training community. One theme they share is in looking at how an embodied agent sharing a virtual 2D or 3D space with the student permits the use of gesture and other non-verbal actions to convey critical tutoring information (e.g., by giving virtual demonstrations, or otherwise visually manipulating objects in the virtual world).

Some of the hard problems this group faces have to do with maintaining deictic believability in gesture and reference (see the Lester work above), with building robust personalities (see the Elliott work above and [Elliott, Rickel, & Lester1997]), and with developing authoring tools for use with the graphical and speech subsystems. An introduction to the work can be found at http://www.isi.edu/isd/VET/vet.html. For insight into the related work of Billinghurst and Savage at the University of Washington's Human Interface Technology Lab, who are also working on complex human-interface environments, see [Billinghurst & Savage1996] and http://www.hitl.washington.edu/information/index.html.



Clark Elliott
Thu Dec 25 19:14:31 EST 1997