The Steve paradigm currently has two subsystems in addition to the virtual reality software. The first is the virtual world simulator, which broadcasts a message any time the world changes. The second is Steve himself, whose sensorimotor module maintains a consistent model of the world as attribute-value pairs. When the user interacts with the virtual world, Steve receives messages from both the simulator and the virtual reality software. In this way, Steve is aware of both the user's action (e.g., pulling on the dipstick) and its result (e.g., the dipstick is pulled out).
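The two message streams can be illustrated with a minimal sketch. The class and method names below are hypothetical, not taken from Steve's actual implementation; the point is only that the action arrives from the VR software while its result arrives from the simulator, and both are folded into one attribute-value model.

```python
class SensorimotorModule:
    """Hypothetical sketch: maintains a model of the world as attribute-value pairs."""

    def __init__(self):
        self.world = {}  # attribute -> value

    def on_simulator_message(self, attribute, value):
        # The simulator broadcasts a message whenever the world changes;
        # each change is folded into the attribute-value model.
        self.world[attribute] = value

    def on_vr_message(self, user_action):
        # The VR software reports the user's action itself (the pull),
        # as distinct from its result, which arrives from the simulator.
        self.world["last_user_action"] = user_action

m = SensorimotorModule()
m.on_vr_message("pull-dipstick")           # the user action
m.on_simulator_message("dipstick", "out")  # the result of that action
```

With both messages processed, the model records both what the user did and what state the world is now in.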
Because the system tracks the user with only three sensors (one on each hand and one on the head-mounted display), information about the user's interaction with the virtual world is incomplete. Steve may have a good idea of what is in the user's field of vision, based on head orientation, but might be wrong, because there is no eye tracking, or because occlusion has not been correctly accounted for.
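A field-of-vision estimate from head orientation amounts to a viewing-cone test like the one sketched below (a hypothetical helper, not Steve's code; the cone half-angle is an assumed parameter). Note that a positive answer only means the object *may* be visible, since neither gaze nor occlusion is checked.

```python
import math

def possibly_in_view(head_pos, head_dir, obj_pos, half_angle_deg=45.0):
    """Rough 2-D test: is obj_pos inside a viewing cone around head_dir?
    Without eye tracking or an occlusion test, True means only that the
    object may be in the user's field of vision."""
    vx, vy = obj_pos[0] - head_pos[0], obj_pos[1] - head_pos[1]
    dist = math.hypot(vx, vy)
    if dist == 0:
        return True  # object at the head position: trivially in view
    dx, dy = head_dir
    cos_angle = (vx * dx + vy * dy) / (dist * math.hypot(dx, dy))
    return cos_angle >= math.cos(math.radians(half_angle_deg))

print(possibly_in_view((0, 0), (1, 0), (5, 1)))   # ahead of the user: True
print(possibly_in_view((0, 0), (1, 0), (-5, 0)))  # behind the user: False
```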
Among Steve's goals are those requiring that the user achieve certain mental states. These must be inferred from the sometimes imprecise information at Steve's disposal. For example, there are no prerequisites for checking the water level in the (simulated) surge tank. If a user walks up close to the tank in the virtual world, it is more likely that they have looked in the window, but they might have approached it without doing so.
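Such an inference can be sketched as a simple evidential heuristic. The thresholds and probabilities below are invented for illustration, and the function name is hypothetical; the point is that proximity and head orientation raise the estimated likelihood of the mental state without ever confirming it.

```python
def estimate_looked_in_window(distance_to_tank, facing_tank):
    """Heuristic (assumed thresholds): estimated probability that the
    user has looked in the surge-tank window, inferred from imprecise
    sensor data. Never reaches certainty."""
    p = 0.1                        # base rate: they may have looked at any time
    if distance_to_tank < 1.5:     # walked up close to the tank
        p = 0.6
        if facing_tank:            # head oriented toward the tank
            p = 0.9
    return p

print(estimate_looked_in_window(1.0, True))   # close and facing: 0.9
print(estimate_looked_in_window(5.0, False))  # far away: base rate 0.1
```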
Since Steve operates as a distinct entity in the virtual world, with which the user also interacts, it is possible for us to endow Steve with an emotional life of his own, based on situations arising in his world. These situations can result from (a) states of the world relevant to his own goals (possibly caused by his own actions or those of the user), (b) actions performed by either himself or the user, and (c) objects that Steve happens to come in contact with in the virtual world.
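The three situation types map naturally onto distinct emotion families, as in the familiar event/action/object split from appraisal theories of emotion. The sketch below assumes that split and invents the function name and table; it is not Steve's actual appraisal code.

```python
def appraise(situation_type, valence):
    """Map a construed situation (assumed event/action/object split)
    to a candidate emotion label."""
    table = {
        ("state", "good"): "joy",          # goal-relevant world state
        ("state", "bad"): "distress",
        ("action", "good"): "admiration",  # action by Steve or the user
        ("action", "bad"): "reproach",
        ("object", "good"): "liking",      # object Steve encounters
        ("object", "bad"): "disliking",
    }
    return table[(situation_type, valence)]

print(appraise("state", "good"))  # joy
print(appraise("action", "bad"))  # reproach
```

Keying the appraisal to situation type rather than to raw simulator messages keeps the emotional response grounded in the same attribute-value world model that drives Steve's tutoring behavior.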