Because Steve uses a sensing subsystem that must make inferences to fill in missing information about the world (e.g., Steve knows the orientation of the user's head, but does not know which way the user is looking), we can imbue him with emotional responses that are affected by his varying degrees of uncertainty. In general, the less certain Steve is about the antecedents of emotion present in a given scenario, the less intense his emotional response should be. This sensing includes, for example, imprecise but usable knowledge about where the user is located and what the user is looking at.
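As a minimal sketch of this principle, assuming a simple linear scaling (the names and coefficients below are illustrative, not Steve's actual implementation), the intensity of a response can be attenuated by the agent's certainty in the triggering antecedent:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Antecedent:
    """An event that could trigger an emotional response."""
    name: str
    base_intensity: float  # intensity if the event were known for certain
    certainty: float       # belief that the event occurred, in [0, 1]

def emotional_intensity(a: Antecedent) -> float:
    """Hypothetical linear attenuation: full response at certainty 1.0,
    no response at certainty 0.0."""
    return a.base_intensity * a.certainty

# The user probably, but not certainly, checked the oil level.
checked = Antecedent("user-checked-oil-level",
                     base_intensity=0.8, certainty=0.6)
print(emotional_intensity(checked))  # 0.48, weaker than if observed directly
\end{verbatim}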
For example, Steve monitors both the user's manipulation of the world (e.g., pulling out the dipstick) and the user's sensing of the world (e.g., checking the oil level). In the former case, Steve receives messages from the simulator telling him that the dipstick has been pulled out. In the latter case, Steve must infer that the user has sensed the information.
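This distinction might be realized roughly as follows; the message format and the rule table are hypothetical, since neither is specified here:
\begin{verbatim}
def on_simulator_message(msg: dict, beliefs: dict) -> None:
    """Manipulation events arrive directly from the simulator, so they
    are believed with full certainty; sensing events are never reported
    and must be inferred from the manipulations that accompany them."""
    if msg["type"] == "manipulation":
        beliefs[msg["event"]] = 1.0  # direct report
        for sensed, likelihood in infer_sensing(msg["event"]):
            beliefs[sensed] = max(beliefs.get(sensed, 0.0), likelihood)

def infer_sensing(manipulation: str) -> list:
    """Hypothetical table mapping manipulations to likely sensing acts."""
    rules = {"dipstick-pulled-out": [("user-checked-oil-level", 0.9)]}
    return rules.get(manipulation, [])
\end{verbatim}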
In Steve's world, the strength of a certainty inference might depend on statically recorded context information, such as, ``If the user pulls out the dipstick, it is likely that she has checked the oil level;'' alternatively, it might depend on rules about dynamically Steve-sensed information, such as, ``The closer the student is to the object, the more likely she sensed its current value.''
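Both kinds of rule can be read as functions that yield a certainty estimate; in the sketch below, the particular probabilities and the linear distance falloff are assumptions for illustration:
\begin{verbatim}
def static_context_rule(manipulations: set) -> float:
    """Static rule: pulling the dipstick suggests the oil level
    was checked (assumed likelihood 0.9)."""
    return 0.9 if "dipstick-pulled-out" in manipulations else 0.0

def dynamic_proximity_rule(distance_m: float,
                           max_range_m: float = 3.0) -> float:
    """Dynamic rule: the closer the student is to the object, the
    more likely she sensed its value (assumed linear falloff)."""
    return max(0.0, 1.0 - distance_m / max_range_m)

def certainty_user_sensed(manipulations: set, distance_m: float) -> float:
    """Combine rules by taking the strongest supporting evidence."""
    return max(static_context_rule(manipulations),
               dynamic_proximity_rule(distance_m))

# Dipstick pulled while the student stands 1 m away: certainty 0.9.
print(certainty_user_sensed({"dipstick-pulled-out"}, distance_m=1.0))
\end{verbatim}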
Sense-of-reality settings can be changed as the user appears more engaged. That is, the more the user acts as though the system is real, the stronger Steve's own sense of the interaction as a real thing becomes. In general, this leads to stronger emotional responses, modeling what we might call ``getting into it.''
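One hypothetical way to model this: maintain an engagement estimate updated from the user's behavior (an exponential moving average is assumed here) and use it as a multiplier on emotional intensity:
\begin{verbatim}
class SenseOfReality:
    """Scales emotional responses by how real the interaction
    currently feels to the agent."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha     # smoothing rate (assumed)
        self.engagement = 0.5  # neutral starting estimate

    def observe(self, acted_as_if_real: bool) -> None:
        """Nudge engagement toward 1.0 or 0.0 after each observation."""
        target = 1.0 if acted_as_if_real else 0.0
        self.engagement += self.alpha * (target - self.engagement)

    def scale(self, base_intensity: float) -> float:
        return base_intensity * self.engagement

sor = SenseOfReality()
for _ in range(5):           # the user keeps treating the system as real
    sor.observe(acted_as_if_real=True)
print(round(sor.scale(0.8), 2))  # responses strengthen: 0.67 vs. 0.40
\end{verbatim}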