At Stanford's Social Responses to Communication Technologies research group (http://www-leland.stanford.edu/group/commdept/), Nass, Reeves, et al. formalized the intuition that people apply social rules to many aspects of human-computer interaction (e.g., in [Nass, Moon, & Fogg 1995; Nass, Steuer, & Tauber 1994]). The often-cited studies of this group have been used to counter arguments that the attempt to build social intelligence into computer programs is frivolous. The counterargument runs roughly as follows: designing software as a social interface is not something we can avoid, since users treat software socially whether we plan for it or not; our only choice is not whether to do it, but whether to do it well.
The studies showed that even when computers were not given explicitly anthropomorphic interfaces, users tended to perceive them in social terms anyway, and exhibited preferences among artificial personalities. Although these studies were not intended to suggest that the computer programs used were autonomous agents per se, they did illustrate that the association of a persona with certain types of programs is relatively easy to establish and, in some cases, perhaps hard to avoid. In addition to the Stanford group, Youngme Moon continues similar work at MIT's relatively new Social Intelligence Research group.