Synthetic agent characters work because people see them as social counterparts. Once such bonds are formed, what happens when these creations fail to act in socially responsible ways? Is socially responsible behavior even an achievable goal? ...after all, people themselves generally fail this test as well!
It may prove possible for synthetic agents to form synthetic, yet highly plausible, relationships with users, and with each other; indeed this is indirectly a goal of building believable agents. But with respect to users the label "synthetic" may well be spurious: whatever its origin, such a relationship has, at the very least, a real qualitative character for the user. As deep social intelligence, personality and character representations, and graphical representations such as faces and body gestures improve (i.e., become better able to create the illusion of artificial life), the social bond formed between agents and the humans interacting with them will grow stronger. New ethical questions arise. Each time we imbue an agent with one or more lifelike qualities, we muddy the distinction between users being amused, or assisted, by an unusual piece of software, and users forming an emotional attachment of some kind with the embodied image the lifeless agent projects.
For children especially, but for adults as well, agents that understand social relationships, maintain histories with users, have some knowledge of human emotion, are beginning to understand human speech, can speak themselves, and control media channels delivering morphing faces, music, and theater-quality sound, all responsively and in real time, clearly have tremendous inherent attachment-forming capabilities. Additionally, these agents are, by definition, at least partially autonomous. They may well live on after the user walks away from the terminal, and may form relationships with other users. In short, they have their own (albeit impoverished) synthetic lives. What happens to the agent, and how it changes over time, may not, by design, be fully under the user's control. When such agents are wedded to adventure games, to goal-based products such as tutoring systems in which a user's well-being may be seen as depending on her relationship to the embodied agent, or to systems that deliver critical or personal information, as might occur in a medical patient advocate system (see the related [Miksch, Cheng, & Hayes-Roth 1997]), the possibilities for as yet undiscovered social phenomena are wondrous (...frightening?) to ponder.