Home

Ethics in Artificial Intelligence—CSC484

Professor Clark Elliott

The China Brain & artificial sentience:

Most non-sociopaths will accept that we can't stick pins in other people because it hurts them.

People feel pain in their brains, where the incoming neural signals are interpreted and processed.

Brain neurons are computational devices, with inputs, outputs, and activation states.
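
To make that framing concrete, here is a tiny, hypothetical sketch (in Lisp, anticipating the LISP program described below) of a neuron as a pure computational device: weighted inputs go in, an activation comes out. The weights, threshold, and firing rule are illustrative assumptions, not a claim about how real neurons encode anything.

    ;; A toy "neuron as computational device": weighted inputs in, activation out.
    ;; The weights, threshold, and firing rule below are illustrative assumptions.
    (defun neuron-activation (inputs weights &optional (threshold 1.0))
      "Sum the weighted INPUTS and return 1.0 if the sum exceeds THRESHOLD, else 0.0."
      (let ((total (reduce #'+ (mapcar #'* inputs weights))))
        (if (> total threshold) 1.0 0.0)))

    ;; Example: three input signals arriving over three weighted connections.
    (neuron-activation '(0.9 0.2 0.7) '(1.0 0.5 0.8))   ; => 1.0 (the neuron "fires")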

If we write a bunch of LISP code to simulate an artificial human, we'll have some version (however complex) of a pain register: if we simulate a lot of pain, the pain register holds a 10, and if we simulate being completely comfortable it holds a 0. But no one cares if we write an 8 or a 9 into the pain register of our artificial computer-program human by sticking an artificial pin in her, because she is not real. No one is getting hurt.
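
A minimal, hypothetical sketch of what that pain register might look like in actual Lisp (the structure and function names here are invented for illustration; nothing below is a real implementation of an artificial human):

    ;; The simulated human is just a structure with a numeric pain slot.
    (defstruct sim-human
      (name "artificial human")
      (pain-register 0))              ; 0 = completely comfortable, 10 = maximal pain

    (defun stick-artificial-pin (person &optional (intensity 8))
      "Simulate a pin prick by writing INTENSITY into PERSON's pain register."
      (setf (sim-human-pain-register person) intensity)
      person)

    ;; Nothing morally interesting seems to happen here: we just store an 8.
    (let ((her (make-sim-human)))
      (stick-artificial-pin her)
      (sim-human-pain-register her))   ; => 8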

But now we have a problem, as follows:

Suppose we replace a single one of our human friend Sam's ~86 billion brain neurons with a computer that fully implements all aspects of that single neuron and all its input and output connections and levels of activation. So, from Sam's perspective, everything is the same: all inputs, outputs, memories, feelings, etc. Is it still Sam? Most would say, sure.

Now replace ten more of Sam's brain neurons, each with a dedicated computer that fully implements it. Is it still Sam? A thousand more? A million? All 86 billion? Now we have a human brain that is fully realized as a purely physical silicon computational device, managing every aspect of Sam's experience and neural activations. From Sam's perspective, everything is identical to the way it was when he had a wetware brain: Sam's memory, perception, thought imagery, thought processes, and decision making all look exactly the same to him (and to us).
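
The force of the replacement argument is that each swap preserves the neuron's input/output behavior, so nothing downstream (including the rest of Sam) can tell the difference. Here is a hypothetical sketch of that functional equivalence, with invented class and function names:

    ;; Two implementations of the "same" neuron. What the rest of the brain sees
    ;; is only the input/output behavior, which is identical in both cases.
    (defclass wetware-neuron () ())
    (defclass silicon-neuron () ())

    (defgeneric neuron-output (neuron inputs weights)
      (:documentation "Map input activations to an output activation."))

    (defmethod neuron-output ((n wetware-neuron) inputs weights)
      (if (> (reduce #'+ (mapcar #'* inputs weights)) 1.0) 1.0 0.0))

    (defmethod neuron-output ((n silicon-neuron) inputs weights)
      ;; Same rule, different substrate: downstream neurons see no difference.
      (if (> (reduce #'+ (mapcar #'* inputs weights)) 1.0) 1.0 0.0))

    ;; Both return the same activation for the same inputs:
    (list (neuron-output (make-instance 'wetware-neuron) '(0.9 0.2 0.7) '(1.0 0.5 0.8))
          (neuron-output (make-instance 'silicon-neuron) '(0.9 0.2 0.7) '(1.0 0.5 0.8)))
    ;; => (1.0 1.0)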

So... [drumroll please...] can we stick a pin in the new Sam?

The new version of Sam still feels exactly the same pain he would have felt before, but we've already agreed that no one cares about changing a pain register in a computer to level 8. And the new Sam clearly has a [version of] that artificial pain register, implemented in silicon with software managing the activations.

So...? No, we can't stick a pin in Sam because he still feels the same pain as before, and yes, we can stick a pin in Sam because we don't care whether a software register has an 8 in it instead of a 0.

Or, is there some "magical sauce" that we have missed in our attempt to create artificial sentience? But then, as scientists we don't like magical sauce, so—what is it made of?

This brings us to another ethical dilemma: should we be preparing now for the accidental creation of artificial sentience?

Suppose that the magical sauce turns out to be a sort of "computational critical mass": biological systems have surprisingly fantastic computational capabilities. For example, DNA can store staggering amounts of data (e.g., ~215 petabytes per gram), and a single human brain is roughly equivalent, in computational power, to 50 million desktop processors. And we run built-in software of enormous complexity on top of that wetware. What if the "magical sauce" that gives us sentience simply arises naturally once we reach a certain level of computational complexity?
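
As a rough back-of-the-envelope illustration of how a figure like "50 million desktops" could be reached (every number below is an order-of-magnitude assumption chosen for illustration, not a measurement):

    ;; Order-of-magnitude sketch only: neuron count, synapses per neuron, firing
    ;; rate, and "useful events per second" for a desktop are all assumptions.
    (let* ((neurons             86d9)    ; ~86 billion neurons
           (synapses-per-neuron 1d4)     ; assume ~10,000 synapses each
           (firing-rate-hz      10d0)    ; assume ~10 synaptic events/sec on average
           (brain-events        (* neurons synapses-per-neuron firing-rate-hz))
           (desktop-events      2d8))    ; assume a desktop handles ~2x10^8 such events/sec
      (/ brain-events desktop-events))   ; => ~4.3d7, i.e. tens of millions of desktops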

If so, with the development of modern AI, and improvements in processing power to support it, might we be on the cusp of creating artificial sentience without actually knowing how we did it?

Which ethical approach should we take: (a) worry about it later if it happens, or (b) worry about it now and do our best to be prepared? Option (a) could get us into big trouble (e.g., re-inventing the slave trade of the 17th and 18th centuries on a massive scale), while option (b) could waste a lot of resources on something that never happens, or that happens so differently from what we expected that our preparations are useless.

To those who say we couldn't accidentally get sentience without designing it first, consider this: contrary to common belief, we apparently don't fully understand how airfoils generate lift, even though we use them to fly planes all the time. If we can stumble onto flight, might we also stumble onto sentience?

Ref.: The China Brain thought experiment [Anatoly Dneprov, 1961; Lawrence Davis, 1974].