
Ethics in Artificial Intelligence—CSC484

Professor Clark Elliott

Basic Ethics:

Utilitarian arguments: Basic utilitarian models of ethics translate all good and bad outcomes into some kind of shared currency, and then examine the summed positive and negative outcomes of any particular action: if the positive outcomes for the community outweigh the negative ones, the action is ethical.
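As a rough illustration (not part of the course materials), the rule can be sketched in a few lines of Python: score each affected person's outcome in the shared currency, positive for benefit and negative for harm, and judge the action by the signed sum. The function name and the scores are made up for the example.

    def is_ethical_utilitarian(outcomes):
        """outcomes: one signed score per affected community member."""
        return sum(outcomes) > 0

    # An action that benefits two people (+3, +2) but harms one (-4)
    # still counts as ethical here, because the net outcome is +1.
    print(is_ethical_utilitarian([3, 2, -4]))  # True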

Example: Using the shortest-job-first algorithm in a grocery store line results in the least total time people spend waiting in line: if you only have a few items to buy, you get to cut in front; if you have a full cart, you always let everyone else go first. Thus, under pure utilitarianism this is an ethical system.


Key: People in line = A, B, C / Num items = 1, 3, 6

QueueU = A1, B3, C6: total waiting time is 5 items (B and C each wait for A's 1 item; C also waits for B's 3)
QueueN = C6, B3, A1: total waiting time is 15 items (B and A each wait for C's 6 items; A also waits for B's 3)
Under utilitarianism, QueueU would be considered a more ethical arrangement, because it has less of a negative outcome (waiting in line) for the community than QueueN.
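A quick way to check the arithmetic: total waiting time is the sum, over every shopper, of the items scanned ahead of them. A small Python sketch (the function name is invented for illustration; the queues and item counts are the ones in the key above):

    def total_waiting_time(queue):
        """queue: item counts in service order; each shopper waits for everyone ahead."""
        total = 0
        items_ahead = 0
        for items in queue:
            total += items_ahead    # this shopper waits for everything already ahead
            items_ahead += items    # and their items add to the wait of those behind
        return total

    print(total_waiting_time([1, 3, 6]))  # QueueU (shortest job first): 5
    print(total_waiting_time([6, 3, 1]))  # QueueN (reversed): 15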

But pure utilitarianism can get us into trouble: under this system it is ethical to snatch healthy young people off the street and harvest all their organs to save and improve the lives of thirty or forty others, and increase the happiness of all their families. Ouch! (See also Shirley Jackson's The Lottery.)

Kantian arguments: Kantianism, by contrast, is absolute. You might adopt a categorical imperative to say that you never cheat your neighbors in the community by cutting in front, and thus always use a First In First Out queue in the grocery store, which is what most of us do, thereby guaranteeing an inefficient system.

Think this is all academic? Think again: We come around a corner in our autonomous (driverless) vehicle and there are five school children who have wandered into the road in front of us. Does our car (a) make the decision to swerve left and run over a single adult pedestrian instead (Utilitarian) or (b) continue on and run over the schoolchildren (Kant)? Choice (a) saves more lives, but violates what might be an absolute categorical imperative never to choose to harm humans. Decisions have to be made NOW, as you are reading this, about whether our autonomous vehicles must swerve to the left.
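To make the contrast concrete, here is a toy Python sketch of the two policies. It is not a real autonomous-vehicle control API; the harm counts and policy names are assumptions chosen to match the scenario above.

    def choose_action(policy, harm_if_swerve, harm_if_continue):
        """Return 'swerve' or 'continue' given counts of people harmed by each choice."""
        if policy == "utilitarian":
            # Minimize total harm, even if that means actively choosing to harm someone.
            return "swerve" if harm_if_swerve < harm_if_continue else "continue"
        if policy == "kantian":
            # Never actively choose to harm a human, so never steer into the pedestrian.
            return "continue"
        raise ValueError("unknown policy")

    print(choose_action("utilitarian", harm_if_swerve=1, harm_if_continue=5))  # swerve
    print(choose_action("kantian", harm_if_swerve=1, harm_if_continue=5))      # continue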