CSC208 Elliott

Utilitarian Calculations

To perform utilitarian analysis, we actually need to write the equations and perform the math. Otherwise we are just pretending.


Write down the action to which we are proposing to assign an ethical value. Describe the scenario as necessary.

Example:

Alice, Bob, Carol, and Dave are going to take a test, which will be graded on a curve. Alice found a draft of the exam. Should Alice, Bob, and Carol study the secret exam to get an advantage, but exclude Dave, who is having trouble in school because of a learning disability and is not popular with the others? Note: Dave will be told what happened after the exam.


List all the stakeholders.

That is, all those who have a stake in the outcome. Since utilitarian analysis looks at the sum total of goodness and badness of outcomes for all stakeholders, we must calculate these values for everyone who is affected by the proposed action.

Example:

Alice
Bob
Carol
Dave


[Optionally] decide a weighting factor for each stakeholder.

Ordinarily this would just be 1.0 for all stakeholders. However, this option might be important. For example, we might wish to include stakeholder values for both the family cat and the family baby. Now suppose that we can only save the cat or the baby from getting hit by a bus. In such a scenario we could weight the baby's well-being as a hundred times more important than the cat's. Using our calculus we would then find that saving the baby is ethical, but saving the cat is not.
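To make the cat-versus-baby illustration concrete, here is a minimal Python sketch; the weights and the -10 to +10 outcome values are assumptions made up for this illustration:

# Hypothetical stakeholder weights: the baby counts 100 times the cat.
weights = {"baby": 100.0, "cat": 1.0}

# Assumed outcome values (-10 to +10) for each possible action.
save_baby = {"baby": +10, "cat": -10}   # the cat is hit by the bus
save_cat  = {"baby": -10, "cat": +10}   # the baby is hit by the bus

for name, outcomes in [("save the baby", save_baby), ("save the cat", save_cat)]:
    total = sum(weights[s] * v for s, v in outcomes.items())
    verdict = "ethical" if total > 0 else "not ethical"
    print(f"{name}: {total:+.0f} -> {verdict}")   # +990 vs. -990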

Example:

[Dave is twice as important as anyone else. E.g., we've decided that the well-being of people with disabilities should be favored.]


Make a list of all outcomes of the proposed action that are relevant to any stakeholder.

Example:

Get caught, expelled
Well-being over grade
Guilt
Betrayal

Assign a percentage of how important each category of outcome is, for a total of 100 percent.

Example:

Get caught, expelled: 20%
Well-being over grade: 30%
Guilt: 20%
Betrayal: 30%

Build the grid of stakeholders and outcomes, and assign a value from -10 to +10 to each cell.

Example:

Utility      Get caught,      Well-being        Guilt    Betrayal
             expelled (0.2)   over grade (0.3)  (0.2)    (0.3)
Alice          -5               +8                 0        0
Bob            -2               +5                -5        0
Carol          -3               +8                -9        0
Dave (*2)       0               -4                 0       -7
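One convenient way to hold this grid in code is a pair of weight dictionaries plus a nested table; a minimal Python sketch (the variable and category names are my own shorthand for the columns above):

# Category weights (percentages as fractions; they must total 1.0).
category_weights = {
    "caught":     0.2,   # get caught, expelled
    "well-being": 0.3,   # well-being over grade
    "guilt":      0.2,
    "betrayal":   0.3,
}

# Optional stakeholder weights: Dave counts double, per the (*2) notation.
stakeholder_weights = {"Alice": 1.0, "Bob": 1.0, "Carol": 1.0, "Dave": 2.0}

# The grid: an outcome value from -10 to +10 for each cell.
grid = {
    "Alice": {"caught": -5, "well-being":  8, "guilt":  0, "betrayal":  0},
    "Bob":   {"caught": -2, "well-being":  5, "guilt": -5, "betrayal":  0},
    "Carol": {"caught": -3, "well-being":  8, "guilt": -9, "betrayal":  0},
    "Dave":  {"caught":  0, "well-being": -4, "guilt":  0, "betrayal": -7},
}

assert abs(sum(category_weights.values()) - 1.0) < 1e-9  # sanity check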

Discussion: In other words, we are trying to capture the idea that Alice is pretty worried about getting caught, but really wants/needs to get a good grade. She does not feel the slightest bit guilty. Bob is not too worried about getting caught, but his grade is somewhat important to him. He feels bad about cheating. Carol is a little more worried than Bob about getting caught, and feels terrible about cheating Dave, but her grade is as important to her as it is to Alice. Dave will be unhappy that his grade has gone down relative to the cheaters, and will feel quite bad when he discovers how he has been betrayed by the others. Note the (*2) notation for Dave, indicating that as a stakeholder his outcomes are considered twice as important as any of the others.



Perform the math to see whether overall there is a positive or negative outcome.

One way to do this is, for each attribute, to sum the outcomes across all stakeholders (including multiplication by any optional stakeholder weighting), then multiply that sum by the attribute's weighting factor (percentage) to produce an intermediate result. Then sum all the intermediate results to find the overall outcome according to our utilitarian calculus.

Example:



(-5 -2 -3)       = -10 [getting caught]
(8 + 5 + 8 -4*2) =  13 [well-being over grade, include Dave's weighting]
(-5 -9)          = -14 [guilt]
(-7*2)           = -14 [betrayal, include Dave's weighting]

-10 * 0.2 = -2
 13 * 0.3 =  3.9
-14 * 0.2 = -2.8
-14 * 0.3 = -4.2

-2 + 3.9 - 2.8 - 4.2 = -5.1

The overall outcome of the action is negative, and thus, under the Act Utilitarian calculus, the action is not ethical.
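This arithmetic is easy to script. As a minimal Python sketch, reusing the category_weights, stakeholder_weights, and grid dictionaries defined in the earlier sketch:

def act_utility(grid, category_weights, stakeholder_weights):
    # For each category: sum outcomes across stakeholders (applying any
    # stakeholder weighting), scale by the category weight, and accumulate.
    total = 0.0
    for cat, cat_weight in category_weights.items():
        cat_sum = sum(stakeholder_weights[s] * row[cat]
                      for s, row in grid.items())
        total += cat_weight * cat_sum
    return total

print(round(act_utility(grid, category_weights, stakeholder_weights), 2))  # -5.1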

An alternate way is to compute the sum of all category-weighted outcomes for each stakeholder. This is a little more work, but has the benefit of showing the overall outcome for each stakeholder:


(-5 * 0.2) + (8 * 0.3) + 0 + 0 = -1 + 2.4 =                    1.4 [Alice]
(-2 * 0.2) + (5 * 0.3) + (-5 * 0.2) + 0 = -0.4 + 1.5 - 1 =     0.1 [Bob]
(-3 * 0.2) + (8 * 0.3) + (-9 * 0.2) + 0 = -0.6 + 2.4 - 1.8 =   0.0 [Carol]
(-8 * 0.3) + (-14 * 0.3) = -2.4 - 4.2 =                       -6.6 [Dave, with the *2 weighting applied to his -4 and -7; zero-valued categories omitted]

1.4 + 0.1 + 0.0 - 6.6 = -5.1
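Continuing the Python sketch, the per-stakeholder variant might look like this; it reports the same -5.1 total while exposing each person's weighted outcome:

def stakeholder_utilities(grid, category_weights, stakeholder_weights):
    # Category-weighted total for each stakeholder, with the optional
    # stakeholder weighting applied, so individual outcomes stay visible.
    return {s: stakeholder_weights[s] *
               sum(category_weights[c] * v for c, v in row.items())
            for s, row in grid.items()}

per_person = stakeholder_utilities(grid, category_weights, stakeholder_weights)
for s, u in per_person.items():
    print(f"{s}: {u:+.1f}")   # Alice +1.4, Bob +0.1, Carol ~0.0, Dave -6.6
print(round(sum(per_person.values()), 2))  # -5.1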


One difficult problem that this example does not address is how one translates different units of measure into a common, fungible unit. For example, how does one compare costs and benefits relative to health with those relative to emotions, or to money? In our example we have skipped any formal step specifying this translation, and instead have implicitly given well-being values for all four outcomes.

If we were to add money, say by specifying that Alice has sold the rights to preview the test to Bob and Carol, then the problem becomes more interesting, and two immediate options come to mind.

First, we could translate all of the well-being outcomes into money values, something like "Alice's improved grade makes her feel about as good as getting $780," so the value of +8 in her cell would be replaced with 780; all the values would be replaced with monetary values. Alternatively, an option I would generally prefer as more flexible is to translate money values onto the same -10 to +10 scale. In this way, "getting some money" might be worth +3 to Alice, and "spending some money" might be worth -1 to Bob, but -4 to Carol, who is very poor. Supposing that we arbitrarily say that the money outcome is worth 20 percent of the total, and we "steal" that by dropping the importance of betrayal from 30 percent to 10 percent, this would yield:

Utility      Get caught,      Well-being        Guilt    Betrayal   Money
             expelled (0.2)   over grade (0.3)  (0.2)    (0.1)      (0.2)
Alice          -5               +8                 0        0         +3
Bob            -2               +5                -5        0         -1
Carol          -3               +8                -9        0         -4
Dave (*2)       0               -4                 0       -7          0
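Continuing the Python sketch, the extended grid drops straight in; this reuses act_utility and stakeholder_weights from above, and the -2.7 result is simply my arithmetic over these assumed values:

# Betrayal drops to 0.1 and money takes the freed-up 0.2.
category_weights = {"caught": 0.2, "well-being": 0.3,
                    "guilt": 0.2, "betrayal": 0.1, "money": 0.2}

grid = {
    "Alice": {"caught": -5, "well-being":  8, "guilt":  0, "betrayal":  0, "money":  3},
    "Bob":   {"caught": -2, "well-being":  5, "guilt": -5, "betrayal":  0, "money": -1},
    "Carol": {"caught": -3, "well-being":  8, "guilt": -9, "betrayal":  0, "money": -4},
    "Dave":  {"caught":  0, "well-being": -4, "guilt":  0, "betrayal": -7, "money":  0},
}

print(round(act_utility(grid, category_weights, stakeholder_weights), 2))  # -2.7

With these assumed values the overall outcome is still negative (-2.7 rather than -5.1), so the action remains unethical under the Act Utilitarian calculus, though less strongly so once betrayal is weighted down and the money outcomes are included.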


For the Rule Utilitarian ethical framework we would extract a compiled form of this calculation (presumably based on more than one example) by making a general rule that using a draft exam to gain an advantage over other students is unethical. In this way we could then just follow the rule, and not bother performing the full calculation for each new, similar situation.


For Bentham's Hedonistic Calculus, the simple calculations above become much more complex. Each numerical value has to be constructed from sub-values corresponding to: