A continuing interest of mine has been the use of Bayesian networks, which, when coupled with utilities, allow one to calculate expected utilities. As any good decision theorist knows, classical decision theory (von Neumann and Morgenstern) rests on finding the maximum expected utility (MEU). One of the standard ways of measuring utilities, albeit excluding one’s attitude towards risk, is the time-tradeoff method. In essence, one is asked a series of questions about a given situation that elicits the respondent’s values regarding the outcomes of that situation. In medicine, a classic time-tradeoff study asks how much life you are willing to give up to avoid a bad outcome. For example, would you rather live the rest of your life after prostate cancer therapy (say, 20 yr) being impotent, or would you rather give up 2 years (and live 18 years) but be normally potent? What if the choice was living 2 yr of potency, then death? Typically the questioner ping-pongs back and forth in time until you conclude that the two states are indistinguishable, e.g. 20 years of impotence is the same as 14 years of potency. In classic utility theory, your utility for the state of potency is (14/20) = 0.7.
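The ping-pong procedure can be sketched as a bisection search for the indifference point. This is just my illustration, not a standard elicitation instrument: the "respondent" below is simulated with a hidden true utility, whereas a real study would of course ask a person.

```python
# Sketch of a time-tradeoff elicitation via bisection (illustrative only).
# prefers_tradeoff(years) answers one ping-pong question: would the
# respondent take `years` of full health over `horizon` years in the
# impaired state?

def time_tradeoff(prefers_tradeoff, horizon=20.0, tol=0.25):
    """Narrow in on the indifference point and return the classic
    utility: (equivalent healthy years) / horizon."""
    lo, hi = 0.0, horizon
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_tradeoff(mid):
            hi = mid  # accepts the shorter healthy life: offer even fewer years
        else:
            lo = mid  # sticks with the long impaired life: offer more healthy years
    indifference = (lo + hi) / 2
    return indifference / horizon

# Simulated respondent whose true utility for the impaired state is 0.7:
# they accept any trade leaving at least 0.7 * 20 = 14 healthy years.
utility = time_tradeoff(lambda yrs: yrs >= 0.7 * 20.0)
```

With the 20-year horizon from the prostate example, the search converges near 14 healthy years, recovering a utility close to 0.7.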
While reading a short history of Boss Tweed of New York, who lived a riotously good life for many years but spent his last few in ignominy, eventually dying alone in a bleak cell, I got to thinking about how many years of riotous good living I would demand in order to balance the humiliation, loneliness and misery of some number of years after being caught. In analogy to the above example, the tradeoff is not between full health and an early death, but rather the ratio of good years to bad years.
Can one construct a scale of morality based on such considerations?