The Ethics of Technology: A Geometric Analysis of Five Moral Principles


Martin Peterson, The Ethics of Technology: A Geometric Analysis of Five Moral Principles, Oxford University Press, 2017, 252pp., $74.00 (hbk), ISBN 9780190652265.

Reviewed by Kristin Shrader-Frechette, University of Notre Dame

2017.10.30


Mathematical methods like benefit-cost analysis, decision theory, and quantitative risk assessment have enhanced ethical clarity and problem-solving, especially in the hands of philosophers like Oxford's John Broome, California's Nancy Cartwright, Columbia's Isaac Levi, California's Carl Cranor, and Virginia Tech's Deborah Mayo. Would geometrical methods be equally helpful in ethics?

Peterson thinks so and offers a "geometric method" for ethical decision-making (13). While Spinoza attempted to apply Euclid's method to philosophy by deriving many propositions from a few axioms and definitions, Peterson's method "derives its normative force from the Aristotelian dictum" that agents "should 'treat like cases alike'" (4). If agents repeatedly intuit the pairwise degree of moral similarity between two technology-ethics cases -- one they know how to resolve ethically and one about which they are uncertain -- Peterson says they eventually can reach a "warranted conclusion" about "what it is right or wrong to do, in each . . . case" (5,15).

After a helpful introduction, chapter 2 summarizes Peterson's method of intuiting the "moral similarity" between pairwise cases and between cases and each of the "5 geometrically construed . . . [ultima-facie] principles" that determine rightness/wrongness (24): the cost-benefit, precautionary, sustainability, autonomy, and fairness principles. Chapter 3 presents results of Peterson's "test" of his method. He asked several hundred professional philosophers and undergraduates to estimate the pairwise moral similarity between technology-ethics cases and to indicate which of the 5 principles they think applies to each (24), then analyzed their responses. Chapters 4-8 "defend five [ultima-facie] moral principles" (3,18) that Peterson says are "necessary" and "jointly sufficient for analyzing all cases related to . . . technologies" (3,16). To his credit, Peterson admits there is nothing new about these 5 principles, whose discussion comprises most of his book (3). Chapter 9 criticizes some non-analytic views of technology ethics, and chapter 10 presents a 4-page conclusion.

Peterson's geometric method is built on "three central concepts": the scientific-technological case, pairwise moral similarity between these cases, and the dimensions of his geometrical-moral space (29). Though he defines none of these concepts and says case is "a primitive concept" (29), Peterson construes a case as a situation of moral choice among alternatives, such as whether or not to use greenhouse-gas-reducing technologies (209). In Peterson's "moral space," geometric points represent cases. Straight lines between points (cases) represent the "moral concept" of pairwise-case moral similarity (30), as subjectively assessed by each agent on a 7-point cardinal scale: 0 represents "full similarity"; 7 represents "no similarity." Shorter lines (estimates closer to 0) represent greater similarity between cases; longer lines (estimates closer to 7) represent less similarity (38). Surprisingly, although Peterson says these pairwise-case estimates are "analogous [to] . . . moral intuitions" (31), neither Peterson nor agent-respondents determine how many or which moral-similarity dimensions are represented by an agent's own estimates. Instead Peterson says agent estimates "merely reflect the relative positions of the data points" (39); "it is up to the researcher to propose a plausible interpretation of the [agent-estimated] dimensions" by doing the MDS (multidimensional scaling) analysis. "Dimensions are identified [by the MDS researcher] after the distance table [representing multiple agents' similarity judgments] has been created" (39).
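To make the mechanics concrete for readers unfamiliar with multidimensional scaling: in a standard MDS workflow the researcher feeds the agents' pairwise estimates to an MDS routine and only afterwards tries to label the recovered axes, just as Peterson describes. The following is my own illustrative sketch, not code from the book, using a hypothetical 0-7 dissimilarity matrix over four made-up cases.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical 0-7 dissimilarity estimates for four technology-ethics cases
# (0 = "full similarity", 7 = "no similarity"); symmetric, zero diagonal.
cases = ["GMO crops", "nuclear phase-out", "autonomous cars", "geoengineering"]
dissim = np.array([
    [0, 2, 5, 3],
    [2, 0, 6, 2],
    [5, 6, 0, 4],
    [3, 2, 4, 0],
], dtype=float)

# Embed the cases in a 2-D "moral space"; the axes come out unlabeled,
# and it is left to the researcher to interpret them after the fact.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for name, (x, y) in zip(cases, coords):
    print(f"{name:>18}: ({x:+.2f}, {y:+.2f})")

Nothing in such a workflow tells the researcher which moral dimensions, if any, the agents had in mind; that gap is the target of the first conceptual concern below.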

Peterson's method begins by asking agents to identify cases of technology-ethics controversy that they "know how to analyze" regarding moral rightness/wrongness; to "determine which moral principle [of Peterson's 5, e.g., cost-benefit, fairness] best accounts for . . . [their ultima-facie] judgments about" these known cases; and then to estimate the "moral similarity" between ethically-known and ethically-unknown cases (14) on the 0-7 scale. After agents' numerically-represented, subjectively-assessed similarities are plotted in Peterson's geometric "moral space," agents estimate "the most typical case" (of all cases to which they think the earlier, single, moral principle applies) "by calculating the mean location (center of gravity)" of these cases (14). Agents treat this "most typical" case as the "paradigm case" for that ultima-facie moral principle (15). It "dictates the moral verdict" in the case (18). "Once [agents identify] a paradigm case for each [of the 5] domain-specific [ultima-facie] principles, . . . these [paradigm] cases enable" agents to determine rightness/wrongness of new cases. "The degree of [pairwise] similarity" (geometrical closeness) between new/paradigm cases "determines what . . . is right or wrong to do in each . . . case" because agents should use the same ultima-facie principle in geometrically close cases. However, as agents discover new-case applications of an ultima-facie principle, its paradigm case may change (15).
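Read procedurally, the method amounts to nearest-centroid classification in the embedded space. The sketch below is my reconstruction under that reading, with made-up coordinates, not code from the book: each principle's paradigm case is the mean location ("center of gravity") of the cases already assigned to it, and a new case inherits the principle of whichever paradigm lies closest.

import numpy as np

# Hypothetical coordinates of cases the agent already "knows how to analyze,"
# grouped by the ultima-facie principle the agent thinks governs them.
known_cases = {
    "cost-benefit":   np.array([[0.9, 0.1], [1.1, -0.2], [0.8, 0.3]]),
    "precautionary":  np.array([[-1.0, 0.8], [-0.7, 1.1]]),
    "sustainability": np.array([[-0.2, -1.0], [0.1, -1.3]]),
}

# Each principle's paradigm case is the centroid ("center of gravity")
# of the cases assigned to it; it shifts as new cases are added.
paradigms = {p: pts.mean(axis=0) for p, pts in known_cases.items()}

def governing_principle(new_case):
    """Return the principle whose paradigm case lies geometrically closest."""
    return min(paradigms, key=lambda p: np.linalg.norm(new_case - paradigms[p]))

print(governing_principle(np.array([-0.8, 0.9])))  # -> precautionary

On this reconstruction, the centroids are only as meaningful as the similarity metric the agents were implicitly using -- which is where the conceptual concerns below begin.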

Claiming his "geometric method can . . . make philosophical views more precise" and clarify moral principles (vii), Peterson has a noble goal. He wants his method to show that using his "meticulous reasoning" can make practical philosophy "as clear, precise, and intellectually challenging as the best work in . . . moral philosophy" (29).

Does Peterson deliver on this worthy aim? A first conceptual concern is that he asks agents to assess pairwise-case "moral similarity" without specifying "similarity with respect to what?" He says agents "should first assess how similar a set of cases are, . . . then [the researcher should] identify . . . dimensions in which those similarities can be accurately represented" (37). However, I agree with Nelson Goodman. It's conceptually incoherent to ask people to assess pairwise moral similarity without first specifying moral-similarity dimensions. Answering Peterson's assessment question, I would respond: "Pairwise 'moral similarity' with respect to what? Fairness? Consent to risk? Beneficial consequences? Catastrophic consequences? Some other moral-similarity dimension?"

Without pre-specified moral-similarity dimensions, each agent likely employs her own implicit dimension(s) to answer Peterson's moral-similarity request. Thus for the same two cases, one agent might estimate "moral similarity" with respect to catastrophic consequences, while another might estimate similarity with respect to fairness. If so, Peterson has a common moral-similarity label, but no common concept. Because different agent-responses likely presuppose different moral-similarity concepts, their responses don't make logical contact. If so, there's little justification for Peterson's quantifying and aggregating many agents' moral-similarity estimates (e.g., 63-70).

Peterson defends his failure to pre-specify moral-similarity dimensions with a rhetorical question. "Why should we think that human beings are capable of identifying and making accurate comparisons of moral similarity along such predefined dimensions" (37)? But if agents really can't identify "dimensional" moral similarity, why does Peterson beg the question that agents can reliably identify his indeterminate, "something-I-know-not-what" moral similarity? Why does he beg the question that some after-the-fact, MDS researcher can reliably identify the implicit, unmentioned, moral-similarity dimensions that earlier agents used?

Explaining how to assess moral similarity, Peterson says it depends on "the nature of the cases under consideration" (36). But for Peterson, cases are primitives (29), and agents have only "intuitions" about moral similarity (31). It's contradictory to say agents both intuit and analyze/debate the different "nature" of different cases. In particular, how would agents adjudicate intuitions that do not presuppose the same moral-similarity concept?

A second coherence concern is that, although Peterson has some useful conceptual analysis (such as his previously published dissection of the Last-Man Argument on pp. 145-156), key claims in the book are unsubstantiated. For instance, Peterson nowhere defends his repeated claim that his 5 ultima-facie moral principles [e.g., cost-benefit, fairness] are "necessary and jointly sufficient for analyzing ethical issues related to . . . technologies" (3, 16; see 14, 169).

Peterson likewise begs the question when he stipulates that his 5 moral principles "cannot be overridden by other principles" (18), and that "when two or more principles clash, all [behavioral-ethics] options come out as being somewhat wrong" (18). Why should clashes between fossil-fuel-industry appeals to its private cost-benefit principles -- and climate-scientist appeals to fairness, sustainability, and precautionary principles -- require that halting and not halting climate change are both "somewhat wrong"? Unless he believes that reliable moral judgments are merely a matter of consensus and not analysis, why would Peterson believe that agents' choosing clashing moral principles is a sufficient condition for claiming that all choices are "somewhat wrong"? Peterson doesn't respond. Yet obviously his method should rationally assess, not dismiss, moral conflict, both because many cases involve moral conflicts and because Peterson says he is doing practical ethics from an analytic, not a consensus, perspective. Paradoxically, Peterson claims his method "determines . . . right or wrong . . . in each . . . case" (15,20). Yet he begs the question that when people disagree, consequently "all options are somewhat right and somewhat wrong, [and therefore] the agent is free to randomize among all these acts as long as each of the conflicting principles gets its due" (205).

Peterson also begs the question by using weasel words in his 5 ultima-facie principles. His precautionary principle mandates "reasonable" measures against "nonnegligible" threats. His fairness principle rejects "unfair inequalities." His sustainability principle rejects "significant" resource-depletion (14). Peterson's question-begging formulations of his ultima-facie principles thus raise additional coherence questions:

● How can ultima-facie principles that (by definition) have already "considered" all things, nevertheless have many different interpretations (87-184), based on scores of different "things considered"?

● How can ultima-facie principles give "non-trivial action guidance" (14), when they contain trivial or question-begging words that could "guide" different, even contradictory, actions?

● How can agents debate/discover/correct their ethics principles if there are no prima-facie principles -- only ultima-facie, indefeasible, incapable-of-being-overridden principles that are necessary and sufficient for ethics decisions (3,16,18)?

● How can Peterson's 5 principles be ultima-facie when their question-begging content provides no more justification than typical prima-facie principles?

A third coherence problem is Peterson's disjointed, perhaps inconsistent, claims about what his geometric method achieves.

On one hand, Peterson repeatedly claims his method provides "non-trivial action-guidance" (14), based on agents' pairwise-moral-similarity estimates that "generate . . . [ethical] principles" (19). In "every case" he says his method "determines" rightness/wrongness (20). It tells "how moral principles could . . . should be construed," not "how human subjects actually form moral principles" (30). It "identifies the applicable [ultima-facie] principle(s)" that are "necessary" and "jointly sufficient for analyzing all [ethical] cases" (3, 16). By aggregating moral-similarity estimates, Peterson says, "anyone who is willing to compare sufficiently many [moral-similarity] cases can arrive at warranted moral conclusions" (205). Indeed, chapter 3, "Experimental Data," has "demonstrated . . . clear intuitions about paradigm cases . . . and that all other moral verdicts can be derived from [intuitive] verdicts about paradigm cases in combination with comparisons of similarity" (19). Peterson also says people can check or "calibrate their [ethical] judgments by asking all agents to compare a small number of randomly selected cases" (205-206). In fact,

instead of asking a large number of individuals to make pairwise comparisons of [moral] similarity, we could . . . extend our own capability to make accurate moral decisions, based on large numbers of comparisons, by using the computerized system . . . delegating this task to machines. (207)

Peterson thus appeals to consensus, says that "moral decisions [can be] based on large numbers of [moral-similarity] comparisons by" people or computers (207), and that "it is fairly likely that most of us get more of the comparisons approximately right most of the time" (38).

On the other hand, Peterson repeatedly claims it's "beyond the scope of this work to adjudicate whether," provided conditions are met, "the opinion on some moral issue reported by a majority of respondents is a good approximation of the morally correct opinion" (81-82). Yet how is it coherent for Peterson to appeal to consensus, to claim moral-similarity comparisons give "action-guidance" (14), that humans make more "accurate moral decisions" (207) by using "all-agents" comparisons to "calibrate their [own ethical] judgments" (205-206) -- yet to claim he takes no position on whether majority-of-respondents' opinions are "a good approximation of the morally correct opinion" (81-82)?

Apart from whether his method is coherent, Peterson's book has useful ethical analyses of topics such as whether cost-benefit principles are compatible with deontological constraints -- and whether output and input filters might adequately represent these constraints (93-108). Yet some of his ethics discussions are oversimplified, bordering on misrepresentation. For instance, Peterson says "the maximin principle, [that is] . . . if the worst possible outcome of one alternative is better than that of another, then the former should be chosen . . . became popular in wider circles when it was adopted by Rawls" (120). He then surveys arguments pro/con regarding his simplified Rawlsian principle.
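To fix terms, here is the simplified maximin rule Peterson states, set beside expected utility (a toy comparison of my own, with made-up payoffs, not an example from the book):

# Toy payoff tables: each alternative maps possible outcomes to utilities.
alternatives = {
    "deploy technology": {"works": 100, "fails": -50},
    "forgo technology":  {"works": 10,  "fails": 10},
}

def maximin_choice(alts):
    """Pick the alternative whose worst possible outcome is best."""
    return max(alts, key=lambda a: min(alts[a].values()))

def expected_utility_choice(alts, probs):
    """Pick the alternative with the highest probability-weighted utility."""
    return max(alts, key=lambda a: sum(p * alts[a][s] for s, p in probs.items()))

print(maximin_choice(alternatives))                                         # forgo technology
print(expected_utility_choice(alternatives, {"works": 0.9, "fails": 0.1}))  # deploy technology

Which rule is appropriate depends on the choice situation, and, as the next paragraphs explain, Rawls restricted maximin to a much narrower class of situations than Peterson's summary suggests.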

However, maximin was not "adopted by Rawls" in any simplistic sense but in a fourfold-more-precise sense. For one thing, Rawls rejected maximin and used expected utility in circumstances like justifying promising and punishment. His reasoning was "that punishment and promising are practices" and that "utilitarian arguments are appropriate" for "justifying a practice."[1] Yet, he says "critics of utilitarianism" nevertheless "mistake" the ethical rationale for "justifying a practice" with that for "justifying a particular action falling under it," perhaps because of "misconceiving the logical status of the rules of practices."[2] As a result, Rawls says these too-simple critics of utilitarianism forget that "utilitarian considerations should be understood as applying to practices," that utilitarianism "is a better account of our considered moral principles" for justifying practices; and that this "utilitarian view [of justifying practices] is more fundamental" than non-utilitarian rationales, partly because it is "the justification of an institution" rather than of a particular action.[3] More important, Rawls defended maximin for large-scale societal, not small-scale individual, cases; for potentially catastrophic, not small, consequences; and for choices under uncertainty (no reliable probabilities for outcome-occurrence), but not for choices under certainty (deterministic outcome-occurrence) or under risk (reliable probabilities for outcome-occurrence).[4]

Peterson's oversimplification of Rawls' ethics is worrisome because it suggests Rawls always favors maximin. Yet Rawls is not committed to maximin choices under risk -- which comprise about 50 percent of Peterson's 15 technological-case descriptions (209-215). Instead, Rawls supports maximin under uncertainty, especially for potentially catastrophic consequences. Peterson's oversimplification also means he misuses his supposed counterexample against Rawls. Peterson claims that under uncertainty, "obviously" reasonable people would choose "potential loss of some small amount of utility . . . against a sufficiently large gain . . . even if nothing is known about . . . probability" (121). Yet Peterson's "small"-consequences counterexample makes no logical contact with Rawls' proposal of maximin for potentially catastrophic consequences. Peterson "refutes" only a caricature of Rawls' position.

A second ethics problem is that Peterson says nothing about "informed consent." Yet consent is the cornerstone value in biomedical ethics and much practical ethics; both government and industry impose many technological health/safety risks on an often-unconsenting public; and the US Centers for Disease Control says that within a year or two most people will die prematurely of cancer, at least 65 percent of it caused by environmental factors (like 100,000 chemical pollutants) to which most people are unlikely to have given consent.[5] Surprisingly, only 1 of Peterson's 15 case descriptions mentions something that could be interpreted as lack of consent to a health-threatening technology (213-14). Why? One reason may be Peterson's question-begging hasty generalization that "modern technology has boosted autonomy according to any minimally plausible definition of autonomy" (157). Peterson may think neither autonomy nor consent is a big problem. Or he may believe his autonomy principle protects consent. However, this protection seems unlikely because Peterson says autonomy is "valuable in an instrumental and extrinsic sense, just like money and other financial assets" (162). Insofar as technology-related consent often protects rights to life, it doesn't seem "just like money and other financial assets."

A third ethics problem is the factual/normative thinness of Peterson's 100-200-word "case-descriptions" (209-215). Though they're supposed to be "describing the key facts considered to be morally relevant," so that Peterson can empirically "test" his geometric method (207), they arguably ignore most morally relevant facts. For instance, Peterson's case-description for whether it's "morally wrong not to . . . reduce greenhouse-gas emissions" (210) addresses only sustainability issues and includes only a brief 2014 IPCC quotation saying humans likely caused massive, 40-percent increases in carbon-equivalent emissions since preindustrial times. Given this thin case-description -- one that ignores normative concerns about cost-benefit, fairness, and autonomy (210) -- a majority of Peterson's respondents unsurprisingly named sustainability as the one best ultima-facie principle to determine this case. Yet regarding cost-benefit, every major non-fossil-fuel-industry study has concluded that net benefits exceed net costs for addressing climate change.[6] Regarding fairness, though developed nations have caused and benefitted most from greenhouse emissions, underdeveloped nations have caused the least emissions but will be harmed most by them.[7] Regarding autonomy, unmitigated climate effects pose massive threats to the self-determination and freedom-from-coercion of citizens in underdeveloped nations because by the year 2100 they will cause most of those economies to drop by as much as 73 percent in GDP, while the average global drop will be 23 percent.[8] Had Peterson's greenhouse-gas-case description also included cost-benefit, fairness, or autonomy considerations, a majority of agent-respondents might have chosen not only sustainability (63, 70), but also cost-benefit, fairness, or autonomy principles to determine the greenhouse-gas case.

Likewise, Peterson's case description -- for whether it's "morally right to phase out nuclear power in Germany" because of the Fukushima disaster (212) -- is incomplete. It says only that the tsunami caused a triple meltdown, that atomic energy supplied 22 percent of German electricity before Fukushima, but that the German government wants a reactor phase-out by 2022. Given such a thin case-description -- one that ignores normative concerns about cost-benefit, fairness, and autonomy principles (212) -- most of Peterson's agent-respondents unsurprisingly said the precautionary principle was the best ultima-facie principle to determine this case (70). Yet regarding cost-benefit, no nuclear plant anywhere has ever operated on the open market or cost-effectively; all governments have had to subsidize 80-90 percent of nuclear-electricity costs, one reason that nuclear energy has long been the most expensive (including subsidies) source of electricity on the grid.[9] Regarding fairness, current international law limits nuclear-industry-accident liability to $300 million in Japan and 1.5 billion euros in Europe[10] -- only 0.05-0.13 percent and 0.3-0.87 percent, respectively, of total Chernobyl-accident losses.[11] This means that after an industry-caused nuclear accident, innocent citizens would bear 99+ percent of losses, while industry would bear less than 1 percent. Regarding autonomy, in Japan, Germany, and virtually all nations, most citizens oppose atomic energy. Thus, had Peterson's nuclear-power-case description also included cost-benefit, fairness, or autonomy issues, perhaps most agent-respondents would have chosen not only the precautionary (70), but also the cost-benefit, fairness, or autonomy principles to determine this case.
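A back-of-the-envelope check of what those liability percentages imply, using only the figures cited in this review (not data from Peterson or the underlying sources):

# Liability caps cited above and the shares of total Chernobyl-accident
# losses they are said to represent; back-calculate the implied losses.
cap_japan_usd = 300e6     # $300 million
cap_europe_eur = 1.5e9    # 1.5 billion euros

for share in (0.0005, 0.0013):   # 0.05 - 0.13 percent
    print(f"Japan: implied total losses ~ ${cap_japan_usd / share / 1e9:.0f} billion")
for share in (0.003, 0.0087):    # 0.3 - 0.87 percent
    print(f"Europe: implied total losses ~ {cap_europe_eur / share / 1e9:.0f} billion euros")

On either set of figures, total losses in the hundreds of billions dwarf the caps, which is the point of the fairness objection.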

The upshot? Peterson's thin/biased case-descriptions may invalidate his empirical "test" of his method and predetermine what ultima-facie principles agent-respondents deem morally determinative. If so, Mark Twain was right. For people whose only tools are hammers (single-principle case descriptions), everything looks like a nail (single-principle moral conclusions). At the least, Peterson begs the questions that only one relevant principle should "determine" technology-ethics choices (24); that majority-chosen principles of "all agents" can "calibrate [other agents' ethical] judgments" (205-206); and that one can analyze science/technology based on 100-200-word case descriptions.

Another scientific concern is that, for a book dedicated to using mathematics and analytic methods to assess technology ethics, Peterson makes many scientific-mathematical claims that seem either inaccurate or, at best, incomplete and thus misleading, e.g., his remarks about criminology (6), climate change (15), nuclear-energy costs (20), the triangle inequality (34), genetics/epigenetics/epidemiology of risk/drug reactions (130-131), or compensating variations in economics (92-93). Such lapses, along with his scientifically "thin" case-descriptions, suggest that despite Peterson's decision-theoretic expertise, he may have over-extended himself into science/technology.

For instance, Peterson says that "in the early 1990s . . . there was no scientific consensus about the causes and effects of climate change" (15). However, his book ignores nearly all the classic literature on climate change, such as the 1990 IPCC report and its consensus claims about the existence, anthropogenic causes, and extreme-weather effects of climate change. In particular, it ignores the definitive Science article that established scientific consensus, at least as early as 1993, about the fact of climate change. Surveying all peer-reviewed scientific articles on "climate change" in the Web of Science database, the Science piece showed that at least by 1993, 75% of all papers accepted the IPCC or "consensus view; 25% dealt with methods or paleoclimate, taking no position on current anthropogenic climate change. Remarkably, none of the papers disagreed with the [IPCC or] consensus position."[12]

Peterson likewise seems to err when he says that "a straightforward explanation of why people often come to different conclusions about . . . the civil use of nuclear power" is "that several conflicting principles apply equally." We

should apply the Precautionary Principle because it is better to be safe than sorry. On the other hand, nuclear power is a technology to which it seems reasonable to apply the Cost-Benefit Principle . . . which could explain why the debate over nuclear power has become so deeply polarized in many countries. (20)

However, Peterson appears wrong to claim that economics-versus-safety justifications, respectively, provide "a straightforward explanation" of pro-versus-con positions on nuclear power. As my earlier discussion revealed, atomic energy may be defensible on several grounds, but economics/cost-benefit is not one of them. At least since 1974, banks, credit-rating agencies, and economists have known that the market -- not mainly environmentalists or accidents -- has killed nuclear power.[9-11]

Similarly, consider Peterson's remarks about quantitative risk assessment and pharmaceutical testing. He says "numerous examples indicate" that it is often

better, when making a precautionary risk appraisal, to believe that some hazard [from a drug] is randomly distributed rather than deterministically distributed, given that there is no practically feasible way to find out who will be affected by the hazard. The veil of ignorance surrounding a random distribution helps the decision maker to make better decisions. (131)

However, Peterson's claim that assuming random distributions of harm is often a precautionary method of risk appraisal because "there is no practically feasible way to find out who will be affected by the hazard" is at best incomplete and at worst false. It ignores the fact that for at least four decades, epidemiologists and quantitative risk assessors have known that roughly 25 percent of the population is biologically vulnerable -- namely children, the elderly, and the immune-suppressed. As a result, for decades scientists routinely have applied a safety factor of 100 to risks, including pharmaceutical/chemical risks, to account for inter-individual and inter-species variability that is most manifested in children, the elderly, and the immune-compromised. For Peterson to say there is "no . . . way to find out who will be affected" ignores basic biology and decades of scientific research on sensitive subpopulations. For him to claim that it is often precautionary to assume random distributions of harm, rather than ten- or hundred-fold greater harm to this specific, hyper-sensitive subpopulation, is not only scientifically wrong but practically dangerous. Peterson appears to defend technology policy that ignores basic epidemiology and to condone technology ethics that ignores environmental justice.
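For readers unfamiliar with the convention, here is how the safety factor of 100 is typically applied in regulatory toxicology (a generic illustration with made-up numbers, not an example from Peterson):

# Standard practice: divide the no-observed-adverse-effect level (NOAEL)
# from animal studies by a composite safety factor, conventionally
# 10 (animal-to-human extrapolation) x 10 (human inter-individual variability,
# e.g., children, the elderly, the immune-compromised).
noael_mg_per_kg_day = 50.0    # hypothetical NOAEL from animal studies
interspecies_factor = 10
intraindividual_factor = 10

reference_dose = noael_mg_per_kg_day / (interspecies_factor * intraindividual_factor)
print(f"Reference dose: {reference_dose} mg/kg/day")   # 0.5 mg/kg/day

The whole point of the second factor of 10 is that harm is not randomly distributed; sensitive subpopulations are expected to be harmed at doses well below those that harm the average adult.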

Regardless, Peterson sets high standards for himself. He's critical of continental philosophers who do technology ethics (185-203) but whose "conclusions . . . do not follow from the premises" (27). He claims to provide more "clear, precise," "meticulous reasoning" about technology ethics than is "typically used in the field" (29). He also says his is "the first work to develop an ethics of technology founded on analytic philosophy" (3). Apart from whether his book meets analytic-philosophy standards like coherence and avoiding question-begging, it is not the first analytic treatment of technology ethics. For decades, Princeton's Peter Singer, Virginia's Deborah Johnson, Washington's Stephen Gardiner, Cornell's Inma De Melo-Martin, Columbia's Philip Kitcher, NYU's Helen Nissenbaum, and others have ably done just that.

Peterson's book does interesting social investigations of technology. It tries to use geometry in innovative ways. It has a bold (but incorrect, I think) rejection of prima-facie principles. Still, for a book whose much-repeated goal is "clear, precise . . . meticulous reasoning," it falls short.
______________________________________

[1] John Rawls, "Two Concepts of Rules," Philosophical Review 64 (1955): 3-32; at 30, 5.

[2] Ibid., 29-30, 19.

[3] Ibid., 18, 18, 9.

[4] John Rawls, A Theory of Justice (Harvard University Press, 1971), 152, 168.

[5] E.g., K. Czene, P. Lichtenstein, K. Hemminki, "Environmental and Heritable Causes of Cancer among 9.6 Million People," International Journal of Cancer 99 (2002): 260-66. P. Lichtenstein, N. Holm, P. Verkasalo, et al., "Environmental and Heritable Factors in the Causation of Cancer," New England Journal of Medicine 343 (2000): 78-85.

[6] E.g., Marshall Burke, Solomon Hsiang, Edward Miguel, "Global Non-Linear Effects of Temperature on Economic Production," Nature 527:7577 (2015): 235-39; doi:10.1038/nature15725. Solomon Hsiang, Robert Kopp, Amir Jina, et al., "Estimating Economic Damage from Climate Change in the United States," Science 356:6345 (2017): 1362-69; doi:10.1126/science.aal4369. Fergus Green, Nationally Self-Interested Climate Change Mitigation (London School of Economics, 2015).

[7] Ibid.

[8] Ibid. See esp. Burke et al. Sebastian Acevedo, Mico Mrkaic, Evgenia Pugacheva, Petia Topalova, The Unequal Burden of Rising Temperatures (International Monetary Fund, 2017).

[9] Kristin Shrader-Frechette, What Will Work (Oxford University Press, 2011), 69-109.

[10] Raphael Heffron, Stephen Ashley, William Nuttall, "The Global Nuclear-Liability Regime Post Fukushima-Daiichi," Progress in Nuclear Energy 90 (2016): 1-10; doi:10.1016/j.pnucene.2016.02.019.

[11] E.g., Jonathan Samet, Joann Seo, The Financial Costs of the Chernobyl Nuclear Power Plant Disaster (University of Southern California, 2016). Shrader-Frechette 2011.

[12] Naomi Oreskes, "Beyond the Ivory Tower: Scientific Consensus on Climate Change," Science 306 (2004): 1686.