In this book, Holly Smith tackles the Usability Demand for moral theories. Many have held that for a moral theory to be acceptable, it must be "usable". The thought is that ethics, unlike other philosophical disciplines, is inherently practical, not solely theoretical. So any moral theory that fails to provide adequate guidance to agents in their particular circumstances must be false. This is an admirably clear and meticulously argued book. Anyone interested in the Usability Demand for moral theory would do well to study it carefully.
Though many maintain that a moral theory must be usable to be true, few articulate in any detail the notion of usability they deem necessary, and few do more than gesture toward a motivation for the Usability Demand. Smith is not so neglectful. Her first few chapters are devoted to motivating the Usability Demand for moral theory and articulating the notion of usability at its heart.
Smith distinguishes between what she calls core and extended notions of usability. The difference is that a person who uses a moral theory in the core sense needn't be conforming to the theory when she does so, but a person who uses a moral theory in the extended sense does, necessarily, conform to the theory. Here is Smith's core notion of usability.
Ability in the core sense to directly use a moral principle to decide what to do:
An agent S is able in the core sense at ti to directly use moral principle P to decide at ti what to do at tj if and only if
(A) there is some (perhaps complex) feature F such that P prescribes actions that have feature F, in virtue of their having F,
(B) S believes at ti of some act-type A that S could perform A (in the epistemic sense) at tj,
(C) S believes at ti that if she performed A at tj, act A would have F, and
(D) if and because S believed at ti that P prescribes actions that have feature F, in virtue of their having F, and if and because S wanted all-things-considered at ti to derive a prescription from principle P at ti for an act performable at tj, then her beliefs together with this desire would lead S to derive a prescription at ti for A from P in virtue of her belief that it has F. (16)
The extended notion is identical to the core notion except that it requires that the beliefs in clauses (B) and (C) be true. So while I might use the Categorical Imperative in the core sense in deciding what to do on a particular occasion, my so using it needn't entail that I am actually acting in accord with the Categorical Imperative as I do so (if, for instance, I am acting in ignorance of the facts). My using the Categorical Imperative in the extended sense, by contrast, would entail my acting in accord with it. We can thus distinguish between a Strong and a Weak Usability Demand: according to the Strong Usability Demand, a moral theory must be usable in the extended sense for it to be true, and according to the Weak Usability Demand, a moral theory must be usable only in the core sense for it to be true.
For the Usability Demand, generally, Smith identifies four distinct rationales, two theoretical and two "goal-oriented". First, some maintain, a moral theory's being usable is entailed by the very concept of morality -- it is in the very nature of what morality is that it is usable. Second, a moral theory's usability, it is sometimes maintained, is essential for the holding of a central moral ideal of justice -- viz., that a successful moral life be open to everyone; if the true moral theory were not usable by everyone, then a successful moral life would not be available to everyone, and thus there would be an unfairness at the very center of morality itself. Third, it is sometimes maintained, a moral theory's being usable is necessary for morality to serve its function as an enhancer of social welfare; only if a moral theory is usable will it be able, when adopted within a society, to promote overall wellbeing. Fourth, it is sometimes maintained, the function of morality is to produce "the best possible pattern of actions", where that best possible pattern of actions is specified by the theory itself, and only a usable moral theory can carry out this function. Not every proponent of the Usability Demand endorses all of these motivations, and few articulate them as explicitly as Smith does, but these are the kinds of considerations offered in the Demand's defense when it comes into question.
According to Smith there are three main impediments to a moral theory's usability -- error, ignorance, and uncertainty -- and three main responses to the Usability Demand -- the Pragmatic Response, the Austere Response, and the Hybrid Response. A person's being in error, ignorant, or uncertain about matters of fact impedes her ability to successfully use a moral theory because, on account of those things, she likely won't succeed in doing what the theory says she ought to do even when she tries her hardest. The Pragmatic Response to these impediments is to craft a moral theory in such a way that these impediments are no longer impediments. The Austere Response is to reject the Usability Demand itself -- the correct moral theory may fail to be successfully usable by agents in various situations, but that is a deficiency in those agents, not in the moral theory itself. The Hybrid Response tries to cobble together a middle-way solution by taking a page out of both the Pragmatic Response's and the Austere Response's books -- like the Austere Response, the Hybrid Response rejects the Usability Demand for the criterion of right and wrong of a moral theory, but like the Pragmatic Response, it seeks, by way of the inclusion of supplemental decision guides, to provide agents with a practical means by which to figure out what to do in any situation.
Regarding the problem of error, Smith argues that the Pragmatic Response and the Hybrid Response are inferior to the Austere Response. The Pragmatic Response is ultimately bound to fail; no matter how one modifies it, no plausible moral theory can satisfy the Strong Usability Demand. The Hybrid Response fails as well, for it cannot surmount the difficulty that agents are generally, if not always, in no epistemic position to correctly determine which decision guides best supplement an error-allowing moral theory. The Austere Response doesn't face these difficulties. It can satisfy the two theoretical rationales for the Usability Demand: it can live up to the thought that it is part of the concept of morality that it be usable by allowing that the true moral theory, whatever it happens to be, is usable in the core sense, thus satisfying the Weak Usability Demand, and it can live up to the thought that a morally successful life should be available to all people by noting that, with the right kinds of excusing principles (including a principle of excuse due to ignorance), even a purely objective moral theory can have it that a thoroughly morally blameless life is open to all. It is true that the Austere Response does not secure the two goal-oriented rationales for the Usability Demand, but, as Smith notes, since none of the responses can do this, it is no mark against the Austere Response to the problem of error that it can't.
Though she endorses the Austere Response to the problem of error, Smith maintains that it is inadequate as a response to the problems of ignorance and uncertainty. The reason is that when an agent is in a state of ignorance or uncertainty she can't use an objective moral theory in even the core sense. This is because when uncertain, or in ignorance, one fails to have the requisite full beliefs to count as being able to use a moral theory to decide what to do. Recall that Smith's core notion of usability requires, in clause (C), that the agent have full beliefs concerning her options. And if all an agent has are mere less-than-full-belief credences about her options, then for any moral theory, she won't satisfy clause (C) and thus won't be able to use the theory even in the core sense. So insofar as adopting the Austere Response to the problems of ignorance and uncertainty won't even accommodate the Weak Usability Demand, a better approach is called for. Smith endorses the Hybrid Response to the problems of ignorance and uncertainty. Her last five chapters are an in-depth development of such an approach.
According to the Hybrid Response to the problems of ignorance and uncertainty, a moral code must be a two-level affair -- containing two codes, Code C and Code C* -- the first level constituting the criterion for actions' rightness and wrongness (Code C), and the second level consisting of a set of decision guides (Code C*) for agents to use in circumstances in which they find that they don't have full beliefs concerning the circumstances of their particular situations. In pursuance of such a strategy Smith first considers a raft of Hybrid proposals, all of which offer a single decision guide to supplement the theory's criterion of rightness and wrongness. These include proposals such as maximizing expected deontic value, performing the action most likely to be objectively obligatory, doing that which would constitute trying to do that which is obligatory, etc. Each of these one-all-purpose-decision-guide strategies fails, however, to satisfy an important criterion -- Guidance Adequacy -- Smith sets for any plausible Hybrid Response to the problems of ignorance and uncertainty, viz., that "for every occasion for decision, there is an appropriate rule in Code C* which can be directly used by the agent for making a decision on that occasion" (234).
The solution Smith offers for this problem is to replace the single decision guide on which these various Hybrid theories rely with a series of different decision guides. Instead of the theory's having a single decision guide as constituting its Code C*, Smith proposes a theory which has many decision guides, DGs, as the elements of its Code C*. These various decision guides are ranked in a hierarchy from DG0 which is identical in content to the criterion of rightness embodied in Code C, through a proposal DG1 according to which only those options which maximize expected deontic value are licensed, all the way down to a DGn according to which all options an agent faces are licensed by that decision guide. These decision guides are supposed to offer advice to agents about what to do when they find themselves in situations of various degrees of ignorance and uncertainty. Smith's proposal is that what an agent who is uncertain or ignorant about the facts of her situation may do is perform the action(s) amongst her options which is (are) licensed by the highest ranked DG usable by her. If an agent can't use DG0, as she won't if she is ignorant or uncertain about which of her options has the feature on which, according to Code C, moral permissibility supervenes, she moves down the hierarchy to DG1 and, if it is usable by her, she follows its prescription concerning what she may do. If she is unable to use DG1, then she considers DG2 and, if she is able to use it, does what it says she may do; and if she is unable to use DG2, she moves down to DG3, and so on. If the hierarchy is fleshed out fully (something Smith does not do, nor professes to be able to do) the hope is that every agent will find some DG at some point in the hierarchy which she can use to decide what to do.
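The fallback structure Smith describes has the shape of a simple lexical search: run through the guides in rank order and act on the first one the agent can use. Here is a minimal sketch of that procedure; all names are hypothetical, and "usability" is reduced to a boolean test, abstracting away the belief conditions of Smith's core notion.

```python
# Illustrative sketch of Smith's hierarchy of decision guides (Code C*).
# All names are hypothetical; "usability" is reduced to a boolean test.

def highest_usable_guide(guides, agent):
    """Return the highest-ranked guide (DG0, DG1, ...) the agent can use."""
    for guide in guides:  # guides are assumed ordered DG0, DG1, ..., DGn
        if guide["usable_by"](agent):
            return guide
    return None  # no guide usable: the theory delivers no guidance

# Toy agent who lacks the full beliefs DG0 demands but does have credences.
agent = {"has_full_beliefs": False, "has_credences": True}

guides = [
    {"name": "DG0",  # same content as Code C's criterion of rightness
     "usable_by": lambda a: a["has_full_beliefs"]},
    {"name": "DG1",  # maximize expected deontic value
     "usable_by": lambda a: a["has_credences"]},
    {"name": "DGn",  # last resort: every option the agent faces is licensed
     "usable_by": lambda a: True},
]

chosen = highest_usable_guide(guides, agent)
print(chosen["name"])  # DG1
```

On this toy model the agent cannot use DG0, so she falls through to DG1 and follows its prescription; an agent with no usable guide at all corresponds to the case, discussed below, in which uncertainty never runs out.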
The hierarchy of decision guides which constitute an agent's Code C* is bound to include very many distinct decision guides (perhaps infinitely many?). One might wonder whether such a plethora of decision guides could ever be comprehended, let alone used, by any actual agent. But put that worry aside. There is a more significant problem for Smith's proposal, one which she goes to great lengths to try to defuse. Consider an agent who is uncertain not simply about which of her options has the properties on which permissibility, according to Code C, supervenes, but also about the correct ranking of decision guides in the hierarchy of decision guides which constitute the Code C* of her theory. How is such a person to use Code C* to determine what to do? She can't do what is enjoined by the highest ranked usable DG if she is uncertain about how the DGs themselves are to be ranked. What is needed is some standard, S, by which to rank the DGs. The problem, of course, is that one can be just as ignorant or uncertain about S as one is about which of one's options satisfies Code C. The solution Smith comes up with is to postulate a series of precepts, PRE0 - PREn, which stand to the DGs much as the DGs stand to the various actions an agent can perform: they can be used by the agent to rank the DGs themselves when she is uncertain what the correct standard for ranking them is. What an agent should do when she's uncertain how to rank the DGs is to apply the highest ranked precept which is usable for her to rank them.
But the very problem which bedeviled our uncertain agent at the level of the DGs, obviously, can also bedevil her at the level of the PREs -- she might be ignorant or uncertain of what the correct standard for ranking precepts is. This motivates Smith to postulate a ranking of maxims MAX0 - MAXn, with which to rank the PREs, and then a ranking of counsels, by which an agent might rank the maxims. And so on. What we get in the end is a Code C* with an infinite series of (infinite?) rankings from which an agent can gain guidance about what to do when she is uncertain or ignorant about the facts (moral and non-moral) of her situation. Where in this infinite hierarchy of rankings does our uncertain agent stop? Wherever in the series of rankings her uncertainty runs out. That is, the hierarchy will deliver her guidance at the level where she lacks uncertainty and instead has full belief as regards the antecedent of some DG, or PRE, or MAX, etc. Once her uncertainty comes to an end, there, at that level in the series of rankings, she will ultimately get her guidance about what to do in her situation.
All of this makes for a very elaborate structure, maybe just a little too elaborate. (Smith even wonders whether her favored theory is too epistemically demanding, a worry she found devastating for other theories as responses to the problem of error. In the end, though, she claims that the epistemic difficulties posed by her favored theory are distinct from, and less worrisome than, those plaguing the other theories she dismisses on epistemic grounds.) But even setting aside all its elaborateness, a worry remains. What if our uncertain agent's uncertainty never ends? That is, what guidance will Smith's theory provide to an agent who is uncertain not only about the ranking of the DGs, but also about the ranking of the PREs, and about the ranking of the MAXs, and so on ad infinitum? Smith considers this possibility and grants that her theory won't provide guidance for any such agent (and thus will fall afoul of the Guidance Adequacy criterion). She nevertheless seems sanguine that such cases won't be common. I wonder whether this is correct. Why think that for those who are uncertain at the ground level things get less uncertain the more we proceed to higher and higher levels of abstraction (as we do when we go from DGs to PREs to MAXs to . . . )? I'm inclined to think that our uncertainty expands, rather than dissipates, the more we move up through the hierarchies, and so far more agents bedeviled by uncertainty may find no help from Smith's theory.
But this isn't the only worry I have about Smith's project. Though it is to Smith's credit that she clearly and precisely articulates the notion of usability she thinks is at the heart of the Usability Demand for moral theories -- her notion of core usability -- it is doubtful whether that notion does indeed capture the sense of 'usable' according to which it is intuitive that moral theories must be usable for them to be true. Insofar as clauses (B) and (C) of Smith's core notion of usability require full beliefs, her notion of usability rules out as unusable in certain situations of uncertainty many theories which seem perfectly usable in those situations. If all I know is that pressing a certain button will have a 50% chance of producing more utility than my doing nothing and a 50% chance of doing nothing, then it seems I most certainly can use Objective Utilitarianism -- according to which what I ought to do is maximize utility -- to determine what to do in this situation. In fact, if I am uncertain what to do, then come to believe Objective Utilitarianism, and because of that belief am moved to press the button, then surely I have used Objective Utilitarianism in deciding what to do. But if I've used the theory to decide what to do in my situation, then the theory is usable for me in deciding what to do in that situation.
Smith is up front about this consequence of her notion of core usability. She writes:
Of course most agents must make decisions in light of less-than-certain credences, so this feature of the definitions [of core and extended usability, that they require that agents have full beliefs] entails that agents are often unable to use standard moral principles [like that of Objective Utilitarianism] in making their decisions. This -- perhaps startling -- implication of the definitions is correct. An agent who merely has a moderately high degree of belief that her opening the safe would carry out her job responsibilities does not thereby have the ability to use a principle requiring her to carry out her job responsibilities. (26)
I'm inclined to think, however, that the fact that the definitions have this consequence doesn't so much show that theories like Objective Utilitarianism are unusable in situations in which agents have only less-than-full-belief credences, like the scenario I presented above. It rather shows that the definitions fail to capture what makes a moral theory usable in the sense in which it seems intuitive that a moral theory must be usable for it to be true. My hunch is that modifying Smith's definitions of 'usable' in those ways necessary to adequately capture that intuitive sense will result in definitions that don't have the consequence that a theory is unusable for an agent in any situation in which she has only less-than-full-belief credences. This will open the door to an across-the-board Austere Response to the problems of error, ignorance, and uncertainty. All the reasons Smith offers in favor of the Austere Response to the problem of error will transfer over directly to the Austere Response to the problems of ignorance and uncertainty; the Austere Response to those problems will thus, and for analogous reasons, be superior to the Pragmatic and Hybrid Responses.
Adopting an Austere Response across the board is in fact the approach I favor. Though Smith has done an admirable job of laying out how one might develop a Hybrid Response to the problems of ignorance and uncertainty, going that route, I believe, is unnecessary. Though, in the end, I favor a different response to the Usability Demand than the one Smith advocates, her treatment of the issues is nonetheless careful, meticulous, and insightful, and all who are interested in the Usability Demand for moral theories will benefit from engaging with it.