The Cambridge Handbook of Information and Computer Ethics


Luciano Floridi (ed.), The Cambridge Handbook of Information and Computer Ethics, Cambridge UP, 2010, 327pp, $36.99 (pbk), ISBN 9780521717724.

Reviewed by Richard A. Spinello, Boston College

2010.09.22


Luciano Floridi, the editor of this fine collection of essays, has emerged as one of the leading figures in the expanding field of information and computer ethics (ICE). Floridi, who holds the Research Chair in Philosophy of Information at the University of Hertfordshire, has developed a theory of Information Ethics (IE) to ground ICE, which he describes as an "ontocentric, patient-oriented, ecological macroethics" (83). Since this ontocentric macroethics applies to all reality, Floridi must consider the moral properties that all beings have in common. Accordingly, he develops a metaphysical foundation for IE that understands all conceivable entities and processes as information objects consisting of "appropriate data structures" that determine the nature and identity of each object (Floridi 2002). According to Floridi, all of these animate and inanimate objects that occupy the infosphere are "dephysicalized, typified, and perfectly clonable" (10). IE represents a radical shift from a materialist perspective to an informational one, from a fragmented view of reality to a holistic view. Floridi is optimistic that the re-ontologizing of reality exemplified in nanotechnology and biotechnology will facilitate the fusion of physis (nature) and techne (technology) in a way that no longer privileges the former as the "only authentic dimension of human life" (19).

IE is a "patient-oriented" theory because it is concerned with what qualifies as a moral patient, that is, an object worthy of moral consideration. All information objects, from rocks and software bots to plants, animals, and human beings, have "intrinsic worthiness," a right to persist and flourish, though Floridi concedes that the moral worth of some objects will be minimal (84). Floridi argues that both anthropocentric and biocentric axiologies are inadequate because they arbitrarily exclude inanimate entities from moral consideration. Given that these entities too are information objects, there is no valid reason for this exclusion, and so he proposes an ontocentric axiology that regards all information objects as centers of moral worth. While biocentric ethics is predicated on the intrinsic value of life and the negative value of suffering, IE claims that there is something more primordial than life, namely being or existence, and something more fundamental than suffering, which Floridi calls "entropy," understood as the destruction or corruption of informational objects. IE, therefore, is a truly ecological ethic that transforms environmental ethics into an ethics of the whole infosphere.

Several of the essays in this book elaborate on Floridi's paradigm, which has grown in popularity though it is certainly not embraced by all ICE scholars. The Introduction contains an essay by Floridi, which lays out the rudimentary elements of his philosophy, and one by Terrell Ward Bynum tracing the historical roots of ICE to the ground-breaking work of Norbert Wiener, who wrote Cybernetics in 1948. Wiener, who anticipated certain aspects of Floridi's ontology, regarded animals, computers, and even communities as cybernetic entities. He theorized that the traditional distinction between living and non-living entities was a "pragmatic choice," rather than an "unbreachable metaphysical 'wall' between kinds of beings" (27). Bynum's essay is particularly instructive for anyone looking for a concise overview of the evolution of this field and serves as a complement to the discussions about the uniqueness of computer ethics that appear later in the book.

Part II, with essays by Philip Brey and Jeroen van den Hoven as well as Floridi, introduces ethical theories and methodologies typically deployed by computer ethicists. These essays consider the extent to which standard ethical theories such as utilitarianism or Kantianism can be effectively applied to the dilemmas triggered by computing technologies. Professor van den Hoven provides a useful survey of these frameworks, differentiating between high-level theories (such as Kant's) and mid-level theories (such as Rawls's theory of justice or Nussbaum's capability approach) that are better suited to serve as sources of moral arguments. Brey proposes an alternative framework, which he labels "disclosive computer ethics" because of its focus on disclosing the morally opaque features of computer systems. Once those aspects are liberated from their obscurity, the moral issues can be analyzed on a philosophical level. For example, in order to assess the moral suitability of restricting speech by filtering technologies, it will be necessary to discern the precise functionality of these filters and thereby expose the values embedded within them.

Brey's philosophy echoes the pioneering work of Langdon Winner, who famously opined that "software is politics." Computer code is not neutral or amoral, since it is often laden with values that are buried within it. Little consideration has been given to what is implied when the logical constraints of private code become the surrogate for legal and ethical constraints. Specifically, how do we regulate and educate software developers and provide them with coherent rules for writing this value-laden code?

Part II concludes with another important essay by Floridi that further elaborates the nuances of IE. In this essay he considers, among other topics, the theme of moral agency, though he broadens the class of moral agents to include robots, software bots, and other IT systems. He defines the moral agent as an interactive, autonomous, and adaptable transition system capable of performing "morally qualifiable actions" (86). With this broad definition, Floridi hopes to get beyond the anthropomorphic attitude toward moral agency. He concedes that although artificial moral agents, like robots and corporations, can be held morally accountable, they lack moral responsibility for their actions. In the infosphere, however, we must transition from a responsibility-oriented ethics based on punishment and reward to an ethics based on "accountability and censure" (88).

Part III includes a number of articles surveying the specific issues that fall under ICE. Bernd Carsten Stahl's chapter discusses the challenges to intellectual property law provoked by the digital revolution, while the chapter by John Sullins reviews issues such as privacy, free speech rights, and surveillance. In the provocative chapter on security and cyberwar, John Arquilla suggests that there are sound ethical reasons for embracing cyberwar given the alternative of physical warfare. Alison Adam expands on an argument that she has made elsewhere by proposing that a tenable ICE methodology must take account of gender-related biases along with other inequalities that are embedded in the design and use of IT systems. We need, she argues, virtual communities where the needs of different groups such as the elderly and disabled are acknowledged and addressed.

The chapter by Charles Ess and May Thorseth concentrates on globalization issues, which take on a particular salience in light of controversial policies by China and Iran that prescribe heavy filtering of the Internet. In 2006 Google entered the Chinese market to compete against China's dominant search engine, Baidu, but the company knew that it had little chance of success unless it filtered search results. Google initially complied with Chinese law, though it has since had second thoughts about its China policy. Ethical pluralism can create havoc for technology gatekeepers like Google, who face the choice of conforming to local custom or exporting their own moral values. Ess and Thorseth are optimistic about achieving convergence in cyberspace policy despite this evident pluralism and cite the example of data privacy protection. Countries like Japan and China have adopted privacy laws mirroring those of countries in the West, offering their citizens some degree of privacy protection. On the other hand, one might contend that the proliferation of censorship regimes and the intractability of many sovereign states on this issue do not augur well for "fostering a shared ICE that 'works' across the globe" (164).

Part III concludes with a brief chapter by John Weckert and Adam Henschke on the intersection of ICE with other applied ethics fields. The authors contend that ICE issues are pervasive in areas from bioethics to environmental ethics. For example, the challenge of responsibly determining the parameters of digital privacy has been especially acute for medical ethicists, given the highly sensitive nature of medical data.

Part IV perceptively considers the novel moral themes that emerge in artificial contexts. Stephen Clarke's chapter explores the ethical issues that arise from cutting-edge technologies, such as RFID tags and nanotechnologies, while the articles by Vincent Wiegel and Colin Allen lay out the contours of the debates about artificial moral agency and the ethical propriety of artificial intelligence applications. Wiegel's essay weighs in on the ethics of IT artifacts (including hardware, software, and user manuals) with the criterion of moral agency as the main axis of discussion. Should we subscribe to the "neutrality thesis," which regards technology as a tool for human beings and nothing more? Or do these artifacts have some intrinsic moral significance and perhaps even moral agency? It would seem that IT artifacts lack the mental state of intentionality, and without intentionality there can be no authentic moral agency. But Wiegel seems to favor a more "pragmatic approach" that regards humans, animals, and technical artifacts as "intentional systems" that are designed to perform certain functions (208-9). On this view, an IT artifact has some form of moral agency and intentionality since it is actively involved in bringing about a moral impact, a good or bad state of affairs. However, because the moral evaluation of these artificial entities poses problems, Wiegel admits that there is still a need to stretch our moral concepts in order to come to terms with the moral status of an IT artifact. Nonetheless, consonant with Floridi's position, Wiegel maintains that moral agency is "no longer a privileged human position" (218).

The final section on "Metaethics" includes an essay by Herman T. Tavani on the foundationalist debate in computer ethics and Floridi's epilogue. The foundationalist debate focuses on the "nature and justification" of ICE (252). Tavani pursues several of the themes suggested in Bynum's essay, concluding that while ICE qualifies as a valid field of applied ethics that warrants philosophical analysis, it does not require a novel ethical theory, contrary to what expansionists have argued. This might leave the reader wondering about Floridi's own theory, which has been proposed as a macroethics, a framework more robust than traditional theories and capable of addressing the conundrums of cyberspace. However, Tavani concludes that IE may be less radically foundational than it seems, since it is actually a supplement rather than a substitute for the standard theories used in ethical analysis.

Readers are generally well served by all of the essays in this book, which now takes its place among other computer ethics handbooks and collections of readings. One difference is that this collection is a bit less eclectic, because it is shaped to some extent by the doctrine of one thinker, whose work is the focus of several articles. Floridi's innovative philosophy deserves thoughtful consideration and has made an immense contribution to this field. Readers and ethics students should be wary, however, of uncritically assimilating the Floridi paradigm. Despite its merits, Floridi's approach to information ethics represents a disquieting trend in modern ethics to transcend any sort of anthropocentric morality that gives a privileged place to the human person based on her ontological prerogative. The philosophical tradition has convincingly defended the unique moral status accorded to human beings because the person is different in kind from all other creatures, real and artificial, by virtue of his or her intellectual nature. Floridi, however, favors a more reductive approach that considers all beings as clusters of information that possess intrinsic worth, even though such worth is minimal in some creatures and can be overridden by other ethical considerations. Issues about the distinctive nature of the person and the rights or duties derived from that nature are peripheral to this macroethics, which evaluates moral duty primarily in terms of the moral agent's contribution to the augmentation of the infosphere.

Additional problems with Floridi's paradigm stem from the expansion of the category of moral agency to include artificial agents. It is not so evident, however, that artificial beings have moral agency, which implies some level of moral accountability. The conditions of moral agency delineated by Floridi in this book and elsewhere (see Floridi and Sanders 2004) are insufficient, since moral agency also requires free choice and some capacity for moral deliberation and judgment. Moreover, the distinction Floridi makes between moral accountability and moral responsibility needs further elaboration. He claims that if we hold beings accountable, they deserve censure, while responsible persons are subject to reward and punishment. But what is the difference between censure and punishment? Is it intelligible to "censure" a software bot for invading someone's privacy by automatically collecting that person's data? Either the bot has been programmed to act this way, which means that the programmers and designers deserve the blame, or it is blindly malfunctioning, which hardly makes it deserving of censure or our moral indignation.

Ethicists and social scientists define accountability as an expectation "that one may be called on to justify one's beliefs, feelings, and actions to others" (Lerner and Tetlock 1999). We have no expectation that a software bot can offer a normative justification for its actions. It is simply untenable to ascribe moral responsibility or accountability to an agent unless that agent acts freely and intentionally and has the capacity to justify what it has done. Robots and other IT artifacts are artificial agents that can "cause" harm or achieve certain beneficial results. However, their causal responsibility is analogous to the natural causal responsibility that we assign to other non-intentional agents, such as earthquakes or rainstorms that inflict injury on their victims. It is possible, of course, to radically revise the notions of "intentionality" and moral agency so that they fit many non-human entities occupying the infosphere. Deterministic theories that pragmatically re-conceive the person as some sort of de-centered, mechanistic, interactive "system" represent one such approach. This strategy, however, refuses to take seriously the person's higher, spiritual properties, such as self-consciousness, which are not reducible to the physico-chemical activities of the brain. It also occludes our idea of personhood and diminishes human dignity, which can now be more easily relativized for utilitarian reasons.

All of these issues need further debate and scrutiny, and with this timely and enlightening book Floridi and his contributors have provided the intellectual community with a rich resource to advance this discussion.

References

Floridi, Luciano and J. W. Sanders. 2004. "On the Morality of Artificial Agents." Minds and Machines 14 (3): 349-379.

Floridi, Luciano. 2002. "On the Intrinsic Value of Information Objects and the Infosphere." Ethics and Information Technology 4 (4): 287-304.

Lerner, J. and P. E. Tetlock. 1999. "Accounting for the Effects of Accountability." Psychological Bulletin 125 (2): 255-275.