The Ethics of Information Technology and Business


De George, Richard T., The Ethics of Information Technology and Business, Blackwell Publishing, 2003, 289pp, $24.95 (pbk), ISBN 0631214259.

Reviewed by Norman Mooradian

2003.09.17


The Ethics of Information Technology and Business is an examination of a wide range of ethical questions that arise from the use of information technology in business and from the business of information technology itself. Among the many issues discussed, privacy has a central place. Two chapters are devoted to the topic (chapter two: Marketing, Privacy, and the Protection of Personal Information; and chapter three: Employees and Communication Privacy). Privacy comes up repeatedly in other chapters of the book as well, such as chapter five, Ethical Issues in Information Technology and E-Business, where Web tracking and data mining are discussed, and chapter six, Ethical Issues on the Internet, in which the issues of anonymity and security are raised.

Another central issue is that of intellectual property, in particular, digital assets such as software programs. Chapter four: New, Intellectual, and Other Property focuses exclusively on this issue, though again, as in the case of privacy, it comes up in other chapters as well. The last chapter is a broader reflection on the impact of information technology on society (chapter seven: Information Technology and Society: Business, the Digital Divide, and the Changing Nature of Work).

While privacy and intellectual property are the central issues, worked out in detail in the earlier chapters of the book and applied to different cases in later chapters, the book takes up many other topics as well, including taxation of e-commerce, the assignment of domain names, the changing nature of work, liability for system failures, and censorship.

Four themes pervade the book and provide the closest thing it has to an overarching structure for its many arguments. They are the Myth of Amoral Computing and Information Technology (MACIT), the Lure of the Technological Imperative (TI), the Danger of the Hidden Substructure, and the Acceptance of Technological Inertia.

MACIT is described in various ways throughout the book. In the preface it is described as a tendency to ignore the ethical dimensions of computing. For the most part, however, it is treated as a propensity to believe, mistakenly, that one cannot assign moral responsibility to agents for failures of various sorts in which computer technologies play a causal role. The reasoning implicit in this mistake is that computers are amoral entities and as such cannot be responsible for the harm they cause. Human agents may be involved in the causal nexus of the harm, but since they are not the direct or central cause, they are not responsible, or are responsible only to a minimal degree.

TI is also described in various places and in various ways. Putting a few of the descriptions together, we might say it is a tendency to develop an information technology because it is possible to do so and meets some objective, regardless of its ethical consequences (pages 175, 194, 260, and elsewhere). Since the book is an extended argument against TI and MACIT, TI must also be manifested as a form of belief. On a descriptive interpretation, it says that any given technology will be developed if there is a reason to do so, regardless of its ethical consequences. On a prescriptive interpretation, it says that any technology should be developed if it is possible to do so.

The other two themes, the hidden substructure and technological inertia, complement the first two and receive much less attention; when they are mentioned, it is usually in support of the others. The hidden substructure helps explain MACIT, because much of the causal nexus is unknown to most people. Technological inertia is the flip side of TI, that is, the acceptance of the status quo once it has been established. De George is not always careful to distinguish the themes, and MACIT and TI seem to blend together from time to time (page 7). This may be because the belief that a technology is inevitable might lead to the belief that its developers are not morally responsible for its development.

In the first chapter, Ethics and the Information Revolution, De George describes his approach to the ethical questions he will discuss. He locates the issues within a common and universal framework of ethical norms. Murder, stealing, and other such acts are generally prohibited across societies despite their cultural differences. Within a society, norms exist for many practices that bear certain similarities to new and emerging practices made possible by information technologies. This suggests a two-step method. First, when evaluating a new practice such as monitoring e-mail, one can reason analogically from similar practices and their norms; for example, opening and reading private correspondence. If the dissimilarities are significant, or if societies differ in the compared practices, one can move to the second step, which is to appeal to “pertinent considerations of a variety of kinds” (page 26). De George does not attempt to characterize these considerations, but it is fair to say from the way he argues that they can be described as consequentialist or deontic and that they must cohere with the general framework of fundamental ethical norms.

De George then draws a distinction between an empirical approach and an analytical or conceptual approach. The empirical approach is reactive: it waits for harms to occur before a response is formulated. The conceptual approach is proactive. It consists of identifying the logical presuppositions of a practice, institution, or system; identifying its structure and that structure’s possible ethical weak points; and considering how values might be built into the structure to eliminate or mitigate those weak points. The conceptual approach is the one he endorses.

De George does not say how these two methods are meant to fit together, although I think it can be inferred that the place of the kind of conceptual analysis he describes lies in the second step, which moves beyond analogy and takes into account a wide range of “pertinent considerations.” If this is the case, then De George’s method can be summarized as a two-step process that first attempts to apply existing norms to new practices via analogical arguments and then, if that fails, attempts an analysis of the practice or system along the lines described above.

One of the most interesting parts of the book is in chapter one, where De George applies his method of analysis to the general system of IT taken as the basis for the information society. Here he argues that the core values of an information-centric society are truthfulness, accuracy, information sharing, and trust. While important in other types of society (agricultural, industrial), these values take on a greater role in an information economy, in contrast to punctuality, for example, which is critical in an assembly-line industrial economy. Appeal to these values plays a role in a number of arguments throughout the book.

De George’s discussion of privacy in chapter two also illustrates his method. He distinguishes six kinds of privacy: space privacy, body/mental privacy, personal information privacy, communication privacy, personal privacy, and cyberspace privacy. Space privacy has to do with control of one’s space against intrusion or observation by others. Body/mental privacy concerns one’s ability to control access to one’s thoughts and body. Personal information privacy concerns control over information about oneself. Communication privacy concerns control over access to one’s communications with others, such as mail, telephone calls, and e-mail. Personal privacy has to do with one’s ability to reveal or not to reveal certain aspects of oneself to certain people. Finally, cyberspace privacy is similar to some of the others, such as space privacy and body privacy, and might be thought of as their virtual equivalent.

After making these distinctions, De George addresses the problem of tracking people in public. Surveillance technologies are often employed in public places to reduce crime or traffic congestion. An argument can be made that there is nothing wrong with such surveillance: one can observe someone in public, take a picture, even record video. If one were to use computer technology to coordinate video images in order to track people’s movements, this would be just an extension of already permitted activities. De George identifies the fallacy in such reasoning by describing the argument as claiming that public + public = public.

The argument fails, De George claims, because private and public are not necessarily opposed concepts. One expects a certain amount of anonymity in public, and it is precisely this anonymity that is undermined by aggressive tracking. While De George does not explicitly use the distinctions above, it is clear that they play an explanatory role. The public-public argument assumes that all privacy rights are waived when one enters a public area. Hence it presupposes the frictionless extension of greater and greater observation. It seems plausible because we may be thinking of space privacy, which is certainly waived when we enter most public places. However, body/mental privacy, personal information privacy, communication privacy and personal privacy are not necessarily given up by leaving one’s private spaces. Also, De George’s treatment of privacy shows that it is a degree concept. He does not use this description, but his argumentation in a number of places implies it. If privacy can be held in different degrees, it can be valued in different degrees, and hence can be violated in different degrees. The public-public argument fails because it uses the same justificatory coin to buy more and more of one’s privacy without offering further reasons proportionate to the loss the individual suffers.

De George’s treatment of intellectual property provides another example of his methodology. He challenges the appropriateness of copyright for software programs by showing that the analogy between computer programs, on the one hand, and literary and artistic works, on the other, is not strong enough to support the full extension of copyright protection (mainly its duration) to programs. Computer programs are more like lists of instructions than literary expressions. Defenders of copyright protection point to the value of the particular expression within a program, but, De George argues, people do not buy programs for their literary value. They normally buy object code, not source code, and hence cannot read the programs at all. Moreover, I think anyone familiar with programming would agree that one could change the names of all the identifiers in a program’s source code without diminishing the program’s value as an intellectual work or product.
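A minimal sketch makes the point concrete (the example is mine, not De George’s): the two functions below differ only in their identifier names, yet they compute exactly the same thing, so the program’s “expression” can be rewritten wholesale with no loss of practical value.

    # Two versions of the same routine; only the identifier names differ.
    def average_order_value(orders):
        """Return the mean of a list of order totals."""
        return sum(orders) / len(orders)

    def f(xs):
        """The same routine with opaque names; its behavior is unchanged."""
        return sum(xs) / len(xs)

    # Both produce identical results for any input.
    assert average_order_value([10.0, 20.0, 30.0]) == f([10.0, 20.0, 30.0])

Nothing a buyer values about the program survives in its particular wording, which is precisely what copyright is supposed to protect.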

De George also argues against patent protection on the grounds that software innovation is so rapid that the twenty-year duration of patent protection is unnecessary. Deciding what aspect of a program can be patented is also a problem: is it the look and feel of the interface, the architecture of the program, or the specific instructions of any and every subroutine? De George calls for a form of protection especially crafted for software instead of stretching protections designed for different sorts of intellectual property. He does not tell us what shape such protection should take, but his analysis shows that its grounds should be reasonable compensation for those who work to develop and market software products and fairness to them for investing time, money, and effort in such development. What we need to do, therefore, is look at the special circumstances of software development to determine what is needed to afford such protection.

Like producers of books, developers are threatened by the unauthorized copying of their software, especially since it is so easy to do. But De George is right in thinking that protection against such copying need not span decades. Software changes quickly, and versions older than a few years are usually obsolete. Reverse engineering is also a threat, especially in the form of decompiling code and adapting it into a competing product for sale. However, it should be sufficient to ban particular forms of reverse engineering without invoking the strong protection afforded by patents, which prevents anyone but the first recognized inventor of an innovation from using it without a license. If someone writes a similar program with functions similar to an existing program’s, but does not steal code from a competitor, it seems unreasonable to deprive him or her of the benefit of his or her work. De George is correct in thinking that such a prohibition would stifle competition. It is also hard to see how society would benefit. Software developers do not face the same kinds of cost barriers that producers of pharmaceuticals or manufacturers of computer hardware do, so they do not need the incentive of barring competition to recoup massive investments. They just need reasonable assurance that no one can compete with them by stealing their code instead of developing their own.

De George puts a lot of weight on MACIT and TI as characterizations of kinds of errors in thinking that can be corrected through argumentation. However, it is not clear that each is a single kind of error or that it occurs on a single level; indeed, it is not clear that they always describe errors in thinking at all. In the case of TI, for example, it is not always clear where an error is committed when we consider the beliefs of agents. In the practical context, TI is less a kind of error than a decision-theoretic dilemma in which individually rational choices lead to an “irrational” outcome. Individual developers of a technology are often in the position of seeing an opportunity to create something with clear benefits that also carries a hard-to-define risk of being considered unethical at a later time. There may be no clear norms in place against the technology, the implementation they envision may be unproblematic in itself, and coordination with other groups or individuals to clarify the issues may not be feasible. The only answer is for society to establish clear norms in advance. Hence, if there is an error here, it may be found in the reflective belief that this cannot be done, not in the context-dependent, individual decision making.

Here the connection between De George’s argument against TI and his methodology is evident. If TI, as a general claim about technological development, is false, then moral norms can be established in advance of the emergence and deployment of information technologies. For moral norms to be identified, the sort of conceptual analysis De George describes will have to succeed in identifying problems and providing answers. It is probably a bit optimistic to think that this can be done without relying on the reactive, empirical approach of assessing the extent of moral damage done. Nonetheless, De George provides a good example of how to do such conceptual work in the service of identifying and clarifying such issues.

This book is certainly a contribution to the field. It is well placed as part of a series on the foundations of business ethics and should prove essential reading for scholars in computer and information ethics, as well as related fields.