Russell Blackford and Damien Broderick (eds.)

Intelligence Unbound: The Future of Uploaded and Machine Minds

Russell Blackford and Damien Broderick (eds.), Intelligence Unbound: The Future of Uploaded and Machine Minds, Wiley Blackwell, 2014, 329pp., $89.95 (hbk), ISBN 9781118736289.

Reviewed by Pete Mandik, William Paterson University

This work collects contributions to the growing literature in what might be dubbed "singularity studies," where the singularity in question is the technological singularity, a hypothetical future moment when the rate of change of human technology reaches a speed that surpasses human capacities to predict and prepare for further changes. On the presumption that technological increase follows an exponential curve, the singularity is the knee of the bend, the point beyond which the graph abandons the nearly horizontal for the nearly vertical. Despite the danger that merely human cognition won't keep up with post-singularity events, perhaps cognition that is either artificially enhanced or wholly artificially constituted will be able to thrive in post-singularity times. But if such cognitive systems appear on the scene, we must wonder what implications this will have for humans of the sort currently predominant. Can wholly non-human super-intelligent minds be tamed or otherwise coaxed into friendliness toward their human creators? This is the core question of super artificial intelligence (super AI). Can the essence of what presently counts as a human be preserved in a wholly artificial substrate? This is the core question of mind uploading. These core questions, as well as equally important and intriguing related questions orbiting the cores, are engagingly tackled in a variety of styles by the tome's contributors.

As philosopher Massimo Pigliucci notes in his own contribution, many of the discussants of the core and related questions of super AI and mind uploading "are operating at least partially outside classical academic institutions" (p. 120). I note that of the volume's 29 contributors, only ten (e.g., David J. Chalmers, Susan Schneider) list academic philosophical affiliations (and two of those ten are students). Nonetheless, there is plenty of philosophical interest in most, if not all, of the book, especially for those philosophically minded readers who welcome empirically oriented philosophy and interdisciplinary projects.

The book's 21 chapters are preceded by twin introductions by the editors and followed by a synoptic afterword. It's curious that the majority of the chapters focus on mind uploading instead of super AI. (The first four chapters focus on super AI, whereas the remaining seventeen are largely dedicated to mind uploading.) Perhaps this is due to the relative oldness of the topic of super AI. Since its very possibility has been debated for decades, perhaps little remains of general interest to say about it. Or perhaps the increased attention to uploading is due to a widespread assumption that minds with artificial substrates are more easily made by simply plagiarizing a design known already to support intelligence -- perhaps we can just rip off mother nature's shining achievement without fully comprehending it.

Setting aside the issue of whether super AI is possible, we may ask whether super AI is desirable. Adding to the case for the desirability of super AI is James J. Hughes's chapter, "How Conscience Apps and Caring Computers Will Illuminate and Strengthen Human Morality," which argues that the main benefit for humans is that technology can make them morally better. For instance, apps can be used for offloading moral decision-making onto systems less susceptible to self-deception and lapses in willpower. Many of the artificial systems that Hughes focuses on fall short in both the power and autonomy of oft-imagined super AI systems. These latter sorts of systems may raise separate questions of desirability, for the serious question arises of whether they can be trusted not to use their autonomy and superior intellects to commit grievous harms against humans. Such concerns are addressed in the chapter "Nine Ways to Bias Open-Source Artificial General Intelligence Toward Friendliness" by Ben Goertzel and Joel Pitt, wherein they urge the superiority of an open-source approach for democratically determining which values we should make sure that machine intelligences internalize.

Another connection between the topic of super AI and topics of human value concerns issues of economic value, and one particularly interesting such connection concerns the question of what sorts of economic structures and processes would support the eventual appearance of super AI. Our existing technology has emerged not from the brow of some lone genius but rather from a complex network of generations of economic actors. We might likewise expect any future technologies to be similarly dependent on economic contingencies. Michael Anissimov runs the numbers in his chapter, "Threshold Leaps in Advanced Artificial Intelligence," and sketches a detailed and plausible scenario whereby AIs may sell their labor over the internet to finance their own software and hardware upgrades. However, like all such hypothetical forecasts, serious questions arise about how much we can really trust that the future will resemble our present speculations. In their chapter, "Who Knows Anything about Anything about AI?," Stuart Armstrong and Seán Ó hÉigeartaigh develop an analysis, both quantitative and qualitative, of past successes (and failures) in predictions about AI.

Starting with chapter five, "Feasible Mind Uploading" by Randal A. Koene, the book shifts to its primary focus: the set of issues surrounding AIs based directly on individual pre-existing human minds. Much of Koene's discussion draws on existing successes in, e.g., cognitive neural prosthetics and 3D reconstructions of scanned neural circuits. As Koene details, the project of mind uploading is not wholly speculative, and much progress has been made toward developing the relevant technologies.

The next six or so chapters are largely dedicated to the metaphysical problems surrounding mind uploading, with some of the contributors offering more optimistic appraisals and others more pessimistic. The central concerns here are put in starkest relief when we consider "destructive" versions of mind uploading, versions in which your brain undergoes a scan that is so thorough in extracting high-resolution information -- information pertaining even to sub-cellular levels of grain -- that the original brain is thereby destroyed. The good news, however, is that the information retrieved from the destructive scan serves as input to a machine that then runs a high-fidelity simulation of your neural activities. This machine thereby matches your verbal and other responses in a way that would allow it to pass a Turing test (or Turing-ish test) for being you. But is passing such a test -- a test confirming similar behaviors and perhaps even a fine grain of internal causal detail -- sufficient for being you?

The core question here might best be broken into two questions: First, would the resultant post-brain mechanical system support a consciousness? Second, would the creation of such a conscious entity suffice for your survival despite the destruction of your original organic brain? Seen this way, mind uploading doesn't pose anything particularly new for philosophical investigation. The first question is essentially the one addressed in decades and even centuries of discussions of the relation of minds to physical and mechanical systems. The second question largely resolves into old metaphysical puzzles about material and personal survival. Here the pre-existing personal identity literature on, e.g., Star Trek-style matter transporters provides a road map for how the metaphysical dialectic over survival-via-upload is going to shake out. The main stances as well as the methodologies are largely old news, with lots of thought experiments and appeals to intuition. Though, to be sure, they are tempered here by self-conscious worrying about just how far we can expect to get with thought experiments and appeals to intuition.

One wonders just how much human intuition is going to matter if the stepping stone of uploading leads to a genuinely post-human condition -- how much will posthumans care about the intuitions of mere humans? More importantly, to what degree is the existence of posthumans constrained by the intuitions of and about humans? Whereas most of the metaphysical chapters under consideration focus on the identity criteria and survival conditions of humans, it would be interesting to see further attention devoted to what the identity criteria and survival conditions of posthumans might be. Maybe, as is hinted at in some of the chapters, the life and times of a society of posthumans would get on just fine (by their lights, at least) with metaphysical criteria for identity and survival that more closely resemble present-day digital media than present-day human individuals. Nonetheless, the present readership is largely human, and so there's no real surprise about whose survival conditions they are most likely interested in.

Of the chapters from this middle (and largely metaphysical) part of the book, one stands out for the way it attempts an end run around what threatens to be a metaphysical impasse. In his "On the Prudential Irrationality of Mind Uploading," Nicholas Agar takes inspiration from Blaise Pascal, whose famous wager presented an end run around debates over the existence of God. Weighing foreseeable costs and benefits, Pascal urged belief in God on prudential grounds. Presenting a decision matrix he dubs "Searle's Wager," Agar urges that the balance of considerations tells against allowing oneself to undergo uploading. One wonders, however, how eager certain self-styled transhumanists and posthumanists will be to follow Agar's advice. They may already embrace value systems so different from Agar's that they would offer radically different assessments of the costs and benefits.

Around two-thirds of the way through the book, the focus shifts even more strongly away from metaphysical concerns and toward more value-theoretical ones. Transhumanists Max More (in "The Enhanced Carnality of Post-Biological Life") and Natasha Vita-More (in "Design of Life Expansion and the Human Mind") share with most of the contributors in this last third of the book a general optimism about the benefits awaiting us and our posthuman offspring in an upload-dominated future. Especially fun are the futuristic phenomenologies offered in "Qualia Surfing" by Richard Loosemore and "What Will It Be Like To Be an Emulation?" by Robin Hanson.

Two chapters stand out in this generally upbeat last third of the book for hitting much darker and more sober notes. In "Against Immortality: Why Death is Better than the Alternative," Iain Thomson and James Bodington argue that being immortal would be necessarily hellish. Their argument is largely composed of two sub-arguments. The gist of the first is that being truly immortal would entail the impossibility of suicide, since any possible mechanism for self-extinguishing would have a non-zero probability of either being hacked by would-be murderers or simply failing, as entropy generally dictates. The gist of the second sub-argument is that in the indefinite long run, anything that can happen will happen, and that includes some truly hellish possibilities. Further, given the thrust of the first sub-argument, suicide would not be an available exit strategy.

The second chapter that especially stands out from the generally cheery and optimistic last third of the book is "Being Nice to Software Animals and Babies" by Anders Sandberg. How much of the experimentation currently conducted on living humans and non-human animals might we instead conduct on software simulations? And are such simulations, if sufficiently complex, to be accorded their own rights? Sandberg works through a series of careful considerations about what sorts of rights to life and to non-suffering might (or might not) be accorded to software entities. He does an impressive job of noting, on various occasions, the existing relevant laws and ethical views concerning the treatment of regular old-fashioned biological entities in various special circumstances and suggests numerous arguments-by-analogy that one might construct in taking care of eventual software subjects who might truly be capable of suffering.

Overall I found the collection to be an intriguing and enjoyable read, with much of interest to both academic and nonacademic audiences. I can also see it being an appealing text for use in a variety of courses, for instance, those courses focusing on the philosophy of technology or on concepts of the person.