Alan Mathison Turing (1912-1954) was first and foremost a mathematician, both pure and applied. Before his premature death by suicide, he made groundbreaking contributions in logic, cryptography and computer science, as well as being one of the key codebreakers at Bletchley Park during the Second World War and a central figure in the post-war development of digital computers in Great Britain. His contributions have also continued to intrigue professional philosophers. A central thread running through his work was a preoccupation with the extent to which the world admits of mechanical explanation; it is this that has led to the continuing interest in Turing's impact on philosophy.

The current volume is the published version of a session of the Boston Colloquium for Philosophy of Science held November 11-12, 2012, to celebrate the centenary of Turing's birth. Several other volumes have appeared recently as part of "The Alan Turing Year," but the present book is distinctive in concentrating on the philosophical aspects of Turing's work.

The fifteen chapters in this collection divide into three rough groups corresponding to areas of philosophy to which Turing contributed. The first concerns the definition of computability via the concept of the Turing machine; the second, the universal Turing machine, which, through the notion of the stored program, forms the foundation of modern computers; and the third, artificial intelligence. Juliet Floyd contributes an introduction that contains an intellectual biography of Turing, as well as a précis of the papers in the volume.

Turing was inspired to give his famous definition of computable numbers by the lectures of Max Newman, who observed that Hilbert's *Entscheidungsproblem* (the decision problem for first-order logic) was open. Although Alonzo Church had proved this problem unsolvable a little earlier, as an application of his identification of the computable functions with the λ-definable functions, Turing's definition has come to be accepted as definitive because of his penetrating analysis of what it means for a (human) computer to follow a step-by-step algorithmic procedure. Kurt Gödel, in particular, did not find Church's identification persuasive, but later wrote that "due to A.M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given" (78), and singled out Turing's work as a rare example of a fully worked out conceptual analysis in mathematics. In his 1946 Princeton Bicentennial Lecture, Gödel remarked that the great importance of the analysis is "largely due to the fact that with this concept one has succeeded in giving an absolute definition of an interesting epistemological notion, i.e. one not depending on the formalism chosen" (81).
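The machines of Turing's 1936 paper compute real numbers by printing their binary digits one by one. A minimal sketch of this idea, in the spirit of (but not taken from) the paper, is the following simulator running Turing's first example machine, which prints the alternating expansion 010101... (binary 1/3); the state names b, c, e, f follow the paper, while the code itself is an illustrative reconstruction:

```python
# A minimal Turing machine simulator: the "machine" is just a transition
# table mapping (state, scanned symbol) -> (symbol to write, move, next state).
def run(transitions, state, steps):
    tape = {}   # unbounded tape; absent cells read as blank ' '
    head = 0
    for _ in range(steps):
        scanned = tape.get(head, ' ')
        write, move, state = transitions[(state, scanned)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    # read off the printed (non-blank) digits in tape order
    return ''.join(tape[i] for i in sorted(tape) if tape[i] != ' ')

# Turing's first example: alternately print 0 and 1, always moving right,
# leaving a blank square between successive digits.
alternator = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}
print(run(alternator, 'b', 12))  # prints 010101
```

The point of Turing's analysis is that any step-by-step procedure a human computer could follow can be cast as such a finite table.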

The chapter by Daniele Mundici and Wilfried Sieg, "Turing, the Mathematician," gives an excellent survey of Turing's definition of computable functions, and of the reasons that we can give for its being a correct analysis. Juliette Kennedy's "Turing, Gödel and the 'Bright Abyss'" gives a very interesting overview of the historical developments in the 1930s leading up to Turing's definitive solution of the problem of defining computable functions, and also elucidates notions such as that of "absoluteness" employed by Gödel in the quotations above.

Floyd ("Turing on 'Common Sense': Cambridge Resonances") sets Turing's famous 1936 paper, "On Computable Numbers, with an Application to the *Entscheidungsproblem*," in the context of the Cambridge of Turing's undergraduate days, especially as it relates to Wittgenstein and other figures connected with him, such as Russell, Ramsey, Sraffa, Hardy and Littlewood. Floyd argues for Wittgenstein's influence on the 1936 paper; however, there does not seem to be any direct evidence for this claim. It is well known that Turing attended Wittgenstein's class on the foundations of mathematics in 1939, and the transcripts of their interactions have been published. But evidence of any interaction before 1937, when Alister Watson introduced them (123), appears to be lacking. Still, the similarity between the *Blue and Brown Books*, with their analysis in terms of language games consisting of simple concrete rules, and Turing's famous paper is striking. It may be that there was no direct influence here, but rather a close resemblance between Wittgenstein and Turing: both men were deeply unconventional, suspicious of abstractions and determined to understand fundamental ideas in a concrete manner.

The section of the book devoted to the Universal Machine begins with a short but highly informative essay by Martin Davis, "Universality is Ubiquitous." The key insight underlying the Universal Turing Machine is that the distinction between machine, program and data is illusory. We can, for example, simplify the machine if we implement some of the basic operations in software. This was an idea that Turing promoted in the early development of computers, and is the ultimate inspiration for Reduced Instruction Set Computers (RISC). Turing poured scorn on computer developments in the USA -- he thought they partook of the "American tradition of solving one's difficulties by means of much equipment rather than by thought" (155).
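Both points can be illustrated in miniature. In the sketch below (an invented toy instruction set, not anything from the book), the "program" run by the universal interpreter is ordinary data, a list of tuples; and multiplication, which the "hardware" lacks, is implemented in software from simpler operations, in the RISC spirit:

```python
# A universal interpreter: it treats the machine it runs as plain data.
def interpret(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == 'set':       # set register to a constant
            registers[args[0]] = args[1]
        elif op == 'add':     # rX += rY
            registers[args[0]] += registers[args[1]]
        elif op == 'djnz':    # decrement rX; jump to given address if nonzero
            registers[args[0]] -= 1
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Multiplication is not a primitive operation here; it is "implemented
# in software" as repeated addition. Computes r0 = 6 * 7.
multiply = [
    ('set', 'r0', 0),
    ('set', 'r1', 6),      # multiplicand
    ('set', 'r2', 7),      # multiplier, used as loop counter
    ('add', 'r0', 'r1'),   # r0 += r1
    ('djnz', 'r2', 3),     # repeat the add until r2 reaches 0
]
print(interpret(multiply, {})['r0'])  # → 42
```

Nothing distinguishes `multiply` from the numbers it manipulates: both are data the interpreter reads, which is Davis's point.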

Given the primitive state of computers in the early 1950s, Turing's vision of their future possibilities is amazing. He not only foresaw their uses in mathematical research, playing games and simulation of biological processes, but even wrote some of the early programs in these areas. Two fascinating chapters in the book describe some unusual applications.

Craig Bauer ("The Early History of Voice Encryption") describes the apparatus used by Churchill and Roosevelt to communicate by telephone over the Atlantic. This was a system, named SIGSALY, that first encoded speech digitally, then enciphered it by adding a one-time pad in the form of 16-inch records (that were never re-used). This system was so secret that it was not revealed until 1976. Turing himself was not involved in the development of SIGSALY, but travelled to the USA in 1943 to evaluate the security of the system. This inspired him to develop his own speech encryption device, Delilah, that used a pseudorandom sequence rather than a one-time pad. Unlike the room-sized SIGSALY, it was a portable system -- a rebuilt Delilah machine can be seen at the Bletchley Park museum.
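The contrast the chapter draws can be sketched in modern terms: SIGSALY added a truly random one-time key to the digitized signal, while Delilah generated a pseudorandom keystream that both ends could reproduce from a shared setting, so no key records had to be shipped. In the sketch below, byte-wise XOR stands in for SIGSALY's modular addition of signal levels, and Python's `random` module stands in for both key sources; the details are illustrative, not historical:

```python
import random

def xor_stream(data, keystream):
    """Combine signal and key; applying the same key again decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"attack at dawn"

# One-time pad (SIGSALY-style): key is truly random, as long as the
# message, and never reused -- hence the never-replayed records.
pad = bytes(random.randrange(256) for _ in message)
ciphertext = xor_stream(message, pad)
assert xor_stream(ciphertext, pad) == message

# Pseudorandom keystream (Delilah-style): both parties regenerate the
# key from a short shared seed, but security now rests entirely on the
# unpredictability of the generator.
def keystream(seed, n):
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

c2 = xor_stream(message, keystream(1943, len(message)))
assert xor_stream(c2, keystream(1943, len(message))) == message
```

The trade-off is exactly the one Bauer describes: the one-time pad is unconditionally secure but logistically heavy, while Delilah's approach is portable at the price of conditional security.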

B. Jack Copeland and Jason Long provide a chapter on "Turing and the History of Computer Music." The Manchester Mark II Computer was programmed to produce different musical notes on a loudspeaker to indicate its internal state. Turing himself does not appear to have had any interest in computer-generated music, but his assistant Christopher Strachey wrote some programs for this purpose. The Mark II's greatest hits were "God Save the King," "In the Mood" and "Baa Baa Black Sheep." These are among the first computer-generated pieces -- Copeland and Long have reconstructed the programs that produced them.

Although Turing's idea of the stored program computer has proved fantastically successful in both theory and practice, two chapters express some dissatisfaction with his work. Armond Duwell, in "Exploring the Frontiers of Computation: Measurement Based Quantum Computers and the Mechanistic View of Computation," claims that Turing's work cannot serve as an analysis of computation. Instead, he favors a "mechanistic view of computation" due to Piccinini, and argues that the measurement-based quantum computers introduced by Raussendorf and Briegel fit the definition. S. Barry Cooper ("Embodying Computation at Higher Types") expresses dissatisfaction with the paradigm of Turing computability, which, although powerful and successful, he takes to be misleading. Cooper finds the disembodiment of Turing's artificial intelligence unreal, and thinks that the model of digital computers fails to give due prominence to the role and structure of information.

The chapters on artificial intelligence are quite varied, as could be expected from such an amorphous subject. Patrick Henry Winston's "On Computing Machinery and Intelligence" is the only essay in this group that adheres to the classical approach of Turing. He reports on his experience with Genesis, a program designed to understand stories and answer questions about them, research closely related to the famous Turing Test. Although he admits that Genesis has a long way to go, he expresses optimism about the long-term prospects for machine intelligence.

Michael Rescorla, in "From Ockham to Turing -- and Back Again," argues against the formal syntactic view of computation as a model for mental activity. Instead, he advocates semantically permeated computation, in which the basic objects manipulated are "Mentalese symbols" that are inherently representational.

Diane Proudfoot's "Turing and Free Will: A New Take on an Old Debate" starts from some remarks of Turing to the effect that intelligence is an "emotional concept." By this phrase, Turing meant the following: whether we consider an object as intelligent depends as much on our own state of mind and training as on the properties of the object. Proudfoot suggests, following some ideas of Turing, that free will is an emotional concept in the same sense. This gives rise to a new form of free will compatibilism.

Susan G. Sterrett ("Turing on the Integration of Human and Machine Intelligence") examines Turing's remarks on the development of intelligence used in various kinds of search in the light of two recent projects, the question-answering program (Watson) that won against humans in the game show Jeopardy, and the machine learning program NELL that learns by reading material on millions of web pages.

Finally, there are two chapters that are somewhat tangential to the main themes of the volume. The first, "Justified True Belief: Plato, Gettier and Turing," by Rohit Parikh and Adriana Renero, presents two simple technical results bearing on the question of whether justified true belief is knowledge. The second, "Is there a Church-Turing Thesis for Social Algorithms?," by Rohit Parikh, uses tools from epistemic logic in discussing the question of the title. He concludes that the simple answer to the question raised in the title is "No."

The papers in this volume illustrate the continuing stimulus that the work of Turing provides to philosophers, mathematicians and computer scientists, and should prove rewarding reading for anybody interested in the contributions of this great British scientist.