Philosophy and Computing


Thomas M. Powers, Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics, Springer, 2017, 242pp., $119.99 (hbk), ISBN 9783319610429.

Reviewed by Colin Klein, The Australian National University

2018.05.07


This book collects papers from the 2015 meeting of CEPE-IACAP, an interdisciplinary international conference focusing on philosophy of computing. Including the introductory overview by editor Thomas M. Powers, there are 12 papers by authors from a variety of disciplines, including non-academic co-authors. Though not organized as such, the papers fall under three main themes: traditional philosophy of computation, ethical issues raised by new technology, and the possibility of creating ethical constraints for new technology.

The first group of papers considers venerable questions in philosophy of computing, such as the role of representation, different levels of computational explanation, and the relationship between computation and our interpretation of computing objects. William Rapaport's contribution, delivered as a talk on the occasion of winning the 2015 Covey Award from IACAP, offers a fittingly broad perspective. Rapaport notes that there are two strands of philosophy of computation: one which focuses narrowly on syntactically formulated descriptions of (parts of) the world, and one which focuses on the world itself. Rapaport's wide-ranging, conversational piece uses this distinction to illuminate many debates in philosophy of computation.

One such debate, which continues in the following pages, concerns the role of representation in theories of computation. Paul Schweizer stakes out an extreme syntactic position: only syntax matters to computation, a fact that stands in tension with the computational theory of mind. This is a venerable position, stretching back at least to Searle: computers manipulate uninterpreted symbols, so any meaning those symbols do have is derivative. Schweizer elaborates a modern defense of this position, pushing back against a recent resurgence of semantic accounts.

Michael Rescorla's illuminating discussion of different levels of computational explanation offers an interesting counterpoint. He notes that the traditional division into computational, syntactic, and hardware levels of explanation works well for artifacts. But it is difficult to find such a neat division in natural systems. Many models in computational neuroscience, for example, contain no obvious syntactic level. Rescorla argues that the focus on syntax might come from contingent facts about how humans have built computers, rather than from the needs of computation itself.

The contrast between Schweizer's and Rescorla's contributions is striking: the former holds that syntactic description is at the heart of computation, the latter that it may be irrelevant to computation as such. My sympathies lie with Rescorla: a focus on syntax seems to confuse features of the programming languages in which we describe computational processes with features of the processes themselves. This is a line pushed elliptically by Brian Cantwell Smith (1996), and Rescorla has (in my opinion) latched onto a nice expression of it. Exploiting Rapaport's distinction between, broadly speaking, descriptions of processes themselves and descriptions of our talk about processes, we can see that syntax belongs to the latter group: that is, it is a feature we find useful when we go to build computational processes, not the metaphysical foundation of computation itself.

The second batch of papers is more loosely related, centering on broadly ethical and practical problems that arise with technological innovation. John F. Symons and Jack K. Horner take on the important problem of software bugs in complex scientific software. For example, a recent study suggested that a standard package for fMRI analysis had a statistical flaw that could affect the conclusions of up to 40,000 studies (Eklund et al. 2016). Symons and Horner give a technical proof that the expected distribution of bugs in non-trivial programs cannot be calculated or even characterized. I think this may be overly pessimistic: modern modular design and unit testing are done precisely to reduce the problem of bug-checking to that of checking small, trivial-ish programs. That said, the gap between best practice and actual code is large, and the issue deserves more study.
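To make the modular strategy concrete, here is a minimal sketch in Python. The function and its tests are hypothetical, loosely inspired by the fMRI example rather than drawn from Symons and Horner, but they illustrate how verifying a small, nearly trivial unit in isolation is far more tractable than reasoning about an entire analysis pipeline:

```python
# A minimal sketch of the modular-design strategy: verify small, nearly
# trivial units in isolation rather than the whole pipeline. The function
# and tests are hypothetical, loosely modeled on fMRI cluster thresholding.

def cluster_threshold(z_scores, threshold=3.1):
    """Return indices of voxels whose z-score strictly exceeds the threshold."""
    return [i for i, z in enumerate(z_scores) if z > threshold]

def test_below_threshold_everywhere():
    # No voxel exceeds the threshold, so no indices should be returned.
    assert cluster_threshold([0.0, 1.2, 3.0]) == []

def test_boundary_is_excluded():
    # Values equal to the threshold are excluded; only strictly greater pass.
    assert cluster_threshold([3.1, 3.2]) == [1]

if __name__ == "__main__":
    test_below_threshold_everywhere()
    test_boundary_is_excluded()
    print("all unit tests passed")
```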

Selmer Bringsjord and Alexander Bringsjord examine the near-term implications of what they call the 'MiniMaxularity': roughly speaking, the point when AI destroys most of our jobs but before we get uploaded into the singularity. The focus is on economics (one of the authors is a senior associate at PricewaterhouseCoopers), and the jobs predicted to survive are creative ones (including computer programming) and computer repair. There are only so many of those jobs to go around. They end by considering, but rejecting as unfeasible, shared ownership of robots, a universal basic income, and a negative income tax. Their primary recommendation is that we stay abreast of developments by having the NSF and other government agencies direct significant funding towards basic AI research, leaving one nostalgic for the halcyon political landscape of 2015.

The final three papers of this cluster focus on the ethics of big data. Markus Christen, Josep Domingo-Ferrer, Dominik Herrmann, and Jeroen van den Hoven argue that a focus on informed consent isn't enough to tackle the problems raised by big data. They lean on Michael Walzer's notion of spheres of justice, and argue that contextual integrity between spheres ought to be a guiding principle for big data research. Breaches of spheres should be governed by principles of autonomy, fairness, and responsibility. They suggest useful concrete prescriptions for meeting these norms, mostly surrounding participant anonymization and data management.

Soraj Hongladarom and Frances S. Grodzinsky both take a larger view. Hongladarom draws on the extended mind account to argue that one's digital traces are part of the self, and so worthy of protection comparable to that which we give to more traditional bits of us. The plausibility of the positive claim will probably depend on what you think of the extended mind account. That said, the discussion of norms of group privacy, and of their possible undermining by big data, is thoughtful and stands on its own. Grodzinsky concludes by suggesting a virtue epistemology for big data scientists. Good data scientists should be open-minded, rigorous, and honest; in most cases, this means acknowledging the uncertainty inherent in their enterprise.

The third and final cluster of papers focuses on machine ethics -- that is, the ethics of trying to make artificial intelligence with a meaningful moral sensibility. Don Howard and Ioan Muntean argue that rather than finding concrete rules to program into AI, we ought to harness the power of machine learning and have AI extract moral principles from the real world. They argue that our moral decision-making processes must be effectively computable. Barriers to machine ethics are therefore barriers to creating appropriate decision procedures. Machine learning has overcome similar apparent barriers in other domains, so why not here?

As a concrete example, they present the results of a set of neural networks trained to adjudicate lifeboat-style moral dilemmas, some of which get decent results. While I think this is a great study, I am suspicious of the approach. Supervised machine learning requires a training dataset with clear class labels, and finding this for more realistic moral scenarios will obviously be nontrivial. There has also been much recent work on the susceptibility of machine learning to so-called "adversarial examples", which differ trivially from test cases yet receive wildly different classifications. These results suggest that the principles used by these networks do not resemble the principles we use (Athalye et al. 2017).
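To see what the supervised setup demands, consider a toy reconstruction in Python -- mine, with invented data and a plain logistic classifier, not Howard and Muntean's actual networks. Before any learning happens, someone must hand each dilemma a class label, and those labels just are the contested moral judgments at issue:

```python
# A toy supervised setup for lifeboat-style dilemmas. The data, labels,
# and model are invented for illustration; nothing here reproduces the
# networks reported in the paper.
import numpy as np

# Hypothetical feature encoding: counts of [children, adults, elderly]
# in a group competing for a place on the lifeboat.
X = np.array([
    [2, 0, 0],   # two children
    [0, 3, 0],   # three adults
    [0, 0, 2],   # two elderly people
    [1, 1, 0],   # one child, one adult
], dtype=float)

# Class labels: 1 = "save this group", 0 = "do not". Whoever supplies
# these labels has already done the moral work the network gets credit for.
y = np.array([1, 0, 0, 1], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression trained by gradient descent on the labels.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

print(np.round(sigmoid(X @ w + b), 2))  # probabilities close to the labels
```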

Most important, however, is that machine learning is preceded by a feature selection step, in which the features to be represented in the training set are (automatically or manually) extracted. Feature selection for moral choice is itself morally laden. So, for example, Howard and Muntean's model includes "variables that encode the categories of humans involved (women, men, children, elderly people)." That these are relevant variables -- and race, marital status, or class of passage aren't -- itself embodies deep moral commitments. Whether it is the correct carving is somewhat beside the point. Rather, part of our moral skill lies precisely in being able to extract morally relevant factors from ambiguous situations. This is not to disparage the model (one must start somewhere), but to emphasize the sorts of difficult practical problems that face realistic machine ethics implementations.
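A schematic sketch of that feature-selection step, again my own illustration rather than anything from the paper, makes the worry vivid: whatever the encoder omits is simply invisible to the learner, however morally relevant it might be.

```python
# A schematic illustration of how feature selection embeds moral
# commitments before any learning happens. The encoding is hypothetical.

def encode_dilemma(scenario: dict) -> list:
    """Map a rich scenario description onto a fixed feature vector.

    Only the keys listed here ever reach the learner. Deciding that age
    and sex are morally relevant, while class of passage is not, is
    itself a substantive moral judgment made by the programmer.
    """
    features = ["women", "men", "children", "elderly"]
    return [scenario.get(k, 0) for k in features]

# Everything absent from the feature list is invisible to the model:
case = {"women": 1, "men": 2, "children": 1,
        "class_of_passage": 3, "marital_status": "married"}
print(encode_dilemma(case))  # [1, 2, 1, 0]; class and marital status vanish
```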

Finally, Shannon Vallor and Mario Verdicchio both focus on the role of human programmers in ensuring that autonomous machines are ethical. Verdicchio emphasizes the role and responsibility of the human designers of AI systems. Much of the argument relies on the claim that humans are autonomous while computers are just "an electronic embodiment of logical and arithmetical rules," and that all machines "operate to reach a goal established by their human programmers." That's something of a minority position these days. Vallor makes the strong claim that the "only acceptable use" of AI and automation is "enlisting their power in the full support of our own moral and intellectual perfectibility," and looks to Aristotelian virtue theory for how to do so.

The more normative papers cover a wide range of topics. Each is interesting in its own right, but together they reveal a larger background disagreement in views about the purpose of computers in our lives, both now and in the future. At one end are authors like Christen et al., Grodzinsky, Vallor, and Verdicchio, for whom computers are a kind of tool for us to use. The interesting ethical issues around computers are thus comparable to the issues surrounding other technological advances. Infrared scanners and parabolic microphones and big data are all ways that we might snoop on people, and the important thing is to keep that on the up-and-up. This is likely to be complicated by the fact that the users themselves have heterogeneous interests. Even if Vallor is right that the only good use of AI is to perfect ourselves as virtuous agents, Facebook and the NSA likely have different agendas.

Towards the other end of the spectrum are the authors who think that computers could be something more than a mere tool -- or, perhaps, the sort of 'tool', like the plow and the sail, that transforms the material and cultural world in which it arises. From that perspective, the bigger problem is not so much having the right values as it is preserving our current values. The economic speculation of Bringsjord and Bringsjord, for example, emphasizes the transformative power that AI has and will continue to have.

The volume is wide-ranging and rich, and contains much that cannot be covered in a single review. It should have something of interest to anyone working on contemporary philosophy of computers and computation.

REFERENCES

Athalye, Anish et al. (2017) "Synthesizing Robust Adversarial Examples". arXiv preprint arXiv:1707.07397.

Smith, Brian Cantwell (1996) On the Origin of Objects. MIT Press.

Eklund, Anders et al. (2016) "Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates". Proceedings of the National Academy of Sciences 113(28): 7900-7905.