In both the big and the small, science has changed in the last decades. We have huge sky surveys capable of following regions of stars over time. We can measure the joint time course of changes in physiology of tens of thousands of small regions of the brain. We have daily data on the surface and atmosphere of the entire Earth at 1 km² resolution. We can measure the genetic and sub-genetic variations in entire populations and try to discover their phenotypic effects. We can measure protein concentrations in a single cell, and experiment on them in a limited way. We can trace axons from terminus to cell body. And on, and on.
"Dark matter" was discovered from anomalies in the rotation of galaxies. How can we improve anomaly detection in large spatial or temporal data sets? Densities of dark matter are estimated from simulations; how can that be validated, and can it be improved? Dark matter does not fit in the standard particle zoo; there are no quantum numbers or associated conservation laws for the stuff. What strategies should scientists take when their fundamental theories prove inadequate? Climate measurements show a systematic increase in average Earth temperatures, while a lot of other things increase as well: population; industrialization; conversion of forests to farmland; urbanization; desertification; etc. It has been known for over a century that monotonically related time series are correlated. How can causal relations be extracted from the correlations? How can we extract the interactions of multiple proteins from experiments that are intended to -- but may not -- directly alter the state of just one of those proteins? Functional magnetic resonance imaging (fMRI) can indirectly measure physiological activity in the whole brain every second or so, down to 2 mm³ spatial resolution. People in the machine can be given simple cognitive tasks, but we cannot experiment directly on their many individual brain cells, or on small clusters of them. What information about whether and how each little region of the brain is signaling others can be obtained from fMRI time series?
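The point about monotone trends is easy to exhibit. In the sketch below -- with invented series labels and simulated data, not climate records -- two random walks with positive drift are generated entirely independently of one another, yet their shared upward trend produces a large Pearson correlation:

```python
import random

def pearson(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Two independent random walks, each with upward drift: call one
# "temperature" and the other "population" -- the labels are arbitrary.
temp, pop = [0.0], [0.0]
for _ in range(200):
    temp.append(temp[-1] + 0.5 + random.gauss(0, 1))
    pop.append(pop[-1] + 0.5 + random.gauss(0, 1))

r = pearson(temp, pop)
print(f"correlation of two causally unrelated upward trends: r = {r:.2f}")
```

The correlation is high not because either series influences the other, but because both are monotone in time -- which is exactly why trend-dominated data demand something more than correlation before any causal conclusion is drawn.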
What these and many other cases share is that: 1) We want information about the causal processes that produce the data, and their regularities; and 2) We have available only experimental interventions that are themselves insufficient for those goals. This is not your grandfather's science, or Popper's.
These and other scientific endeavors could surely use help from whatever source, including professional philosophers -- good, new ideas about how reliably and feasibly to get at the causal information we want. Indeed, in The Dynamics of Reason, Michael Friedman proposed that providing such new frameworks for scientific inquiry is the very job of philosophy, and main lines in the history of philosophy from Aristotle to C. S. Peirce are in accord with his opinion. Rani Lill Anjum and Stephen Mumford are about none of that. Instead, they continue the tradition in recent philosophy of science of avoiding the real scientific problems, and of not even reporting on the new -- once radical -- frameworks proposed to solve them. In their view, the contribution of philosophy is to provide and justify norms for science. The justification is supposed to be metaphysical. But there actually aren't many norms to justify in this book, and so, not much justification.
It would seem that the broad norms of scientific method are pretty clear: follow procedures that have an empirically or mathematically warranted good chance -- and preferably the best chance among available procedures -- of finding the true and avoiding the false. It's means-end. The justification on that scale of abstraction is elementary decision theory. The hard part is finding such methods suitable for the kinds of data we now collect and showing that any particular method or class of methods satisfies those criteria. The authors avoid all of the hard parts. Even their discussion of randomized clinical trials -- one of the few methods they actually come close to engaging -- adds nothing to the known limitations of such trials and contributes nothing to improving their reproducibility.
Now to some of the chapters. Chapter 1 tells us the job of philosophy for science is to provide norms and justify them. It gives a banal summary of norms which it proposes to improve on: Be objective; be consistent with the data, "more or less"; the more empirical evidence for a theory, the more acceptable it should be; prefer theories with the greater explanatory scope; predictive success counts in favor of a theory.
Chapter 2 wrangles with philosophers, current (Norton) or past (Russell, Hume), who had or have reservations about causality in science.
Chapter 3 is entitled "Evidence of Causation Is Not Causation." Yes, we know.
Chapter 4, "What's in a Correlation?" doesn't answer the question. There is, after all, a formula, but the authors say they are not concerned with that. They are talking "folk," not science. In passing, they implicitly distinguish correlation from dependence, but do not say what the difference is. They give a short list of conclusions that might be drawn from a correlation of event types A and B, including "no causal connection," but do not describe the well-known ways correlations can be generated when there are no causal relations between A and B.
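For the record, the formula the authors decline to discuss is just the sample (Pearson) correlation of paired observations of A and B:

```latex
r_{AB} = \frac{\sum_{i=1}^{n} (a_i - \bar a)(b_i - \bar b)}
              {\sqrt{\sum_{i=1}^{n} (a_i - \bar a)^2}\,
               \sqrt{\sum_{i=1}^{n} (b_i - \bar b)^2}}
```

And the unstated difference from dependence is concrete: if A is distributed symmetrically about zero and B = A², then r_{AB} = 0 even though B is a deterministic function of A. Zero correlation rules out only linear association, not dependence.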
Chapter 5, "Same Cause, Same Effect," is about non-deterministic relations in data, which they catalogue as: Exceptions; Outliers; Noise or Error; Non-responders; and Interferers. There is nothing more in the chapter than the catalogue, with some brief examples. The uses, sources, and kinds of distributions of variables go unmentioned; the authors do not discuss how to decide what distributions of values, or changes in those distributions, are attributable to, or how to make use of them in causal inference; there is nothing about how to identify whether data points should be rejected, nor about what to do with missing values. Much is known about these matters, but none of it is reported here.
Chapter 6, "Ideal Conditions," observes that data are often messy and that there are hard problems about extracting regularities from messy data and reliably extrapolating from inside the lab to outside it. Some people, the authors observe, resort to probabilities, but, they object, probabilities are not so helpful in deciding individual cases. What to do about these hard problems? Nothing is said.
Chapter 7, "One Effect, One Cause," tells the reader that it isn't always so.
Yes, I read the rest of the book, but you shouldn't. The whole thing is a mixture of banality, light criticism of other philosophers, and restatements of views about the disunity of science and the plurality of causes that will be familiar to readers of Nancy Cartwright. Even the traditional puzzles about causal inference go unanalyzed: conditioning reversals (Simpson's paradox); the Monty Hall problem. For all the talk about norms and justification, one of the most pressing problems about justification goes entirely unrecognized: how can scientists assess the accuracy and informativeness of procedures for identifying causes in domains such as astronomy, neuroscience, and climate science, where experimental testing of the main issues is impossible now and perhaps forever?
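The Monty Hall problem, at least, admits of more than hand-waving: a few lines of simulation settle the standard result that switching doors wins two-thirds of the time. A minimal sketch (the door-numbering and trial count are arbitrary choices):

```python
import random

def play(switch, rng):
    """One round of the Monty Hall game; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that holds no prize and was not picked.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(1)
n = 20000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay wins ~{stay:.3f}, switch wins ~{swap:.3f}")
```

The stay strategy wins about one time in three, switching about two in three -- the kind of precise, checkable analysis the book never attempts.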
I am puzzled: for whom was this book written? Only the most naive scientist or methodologist would find it informative. Students and those outside of professional methodology would miss all of the interesting developments of the last decades in measurement and methodology. On the topic of almost every chapter title there is a huge, informative and interesting computational and statistical literature. The reader would learn nothing about any of that, or even its existence, from this book. This book is further evidence, if further were wanted, of how disengaged much of philosophy of science has become from Friedman's vision and even from any recognition of recent proposed solutions to enduring problems of scientific discovery. I cannot do better than the old witticism: this book fills a much-needed gap in the literature.