Physics Avoidance: Essays in Conceptual Strategy


Mark Wilson, Physics Avoidance: Essays in Conceptual Strategy, Oxford University Press, 2017, 427pp., $81.00 (hbk), ISBN 9780198803478.

Reviewed by Thomas Ryckman, Stanford University

2018.11.12


Wandering Significance (WS) (OUP, 2006), and now this book (PA), establish Mark Wilson as the moral compass of analytic philosophy. Both volumes are so wondrously rich in argument, incorporating such an abundance of fascinating detail, as to resist any compressed joint summary. But taken together they constitute a many-count indictment of reigning philosophical complacency while opening up broad new horizons for investigation. WS began at the foundation, targeting the myriad obfuscations of the semantic framework of sense and reference accompanying the logic-based "classical theory of the concept" still at analytic philosophy's core. It showed the intuitive "semantic finality" thesis of this doctrine to be an atrophied rational reconstruction of context-sensitive, piecemeal conceptual innovation within real-life discursive and inferential practices. PA builds on this foundation at a higher level as an extensive critique of the "Theory T thinking" prevalent in much contemporary philosophy of science and analytical metaphysics. "Theory T thinking" is itself a congeries of doctrines stemming from "the logic-centered conceptions of scientific organization canonized by the logical empiricists in the mid-twentieth century". (136) A chief presumption of Theory T thinking is that physical theories are axiomatized structures that can and do succeed in capturing what Wilson calls "the freely autonomous behaviors of nature within their mathematical netting". Explanation as deduction from laws (the ordinary differential equations of point-particles) is the guiding paradigm. "Autonomous behavior" simply indicates the common presupposition that, without noticeable interference from extraneous factors, a physical system's evolution is brought about from its initial conditions under the guidance of its own internally determined dynamics.

Theory T thinking leads philosophers to overlook the "physics avoidance strategies" of real-life modeling practices or to dismiss them as "in principle" dispensable in accounts of how fundamental science works. "Physics avoidance", a term that appears already in WS, is not "avoidance of physics". It is Wilson's umbrella term for the multiscale modeling techniques of scientists and engineers that readily exploit "work-arounds" circumventing intractable, difficult, or unreliable calculations from first principles and fundamental forces. Through dozens of examples, we learn a great deal about how applied mathematicians assemble "computational architectures" that blend together reliable data extracted from empirically attested models and sub-models, each targeting the system at its own characteristic length scale. These patterns of reasoning establish that the macroscopic laws characterizing the behavior of a system are independent, or largely independent, of the fundamental laws governing the system's microconstituents. PA amply illustrates how "Theory T thinking" is simply oblivious to the manifold ways in which scientists and engineers apply classical mechanics to extended matter in pursuit of descriptive, explanatory, and design success.

"Gizmos" are the most important dramatis personae in both volumes: plucked violin strings, hammered and/or heated steel beams, shock absorbers, cranks, beads sliding on a wire, weights suspended on a string, supposedly elastic billiard balls, loaded struts, vibrating drumheads. Gizmos furnish readily visualizable illustrations of how the diverse approximative techniques of multiscalar modeling bring even a supposedly well-understood theory like classical mechanics to bear on observation. Both volumes celebrate the same pantheon of heroes: in the first rank are names not widely known to philosophers: Jacques Hadamard, J.L. Lagrange, Franz Reuleaux, Oliver Heaviside, Walter Noll, A.E.H. Love. Those in the second rank are more familiar, but here Wilson's tribute is not entirely without censure: J.L. Austin, the late Wittgenstein, Thomas Reid, W.V.O. Quine, John Dewey. In both volumes Wilson extolls "common sense thinking" in its employ against logicized abstractions; in PA, a "biologically plausible" common-sense practical attitude of accommodation is detected underlying many modeling strategies since "nature rarely arranges its affairs for our calculational convenience". (364)

PA consists of nine self-standing "essays in conceptual strategy"; most are accompanied by one or more appendices. We are told that each essay originated as a self-contained lecture and can be read independently of the others. Taken together, the essays comprise a veritable avalanche of insightful and detailed considerations that thoroughly belie "Theory T" pretensions to give an accurate characterization of the scientific enterprise of bringing theory to bear on observation.

Essay 1 ("Pragmatics' Place at the Table") revisits main themes of WS that put bite into the rather toothless aphorism "meaning is use". Contextualized adjustments and variations in meaning, even for such semantic stalwarts as "force" and "pressure", lead to notable efficiencies in communication and empirical description (reduction of syntactic complexity, reduction of variables, reasoning compression, streamlined inferential paths). Nine "Lessons" are drawn from reflections on the descriptive practices of multiscale modeling of solids (steel rails, granite, etc.). Emphasis is on how modeling chores are broken down into "bite-sized" sub-models, each focused on the behavior dominant at its characteristic scale, which are then "harmonized" through interscalar homogenization, correcting faulty descriptive detail at one level by drawing on more secure results at others. The sophisticated "computational architectures" of multiscale modeling techniques, Wilson argues, show how descriptive policies in applied mathematics are partitioned into "integrated networks of localized tasks focused upon achievable strategic goals". (16) Regarding himself as a philosopher of language "at heart", Wilson makes a strong case that the "alterations in semantic moorings" characteristic of multiscalar modeling practices demonstrate that "adequate forms of linguistic system can be achieved through strategies other than those contemplated within contemporary philosophy of language". (40) Pragmatics "in the robust sense of practical objectives" is squarely placed at the table of present-day philosophy of language.
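
To give a concrete flavor of what such interscalar homogenization involves, here is a minimal textbook illustration of my own choosing (one-dimensional periodic homogenization, not an example Wilson works through). If a bar's conductivity a(x/ε) oscillates rapidly on a fine scale ε, the microscale equation

\[ -\frac{d}{dx}\!\left(a\!\left(\frac{x}{\varepsilon}\right)\frac{du_{\varepsilon}}{dx}\right) = f(x) \]

is replaced, in the limit of small ε, by a macroscale equation carrying a single effective coefficient,

\[ -a^{*}\,\frac{d^{2}u}{dx^{2}} = f(x), \qquad a^{*} = \left(\int_{0}^{1}\frac{dy}{a(y)}\right)^{-1}, \]

the harmonic (rather than arithmetic) mean of the fine-scale coefficient. The macroscale description summarizes the fine-scale structure without tracking it in detail, which is the division-of-labor pattern the essay's "Lessons" generalize.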

Essay 2 lends its title to the volume. Wearing the hat of "descriptive philosopher of science", Wilson bemoans the lack of attention paid to physics avoidance rationales by those of the Theory T persuasion. In particular, he points to the predominant tendency of philosophers of science to characterize the explanatory landscape of theories solely with the blunt tools of logic- or probability-derived notions rather than capturing explanatory inferences via the appropriate type of mathematical tool with attendant boundary and side conditions. Theory T folks are encouraged to substitute logistical surrogates for the latter by supposing that fundamental physical laws are formally represented by the ordinary differential equations (ODEs) familiar from point-particle mechanics, in which time is the only independent variable. Looking at science "through ODE eyes", philosophers suppose causal evolution from given initial conditions (positions and instantaneous velocities) can be framed as successive logical steps (integrations) in the manner of celestial mechanics. But the ODE paradigm of "flow generated by evolutionary differential equations" does not work for such mundane classical mechanical systems as simple fluid flow, heat transfer through a metal, loaded struts, or drumheads piled with rocks, and in general for any system in which spatial variations must also be taken into account. Here, and with flexible continuous matter more generally, the appropriate tool is a partial differential equation (PDE). One can then invoke time-independent (steady-state or equilibrium) considerations and drop the time variable altogether, or one can reduce the descriptive variables by grouping them under the control of a higher-scale constraint. Two distinctions, absent in Theory T parlance, play a crucial role for multiscale modeling strategies of these and similar systems: that between active and constraint forces, and that between initial and boundary conditions. The former is basic to the Lagrangian "least work" technique in allowing physicists and engineers to ignore inactive factors when reasoning about the system; the latter received essential clarification in Jacques Hadamard's early 20th-century classification of PDEs into equations of hyperbolic, elliptic, and parabolic signatures. The correspondingly different explanatory architectures, their relations to one another, and their distinct senses of "causal process" are descriptive data that cannot be crammed into "the inadequate pigeon holes of Theory T thinking". (89)
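
Hadamard's trichotomy, which carries much of the weight here, can be indicated with three canonical model equations (a standard gloss, not Wilson's own formulas):

\[ u_{tt} = c^{2}u_{xx} \quad \text{(hyperbolic: wave propagation, well posed as an initial-value problem),} \]
\[ u_{xx} + u_{yy} = 0 \quad \text{(elliptic: equilibrium or steady state, well posed only with boundary data),} \]
\[ u_{t} = \kappa\,u_{xx} \quad \text{(parabolic: diffusion, requiring initial data together with boundary conditions).} \]

The elliptic case is the one in which the time variable has been dropped altogether, so that "explanation" proceeds from conditions imposed on a boundary rather than from an earlier state, a contrast that no single logic-derived schema of explanation registers.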

The third Essay ("From the Bending of Beams to the Problem of Free Will") is directed not only to philosophers of science but to Leibniz scholars, as an unexpected tribute to Leibniz's prescient anticipation of the contextual complexities of continuum mechanics. In scattered remarks on the behavior of continuously flexible matter, Leibniz stressed the difficulties encountered in trying to extend differential analysis down to the infinitesimal level. Abstracting from the overt details of Leibniz's theology and metaphysics of monads, Wilson finds contemporary engineering modeling parallels to Leibniz's duality of two "kingdoms of explanation" (efficient causation and teleological explanation). In Wilson's reconstruction, this explanatory duality is exemplified in the descriptive cutoffs and scaling considerations that harmonize "top-down" explanation of observable behavior in real materials, invoking final causes, with the impossibly complex ("labyrinth of the continuum") micro-mechanical pushings and pullings of "bottom-up" explanations framed in terms of efficient causation. An example is a loaded beam: it behaves (ignoring friction) like an elastic body; it possesses a "memory" of a natural equilibrium state to which it strives to return on being bent. The return to a "constrained equilibrium state" cannot be explained solely by appeals to lower-level efficient causation; rather, the macroscopic state constrains any putative microphysical account. "Pre-established harmony" results from a Deity that plans the universe from the macroscale of human actions, beliefs and desires, while arranging beforehand the complexities of microphysics that precisely back up the phenomena of the human-scale world.
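
The two "kingdoms" have a familiar mathematical counterpart that may help fix ideas (standard Euler-Bernoulli beam theory, not formulas drawn from the book). The equilibrium deflection u(x) of a uniform beam under a distributed load q(x) can be characterized teleologically, as the configuration minimizing total potential energy,

\[ u = \arg\min_{u}\int_{0}^{L}\left(\frac{EI}{2}\,\big(u''(x)\big)^{2} - q(x)\,u(x)\right)dx, \]

or in efficient-causal style, as the solution of the local force-balance equation

\[ EI\,u''''(x) = q(x), \]

subject to boundary conditions at the supports. Both routes determine one and the same constrained equilibrium state; the Leibnizian point is that the end-state (variational) description and the differential (efficient-causal) description coexist as distinct explanatory architectures answering to the same behavior.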

Essay 4 ("Two Cheers for Anti-Atomism") is similarly a paean to at least one incarnation of Pierre Duhem. Since Duhem is "often a lousy writer" (139), readers who know only The Aim and Structure of Physical Theory understandably come away cognizant of his anti-atomism, fierce anti-realist polemics and advocacy of a purely "symbolic" conception of physical theory that, according to Wilson, encouraged the "syntactical rhabdomancy" of Theory T thinking. Wilson's Duhem is above all the prophet of a richer "New Mechanics" coupling thermal and mechanical description. This Duhem is celebrated for his principal theoretical endeavor, an attempt to construct a viable thermomechanics in which thermodynamic concepts (temperature, heat, entropy) are treated on a par with (and not reduced to) mechanical concepts (force, mass, potential, stress). In so doing, Duhem repeatedly identified major impediments to Theory T modeling assumptions, viz., the extraction of a "fundamental" yet reliable "autonomous behavior" model of certain target physical systems in which thermal (and thermo-chemical) considerations appear merely as distinct components complicating the autonomous behavior portrait. In many cases such systems cannot be idealized as a single point trajectory in a high-dimensional state space with an internally determined dynamics; rather, dynamical flows are built up in patchwork fashion in an enlarged temporal manifold from equilibrium or steady-state atemporal base manifolds, in a manner analogous to Lagrange's construction of analytical dynamics from statics. Duhem's attempt to construct a New Mechanics stands (in the words of a contemporary author quoted on p. 137) as a precursor of the post-1945 "firm grounding of continuum mechanics in a thermomechanical framework."

To contemporary philosophers of language (i.e., proponents of "natural kind" semantics à la Kripke or Putnam) Duhem's productive conjoined employment of thermal and mechanical vocabularies is a lesson that terms like "heat" or "temperature" do not have "semantical underpinnings (that) are exceptionally simple", admitting of primitive baptism. Philosophers of science are enjoined to respect Duhem for resisting the tendency to "essential idealization" among his turn-of-the-century methodological peers such as Karl Pearson, a tendency still common to Theory T thinking. Wilson characterizes "essential idealization" as a modal thesis that "a modeler must permanently misdescribe her targets so that the descriptive enterprises of mathematical physics can get underway". (140) The previous essay already noted the failure of the "essential idealization" of "rigidification procedures", i.e., of extending rigidity down to the smallest scales of material bodies in the attempt to attain "bottom-up" explanations by differentially acting forces. The complexities here were surmounted only later in the 20th century by setting up coherent modeling equations for flexible media in continuum mechanics. The simple example of a bending steel beam already shows that whereas gravitational force can be treated as differentially pulling on each point, tensions and stresses cannot; they tug on finitely construed surface elements. The proper mathematical tool for combining them, a stress tensor, was not available until the late 19th century. Duhem's bias for top-down rather than bottom-up modeling schemes manifests a persistent opposition to the presuppositions of autonomous behavior modeling endemic to Theory T thinking.
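
The contrast between forces acting point-by-point and stresses acting on surface elements can be put in one standard pair of formulas (Cauchy's apparatus, supplied here for orientation rather than quoted from Wilson):

\[ \rho\,\frac{D\mathbf{v}}{Dt} = \nabla\cdot\boldsymbol{\sigma} + \rho\,\mathbf{g}, \qquad \mathbf{t}(\mathbf{n}) = \boldsymbol{\sigma}\,\mathbf{n}. \]

Gravity ρg enters the momentum balance as a body force per unit volume, while internal tensions contribute only through the traction t(n), the force per unit area transmitted across a surface element with outward normal n, the whole being packaged in the stress tensor σ.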

Essays Five and Eight ("The Greediness of Scales", "Semantic Mimicry") expose further coarse categories of Theory T thinking by taking us significantly deeper into the weeds of multiscalar modeling and the conceptual dynamics revealed by the development of modern continuum mechanics. We learn in Essay Five that Quine's recipe (in "On What There Is") for discerning ontological posits via single-sorted quantifiers and regimented syntax disregards a trove of scientific discourses where differential equation models register physical information at different choices of characteristic scale. "Greediness of scales" refers to the problem that these informational inputs cannot be amalgamated in any straightforward way, a problem addressed through various "division-of-labor" techniques that communicate with, and correct, one another through interscalar relationships known as "homogenizations". The upshot is "novel forms of explanatory architecture . . . worthy of philosophical attention". (205) "Semantic mimicry" tells the tale of how the descriptive devices of applied mathematics can fail to match their intended physical systems, requiring revision through the development of sophisticated mathematical techniques (e.g., different classes of Green's functions, the Schwartz theory of distributions, Sobolev spaces, etc.). The essay has squarely in its sights that shibboleth of Theory T thought, the D-N model of explanation. Universal laws (of the ∀x (Fx → Gx) variety) have no place in the explanatory architectures of continuum mechanics; rather, they are "semantically mimicked" by local equations governing the target physical behavior, equations that are surrounded by a dense swarm of approximations and complicated by considerations of geometry, boundary conditions, loads and material constants.
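
The contrast the essay presses can be displayed schematically (the boundary-value problem below is my own illustrative choice). Where the D-N model expects explanation to proceed from a universal conditional,

\[ \forall x\,(Fx \rightarrow Gx), \]

what carries the explanatory load in a continuum-mechanical setting is a local field equation holding inside a region together with data on its boundary, for instance steady heat conduction in a body Ω:

\[ \nabla\cdot\big(k\,\nabla T\big) + q = 0 \ \text{ in } \Omega, \qquad T = T_{0} \ \text{ on } \partial\Omega, \]

hedged about, in any real application, by the approximations, material constants, and geometric idealizations the essay describes.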

Essays Six and Seven ("Believers in the Land of Glory", "Is There Life in Possible Worlds?") comprise a two-part contrapuntal fugue on a downstream consequence of Theory T thinking, the misguided aspirations of Lewisian-style analytical metaphysics. Essay Seven gives a carefully detailed account of productive physics avoidance: counterfactual inferences (interpreted as Woodward-style manipulationist counterfactuals) within a Lagrangian "virtual work" methodology that permit circumvention of lower-scale modeling of constrained systems. This is twinned with skeptical remarks regarding the meaningfulness of grounding counterfactual reasoning in the jargon of possible worlds.
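
The Lagrangian shortcut at issue can be stated in a single line of standard analytical mechanics (a textbook formulation, not a quotation from the essay): for a system held in equilibrium by ideal constraints, the applied forces alone must satisfy

\[ \sum_{i}\mathbf{F}^{\mathrm{applied}}_{i}\cdot\delta\mathbf{r}_{i} = 0 \]

for every virtual displacement compatible with the constraints. Because the constraint forces do no virtual work, they never need to be modeled at the lower scale, and the hypothetical displacements read naturally as small "what if we pushed it here?" manipulations of the Woodwardian sort.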

Essay Six tackles two alternative proposals for analytical metaphysics. The first comes from metaphysicians who suppose that terms such as "parts", "wholes" and "causes" have a timeless and necessary a priori semantic fixity stemming from general principles governing any reasoning about physical objects. Those of this stripe must take on board the semantic variability of the term "cause" in different explanatory architectures within continuum mechanics. In Essay Two, Wilson explained that the root notion of "causal process" (in Humean terms, "cause first, effect later") acquired early in life is indeed captured by an evolutionary dynamics usually possessing a characteristic formal feature (a PDE of hyperbolic signature). But a number of examples here show that where various physics avoidance strategies come into play, the notion of cause remains even though purged of explicit temporal ingredients (as in PDEs of elliptic signature). Even the simple case of a wave propagating along a violin string can be Fourier-transformed into an energy representation in which each vibrational mode displays its energetic content periodically in a standing wave pattern with fixed boundaries. The problem is thus transformed from characterizing translational motion along a string to determining the system's eigenfunction modes when these are frozen into positions of pure potential energy. Yet the word "cause" still attaches to the descriptive variables that preserve these energetic allocations. The second type of proposal comes from analytical metaphysicians who hitch the wagon of metaphysics (as a "prescience") to the deliverances of a hypothetical future final physics. This type of analytical metaphysics suffers from Theory T misapprehensions regarding the character of science. Here the presumption that a suitably logically articulated theory T will "implicitly fix the meaning" of its specialized terms is the source of a belief that the "kind terms" of final science will line up with the "perfectly natural properties" of metaphysics. Wilson argues that the practice of modeling with differential equations reveals that important quantities of physical interest are not to be anticipated axiomatically but emerge only as fixed-point limits within contextual (holistic) approximations.
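
The string example has a standard textbook form that may help readers unfamiliar with the energy representation (the notation is mine, not Wilson's). A string of length L with fixed ends obeys

\[ u_{tt} = c^{2}u_{xx}, \qquad u(0,t) = u(L,t) = 0, \]

and Fourier decomposition re-expresses its motion as a superposition of standing modes,

\[ u(x,t) = \sum_{n=1}^{\infty}\big(a_{n}\cos\omega_{n}t + b_{n}\sin\omega_{n}t\big)\sin\frac{n\pi x}{L}, \qquad \omega_{n} = \frac{n\pi c}{L}, \]

with the energy stored in each mode conserved separately. The description no longer tracks a disturbance travelling from earlier cause to later effect; it tracks fixed allocations of energy across modes, which is why the temporal ingredients of the everyday notion of "cause" drop out while the explanatory work continues.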

The final essay ("A Second Pilgrim's Progress") turns to philosophy of mathematics and an attempt to chart a naturalistic course forward that, like Penelope Maddy's Second Philosophy, is critical of Quine-Putnam naturalism, but differs from Maddy's approach in its near-exclusive concentration on applied, not pure, mathematics. In examples ranging from the biomechanics of a frog catching flies and card-guessing tricks to the Sturm-Liouville descriptive analysis of Chladni sand patterns on a vibrating drumhead, Wilson argues that any naturalism worth its Darwinian salt has to take into consideration "our computational place in nature". Throughout the earlier essays, he has emphasized that applied mathematics over and again displays the plasticity of the human mind in adapting old reasoning routines to more refined and complex situations by borrowing inferential policies from unrelated and unexpected sources. Now he urges that philosophers of mathematics and philosophers of science alike should be attuned to recognize this tendency to "strategic adjustment" of computational architectures in describing real-life science. Applied mathematicians' resultant detours into "Greater Mathematicsland" should not be pruned by Occam's razor, as many naturalists are prone to do. Indeed, set theory provides "the natural vocabulary for articulating the relationships that we lowly calculators bear to nature's more abundant collection of processes." (398) In this regard, mathematicians, not philosophers of language, have done a much better job of codifying the inferential practices to be followed in reasoning and of providing richer models of how linguistic expressions might adequately reflect their physical referents.

With descriptive cutoffs, approximation techniques, boundary layers, shorthand formulas, asymptotic homogenization relationships between scales and the like, the toolkit of modeling strategies for continuous materials within the applied mathematics of classical doctrine appears eminently pragmatic: emphasis is on reliability of results, not on portraying "what really happens at very small scales". Yet Wilson characterizes himself as a "stout scientific realist" (114, note) even as he rejects central tenets of mainstream Scientific Realism, particularly that doctrine's inflexible opinions about semantics. For example, he regards as "simplistic" the 'Fido'/Fido semantics of the platitude "terms in a mature science typically refer". (383) And he holds that "ongoing science accumulates a large set of reliable 'truths'" while objecting to the usual realist story of isomorphism-correspondence truth rules. (361, note) Moreover, Wilson remains modestly agnostic regarding the existence of a "universe wide" fundamental theory whose mathematical netting would capture "the fully autonomous behaviors of nature", allowing that the best theoretical understanding of nature that science might attain could turn out to be only a "patchwork" (185) should the representational tools provided by mathematics fail to establish satisfactory accord with fundamental entities and processes. Wilson's "stout scientific realism" seems to rest on two considerations. The first invokes an inference-to-the-best-explanation rationale: the patchwork approach of classical modeling is so efficient because "the sundry routines of physics avoidance neatly cover the quantum realm", providing an "outer fitting suitable for a quantum mechanical knight" (WS, 197). The second is an interpretation of objectivity as "correlates successfully with the world", a line in the sand against modern idealism, in particular neo-Kantian philosophies. As the correlations with which Wilson chiefly deals in both volumes lie at the empirical level of dominant behavior models of the mechanics of continuous matter, it is not clear to me how "stout" Wilson's scientific realism really is, or indeed needs to be. And it may not matter. For what he has given analytic philosophy is a prolonged blast of arctic air that signals a change of paradigm.