PHYSICS AVOIDANCE
MARK WILSON
Reviewed by Michael Liston
Physics Avoidance: Essays in Conceptual Strategy
Mark Wilson
Oxford: Oxford University Press, 2017, £60.00
ISBN 9780198803478
This collection of nine free-standing essays offers a fresh perspective on science and language and a fascinating critique of much of contemporary philosophy. The essays can be grouped as follows: Chapter 1 serves as an introduction and a brief for pragmatism; Chapters 2 and 5 concern science; Chapters 3 and 4 deal with historical figures (Leibniz and Duhem); Chapters 6 and 7 critique contemporary analytic metaphysics; and Chapters 8 and 9 discuss language and mathematics. The style is opinionated, contrarian, humorous, and expansive. The essays focus on case studies that develop in detail some of the themes of Wilson’s earlier Wandering Significance ([2006]). Though they avoid general dogma, they reflect a unified critical perspective organized around puzzles that emerge when one considers the ‘how’ and ‘why’ of the practical successes that attend conceptual development in applied mathematics. Those already familiar with Wilson’s work will appreciate the novel developments in this long-awaited publication; those new to his work, despite its sometimes technical challenges and hard-to-tame aspects, will find ample reward in the surprising new light it sheds on contemporary philosophical issues in language and science.
I begin with some stage-setting. As products of biological evolution, Wilson often reminds us, we are finite beings with limited observational and inferential or computational capacities. Philosophy has paid a lot of attention to our observational limitations and how we overcome them. Wilson is more interested in our inferential limitations and how we reliably extend the concepts and reasoning patterns that have evolved to exploit the limited descriptive opportunities available to us in our natural setting and scale. In our naturalistic times, most philosophers should find this approach palatable, yet much of contemporary philosophy preaches naturalism while trafficking in projects involving grand a priori visions and pretensions that ignore what Wilson calls our humble ‘computational position’.
On one such grand vision, which Wilson calls ‘Theory T thinking’, a scientific theory is a set of laws that, together with suitable conditions, allow us to deduce the phenomena from the behaviour of underlying structures and entities. The view models science in terms of initial or boundary-value problems involving the temporal evolution of a physical system governed by differential equations (expressing the laws). A central aim of the book is to show how narrow this focus is: there is a big gap between this schematic picture of a world governed by differential equations and the usable information we can extract from them. Here, I note, Wilson sets a high standard: the nineteenth-century pragmatic requirement that good scientific reasoning should reliably carry us to numerical values that match the behaviour of target systems. Generally, differential equations supplemented with initial or boundary conditions do not directly or reliably deliver numerical values. Nevertheless, science manages to bridge the gap. Using examples from continuum mechanics and multi-scale modelling, the book explores in great detail how applied mathematics has found ingenious, efficient, reliable, but often indirect ways to deliver usable numerical answers to the questions we put to nature.
Physics avoidance comprises a class of strategies for reducing computational complexity and efficiently delivering trustworthy results. An example will give a sense of it. Consider Euler’s treatment of the critical load problem for a vertical strut, S, fixed at each end: finding the maximum load, Wc, at which S will not buckle. Euler’s equation for the strut (Newton’s F = ma specialized to S) is EI·∂²x/∂y² + Wx = ρ·∂²x/∂t², that is, the resultant force (horizontal torque due to W’s weight offset by S’s ‘stiffness’, represented by the EI-term) equals S’s density at a vertical section, y, times the horizontal acceleration of that section. The equation models an initial-boundary-value problem, namely, given the boundary conditions (fixed endpoints) and initial conditions (displacement x and velocity ∂x/∂t for each y at initial time t0), the equation tracks the moment-by-moment temporal evolution of the waves that will move through S under W’s weight. But that is a lot to compute, and finite element approximations of the evolution do not efficiently or reliably deliver accurate results.
Euler avoided these problems by reasoning that if we want to know only the critical load at which S will buckle, we needn’t track the temporal evolution of the wave produced by the loading; we can wait until the energy initially produced is drained by friction. At that point, S is in equilibrium—the action due to elasticity will balance the action due to gravity—and the problem can be modelled as a pure boundary-value problem using a reduced Euler equation that gets rid of time: EI·∂²x/∂y² + Wx = 0. The elimination of time greatly simplifies computation. The reduced equation accepts solutions that can be accurately approximated by an iterative shooting method that homes in on a value for the critical load, Wc. The original and reduced Euler equations respond to different problems and have different formal characteristics that inform us how smooth their solutions are and what type of approximating computational approach (for example, marching or shooting methods) best fits the problem.
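To give a concrete sense of the shooting strategy just described, here is a minimal sketch of my own (the values EI = 1 and L = 1, the load bracket, and the function names are illustrative, not taken from the book): a trial load W is ‘shot’ through the reduced equation from one pinned end, and W is adjusted until the deflection also vanishes at the other end, homing in on the classical value Wc = π²EI/L².

# Sketch of a shooting method for the reduced Euler equation
# EI*x''(y) + W*x(y) = 0 with pinned ends x(0) = x(L) = 0.
# Parameter values are illustrative only.

import math

EI = 1.0   # flexural stiffness (illustrative units)
L = 1.0    # strut length

def endpoint_deflection(W, steps=2000):
    """Integrate x'' = -(W/EI)*x from y = 0 with x(0) = 0, x'(0) = 1 (RK4)
    and return the deflection x(L) at the far end."""
    h = L / steps
    x, v = 0.0, 1.0   # the initial slope is arbitrary; only the sign of x(L) matters

    def accel(x_):
        return -(W / EI) * x_

    for _ in range(steps):
        k1x, k1v = v, accel(x)
        k2x, k2v = v + 0.5 * h * k1v, accel(x + 0.5 * h * k1x)
        k3x, k3v = v + 0.5 * h * k2v, accel(x + 0.5 * h * k2x)
        k4x, k4v = v + h * k3v, accel(x + h * k3x)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x

# Bisect on the sign of x(L): for W below the first critical load the trial
# deflection at y = L stays positive; just above it, the deflection goes negative.
W_lo, W_hi = 1.0, 20.0   # bracket chosen so that x(L) changes sign between them
for _ in range(60):
    W_mid = 0.5 * (W_lo + W_hi)
    if endpoint_deflection(W_mid) > 0:
        W_lo = W_mid
    else:
        W_hi = W_mid

print(f"shooting estimate of Wc: {W_mid:.6f}")
print(f"analytic pi^2*EI/L^2:    {math.pi ** 2 * EI / L ** 2:.6f}")

The bisection here is only one way of ‘homing in’; the point is that the equilibrium formulation lets a cheap one-dimensional search replace a full temporal simulation.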
These differences between evolutionary approaches (that track moment-by-moment actions in S) and equilibrium approaches (that describe S’s stationary state after a suitable relaxation period) mark what Wilson calls different ‘explanatory architectures’: distinctive patterns of reasoning that are well suited to exploiting descriptive opportunities that nature makes available to us. Given our computational position in nature, we have available to us only information at certain scales (spatial, temporal, energetic, and so on). Fortunately, nature sometimes accommodates our practical and explanatory quests by providing dominant patterns of behaviour in the systems of interest to us, and mathematics has developed sophisticated ways of organizing information about those systems around the dominant behaviours of their sub-systems. Though S doesn’t give us easily computable information about the rapid actions that result from the loading, for example, it does give us computable information about its dominant buckling behaviour at macroscopic scales.
The reason philosophers of science should care about such seemingly abstruse cases is that they reveal how distorted the standard conceptions of laws, conditions, explanations, theories, and so on are. By becoming aware of the great diversity of modelling traditions and inferential practices within science, Wilson hopes, we will be less prone to conceptual confusion and the deceptive attractions of apriorism. For example, Wilson calls an ‘aspirational hope’ the Theory T assumption of simple logical unity (that we can, in principle, deduce a unified description of nature from its lawful behaviour at some low level); we are currently unable to logically unify the various explanatory architectures we use and have no reason to expect future Theory T unity. Part of the difficulty is that Theory T views are too coarse-grained. Differential equations can be overarching framework principles without concrete detail (like Newton’s laws) or concrete specifications of these principles that govern a particular kind of system (like Euler’s equations for S). Both have differential form. However, only the former are candidates for universal Theory T laws. The latter are not, since their form depends on constitutive features of the material modelled (like the E and I parameters in Euler’s equations) and on boundary conditions (the fact that S is pinned). These latter particularities lead to the diversity of explanatory architectures that resist Theory T unity. Explanatory architectures are designed to fit dominant behaviour at a particular scale of analysis, but dominant behaviour can change across scales so that different explanatory architectures are often inconsistent. For example, when W initially compresses S, it will cause a pulse to move down S only if W wiggles (which can be captured by high-speed photography). But the boundary conditions require that S’s endpoints be pinned for all times of interest and thus do not allow W to wiggle. We are led to a contradiction: W wiggles and does not wiggle.
Typically, inconsistencies are avoided by appeal to scale. In this case, the time scale on which W wiggles is so much smaller than that at which S settles into equilibrium that it can be ignored for most macroscopic applications. However, if it becomes important to investigate these finer internal details and their effects on S’s large-scale behaviour and vice versa, we must develop a model that represents dominant behaviour at a finer scale of analysis and figure out ways to represent exchange of information between the two scale levels. A significant portion of the book—one of its most novel contributions—examines multi-scale modelling techniques that link together sub-models, enabling scientists to understand such complex multi-scale structures.
The philosophy in these essays has as much to do with language as with science. Wilson agrees that language learnability requires that humans possess systematic capacities. But, he argues, this ought not lead us to overestimate our semantic powers by thinking that by a certain age we have secured the kind of simple, global referential meanings for our lexical items (‘temperature’ refers to mean molecular kinetic energy per degree of freedom) that could support a Tarskian model of our language or a soundness proof for the inferences expressed therein. This amounts to yet another grand a priori pretension. On a more plausible approach, one that duly credits our humble computational position, the semantic classifications we initially learn fit the descriptive opportunities that nature provides and that contribute to practical success in our local environment; then this initial semantic knowledge gets extended, adapted, and re-purposed as need arises in novel contexts. This results in a picture of language–world alignments that—unlike the simple, direct, and global correspondences often posited in philosophical semantics—are complicated, indirect, local, and context-dependent.
The support for this approach comes from examples in applied mathematics. Consider, for example, the adaptation of wave language and reasoning patterns, which are ubiquitous in physics. Superficially, pendula, spring-block oscillators, acoustic systems, and electromagnetic motions have little in common and don’t look much like waves on a pond’s surface, yet all can be modelled under appropriate conditions as simple harmonic oscillators that are describable by wave language. Provided their motions are periodic and bounded, these different oscillating systems can be understood as the projection of uniform circular motion onto a diameter; the circumference of the circle can be ‘unwound’ into a wave that mimics the back-and-forth motions of the oscillating system; this in turn allows them to be modelled by means of sine and cosine functions that describe how their total energy cycles between pure potential and pure kinetic states.
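The pattern can be made vivid with a toy sketch of my own (the mass, spring constant, and amplitude are hypothetical): a spring-block oscillator read off as the projection of uniform circular motion onto a diameter, its total energy cycling between kinetic and potential forms while their sum stays fixed.

# Sketch: a spring-block oscillator as the projection of uniform circular
# motion onto a diameter. Parameter values are hypothetical.

import math

m, k, A = 2.0, 8.0, 0.5          # mass, spring constant, amplitude
omega = math.sqrt(k / m)         # angular frequency of the matching circular motion

for i in range(9):               # sample one full period
    t = i * (2 * math.pi / omega) / 8
    angle = omega * t            # position of the point moving uniformly on the circle
    x = A * math.cos(angle)      # its projection onto the diameter: the block's displacement
    v = -A * omega * math.sin(angle)
    kinetic = 0.5 * m * v ** 2
    potential = 0.5 * k * x ** 2
    print(f"t={t:5.2f}  x={x:+.3f}  KE={kinetic:.3f}  PE={potential:.3f}  "
          f"total={kinetic + potential:.3f}")   # total stays fixed at 0.5*k*A^2

Nothing in the pendulum or circuit literally goes round in a circle; the circle and the unwound sine curve are mathematical detours that let the same descriptive machinery be reused across physically dissimilar systems.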
Moreover, reference (and truth) are context dependent and local. In the context of an evolutionary investigation governed by Euler’s original equation, ‘cause’ refers to a state in a process of mass–energy evolution, in which previous states lead to later states in endogenous, natural time. In the context of an equilibrium investigation governed by Euler’s reduced equation, where the natural endogenous time needed for S to relax into equilibrium has been suppressed, ‘cause’ refers to a state or event in a sequence of processes that occur in exogenous, manipulation time, in which an experimenter gradually increases W in increments until it reaches Wc. Because ‘time’ and ‘cause’ shift meaning across the two contexts, ‘W causes S to buckle’ has very different truth conditions in the two contexts. Many disputes about causation can be seen to rest on failure to recognize this kind of context dependency. More generally, because it evolves by exploiting descriptive opportunities, language aligns with the world locally but not globally. Taking seriously our computational place in nature suggests that language–world alignments are best construed in patchwork, localized fashion, where a given patch is subordinated to an investigative mode tied to exploitable dominant behaviour at a restricted scale and the patches are linked together by approximating and homogenizing techniques.
Wilson advocates a radically anti-apriorist stance. Quine was right, he argues: we sail on Neurath’s boat where nothing is epistemically sacrosanct, including theses about logic or semantics. But Quine did not go far enough. He relied on Theory T principles uncritically inherited from philosophical tradition and not supported by examination of science. If scientific language aligns with the world in a patchwork manner, then a unified, regimented theory is a methodological pipe-dream, as are various Quinean methodological dicta, like his criterion of ontological commitment, that depend on it. This anti-apriorism has implications for contemporary analytic metaphysics. For example, contrary to what possible worlds theorists assume, we have no grasp of global possibility. Instead, our everyday grasp of modals and counterfactuals is built upon understanding how local sets of possibilities are associated with the manipulation of a limited set of controls; from these we extend out in patchwork fashion. The support for this draws on modal reasoning in Lagrangian mechanics, where counterfactuals like ‘if we were to infinitesimally wiggle system M in manner X, then M would respond in manner Y’ are used to understand the conceptual core of the principle of virtual work. In such explanatory architectures, only a highly constrained set of local possibilities is initially considered, but this set is later extended in complicated ways. Moreover, the counterfactuals employed suggest an analysis very different from the standard ones. In the standard analyses, the truth conditions of counterfactuals are given by appeal to possible-world similarity, where the laws obtaining at a world carry a lot of weight in similarity determinations; in Lagrangian settings, they are tied to the manipulation of controls that in turn underpin our understanding of physical principles. In the standard analyses, laws are prior to counterfactuals; in Lagrangian settings, counterfactuals are prior to principles.
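A toy sketch of my own (not an example from the book; the lever, weights, and distances are hypothetical) conveys the flavour of this counterfactual reasoning: for a rigid lever pivoted at the origin, equilibrium is diagnosed by asking whether the applied forces would do any net work if we were to wiggle the lever infinitesimally.

# Toy illustration of virtual-work reasoning for a lever pivoted at the origin,
# with weights W1 and W2 hung at distances L1 and L2 on opposite sides.
# The setup and numbers are hypothetical, chosen only to display the pattern.

def virtual_work(W1, L1, W2, L2, d_theta=1e-6):
    """Work gravity would do if the lever were rotated by a tiny angle d_theta:
    one weight drops by L1*d_theta while the other rises by L2*d_theta."""
    return W1 * L1 * d_theta - W2 * L2 * d_theta

# Zero net virtual work for every small wiggle marks equilibrium
# (here it recovers the law of the lever, W1*L1 = W2*L2).
print(virtual_work(W1=3.0, L1=2.0, W2=2.0, L2=3.0))   # ~0.0: balanced
print(virtual_work(W1=3.0, L1=2.0, W2=1.0, L2=3.0))   # > 0: the lever would rotate

Only a narrow family of constrained wiggles is ever considered, yet the counterfactual verdicts they deliver are what underwrite the equilibrium principle, rather than the other way around.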
Unsurprisingly, given the prominence of applied mathematics in the book, Wilson’s approach also challenges contemporary philosophical debates about the applicability and indispensability of mathematics. Quine became a mathematical Platonist on the grounds that mathematics is indispensable to science; others argued that Platonism is difficult to reconcile with naturalism; and some have responded that either mathematics isn’t indispensable to science or, if it is, we needn’t take our uses of mathematics to involve serious commitments (mathematics is like fiction). Wilson argues that these debates rest on faulty understandings of science, mathematics, and the interface between symbols and the world they represent. Mathematics is indispensable to a proper understanding of that interface. It is required not only to formulate our models (by means of differential equations and so on) and to align our computations with their worldly targets (by assisting in physics avoidance, multi-scale modelling, and the like), but also to understand the scope and limits of model–target alignments and to adjust and correct models to bring them into better (more reliable and efficient) alignment with the world. Set theory should be regarded not as ontologically mysterious but as the ‘science of strategy’, the ultimate court of appeal whose authority rests on its historical success in illuminating inferential technique.
I close with some final remarks. Given the richness of detail and complexity of argumentation in this book, it is likely that readers will not agree with everything Wilson argues. Few readers, however, will fail to feel the pressure that his sustained investigation of cases puts on grand unified views of science and conceptual content. Real science and concepts develop in a far messier manner than such views allow. Of course, it is no news that real science is messy. But what is new in these essays is their celebration of the ingenious strategies science deploys to overcome its limitations by exploiting descriptive opportunities. The result is a measured, qualified optimism that occupies a sensible middle path between will-o’-the-wisp optimism and gloomy pessimism concerning science’s enhancement of our representational and inferential capacities. This qualified scientific optimism and the accompanying sustained critique of apriorism are best seen as an expression of pragmatism, the methodological stance that the worth of theories and concepts is to be judged by their practical consequences. Pragmatism is often criticized, with some justice, for being a hazy view. What is most successful about Wilson’s pragmatic approach in these essays, in my view, is the tightly constrained manner in which he understands practical consequences in science: they must typically issue in empirically measurable data (numbers). Wilson’s pragmatism is hard-nosed and, for that reason, it should be taken seriously.
Michael Liston
University of Wisconsin-Milwaukee
mnliston@uwm.edu
References
Wilson, M. [2006]: Wandering Significance: An Essay on Conceptual Behaviour, Oxford: Clarendon Press.