The Hard Problem of Physics (or: Why Sean Carroll is Wrong on Consciousness)

Physicist Sean Carroll has a new(ish) book out, The Big Picture, in which he basically lays down his answers to the Big Questions (the very thing this blog is interested in). I haven’t read it, but I’m a huge fan of Sean, and from what he says in his promotional material I’m sure the book will be great. I intend to buy it as soon as I have a little spare time to actually read it. Sean talks about how we already know the physics of everyday life, and much of the big-picture stuff too, and what it means for morality (there are no objective moral truths; we have to make up our own morality) and so on. It really does look great.

But of course I’m not gonna write about any of that. I’m gonna focus on the negative, like the mean old curmudgeon that I am. Specifically, I want to say something about his view on consciousness, at least as it is presented in his teaser blog post “The Big Picture Part Five: Thinking“. In particular, I want to raise the Hard Problem of Physics.

Sean very rightly mentions the Hard Problem of Consciousness. This is a term invented by philosopher David Chalmers to describe the difficulty of explaining how it is that certain configurations of matter (thinking brains) have the subjective feelings, or conscious experiences, that they do. We can, through physics and biology and so on, establish and explain how a system behaves and thus explain in detail (in theory) the information processing that goes on in the brain. But none of that tells us how the system feels. There is nothing in physics, nothing in science, that lets us jump from what something is (how a physical system is configured and how it changes dynamically) to how something feels (what it is like to be this system, how it feels from the inside). This can be called the “is-feels problem” (in analogy to the “is-ought problem“). David Chalmers called it the Hard Problem.

In response to Chalmers, computer scientist Scott Aaronson recently coined the term “The Pretty Hard Problem”. The idea is roughly as follows: the Hard Problem implies that there are “psycho-physical laws” – laws that would tell us what a certain physical system (that’s the “-physical” part) would feel (that’s the “psycho-” part). The Hard Problem is all about understanding why these laws hold – why it is that such-and-such systems feel like that. The “Pretty Hard Problem”, in contrast, is the problem of establishing what these psycho-physical laws are. This is a very hard question, but it’s ultimately a scientific one – one proposes a set of laws, tests them empirically (in psychological and neural experiments), and judges them on the basis of their empirical success, simplicity, and so on.

Note the difference from the Hard Problem – it’s going to be very, very hard to establish the psycho-physical laws, but at least that’s a scientific question. It’s going to be impossible to establish why the fundamental psycho-physical laws are what they are, however, including why they exist at all. That’s not a scientific question, so we don’t really have a handle on how to establish an answer to it.

There is another Hard Problem, the Hard Problem of Physics: why are the fundamental laws of physics what they are (including why there are such laws at all)? Again, it’s not a scientific question, so we don’t really have a handle on how to answer it. The best answer I’ve come across so far is Max Tegmark’s, which is, roughly, that everything that is possible exists – a position he terms the “Mathematical Universe” hypothesis. But even this feels unsatisfactory, as I’m left scratching my head as to why this should be the case (if indeed it is the case; it’s far from clear that it is).

Back to Sean Carroll. From the snippets he offered on his blog, Sean seems to maintain that the Hard Problem “will just gradually fade away as we understand more and more about how the brain actually does work”, and that “the statement ‘I have a feeling…’ is simply a way of talking about those signals appearing in your brain. There is one way of talking that speaks a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences.”

I disagree. Sort of. I think that understanding more and more about how the brain works will eventually lead us to solve the Pretty Hard Problem of consciousness – to experimentally establish a simple and successful “theory of consciousness”, based on psycho-physical laws. These laws will appeal to aggregate entities, such as attractors of the dynamical system or the irreducible causal information within the system, and will associate these with consciousness. These aggregate quantities, in this sense, will be consciousness, just as “temperature” is the average kinetic energy of matter. And this will allow us to provide a detailed mechanical explanation for mental causation. We could, for example, show that “anger caused him to lash out” in that we could very specifically identify the aggregate that feels like anger in the person’s brain and show how its activation causally led to the person lashing out. (In theory. This is a hopeful, far-future scenario.)

Nevertheless, even this future successful theory will not solve the Hard Problem of Consciousness. It will not reveal why these psycho-physical laws hold, or why any such laws hold in the first place. It will not solve the central mystery of consciousness – why and how it is that this piece of matter, this mushy agglomerate of fat and neural tissue, feels. It won’t explain to us, as Sean puts it, “how can collections of such cells or particles ever be said to have an experience of ‘what it is like’ to feel something?” All it will do is describe how consciousness works; it won’t provide an ultimate explanation for why it works this way.

Now, this is supposed to be a big deal, I guess. I’m not too worried about it. Just as we will never be able to solve the Hard Problem of Physics, we won’t be able to solve the Hard Problem of Consciousness. They are too hard. We can, through science, describe the fundamental laws of nature, including both the laws of physics and the psycho-physical laws. But we cannot, scientifically, establish why these laws hold. And I doubt very much we can establish why they hold philosophically (philosophers have been trying for millennia, and we aren’t any smarter or more knowledgeable about this than they were). The best I can hope for is that the fundamental laws will turn out to be so simple that we’d be led to think they kinda make sense – like Max Tegmark’s Mathematical Universe hypothesis.

On Scholastic Metaphysics: Me Against Aristotle


Aristotle is my greatest philosophical hero. We’re talking about the guy that took Plato’s haphazard, mystic philosophy and turned it into a down-to-earth, rigorous, systematic investigation of all aspects of reality. Aristotle is the father of nearly every field of science, and every branch of philosophy. In the few cases where I think Aristotle was right (e.g. the Correspondence Theory of Truth), I wear my Aristotelianism with pride. So you can see why I’d be sympathetic to claims that Aristotle was fundamentally right, that Modern philosophy was wrong to reject virtually everything Aristotle said.

It was thus with great hope that I purchased Edward Feser’s magnum opus, Scholastic Metaphysics. This is the (small) book that’s supposed to show all those contemporary, analytic philosophers that they’re wrong and Aristotle was right. This blog-post series will be my reading diary of this book, my attempt to grapple with Feser’s arguments. Given my education in analytic philosophy, I’m opposed to his thesis – but I approach it not with fear that he might be right, but with hope that he is! I would like nothing more than to see Aristotle vindicated.

Now, I disagree with Feser about, well, just about everything. But I do hope he is right, about the core of Aristotelian thought at least. With this in mind – let us read Scholastic Metaphysics!

  1. Feser vs. Scientism

Strong Emergence = Holistic Physics

In January (2015), Marko Vojinovic wrote a two-part attack on reductionism over at Scientia Salon (Part I, Part II). Based on his reasoning, I’d like to offer a new definition of strong emergence as “holistic physics”.  (Well, perhaps not that new; regardless…)

The idea is that any full description of the underlying-level dynamics must either refer to the emergent concept (strong emergence) or refer to concepts that it reduces to (weak emergence).

Let’s consider a physical system. It is described at some level of description by a certain physical theory, let’s call it the effective theory. There is also a more detailed description, let’s call it the underlying theory, so that when the details of these underlying dynamics are summarized in a certain manner you get the effective theory. For example, the behavior of gas in a canister might be described by the ideal gas law (the effective theory), while this equation in turn can be derived from the equations of Newtonian mechanics (the underlying theory) that apply to each molecule.

For now, let’s assume both the effective and the underlying theories work – that they are not in error. We’ll address errors in a moment.

If the underlying theory is mechanical, in the sense that it only discusses small parts interacting with other small parts (such as molecules interacting with other molecules), then we can say we have weak emergence: the “higher-level” behavior of the effective theory is reducible to the “lower-level” behavior of the parts. For example, we can define “pressure” as a concept in the effective theory – a certain statistical property of the velocities and masses of the gas molecules. If the movement of the molecules can be described by an underlying mechanical theory, one that only takes into account the interactions of molecules with each other, then we can calculate everything in the underlying theory and then “summarize” the result in the right way to see what it means in terms of “pressure”. In this sense, talk of “pressure” has been reduced to talk of molecules.
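To make the “summarizing” step concrete, here is a toy sketch in Python. All numbers and units are made up for illustration; the only point is that the underlying level contains nothing but per-molecule data (masses and velocities), while “pressure” is computed purely as a statistical summary of that data, via the kinetic-theory relation P = N·m·⟨v²⟩ / (3V):

```python
import random

# Underlying level: each molecule is fully described by its mass and
# velocity vector. (Illustrative, non-physical numbers throughout.)
random.seed(0)
N = 100_000            # number of molecules
m = 1.0                # molecule mass (arbitrary units)
V = 1.0                # canister volume (arbitrary units)
velocities = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
              for _ in range(N)]

# Effective level: "pressure" is nowhere in the list above. It is defined
# by us as a summary of the microscopic state -- kinetic theory gives
# P = N * m * <v^2> / (3 * V).
mean_sq_speed = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / N
pressure = N * m * mean_sq_speed / (3 * V)

print(pressure)
```

Nothing in the molecule-level description mentions pressure; the concept appears only in the summarizing formula, which is exactly the sense in which it is weakly emergent.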

If, however, the underlying theory is holistic in the sense that the small parts it talks about also interact with parts that are summaries of the small parts, i.e. with the concepts that the effective theory talks about, then we can say that we have strong emergence. For example, if the interaction of molecules in the underlying theory also refers to pressure (instead of just to other molecules), then pressure acts as a strongly emergent property – you cannot reduce talk about it to “lower levels”, since the lower level already includes talking about it.

In the Real World

All indications are that physics is multiply mechanical – it is mechanical at various levels, not just the fundamental one. In other words, there is only weak emergence, but there is weak emergence at many levels: nuclei emerge from quarks; atoms from nuclei and electrons; solids from atoms; and so on. In our investigations, we have never established a holistic scientific theory – a theory that refers to higher-level entities. And we have, on numerous occasions, seen reductive success – we were able to calculate, from underlying theories, aspects of (or even entire) effective theories.

Now, in his original piece Marko argued for strong emergence by shifting the burden of proof to those disputing it. But a mechanical theory is simpler (as he seems to agree) so more likely a priori, and reduction is empirically successful so it’s more likely a posteriori. (Reductionism has shown empirical success by deriving higher-level theories or aspects thereof, and by consistently finding that the underlying theories are mechanical.) Thus, “weak emergence” is well established and the burden of proof is now firmly on those wishing to overthrow this well-established theory.

A Note On Errors

Why ignore errors? Because they are not philosophically interesting. If the effective theory is correct but the underlying one is wrong, then all we have is a mistaken underlying theory. If the small parts it talks about do exist, a correct description of their dynamics can always be given, and that constitutes the correct underlying theory (which, however, need not be mechanical!). If the small parts it talks of don’t actually exist, then either some others do exist and we’ll settle for them, or else no small parts exist, in which case we can just call this “effective” theory the fundamental theory – a theory that has no underlying theory.

If the underlying theory is correct but the effective theory is wrong, then we have just miscalculated what the sums over the underlying theory say. It’s also possible we wrongly identified the summaries with concepts taken from other domains (e.g. that “pressure” as defined statistically is not what our pressure-gauge measures), but again this is not a very interesting question, as all we need to do is define these new concepts properly in order to see what the underlying theory says about them.

And finally, if both underlying theory and effective theory are wrong then we just have a mess of errors from which nothing much can be gleaned.

In all cases, the errors have nothing to do with emergence. Emergence relates to how things do behave, not to how things don’t behave.

A Note on the Original Argument

In Part I, Marko attacked reductionism with three examples. First, he noted that the Standard Model of cosmology cannot possibly be reduced to the Standard Model of particle physics, because the latter does not include any dark matter while the former does. While correct, this simply indicates that one model is mistaken: the reason the Standard Model of particle physics does not yield the Standard Model of cosmology is that the Standard Model of particle physics is wrong! That is not an indication that the actual dynamics of the particles are determined by higher-level concepts, such as (for example) whether or not they are near a sun. One cannot conclude from an error in a model that the correct model will show strong emergence.

As his second example, Marko noted that the Standard Model of elementary particles with massless neutrinos fails to correspond to the standard model of the sun. While true, this merely indicates a failure of the Standard Model, which has since been corrected (neutrinos apparently have mass!). It has nothing to do with emergence, which is all about correct theories. The failure of the zero-mass Standard Model did indeed indicate that the effective sun-model did not reduce to it, but it did so in a philosophically boring way – it said nothing about whether the sun model reduces to the corrected Standard Model, or, more generally, about whether the sun model reduces to any underlying theory.

His third example is more interesting: he complains that one cannot explain the direction of time by appeal to the dynamical laws alone; one needs to make another assumption, one setting the initial conditions. That’s not an issue of errors, at least. But again, his true statement has no implication for emergence. The initial conditions are set at the underlying level, at the level of each and every particle. This underlying-level state then leads to a certain phenomenon at the higher-level description, which we call the directionality of time (e.g. the increase of entropy with time). But that’s just standard, weak, emergence. There is no indication that the dynamics of the particles refers to the arrow of time – the dynamics is always mechanistic, referring only to the particle-level description. Thus, not only is there no strong emergence here, there is an explicit case of weak emergence. Just as the sun (supposedly) emerges from a particular initial condition (a stellar gas cloud) in the corrected particle Standard Model, and thus the sun-model is reduced to the Standard Model, so too does the arrow of time demonstrably emerge from a particular initial condition, and thus the arrow of time actually is reduced to the dynamical laws. It’s one example, out of many, of successful reduction.

In Part II, Marko maintains that

“given two sets of axioms, describing the effective and the [underlying] theory, one cannot simply claim that the effective theory a priori must be reducible to the [underlying] theory.”

I think Marko here mistakes the meta-scientific theory that says “in our world, there is only weak emergence”, which follows from all of our science as well as from parsimony, for the logical theory that says “reduction must hold as a metaphysical principle”. I agree one cannot simply claim the effective theory must be reducible, but one can claim a priori that it is more likely that there is one underlying mechanistic level (i.e. a “fundamental theory”) and that all higher-level effects emerge from it, and one can claim a posteriori that weak emergence is overwhelmingly scientifically established.

Marko also raises a few other arguments in Part II, based on Gödel’s theorem. He notes that there will always be true theorems that one cannot prove from a given (underlying) theory (this stems from Gödel’s theorem). While true, this again has no bearing on emergence. For one thing, we’re discussing what’s true here, not what is finitely provable. Secondly, there is no reason to expect that the unprovable theorems will lead to holistic behavior of the particles described by the underlying theory – i.e. there is no reason to connect incompleteness to holism.

As his final argument, he notes that even if we accept an ultimate “theory of everything”, there would be incalculable results from it. Again true, and again not relevant. In his example, he imagines there are six “gods” determined by this theory, and that their actions are incalculable. But if the “theory of everything” is a fundamental mechanistic theory, then the actions of these gods – and hence all of what occurs – are weakly emergent, even though they cannot be calculated. Whereas if the “theory of everything” refers to the overall brain-states of these gods (say), rather than just to the fundamental particles and so on, then the gods are strongly emergent phenomena. Whether there is weak or strong emergence has nothing to do with the incalculable nature of these “gods”.

Reduction in Two Easy Steps

Over at his Scientia Salon, philosopher Massimo Pigliucci wrote a piece on the disunity of science, discussing favorably some arguments against a unified scientific view (favoring instead a fragmented worldview, where each domain is covered by its own theory). The discussion really revolves around reduction – are high-level domains, such as (say) economics, reducible to lower-level domains, such as (say) psychology? Ultimately, the question is whether fundamental physics is the “general science” that underlies everything and presents a single unified nature, with all other sciences (including other branches of physics) being just “special sciences” interested in sub-domains of this general science. All domains and disciplines therefore reduce to physics. This is the Unity of Science view that Pigliucci seems opposed to.

I’m on the side of reduction. What are the arguments against it? Well, first off let’s clarify that no one is disputing “that all things in the universe are made of the same substance [e.g. quarks]” and that “moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc.” Everyone agrees that this type of reduction, ontological reduction, is true. The arguments instead are aimed at theoretical reduction, which is roughly the ability to reduce high-level concepts and laws to lower-level ones. Putting arguments from authority to the side, Pigliucci raises a few arguments against theoretical reduction:

(1) The Inductive Argument Against Reduction: “the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes [against reduction]”

However, this argument is based on the false premise that if reduction is true, then reductive foundations for a science would be easier to find than new high-level sciences. This premise simply does not follow from reduction. Instead, reduction entails that

(A) As science progresses, more and more examples of successful use of reduction will be developed. This prediction is borne out by things like the calculation of the proton’s mass from fundamental particle physics, the identification of temperature with molecules’ kinetic energy, the identification of (some) chemical bonds with quantum electron-sharing, and so on.

(B) As science progresses, no contradiction will be found between the predictions of the lower-level theories and the higher-level ones. For example, it won’t be found that the proton should weigh X according to fundamental physics yet weighs Y in nuclear physics; it won’t be found that a reaction should proceed at a certain rate according to physics yet that it proceeds in a different way according to chemistry. Clearly, the success of this prediction is manifest.

Thus the inductive argument against reduction is very wrong-headed, misunderstanding what reduction predicts and ignoring the real induction in its favor.

(2) What would reduction even look like?

Pigliucci further maintains that we reductivists are bluffing; we don’t really even know what reduction could possibly look like. “if one were to call up the epistemic bluff the physicists would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics.”

This is again false – we know in general terms how this reduction takes place (chemistry is the physics of how atoms bond into molecules and move; biology is the chemistry of how numerous bio-molecules react; psychology is the biology of how organisms feel and think; and so on). The only caveat here is that consciousness is somewhat problematic; the mind-body issue aside, however, the picture of how reduction proceeds is clear enough (even if vague and not at all actually achieved, of course) to make this objection moot.

(3) Cartwright’s disjointed theories

Supposing that all theories are only approximately-true phenomenological descriptions (something most scientists would agree to), Pigliucci somehow concludes that therefore “science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.”

But the fact that some theories apply only in some cases does not imply that they are not part of a bigger theory that applies in all these cases. There is no case being made against reduction here – reduction is perfectly comfortable with having multiple phenomenological theories, as long as they all reduce to the fundamental physics. It is even comfortable with there being an infinite series of “more fundamental physics”, as long as each theory reduces in turn to an even-more fundamental theory.

What is Reduction?

I was prompted to write this post because comments over at Scientia Salon are limited in length and number. The thing I wanted to say there was what reduction is. Reduction, as meant by those who actually believe in it, is something like “Physics + Weak Emergence”.

Reduction = Physics + Weak Emergence

By “Physics” I mean that what ultimately exists is described by fundamental physics – things like “atoms and void”, “quarks and leptons”, and so on.

By “Weak Emergence” I mean that high-level concepts are arbitrarily defined, and then used to analyze the lower-level descriptions. When this is done, it is revealed that the high-level phenomena that the high-level concepts describe actually exist. This is rather abstract, so consider a simple example: temperature in a gas canister. The gas molecules can be fully described at the low, microscopic level by things like the molecules’ position and velocity. “Temperature” is then defined to be their average kinetic energy. Doing the math, one can show from the microscopic state that the gas indeed has a certain temperature.
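The definition-then-derivation move can be sketched in a few lines of Python. Everything here is a toy: the numbers and units are arbitrary, and “temperature” is simply defined as the average kinetic energy per molecule (in units where Boltzmann’s constant drops out), exactly as in the paragraph above:

```python
import random

# Underlying (microscopic) description: nothing but per-molecule masses
# and velocities. All numbers are illustrative, not physical.
random.seed(1)
N = 50_000
m = 1.0  # molecule mass (arbitrary units)
velocities = [(random.gauss(0, 2), random.gauss(0, 2), random.gauss(0, 2))
              for _ in range(N)]

# High-level concept, defined by us: "temperature" as the average kinetic
# energy per molecule. The microscopic state contains no such quantity;
# it only appears once we choose this summary.
def temperature(vels, mass):
    return sum(0.5 * mass * (vx*vx + vy*vy + vz*vz)
               for vx, vy, vz in vels) / len(vels)

T = temperature(velocities, m)
print(T)
```

Doing the math on the microscopic state yields a definite temperature, even though “temperature” never appears in the microscopic description itself – which is the sense in which the concept is our convention while the phenomenon is real.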

In this way the temperature is “reduced” to the lower-level concepts like the molecules’ speed and mass. But the concept of “temperature” was defined by us, it isn’t to be found in the microscopic physics or state!

For this reason, Democritus said “Sweet exists by convention, bitter by convention, colour by convention; atoms and Void [alone] exist in reality”. The idea isn’t that temperature doesn’t exist in reality, however, but rather that we choose to consider nature in terms of temperature by arbitrary choice, by “convention”.

Supernatural Minds

In a recent post, the Christian apologist and philosopher Victor Reppert presents the view that things are ‘supernatural’ if there are mental properties “on the ground floor” of existence, at the most basic level of existence and explanation. This view was linked to sympathetically in a post by the naturalist Robert Oerter [1], and has also been championed by the naturalist philosopher and historian Richard Carrier (e.g. here), among others. Let’s call this the “Fundamental Materialism” thesis – it posits that naturalism maintains that at bottom, there is only inanimate matter.

I don’t understand why naturalists – those who reject the supernatural – take this position. I think it’s mistaken on several levels.

Perhaps most importantly, the fundamental materialism thesis doesn’t understand what naturalism is. If there is a slogan for naturalism, it is “everything is the same”. When lightning is understood to be just another instance of electrical discharge, just like numerous other phenomena all around us – then it becomes natural. When lightning is unlike other things, unlike the normal course of nature – when, for example, it is a bolt thrown by an angry Zeus – it is then that lightning is supernatural.

Given this understanding of ‘natural’ – what is the place of the mental in the universe? There are two options. One is to extend the uniformity that is at the core of naturalism to the mental domain, and maintain that “everything is the same” also in the sense that everything is mental. On this view every thing – every fundamental particle – has some mental content, such as some consciousness; although not every complex thing has a full mind, with unity, will, purpose, or so on. This position is known as Panpsychism.

The other option is to maintain that “everything is the same” in the sense that mental properties emerge from certain configurations of regular, non-mental matter, much as ‘pressure’ only emerges when there are lots of atoms impinging on a surface. This view is known as emergence.

Personally, I think emergence makes no sense (due to what David Chalmers called the Hard Problem of Consciousness, see e.g. here), so I’m a panpsychist [2]. But the important point is that both views are forms of naturalism! In both cases reality is uniform. There is no special pleading, no ‘thinking matter’ set apart from ‘extended matter’ (as in Cartesian dualism), no ‘souls’ set apart from ‘matter’, no violations of the laws of physics – there is just nature. So naturalism may include mental stuff at the bottom (panpsychism), or not (emergence) – it doesn’t matter.

Now Victor Reppert raises three conditions on what would constitute a naturalistic world-view, and I think the first one exemplifies a second major point of confusion: the erroneous belief that ascribing mental properties to things means they can step outside the laws of physics. He writes

First, the “basic level” must be mechanistic, and by that I mean that it is free of purpose, free of intentionality, free of normativity, and free of subjectivity. It is not implied here that a naturalistic world must be deterministic. However, whatever is not deterministic in such a world is brute chance and nothing more.

Notice the implicit assumption here, that mental causation is incompatible with mechanistic causation. If an agent acts with purpose, then his actions are not caused by (say) quantum mechanics. Reppert limits the condition to the basic level only, but the point stands – an electron cannot have some subjective consciousness (‘subjectivity’) and at the same time follow quantum mechanics.

But this is the very thesis that naturalist theories of mind maintain – that an agent acts in a mechanistic way, yet at the same time in a purposeful way. So the naturalist rejects the implicit assumption – the fact that the physical stuff moves in mechanistic ways does not imply it doesn’t have mental content, and having mental content doesn’t imply freedom from physics.

Reppert’s assumption that what is mental is not mechanistic is understandable in a theist – this metaphysical intuition is what allows them to hold at the same time that God is a mind and that God is not physical.

But I cannot understand how naturalists fall into this trap. They too often seem to think that putting mental stuff at the bottom level would invalidate physics, so it’s not in agreement with naturalism. Yet at the same time they maintain that ascribing mental properties to brains (say) doesn’t mean that brains violate the laws of physics. I don’t understand why they can’t see that this second point applies to mental properties at the bottom just as much as it applies to those at the higher, complex levels such as the human brain.

I don’t really have much of a point. I just wanted to say – boo on this dreadful definition of the ‘supernatural’. In addition to being wrong, putting the emphasis on the place of consciousness is just not productive. We are not served by a definition of naturalism that speaks about the place of consciousness in nature, but doesn’t speak about the content of nature! Carrier’s definition that “every mental thing is entirely caused by fundamentally nonmental things” tells us nothing about the reality that these fundamental (supposedly nonmental) things constitute. It tells us nothing about the fact that lightning is just an electrical discharge; about the regularities and sameness in nature, which are what allow us to explore it, understand it, and call it ‘nature’. It’s useless for building a picture of what the world is like, irrespective of the metaphysical status of consciousness in it.

We naturalists need a definition that leads to the fact that the world behaves naturally, which is what the naturalism-as-uniformity definition does. When everything is the same then, implicitly, the place of consciousness in nature is revealed to be not independent of the laws of physics. But the focus is on the general principle of uniformity, that underlies contemporary physics and naturalistic explanations in general and that, ever since David Hume, defines what ‘nature’ and laws of nature are all about.

[1] That’s how I know of Reppert’s post – I follow Oerter’s blog.

[2] Chalmers prefers terms like panprotopsychism to emphasize that the fundamental mental properties are not full minds; I’m not sure that’s helpful. One of the things he’s trying to imply is that they have no phenomenal properties, no ‘subjectivity’, which is not my position – so I prefer to stick to the more conventional panpsychism.

Principle of Motion versus Inertia

This post will be about a recent paper by Feser, “The Medieval Principle of Motion and The Modern Principle of Inertia“. Feser argues that contrary to first appearances, the principle of inertia in Newtonian physics is not in contradiction to the corresponding “principle of motion” in Aristotelian metaphysics. He defines the two principles as follows:

  • The Principle of Motion: “Whatever is in motion is moved by another”.

  • The Principle of Inertia: “Every body continues in its state of rest or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it”.

I note that the conflict between the two lies in what they imply about how other things affect motion. The Newtonian principle of inertia maintains that a body maintains uniform motion when nothing external acts on it, while the Aristotelian principle of motion maintains that a body maintains uniform (or any) motion because something external acts on it. To succeed, Feser will need to show this is not really what they say.

Formally Consistent?

Feser notes that the Newtonian Principle of Inertia only denies any “external forces” are acting on the body during the inertial motion. This leaves the “formal” possibility of having some other “mover”, which is not an external force (or an object exerting an external force), that is “moving” the object along the inertial motion.

The problem here is that this other “mover” is simply denied by reasonable formulations of the Newtonian principle. While Feser’s formulation is that an object continues in a straight line unless it is acted on externally, an equally reasonable formulation is that

  • The Principle of Inertia 2: A body that is not acted on externally [a “free particle”] will continue in uniform motion.

The core of the dispute between the principles is whether change requires external influence. This conflict cannot be brushed aside by careful phrasing to avoid “formal” conflict between the statements.

So Feser can combine the two, but not in a satisfying manner. He can combine them only by invoking “non-physical” causes which do not change velocity, but rather sustain it – I shall call these Sustaining causes. These causes are not invoked by the Newtonian principle and have no place in Newtonian physics. This “formal” success is achieved only by needlessly multiplying entities.

Inertia as Stasis

Feser’s strongest argument proceeds by two main steps. It begins by explicating the principle of motion in Aristotelian thought. Aristotelian “motion” means change, and change is the transition from potentiality to actuality. So the principle of motion “really” says that

  • Principle of Motion 2: “Any potency that is being actualized is being actualized by something else (…that is already actual)”.

Now, the second leg of the argument is that inertia in modern Newtonian physics is seen as a “state”; it would be more accurate to say that motion at a certain relative velocity (i.e. at a certain velocity relative to a particular observer) can be seen as a metaphysical “state” of the object [1]. The Principle of Inertia then indeed says that any change to this relative velocity will occur only by the influence of another object (exerting an external force on it), just as Principle of Motion 2 says. So the two principles are actually compatible.

The problem with this interpretation is that it does not account for the change in the particle’s position. It shows that the changes to the particle’s (relative) velocity correspond to the principle of motion, but not that the changes to the particle’s (relative) location do.
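The point can be put in plain Newtonian terms (this is a standard textbook formulation, not Feser’s own): for a free particle the velocity is a fixed “state”, yet the position changes continuously.

```latex
\mathbf{F} = 0
\;\Rightarrow\;
\frac{d\mathbf{v}}{dt} = 0
\;\Rightarrow\;
\mathbf{v}(t) = \mathbf{v}_0 ,
\qquad
\mathbf{x}(t) = \mathbf{x}_0 + \mathbf{v}_0\, t .
```

The velocity \(\mathbf{v}_0\) is indeed unchanging during inertial motion, but the position \(\mathbf{x}(t)\) changes with time even though no force acts.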

Can Feser successfully argue that change of location during an inertial motion is not an actual change in the Aristotelian sense? I doubt it.

Change of velocity and change of location are of course related. To claim that change in location isn’t really change, Feser would have to argue that location isn’t a real property; that it is velocity and the passage of time that determine it. This is opposed to both Newtonian physics and common sense.

Further, change of location seems by itself to be genuine change. Physically, it is a “real” property – an invariant property, not depending on perspective (much like relative velocity or whether a motion is accelerated or not). Metaphysically, it appears absurd to the highest degree to claim that a sudden change in location is not change, and I can’t see why a smooth and uniform change will be any different.

So it appears to me that this line of argument fails. 

Inertia as Natural Motion

A second argument that could have been strong is the suggestion to see Newtonian inertia as analogous to Aristotelian “natural motion”. Aristotle believed objects naturally move towards their place – stones move down towards the center of the earth, fire moves up towards the heavens, and so on. Feser concedes this belief is false, but notes that this “natural motion” does not require “something extrinsic” – according to Aristotle and Aquinas, at least. If we take the natural motion to be inertial motion rather than motion towards the proper place, then, it appears inertial motion could proceed without “something extrinsic” as well.

It appears to me, however, that this amounts to saying that “natural motion” or “inertial motion” can be actualized without being actualized by “something else”! Feser unfortunately does not explicitly explain how this notion of “natural motion” fits with Principle of Motion 2. He says only that “a body will of itself tend to move towards its natural place by virtue of its form” [emphasis added] – but the object’s form (its essence, or structure) isn’t “something else… that is already actual” [emphasis added], as Principle of Motion 2 requires.

Without such an explicit explanation of how natural motion conforms with Principle of Motion 2, this argument fails.

Inertia as Change

I have argued that inertial change of location cannot be seen as a “state”. The only way left to Feser, then, is to treat it as real change. Here, however, Feser’s arguments become quite convoluted.

Feser first considers attributing the motion to its initiator, but dismisses this option, seemingly because the mover will no longer be actual. He argues that the motion can nevertheless have a metaphysical cause. Such a cause can be internal or external.

Considering an internal cause – an “impetus” imparted to the object upon its acceleration or generation – he raises two problems: a finite object can have only finite qualities, whereas an impetus sustaining eternal motion will apparently be infinite; and a finite impetus will itself change (since, apparently, finite causes that bring about change undergo change), so we’ll need to explain the impetus’ own change and our explanation will not advance us anywhere.

I note that these objections invoke yet further Aristotelian principles. More importantly, the very idea of an internal cause is in direct contradiction to the principle of motion, which says that whatever is in motion is moved by another – the whole question is whether change requires an external influence.

Feser then reaches the most stupefying part of his argument. Considering external causes of real change, Feser argues that since inertial movement is eternal (in potential), what is required to sustain it are “necessary beings” in the sense that they “have no natural tendency toward corruption the way material things do”. He concludes that

“Hence, the only possible cause of inertial motion – again, at least if it is considered to involve real change – would seem to be a necessarily existing intelligent substance or substances …(Unless it is simply God Himself causing it directly as an Unmoved Mover.)”

I’m going to simply ignore the “intelligent” bit there, as that is not borne out by Feser’s argument above (although it might be by yet further Scholastic principles). I note, however, that Feser is reduced to hypothesizing non-physical sustaining causes to maintain the principle of motion. Which is precisely where we started.


I have shown that Feser has to explain the change in location during inertial motion as real change. He cannot explain it as stemming from an internal cause, as (notwithstanding his own arguments) that would violate the principle of motion. He cannot explain it with an external physical cause, as that implies contradicting the principle of inertia. He is reduced to invoking hypothetical “metaphysical” external causes such as God or necessary substances, whose causal effect is not a force. The only such possible cause is a sustaining cause – positing that something needs to cause the object to maintain its current velocity.

In short, Feser fails to combine the two principles in a satisfying manner. Combining the Aristotelian principle of motion with the Newtonian principle of inertia is only possible if one is ready to assume ad hoc redundant invisible sustaining causes.

Not A Metaphysical Principle

Finally, I would argue that Feser’s position is self-defeating. I have already shown that he must commit to additional external causal entities. But Newtonian physics is fully consistent without assuming these other entities. Hence, the principle of motion cannot be a metaphysical principle, since it is possible to conceive of change without it – either by invoking internal causes such as impetus, or by declining to demand a cause to explain inertial motion at all.

Appendix: Some Weak Arguments

Feser raises several other arguments that I think are very weak and don’t fit the above scheme, so I’ll take them up in this section.

Feser argues that while Principle of Motion 2 speaks of actualizing potentials, the Principle of Inertia formally doesn’t, so there isn’t a formal conflict. Well, the conflict is substantive and cannot be wiped away by word games. If the principle of motion is to be put in the language of actuality and potentiality, then the principle of inertia should likewise be put in that language, or else the principle of motion’s implications in Newtonian terms need to be spelled out, for the two to be comparable. You can’t demonstrate there is no conflict by putting the principles in different languages!

Feser also argues that the Newtonian principle of inertia is a principle of physics, describing how the world really acts. The Aristotelian principle of motion, in contrast, is a principle of metaphysics which gives an account of the “intrinsic nature of that which moves”. 

I find this argument rather obtuse. Feser appears to attempt to reconcile the two principles by restricting the principle of inertia to talking about the mathematical description of motion, while maintaining that the principle of motion discusses the causal relations that underlie that description. However, the conflict between them is about whether or not something external acts on the object during inertial motion, so the question is about causal relations in the first place.

Feser also addresses the modern Relativistic idea that the whole world – past, present, and future – exists timelessly together as spacetime. He correctly notes that in Aristotelian terms, this means that the world is entirely actuality, with no real potential and no real change. Feser argues that the principle of motion will be relevant in two ways even in this scenario, but he’s mistaken.

First, he argues that “change really occurs at least within consciousness itself”. But on the contrary, the Parmenidean/Einsteinian view is that change doesn’t “really” occur within consciousness either – rather, there are different states of consciousness at different points along a person’s worldline.

Secondly, he argues that the laws of nature governing spacetime are contingent, and hence “are merely potential until actualized”. But in this Parmenidean view, potential and the passage of time are an illusion. There is no “until”, nor are the laws “contingent” in the sense that they are the actualization of a wider potential. There is simply reality, as it actually exists. There is thus no room for all that is actual to be an actualization of a potential.

This question is, however, beside the point. It does not bear on whether the two principles are compatible.

Similarly, Feser notes that for the Aristotelian what exists are “concrete material substances with certain essences, and talk of “laws of nature” is merely shorthand for the patterns of behavior they tend to exhibit given those essences”. He fails to note that for the Parmenidean the same is true, minus the essences. Talk of “essences” is redundant; the laws of nature suffice to describe the patterns of behavior, so essences can be dismissed as empty metaphysical speculation and dogmatism.

He also argues that for the Thomist, things like fundamental particles require an (external) explanation of what keeps them existing. That may be true, but for the Parmenidean there is no need for such an explanation – what exists exists as spacetime, and explanations are within this spacetime, not about it.

[1] This is not what the physicists would call a “state”. The physical state of an object in modern Newtonian physics consists of both its position and its velocity at a particular time.
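To spell the footnote out (a standard formulation from classical mechanics, not anything Feser commits to): the state of a point particle at time \(t\) is the pair of position and velocity, and the dynamics updates both.

```latex
\text{state at } t:\ \bigl(\mathbf{x}(t),\, \mathbf{v}(t)\bigr),
\qquad
\frac{d\mathbf{x}}{dt} = \mathbf{v},
\qquad
m\,\frac{d\mathbf{v}}{dt} = \mathbf{F}.
```

So even when \(\mathbf{F}=0\), the physical state keeps changing, because \(\mathbf{x}(t)\) keeps changing – which is why calling inertial motion a static “state” is a metaphysical gloss rather than the physicists’ usage.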