On Scholastic Metaphysics: Me Against Aristotle


Aristotle is my greatest philosophical hero. We’re talking about the guy who took Plato’s haphazard, mystic philosophy and turned it into a down-to-earth, rigorous, systematic investigation of all aspects of reality. Aristotle is the father of nearly every field of science and every branch of philosophy. In the few cases where I think Aristotle was right (e.g. the Correspondence Theory of Truth), I wear my Aristotelianism with pride. So you can see why I’d be sympathetic to claims that Aristotle was fundamentally right, that Modern philosophy was wrong to reject virtually everything Aristotle said.

It is thus with great hope that I purchased Edward Feser’s magnum opus, Scholastic Metaphysics. This is the (small) book that’s supposed to show all those contemporary analytic philosophers that they’re wrong and Aristotle was right. This blog-post series will be my reading diary of the book, my attempt to grapple with Feser’s arguments. True to my education in analytic philosophy, I’m opposed to his thesis – but I approach it not with fear that he might be right, but with hope that he is! I would like nothing more than to see Aristotle vindicated.

Now, I disagree with Feser about, well, just about everything. But I do hope he is right, about the core of Aristotelian thought at least. With this in mind – let us read Scholastic Metaphysics!

  1. Feser vs. Scientism

Strong Emergence = Holistic Physics

In January (2015), Marko Vojinovic wrote a two-part attack on reductionism over at Scientia Salon (Part I, Part II). Based on his reasoning, I’d like to offer a new definition of strong emergence as “holistic physics”.  (Well, perhaps not that new; regardless…)

The idea is that any full description of the underlying-level dynamics must either refer to the emergent concept (strong emergence) or refer to concepts that it reduces to (weak emergence).

Let’s consider a physical system. It is described at some level of description by a certain physical theory, let’s call it the effective theory. There is also a more detailed description, let’s call it the underlying theory, so that when the details of these underlying dynamics are summarized in a certain manner you get the effective theory. For example, the behavior of gas in a canister might be described by the ideal gas law (the effective theory), while this equation in turn can be derived from the equations of Newtonian mechanics (the underlying theory) that apply to each molecule.

For now, let’s assume both the effective and underlying theories work – that they are not in error. We’ll address errors in a moment.

If the underlying theory is mechanical in the sense that it only discusses small parts interacting with other small parts (such as molecules interacting with other molecules), then we can say we have weak emergence: the “higher-level” behavior of the effective theory is reducible to the “lower-level” behavior of the parts. For example, we can define “pressure” as a concept in the effective theory – a certain statistical property of the velocities and masses of gas molecules. If the movement of the molecules can be described by an underlying mechanical theory, a theory that only takes into account the interactions of molecules with each other, then we can calculate everything in the underlying theory and then “summarize” it in the right way to see what the result of this calculation means in terms of “pressure”. In this sense, talk of “pressure” has been reduced to talk of molecules.
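To make the gas example concrete, here is a minimal sketch (mine, not Marko’s; all numbers illustrative): the underlying-level description is just a list of molecular velocities, and “pressure” is a summary statistic computed from it via the kinetic-theory relation P = N·m·⟨v²⟩/(3V), which can then be checked against the ideal gas law.

```python
import numpy as np

# Weak emergence in miniature: the effective-level concept "pressure"
# is computed as a statistical summary of the underlying molecular state.
# All names and numerical values here are illustrative.

rng = np.random.default_rng(0)

N = 100_000          # number of molecules
m = 4.65e-26         # molecular mass (kg), roughly nitrogen
V = 1.0e-3           # container volume (m^3)
T = 300.0            # temperature (K), used only to generate velocities
k_B = 1.380649e-23   # Boltzmann constant (J/K)

# Underlying level: each molecule has a velocity, drawn from the
# Maxwell-Boltzmann distribution (each component is Gaussian).
sigma = np.sqrt(k_B * T / m)
v = rng.normal(0.0, sigma, size=(N, 3))

# "Summarizing" the underlying state: kinetic theory defines
#   P = N * m * <v^2> / (3 V)
P_reduced = N * m * np.mean(np.sum(v**2, axis=1)) / (3 * V)

# The effective theory (ideal gas law) predicts P = N * k_B * T / V.
P_ideal = N * k_B * T / V

print(P_reduced, P_ideal)  # agree to within sampling noise
```

Nothing in the underlying description mentions “pressure”; the concept is defined by us as a summary, which is exactly what makes the emergence weak.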

If, however, the underlying theory is holistic in the sense that the small parts it talks about also interact with parts that are summaries of the small parts, i.e. with the concepts that the effective theory talks about, then we can say that we have strong emergence. For example, if the interaction of molecules in the underlying theory also refers to pressure (instead of just to other molecules), then pressure acts as a strongly emergent property – you cannot reduce talk about it to “lower levels”, since the lower level already includes talking about it.

In the Real World

All indications are that physics is multiply mechanical – it is mechanical at various levels, not just the fundamental one. In other words, there is only weak emergence, but there is weak emergence at many levels: nuclei emerge from quarks; atoms from nuclei and electrons; solids from atoms; and so on. In our investigations, we have never established a holistic scientific theory, a theory that refers to higher-level entities. And we have, on numerous occasions, seen reductive success – we were able to calculate, from underlying theories, aspects of or even entire effective theories.

Now, in his original piece Marko argued for strong emergence by shifting the burden of proof onto those disputing it. But a mechanical theory is simpler (as he seems to agree), so it is more likely a priori; and reduction is empirically successful, so it is more likely a posteriori. (Reductionism has shown empirical success by deriving higher-level theories or aspects thereof, and by consistently finding that the underlying theories are mechanical.) Thus “weak emergence” is well established, and the burden of proof is now firmly on those wishing to overthrow this well-established theory.

A Note On Errors

Why ignore errors? Because they are not philosophically interesting. If the effective theory is correct but the underlying one is wrong, then all we have here is a mistaken underlying theory. If the small parts it talks about do exist, a correct description of their dynamics can always be given, and constitutes the correct underlying theory (which, however, need not be mechanical!). If the small parts it talks of don’t actually exist, then either some others exist and we’ll settle for them or else no small parts exist in which case we can just call this “effective” the fundamental theory – a theory that has no underlying theory.

If the underlying theory is correct but the effective theory is wrong, then we have just miscalculated what the sums over the underlying theory say. It’s also possible we wrongly identified the summaries with concepts taken from other domains (e.g. that “pressure” as defined statistically is not what our pressure-gauge measures), but again this is not a very interesting question as all we need to do is to define properly what these new concepts are in order to see what the underlying theory says about them.

And finally, if both underlying theory and effective theory are wrong then we just have a mess of errors from which nothing much can be gleaned.

In all cases, the errors have nothing to do with emergence. Emergence relates to how things do behave, not to how things don’t behave.

A Note on the Original Argument

In Part I, Marko attacked reductionism with three examples. First, he noted that the Standard Model of cosmology cannot possibly be reduced to the Standard Model of particle physics, because the latter does not include any dark matter while the former does. While correct, this simply indicates that one model is mistaken: the reason the Standard Model of particle physics does not yield the Standard Model of cosmology is that the Standard Model of particle physics is wrong! That is not an indication that the actual dynamics of the particles are determined by higher-level concepts, such as (for example) whether or not they are near a sun. One cannot conclude from an error in a model that the correct model will show strong emergence.

As his second example, Marko noted that the Standard Model of elementary particles with massless neutrinos fails to correspond to the standard model of the sun. While true, this merely indicates a failure of the Standard Model, which has since been corrected (neutrinos apparently have mass!). It has nothing to do with emergence, which is all about correct theories. The failure of the zero-mass Standard Model did indeed indicate that the effective sun-model did not reduce to it, but it did so in a philosophically boring way – it said nothing about whether the sun model reduces to the corrected Standard Model, or more generally about whether the sun model reduces to any underlying theory.

His third example is more interesting, in that he complains that one cannot explain the direction of time by appeal to the dynamical laws alone; one needs to make another assumption, one of setting the initial conditions. That’s not an issue of errors, at least. But again, his true statement has no implication for emergence. The initial conditions are set at the underlying level, at the level of each and every particle. This state at the underlying level then leads to a certain phenomenon in the higher-level description, which we call the directionality of time (e.g. the increase of entropy with time). But that’s just standard, weak, emergence. There is no indication that the dynamics of the particles refers to the arrow of time – the dynamics are always mechanistic, referring only to the particle-level description. Thus, not only is there no strong emergence here, there is an explicit case of weak emergence. Just as the sun (supposedly) emerges from a particular initial condition (a stellar gas cloud) in the corrected particle Standard Model, so that the sun-model is reduced to the Standard Model, so too does the arrow of time demonstrably emerge from a particular initial condition, so that the arrow of time actually is reduced to the dynamical laws. It’s one example, out of many, of successful reduction.
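The point can be illustrated with a toy model (mine, not Marko’s): start all the particles in one half of a box and let them hop between halves according to a rule that never mentions “entropy”. The coarse-grained entropy – a concept defined only at the effective level – climbs toward its equilibrium value anyway, purely because of the special initial condition. This is the classic Ehrenfest urn model; the numbers are illustrative.

```python
import math
import random

# Particle-level dynamics that never refer to "entropy", plus a
# low-entropy initial condition, yield entropy increase at the
# effective level (the Ehrenfest urn model).

random.seed(0)

N = 1000       # particles
left = N       # initial condition: all particles in the left half
steps = 20_000

def entropy(n_left, n_total):
    """Coarse-grained (effective-level) entropy per particle, in nats:
    S = -sum(p * ln p) over the two halves of the box."""
    p = n_left / n_total
    s = 0.0
    for q in (p, 1.0 - p):
        if q > 0:
            s -= q * math.log(q)
    return s

s_initial = entropy(left, N)   # 0.0: all mass in one half

# Underlying dynamics: at each step, pick a random particle and move
# it to the other side. No higher-level concept appears anywhere here.
for _ in range(steps):
    if random.random() < left / N:
        left -= 1
    else:
        left += 1

s_final = entropy(left, N)     # near ln 2 ~ 0.693: equilibrium

print(s_initial, s_final)
```

The “arrow of time” in this toy world – entropy rising rather than falling – comes entirely from the initial condition, exactly as in the standard weak-emergence account.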

In Part II, Marko maintains that

“given two sets of axioms, describing the effective and the [underlying] theory, one cannot simply claim that the effective theory a priori must be reducible to the [underlying] theory.”

I think Marko here confuses the meta-scientific theory that says “in our world, there is only weak emergence”, which follows from all of our science as well as from parsimony, with the logical claim that “reduction must hold as a metaphysical principle”. I agree one cannot simply claim the effective theory must be reducible, but one can claim a priori that it is more likely that there is one underlying mechanistic level (i.e. a “fundamental theory”) and that all higher-level effects are emergent from it, and one can claim a posteriori that weak emergence is overwhelmingly scientifically established.

Marko also raises a few other arguments in Part II, based on Gödel’s theorem. He notes that there will always be true theorems that one cannot prove from a given (underlying) theory. While true, this again has no bearing on emergence. For one thing, we’re discussing what’s true here, not what is finitely provable. Secondly, there is no reason to expect that the unprovable theorems will lead to a holistic behavior of the particles described by the underlying theory, i.e. there is no reason to connect incompleteness to holism.

As his final argument, he notes that even if we accept an ultimate “theory of everything”, there would be incalculable results from it. Again true, and again not relevant. In his example, he imagines there are six “gods” determined by this theory, and that their actions are incalculable. But if the “theory of everything” is a fundamental mechanistic theory, then the actions of these gods – and hence all of what occurs – are weakly emergent, even though they cannot be calculated. Whereas if the “theory of everything” refers to the overall brain-states of these gods (say), rather than just to the fundamental particles, then the gods are strongly emergent phenomena. Whether there is weak or strong emergence has nothing to do with the incalculable nature of these “gods”.

Reduction in Two Easy Steps

Over at his Scientia Salon, philosopher Massimo Pigliucci wrote a piece on the disunity of science, discussing favorably some arguments against a unified scientific view (favoring instead a fragmented worldview, where each domain is covered by its own theory). The discussion really revolves around reduction – are high-level domains, such as (say) economics, reducible to lower-level domains, such as (say) psychology? Ultimately, the question is whether fundamental physics is the “general science” that underlies everything and presents a single unified nature, with all other sciences (including other branches of physics) being just “special sciences” interested in sub-domains of this general science. All domains and disciplines therefore reduce to physics. This is the Unity of Science view that Pigliucci seems opposed to.

I’m on the side of reduction. What are the arguments against it? Well, first off, let’s clarify that no one is disputing “that all things in the universe are made of the same substance [e.g. quarks]” and that “moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc.” Everyone agrees that this type of reduction, ontological reduction, is true. The arguments instead are aimed at theoretical reduction, which is roughly the ability to reduce high-level concepts and laws to lower-level ones. Putting arguments from authority to the side, Pigliucci raises a few arguments against theoretical reduction:

(1) The Inductive Argument Against Reduction: “the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes [against reduction]”

However, this argument is based on the false premise that if reduction is true then reductive foundations for a science would be easier to find than new high-level sciences. This premise simply does not follow from reduction, however. Instead, reduction entails that

(A) As science progresses more and more examples of successful use of reduction will be developed. This prediction is borne out by things like the calculation of the proton’s mass from fundamental particle physics, the identification of temperature with molecule’s kinetic energy, the identification of (some) chemical bonds with quantum electron-sharing, and so on.

(B) As science progresses, no contradiction will be found between the predictions of the lower-level theories and the higher-level ones. For example, it won’t be found that the proton should weigh X according to fundamental physics yet weighs Y in nuclear physics; it won’t be found that a reaction should proceed at a certain rate according to physics yet proceeds differently according to chemistry. Clearly, the success of this prediction is manifest.

Thus the inductive argument against reduction is very wrong-headed, misunderstanding what reduction predicts and ignoring the real induction in its favor.

(2) What would reduction even look like?

Pigliucci further maintains that we reductivists are bluffing; we don’t really even know what reduction could possibly look like. “if one were to call up the epistemic bluff the physicists would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics.”

This is again false – we know in general terms how this reduction takes place (chemistry is the physics of how atoms bond into molecules and move; biology is the chemistry of how numerous bio-molecules react; psychology is the biology of how organisms feel and think; and so on). The only caveat here is that consciousness is somewhat problematic; the mind-body issue aside, however, the picture of how reduction proceeds is clear enough (even if vague and not at all actually achieved, of course) to make this objection moot.

(3) Cartwright’s disjointed theories

Supposing that all theories are only approximately-true phenomenological descriptions (something most scientists would agree to), Pigliucci somehow concludes that therefore “science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.”

But the fact that some theories apply only in some cases does not imply that they are not part of a bigger theory that applies in all these cases. There is no case being made against reduction here – reduction is perfectly comfortable with having multiple phenomenological theories, as long as they all reduce to the fundamental physics. It is even comfortable with there being an infinite series of “more fundamental physics”, as long as each theory reduces in turn to an even-more fundamental theory.

What is Reduction?

I was prompted to write this post because comments over at Scientia Salon are limited in length and number. The thing I wanted to say there was what reduction is. Reduction, as meant by those who actually believe in it, is something like “Physics + Weak Emergence”.

Reduction = Physics + Weak Emergence

By “Physics” I mean that what ultimately exists is described by fundamental physics – things like “atoms and void”, “quarks and leptons”, and so on.

By “Weak Emergence” I mean that high-level concepts are arbitrarily defined, and then used to analyze the lower-level descriptions. When this is done, it is revealed that the high-level phenomena that the high-level concepts describe actually exist. This is rather abstract, so consider a simple example: temperature in a gas canister. The gas molecules can be fully described at the low, microscopic level by things like the molecules’ position and velocity. “Temperature” is then defined to be their average kinetic energy. Doing the math, one can show from the microscopic state that the gas indeed has a certain temperature.

In this way the temperature is “reduced” to the lower-level concepts like the molecules’ speed and mass. But the concept of “temperature” was defined by us, it isn’t to be found in the microscopic physics or state!
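To make the temperature example concrete, here is a minimal sketch (illustrative numbers, not from any particular source): the microstate consists only of per-molecule masses and velocities, and “temperature” is a quantity we define on top of it as 2/(3·k_B) times the mean kinetic energy per molecule.

```python
import numpy as np

# The reduction of "temperature": the microstate contains only
# molecular masses and velocities; temperature is a concept *we*
# define on top of it. Numbers are illustrative.

rng = np.random.default_rng(42)
k_B = 1.380649e-23   # Boltzmann constant (J/K)
m = 6.63e-26         # mass of one molecule (kg), roughly argon

# The full low-level description: velocities of 50,000 molecules,
# drawn here as if the gas were at 273 K.
T_true = 273.0
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(50_000, 3))

# Our high-level definition:  T := (2 / 3 k_B) * <(1/2) m v^2>
mean_ke = 0.5 * m * np.mean(np.sum(v**2, axis=1))
T_defined = 2.0 * mean_ke / (3.0 * k_B)

print(T_defined)   # recovers ~273 K, up to sampling noise
```

The microstate array `v` never mentions “temperature”; `T_defined` exists only because we chose to summarize the microstate that way – which is the sense in which temperature exists “by convention”.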

For this reason, Democritus said “Sweet exists by convention, bitter by convention, colour by convention; atoms and Void [alone] exist in reality”. The idea isn’t that temperature doesn’t exist in reality, however, but rather that we choose to consider nature in terms of temperature by arbitrary choice, by “convention”.

The best arguments for God?

Over at “Why Evolution is True”, Jerry Coyne has made a post I’m deeply disappointed with and would like to rant about. It’s about a new book that, it seems, provides the standard (Scholastic) proofs for god, focusing on the argument from Contingency (god must exist to support the existence of all other things) and an argument from Divine Simplicity (god exists because good, beauty, etc. exist, and god is identical to them). Coyne is actually responding here to another atheist (?), Oliver Burkeman, who scolds atheists for not facing such arguments and points to the new book as something they should read to contend with them. Unfortunately, Coyne demonstrates in his response precisely why Burkeman is right. So I’m going to write this post to demonstrate and bemoan this fact.

1. Anthropomorphism vs. Theology

Coyne begins his post by noting that the theologian’s god isn’t the normal believer’s god.

“The vast majority of believers don’t even read theology, and are barely aware of the arguments for God made by Sophisticated Theologians™.  So is it our real duty as atheists to refute those arcane theological arguments, or to prevent the harm done by religion?”

Well, that depends on your goals. If you want to have social impact – sure, go ahead and demolish the less sophisticated and common views. But all people also want to be right. If you want to believe in the right thing, and you believe in atheism – then you need to look at the best arguments for god, not at the most common views. 

I want to be right. It’s what draws me to philosophy. And delving into theology is fun, too, as Coyne says. So this is why I vote for going after the theologian’s god. (As well as the common one, of course; we can do both.)

There is another point to be made here, however. I don’t believe most believers are as shallow as Coyne makes them out to be. He points to polls showing widespread belief in demons, for example, as evidence against belief in Sophisticated Theology. But I think many believers – theologians and laypeople alike – do both. A religious person might think of “demons” and hold rites to exorcise them, for example, yet maintain that this is an anthropomorphic allusion to, say, one’s “inner demons”. Certainly many people will go fully anthropomorphic, but still.

2. Irrefutable because it’s untestable

Coyne then discusses three interpretations of “the opposition’s strongest case”. I have no big beef with the first two. It’s the third one that sets the tone of his post, however, and boy does he get this one wrong. It’s so wrong I don’t know where to begin, so let’s just go over it slowly.

“…people like Hart have proposed conceptions of God that are so nebulous that we can’t figure out what they mean. 

No. The concepts of a “Necessary Being” or “Divine Simplicity” may be incoherent, but they aren’t particularly nebulous. The problem with traditional theology isn’t that it’s nebulous – it’s that it’s wrong.

 And because they are not only obscure but don’t say anything about the nature of God that can be compared to the way the universe is, they can’t be refuted. To any rationalist or scientist, this automatically rules them out of rational consideration, for if an observation comports with everything, and can’t be disproven, it is totally useless as an explanation of reality.

First of all – the phrasing of the scientific method here is awful. If a theory [not an observation] comports with everything [theories need to make predictions, not iron-clad predictions] then it can’t be verified [never “disproven”; not only falsification but also positive verification is possible], and belief in it cannot be empirically justified [which does not mean that it’s useless as an explanation; consider, e.g., interpretations of quantum theory].

But the whole point is that the theologian claims his theory is justified by pure reason – by philosophy alone. He claims, for example, that only god can explain existence. We need to show why this non-empirical argument is wrong – not to dismiss it out of hand because it isn’t empirical.

 I might as well say that there’s an invisible teddy bear that sustains the universe, and without my Ineffable Teddy there would be no cosmos.  But nobody can see that bear, for he is the Ursine Ground of Being: ineffable and undetectable, though his Bearness permeates and supports everything.

You might. And to counter that I’d need to argue in turn why it is unreasonable to assume existence requires such ursine support. I could not dismiss the theory for lack of empirical evidence, for the theory has been designed to be immune to such empirical evidence. C’est la vie – you need to contend with the actual hypothesis being raised, not with what you want the hypothesis to be.

On this “ground of being”, Coyne continues

Not only is this meaningless (I’ll read Hart’s book to see if I can suss out any meaning), but it’s also untestable.  And there is not an iota of evidence for such a God, so on what grounds should we believe it?

The “meaningless” here refers to things such as ” God is what grounds the existence of every contingent thing, making it possible, sustaining it through time, unifying it, giving it actuality. God is the condition of the possibility of anything existing at all”. I don’t think that’s meaningless. I think it’s not true, and even ultimately incoherent – but that’s not the same as “meaningless”. [Just like “1+1=3” is not true and ultimately incoherent in that it’s self-contradicting, but yet it isn’t at all meaningless.] Coyne is simply failing to understand the opposition.

What follows is, in a sense, even worse – from lack of understanding to sheer ignorance.

Hart claims that this is the conception of God that has prevailed throughout most of history, but I seriously doubt that. Aquinas, Luther, Augustine: none of those people saw God in such a way. And it’s certainly not the view that prevails now, as you can easily see by Googling a few polls.

Seriously? Coyne can’t recognize this extremely traditional theological fare as the standard Scholastic view, held in the West all through the Middle Ages? By Christians, Muslims, and Jews? It is the view most certainly held by Aquinas (he gave the canonical formulation of the argument from contingency!).

I’m not sure about Augustine (he certainly saw god as perfection; not sure about being the ground of being, however). Luther I’m not clear on, but he basically marks the end of the Scholastics anyway (“Aristotle is to theology as darkness is to light”).

It is certainly not the view that prevails now. But we’re talking about “the opposition’s strongest case”, remember? And while it’s not easy to judge what the best case is without delving into the options first, it is at least initially plausible that an idea held by so many philosophers and theologians for so long should be on the short list of contenders – something we should look into, to verify that we’re holding the correct view.

 I can make up yet another God with just as much supporting evidence [as] Hart’s: God is a deistic God who has always been there but has done nothing. He didn’t even create the universe: he let that happen according to the laws of physics, from which universes can arise via fluctuations in a quantum vacuum. My God is just sitting there, watching over us all, but only for his amusement. He’s ineffable and indolent.


I claim that my Coyneian God is just as valid as Hart’s God, for neither can be tested, and thus there’s no reason to believe in either.

Once again – the point is that Hart (and the Scholastics in general) raises arguments for why his god is to be believed in, even though there is no empirical evidence to support that theory. You can’t raise another empirically untestable god and claim that he is just as likely simply because they’re both untestable [that smacks more of modern, “reformed”, theology]. You need to actually show why the Coyneian God is as likely as the Scholastic God, or (preferably) simply show why the Scholastic God is improbable.

Burkeman writes, explicitly, “If you think this God-as-the-condition-of-existence argument is rubbish, you need to say why… the question isn’t a scientific one, about which things exist. It’s a philosophical one, about what existence is and on what it depends.”. Right on. Coyne in response replies…

Therefore it’s immune to refutation.  Whether God “is” now depends, as Bill Clinton anticipated, on what your definition of “is” is.  

Aha. And Coyne’s (and my!) position that God doesn’t exist depends on what the definition of “is” is, too. Welcome to Philosophy 101. Now, if you want to make a metaphysical claim (such as that God doesn’t exist, or that Scientific Realism is correct) then go ahead and make the philosophical case for it, instead of complaining that one needs to make a philosophical case for one’s metaphysics.

Coyne then complains that history isn’t a good argument.

Hart [is] wrong in claiming that his conception of God is valid since it’s the one embraced most consistently through “the history of monotheism,” but, as all scientists know, how widely something is accepted is no evidence for its validity.  …  just because a bunch of Sophisticated Theologians™ agreed on God as a Sustainer of the Universe and Ground of All Being does not make it so.  Why on earth does that argument have any force at all?

Just because a bunch of very smart guys, from the days of Aristotle to Luther, believed something doesn’t mean it’s true. But it is enough, I think, to merit intellectual consideration. It’s something that’s so big in our intellectual history that one should check it out before ruling it out. That’s all.

Let’s skip ahead. For his last point, Coyne notes that Hart argues that we pursue God when we pursue Good – again, fairly standard fare, (wrongly) equating the abstraction of “good” with the actual existence of good, and incoherently identifying Good with God (the doctrine of Divine Simplicity). Coyne replies

 If you define God as simply the set of our most admirable aspirations, then of course God exists. But you could also define God as the set of our most unpalatable aspirations: greed, duplicity, criminality, and so on.  And that kind of god could also exist by definition: as the Ground of All Evil.  I claim that, in fact, there’s just as much evidence for that god as there is for Hart’s God. 

That’s abysmally failing to grasp the (very poor) Scholastic argument. The idea isn’t that any set of aspirations exists and grounds being. It is rather that a certain set of aspirations is such that each is identical to the others and also identical to God. This is sheer nonsense, but it’s just not the argument Coyne is arguing against!

Coyne finishes by addressing several questions to Hart. I’ll give brief Scholastic-like answers to each, as I understand things.

1. On what basis do you know that God is a Ground-of-Being God instead of an anthropomorphic God? (In your answer, you cannot include as evidence the dubious claim that this is the kind of God that most people have accepted throughout history.)

Hart would seem to reply that he knows god is the ground of all being on the basis of knowing, from philosophical analysis, that our contingent existence requires a necessary being to ground it, and that this being is identical with the good, with beauty, and so on so that it deserves to be worshiped and be called god. 

I would reply that Hart’s metaphysics is baseless if not totally unsound, and his doctrine of divine simplicity is on the deep end of the latter. 

2. How do you know that your Ground-of-Being god embodies truth, goodness, and beauty rather than lies, evil, and ugliness?

Hart would probably employ the standard Scholastic arguments to support such claims. I would reply that these presuppose that these are “perfections”, rather than abstractions that we value and nothing more.

3. What would the universe look like if your God didn’t exist?

Hart would probably reply that the universe would be impossible without god, just as it would be impossible to have a universe where “1+1” didn’t equal “2”. I would reply in turn that his god-concept is incoherent, due to its Divine Simplicity, and implausible due to his essentialist (“transcendent”) metaphysics, and more, so that it’s likely that his god is impossible. And that if it were possible, the universe would look very different (due to the argument from evil and so on).

I haven’t read Hart (nor has Coyne), but this doesn’t appear to be new stuff. It’s all been done before. Coyne has read lots of philosophy of religion. I fail to see why he does not address such simple allusions to standard Scholastic philosophy and dismiss them as they should be dismissed – at the philosophical level.

Supernatural Minds

In a recent post, the Christian apologist and philosopher Victor Reppert presents the view that things are ‘supernatural’ if there are mental properties “on the ground floor” of existence, at the most basic level of existence and explanation. This view was linked to sympathetically in a post by the naturalist Robert Oerter [1], and has also been championed by the naturalist philosopher and historian Richard Carrier (e.g. here), among others. Let’s call this the “Fundamental Materialism” thesis – it posits that naturalism maintains that at bottom, there is only inanimate matter.

I don’t understand why naturalists – those who reject the supernatural – take this position. I think it’s mistaken on several levels.

Perhaps most importantly, the fundamental materialism thesis doesn’t understand what naturalism is. If there is a slogan for naturalism, it is “everything is the same”. When lightning is understood to be just another instance of electrical discharge, just like numerous other phenomena all around us – then it becomes natural. When lightning is unlike other things, unlike the normal course of nature – when, for example, it is a bolt thrown by an angry Zeus – it is then that lightning is supernatural.

Given this understanding of ‘natural’ – what is the place of the mental in the universe? There are two options. One is to extend the uniformity that is at the core of naturalism to the mental domain, and maintain that “everything is the same” also in the sense that everything is mental. On this view every thing – every fundamental particle – has some mental content, such as some consciousness; although not every complex thing has a full mind, with unity, will, purpose, or so on. This position is known as Panpsychism.

The other option is to maintain that “everything is the same” in the sense that mental properties emerge from certain configurations of regular, non-mental matter, much like ‘pressure’ only emerges when there are lots of atoms impinging on a surface. This view is known as emergence.

Personally, I think emergence makes no sense (due to what David Chalmers called the Hard Problem of Consciousness, see e.g. here), so I’m a panpsychist [2]. But the important point is that both views are forms of naturalism! In both cases reality is uniform. There is no special pleading, no ‘thinking matter’ set apart from ‘extended matter’ (as in Cartesian dualism), no ‘souls’ set apart from ‘matter’, no violations of the laws of physics – there is just nature. So naturalism may include mental stuff at the bottom (panpsychism), or not (emergence) – it doesn’t matter.

Now Victor Reppert raises three conditions on what would constitute a naturalistic world-view, and I think the first one exemplifies a second major point of confusion: the erroneous belief that ascribing mental properties to things means they can step outside the laws of physics. He writes

First, the “basic level” must be mechanistic, and by that I mean that it is free of purpose, free of intentionality, free of normativity, and free of subjectivity. It is not implied here that a naturalistic world must be deterministic. However, whatever is not deterministic in such a world is brute chance and nothing more.

Notice the implicit assumption here, that mental causation is incompatible with mechanistic causation. If an agent acts with purpose, then his actions are not caused by (say) quantum mechanics. Reppert limits the condition to the basic level only, but the point stands – an electron cannot have some subjective consciousness (‘subjectivity’) and at the same time follow quantum mechanics.

But this is the very thesis that naturalist theories of mind maintain – that an agent acts in a mechanistic way, yet at the same time in a purposeful way. So the naturalist rejects the implicit assumption – the fact that the physical stuff moves in mechanistic ways does not imply it doesn’t have mental content, and having mental content doesn’t imply freedom from physics.

Reppert’s assumption that what is mental is not mechanistic is understandable in a theist – this metaphysical intuition is what allows them to hold at the same time that God is a mind and that god is not physical.

But I cannot understand how naturalists fall to this trap. They too often seem to think that putting in mental stuff at the bottom level would invalidate physics, so it’s not in agreement with naturalism. But yet at the same time they maintain that ascribing mental properties to brains (say) doesn’t mean that brains violate the laws of physics. I don’t understand why they can’t see that their second point stands in regards to mental properties at the bottom just as much as it applies to those at the higher, complex, levels such as the human brain.

I don’t really have much of a point. I just wanted to say – boo on this dreadful definition of the ‘supernatural’. In addition to being wrong, putting the emphasis on the place of consciousness is just not productive. We are not served by a definition of naturalism that speaks about the place of consciousness in nature, but doesn’t speak about the content of nature! Carrier’s definition that “every mental thing is entirely caused by fundamentally nonmental things” tells us nothing about what the reality constituted by these fundamental (supposedly nonmental) things is like. It tells us nothing about the fact that lightning is just an electrical discharge; about the regularities and sameness in nature, which is what allows us to explore it, understand it, and call it ‘nature’. It’s useless for building a picture of what the world is like, irrespective of the metaphysical status of consciousness in it.

We naturalists need a definition that captures the fact that the world behaves naturally, which is what the naturalism-as-uniformity definition does. When everything is the same then, implicitly, the place of consciousness in nature is revealed to be not independent of the laws of physics. But the focus is on the general principle of uniformity, which underlies contemporary physics and naturalistic explanations in general and which, ever since David Hume, defines what ‘nature’ and laws of nature are all about.

[1] That’s how I know of Reppert’s post – I follow Oerter’s blog.

[2] Chalmers prefers terms like panprotopsychism to emphasize that the fundamental mental properties are not full minds; I’m not sure that’s helpful. One of the things he’s trying to imply is that they have no phenomenal properties, no ‘subjectivity’ – which is not my position – so I prefer to stick to the more conventional panpsychism.

Principle of Motion versus Inertia

This post will be about a recent paper by Feser, “The Medieval Principle of Motion and The Modern Principle of Inertia“. Feser argues that contrary to first appearances, the principle of inertia in Newtonian physics is not in contradiction to the corresponding “principle of motion” in Aristotelian metaphysics. He defines the two principles as follows:

  • The Principle of Motion: “Whatever is in motion is moved by another”.

  • The Principle of Inertia: “Every body continues in its state of rest or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it”.

I note that the conflict between the two lies in how each implies that other things affect motion. The Newtonian principle of inertia maintains that a body maintains uniform motion when nothing external acts on it, while the Aristotelian principle of motion maintains that a body maintains uniform (or any) motion because something external acts on it. To succeed, Feser will need to show this is not really what they say. 

Formally Consistent?

Feser notes that the Newtonian Principle of Inertia only denies any “external forces” are acting on the body during the inertial motion. This leaves the “formal” possibility of having some other “mover”, which is not an external force (or an object exerting an external force), that is “moving” the object along the inertial motion.

The problem here is that this other “mover” is simply denied by reasonable formulations of the Newtonian principle. While Feser’s formulation is that an object continues in a straight line unless it is acted on externally, an equally reasonable formulation is that

  • The Principle of Inertia 2: A body that is not acted on externally [a “free particle”] will continue in uniform motion.

The core of the dispute between the principles is whether change requires external influence. This conflict cannot be brushed aside by careful phrasing to avoid “formal” conflict between the statements.

So Feser can combine the two, but not in a satisfying manner. He can combine them only by invoking “non-physical” causes which do not change velocity, but rather sustain it – I shall call these Sustaining causes. These causes are not invoked by the Newtonian principle and have no place in Newtonian physics. This “formal” success is achieved only by needlessly multiplying entities.

Inertia as Stasis

Feser’s strongest argument proceeds by two main steps. It begins by explicating the principle of motion in Aristotelian thought. Aristotelian “motion” means change, and change is the transition from potentiality to actuality. So the principle of motion “really” says that

  • Principle of Motion 2: “Any potency that is being actualized is being actualized by something else (…that is already actual)”.

Now the second leg of the argument is that inertia in modern Newtonian physics is seen as a “state”; it would be more accurate to say that motion in a certain relative velocity (i.e. at a certain velocity relative to a particular observer) can be seen as a metaphysical “state” of the object [1]. The Principle of Inertia then indeed says that any change to this relative velocity will occur only by the influence of another object (exerting an external force on it), just like the Principle of Motion 2 says. So the two principles are actually compatible.

The problem with this interpretation is that it does not account for the change in the particle’s position. It shows that the changes to the particle’s (relative) velocity correspond to the principle of motion, but not that the changes to the particle’s (relative) location do.
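The physical point here can be put numerically. The following is my own minimal sketch (simple Euler integration; nothing in it comes from Feser’s paper): for a free particle the velocity – the inertial “state” – never changes, while the position changes at every step, with no force involved at all.

```python
# Euler integration of Newton's second law for a free particle (F = 0) in 1D.
def simulate_free_particle(x0, v0, dt, steps):
    x, v = x0, v0
    for _ in range(steps):
        a = 0.0        # no external force acts, so acceleration is zero
        v += a * dt    # the (relative) velocity - the inertial "state" - is unchanged
        x += v * dt    # the (relative) position changes at every step
    return x, v

x, v = simulate_free_particle(x0=0.0, v0=2.0, dt=0.1, steps=100)
assert v == 2.0               # velocity preserved exactly: no "change of state"
assert abs(x - 20.0) < 1e-6   # yet the position has changed by v * t
```

External influence enters the dynamics only through the acceleration; the change of position proceeds without any cause in the Newtonian picture, which is precisely the change the “state” reading leaves unaccounted for.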

Can Feser successfully argue that change of location during an inertial motion is not an actual change in the Aristotelian sense? I doubt it.

Change of velocity and change of location are of course related. To claim that change in location isn’t really change, Feser would have to argue that location isn’t a real property – that it is merely determined by velocity and the passage of time. This is opposed to both Newtonian physics and common sense. 

Further, change of location seems by itself to be genuine change. Physically, it is a “real” property – an invariant property, not depending on perspective (much like relative velocity or whether a motion is accelerated or not). Metaphysically, it appears absurd to the highest degree to claim that a sudden change in location is not change, and I can’t see why a smooth and uniform change will be any different.

So it appears to me that this line of argument fails. 

Inertia as Natural Motion

A second argument that could have been strong is the suggestion to see Newtonian inertia as analogous to Aristotelian “natural motion”. Aristotle believed objects naturally move towards their place – stones move down towards the center of the earth, fire moves up towards the heavens, and so on. Feser concedes this belief is false, but notes that this “natural motion” does not require “something extrinsic” – according to Aristotle and Aquinas, at least. If we take the natural motion to be inertia, instead of motion towards the proper place, then it appears inertial motion could proceed without “something extrinsic” as well.

It appears to me, however, that this amounts to saying that “natural motion” or “inertial motion” can be actualized without being actualized by “something else”! Feser unfortunately does not explicitly explain how this notion of “natural motion” fits with Principle of Motion 2. He says only that “a body will of itself tend to move towards its natural place by virtue of its form” [emphasis added] – but the object’s form (its essence, or structure) isn’t “something else… that is already actual” [emphasis added], as Principle of Motion 2 requires.

Without such an explicit explanation of how natural motion conforms with Principle of Motion 2, this argument fails.

Inertia as Change

I have argued that inertial change of location cannot be seen as a “state”. The only way left to Feser is to treat it as real change, then. Here Feser’s arguments become quite convoluted, however.

Feser first considers attributing the motion to its initiator, but dismisses this option, seemingly because the mover will no longer be actual. He argues that the motion can nevertheless have a metaphysical cause. Such a cause can be internal or external.

Considering an internal cause – an “impetus” imparted to the object upon its acceleration or generation – he raises two problems: a finite object can have only finite qualities, while such an impetus would apparently be infinite; and a finite impetus would itself change (since, apparently, finite causes that bring about change undergo change), so we would need to explain the impetus’ own change, and our explanation would not advance us anywhere.

I note that these objections invoke yet further Aristotelian principles. More importantly, the very idea is in direct contradiction to the principle of motion! The whole question is whether the change requires an external influence.

Feser then reaches the most stupefying part of his argument. Considering external causes to real change, Feser argues that since inertial movement is eternal (in potential) what is required to sustain it are “necessary beings” in the sense that they “have no natural tendency toward corruption the way material things do”. He concludes that

“Hence, the only possible cause of inertial motion – again, at least if it is considered to involve real change – would seem to be a necessarily existing intelligent substance or substances …(Unless it is simply God Himself causing it directly as an Unmoved Mover.)”

I’m going to simply ignore the “intelligent” bit there, as that is not borne out by Feser’s argument above (although it might be by yet further Scholastic principles). I note, however, that Feser is reduced to hypothesizing non-physical sustaining causes to maintain the principle of motion. Which is precisely where we started.


I have shown that Feser has to explain the change in location during inertial motion as real change. He cannot explain it as stemming from an internal cause, as (notwithstanding his own arguments) that would violate the principle of motion. He cannot explain it with an external physical cause, as that implies contradicting the principle of inertia. He is reduced to invoking hypothetical “metaphysical” external causes such as God or necessary substances, whose causal effect is not a force. The only such possible cause is a sustaining cause – positing that something needs to cause the object to maintain its current velocity.

In short, Feser fails to combine the two principles in a satisfying manner. Combining the Aristotelian principle of motion with the Newtonian principle of inertia is only possible if one is ready to assume ad hoc redundant invisible sustaining causes.

Not A Metaphysical Principle

Finally, I would argue that Feser’s position is self-defeating. I have already shown that he must commit to additional external causal entities. But Newtonian physics is fully consistent without assuming these entities. Hence, the principle of motion cannot be a metaphysical principle, since it is possible to conceive of change without it – either by invoking internal causes such as impetus, or by declining to demand a cause for inertial motion at all. 

Appendix: Some Weak Arguments

There are several other arguments Feser raises that I think are very weak and that don’t fit the above scheme, so I’ll take them on in this section.

Feser argues that while Principle of Motion 2 speaks of actualizing potentials, the Principle of Inertia formally doesn’t, so there isn’t a formal conflict. Well, the conflict is substantial and cannot be wiped away by word games. If the principle of motion is to be put in the language of actuality and potentiality, then the principle of inertia should be put in a similar language, or else the principle of motion’s implications in Newtonian terms need to be spelled out, for the two to be comparable. You can’t demonstrate there is no conflict by putting the principles in different languages!

Feser also argues that the Newtonian principle of inertia is a principle of physics, describing how the world really acts. The Aristotelian principle of motion, in contrast, is a principle of metaphysics which gives an account of the “intrinsic nature of that which moves”. 

I find this argument rather obtuse. Feser appears to attempt to reconcile the two principles by restricting the principle of inertia to talking about the mathematical description of motion, while maintaining that the principle of motion discusses the causal relations that underlie that description. However, the conflict between them is about whether or not something external acts on the object during inertial motion, so the question is about causal relations in the first place. 

Feser also addresses the modern Relativistic idea that the whole world – past, present, and future – exists timelessly together as spacetime. He correctly notes that in Aristotelian terms, this means that the world is entirely actuality, with no real potential and no real change. Feser argues that the principle of motion will be relevant in two ways even in this scenario, but he’s mistaken.

First, he argues that “change really occurs at least within consciousness itself”. But on the contrary, the Parmenidean/Einsteinian view is that change doesn’t “really” occur within consciousness – rather, there are different states of consciousness at different points along a person’s worldline.

Secondly, he argues that the laws of nature governing spacetime are contingent, and hence “are merely potential until actualized”. But in this Parmenidean view, potential and the passage of time are illusions. There is no “until”, nor are the laws “contingent” in the sense of being the actualization of a wider potential. There is simply reality, as it actually exists. There is thus no room for all that is actual to be an actualization of a potential.

The question is, however, beside the point. It does not bear on whether the two principles are compatible.

Similarly, Feser notes that for the Aristotelian what exists are “concrete material substances with certain essences, and talk of “laws of nature” is merely shorthand for the patterns of behavior they tend to exhibit given those essences”. He fails to note that for the Parmenidean, the same is true minus the essences. Talk of “essences” is redundant: the laws of nature suffice to describe the patterns of behavior, so essences are dismissed as empty metaphysical speculation and dogmatism.

He also argues that for the Thomist, things like fundamental particles require an (external) explanation of what keeps them existing. That may be true, but for the Parmenidean there is no need for such an explanation – what exists, exists as spacetime; explanations are within this spacetime, not about it.

[1] This is not what the physicists would call a “state”. The physical state of an object in modern Newtonian physics consists of both its position and its velocity at a particular time. 

Bayesianism: Compound Plausibilities

[This post is part of a series on Bayesian epistemology; see index here]
The last assumption of the core of Bayesianism is that the plausibility of (logically) compound propositions depends, in a particular way, on the plausibilities of the propositions that constitute them. I will write it in the following form* (using Boolean Algebra):
Assumption 3: Compound Plausibilities: The plausibility of the logical conjunction (“A and B”, or “AB”) and of the logical disjunction (“A or B”, or “A+B”) is a universal function of the plausibilities of the propositions that constitute them, and of their complements, under all relevant conditions of knowledge.
The functions are “universal” in the sense that they do not depend on the content of the propositions or the domain of discourse. The claim is that the plausibility of a logical conjunction or disjunction – and therefore of every complicated combination of the basic propositions – depends on the plausibilities of the basic propositions, not on what they’re talking about.

The assumption of universality is clearly correct in the cases of total certainty or denial. If A is true and B false, for example, we know that A+B is true – regardless of the content of the claims A and B, the topic they discuss, and so on. It is less clear why universality should be maintained for intermediate degrees of certainty. Some** suggest considering it a hypothesis – let’s assume that there are general laws of thought, and see what these are.
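The extreme cases can be checked mechanically. A tiny sketch of my own (with 1.0 and 0.0 standing in for the plausibilities of full certainty and full denial; the min/max forms are just the truth tables in numeric dress): whatever the propositions are about, the plausibility of the compound follows from the plausibilities of the parts alone.

```python
# Plausibilities at the extremes: 1.0 for certainty, 0.0 for denial.
# At these extremes the compound's plausibility is a function of the
# numbers alone; the "content" strings play no role in the computation.
def F_disj(pa, pb):  # universal function for "A or B" at the extremes
    return max(pa, pb)

def G_conj(pa, pb):  # universal function for "A and B" at the extremes
    return min(pa, pb)

domains = {
    "weather": {"A": ("it is raining", 1.0), "B": ("it is snowing", 0.0)},
    "math":    {"A": ("7 is prime", 1.0),    "B": ("2 is odd", 0.0)},
}
results = set()
for props in domains.values():
    _, pa = props["A"]
    _, pb = props["B"]
    results.add((F_disj(pa, pb), G_conj(pa, pb)))

# Same plausibilities in, same plausibilities out - regardless of domain.
assert results == {(1.0, 0.0)}
```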

Another aspect of assumption 3 is that the universal functions depend only on their components. But assuming the functions are universal – what else can they depend on? They can only depend on some plausibilities. They cannot depend on the plausibility of an unrelated claim, for then it will not be possible to identify it in different domains of discourse. They must depend at least on the plausibilities of their components, as they do in the extreme cases of utter certainty or rejection. It is perhaps possible to conjecture that they also depend on some other compound proposition that is composed out of the basic propositions of the conjunction/disjunction, but this would surely be very strange. The decomposition into constituents therefore appears very simple and “logical” – I don’t know of anyone who objects to it.

Let us proceed, then, under assumption 3.

It is a cumbersome assumption, as each function depends on many variables. Fortunately, we can reduce their number. Consider the case where B is the negation of A, that is B = ¬A. In this case

(A+¬A|X) = F[(A|X), (¬A|X)]

so that F depends on only two variables, (A|X) and (¬A|X). On the other hand, A+¬A is a tautology, so logic dictates that this plausibility must have a constant value,

(A+¬A|X) = const.

Assuming that the universal function F is not constant, the only way we can maintain a constant value when we change (A|X) is to change (¬A|X) simultaneously. We are forced to conclude that the plausibility of a proposition is tied to that of the proposition’s negation by a universal function,


Theorem 3.1: The plausibility of A is tied to the plausibility of its negation by a universal function, (¬A|X) = S[(A|X)].
We will determine S explicitly later. For now, it is enough that it exists.
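Jumping ahead for illustration only (the text determines S later; the particular form below is the familiar probabilistic instance, assumed here, not derived): in ordinary probability theory S(p) = 1 − p, and it is easy to check that it keeps the tautology’s plausibility constant and that double negation recovers the original plausibility.

```python
# The familiar probabilistic instance of the universal function S:
# the plausibility of not-A as a function of the plausibility of A.
def S(p):
    return 1.0 - p

for p in [0.0, 0.1, 0.37, 0.5, 0.9, 1.0]:
    not_p = S(p)
    # the plausibility of the tautology A + not-A stays constant...
    assert abs((p + not_p) - 1.0) < 1e-12
    # ...and negating twice recovers the original plausibility
    assert abs(S(S(p)) - p) < 1e-12
```

It is only the existence of some such S that the argument above establishes; that this particular form is the right one is the content of Cox’s Theorem, proved later.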

Something very important just happened – from the assumption that there are general rules of thought, we concluded that the plausibility of the negation of a proposition, (¬A|X), is determined by the plausibility of the proposition itself, (A|X). It is therefore enough to keep track of just one plausibility, (A|X), to assess both. As we have said previously, this is an inherent part of the Bayesian analysis, and we see here that it is derived directly from the assumption of universality. The main alternative theory, the Dempster-Shafer theory, considers the measure of support that propositions have and requires a separate measure for the support of a proposition’s negation. The existence of S implies that Dempster-Shafer theorists must reject universality: there cannot be a universal way to determine the support for compound propositions from the support we have for the basic ones, and even within a particular domain, if this can be done, the theory simply reverts to the Bayesian one. Unsurprisingly, Shafer indeed doubts the existence of general rules of induction.

Let’s move on. The existence of S allows us to drop the complements ¬A and ¬B from the parameters of the universal functions, as their plausibilities are themselves functions of the plausibilities of the propositions they complement (A and B).


Theorem 3.2: Simple Forms: The universal functions F and G can be written without an explicit dependence on the plausibilities of the complements.
The functions are still rather complicated, but they can be made even simpler. Consider the case where A is a tautology. In this case there is no meaning to the expression (B|¬A,X) – no information in the world can establish that a tautology is wrong. This expression is just not defined. But the plausibility (AB|X)=(B|X) must still be well defined! There must therefore be a way to write G that does not depend on the undefined variable (B|¬A,X). Notice that the parallel variable (B|A,X) is actually well defined in this case, so G might still depend on it. A similar situation occurs for the pair of variables (A|B,X) and (A|¬B,X) when B is a tautology. We can therefore conclude that the universal functions can be written in a manner that does not depend on half of each of these pairs.
Theorem 3.3: Simpler Forms: The universal functions F and G can be written without explicit dependence on the information that A or B are wrong.
These forms cannot be used when A or B are contradictions, but otherwise they should be applicable. With just four variables, they are simple enough to serve as a basis from which we can prove Cox’s Theorem – the foundation of Bayesianism.
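As a concrete anticipation of where this is heading (the probabilistic forms below are my own illustration, not derived at this point in the text), the familiar product and sum rules instantiate the reduced forms of G and F, and a small joint distribution over two binary propositions checks that they are mutually consistent.

```python
# Familiar probabilistic instances of the reduced universal functions:
#   (AB|X)  = G[(A|X), (B|A,X)] = (A|X) * (B|A,X)      (product rule)
#   (A+B|X) = (A|X) + (B|X) - (AB|X)                    (sum rule)
p = {  # joint plausibilities over the truth values of (A, B)
    (True, True): 0.30, (True, False): 0.20,
    (False, True): 0.40, (False, False): 0.10,
}
pA = p[(True, True)] + p[(True, False)]    # (A|X)   = 0.5
pB = p[(True, True)] + p[(False, True)]    # (B|X)   = 0.7
pB_given_A = p[(True, True)] / pA          # (B|A,X) = 0.6
pAB = pA * pB_given_A                      # product rule for (AB|X)
pAorB = pA + pB - pAB                      # sum rule for (A+B|X)
# consistency: the product rule recovers the joint entry,
# and "A or B" equals everything except the case where both are false
assert abs(pAB - p[(True, True)]) < 1e-12
assert abs(pAorB - (1.0 - p[(False, False)])) < 1e-12
```

The point of Cox’s Theorem is that, up to a rescaling of the plausibilities, these are essentially the only forms the universal functions can take.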
* My variant on this assumption is somewhat more general than that usually given.


** For example, van Horn.