Why Carrier is Right on Fine-Tuning

EDIT: This post has been edited to better reflect Luke Barnes’ position.

Recently, Richard Carrier has re-opened his public discussion with Luke Barnes on the fine-tuning argument, in a blog post (for Barnes’ reply, see his own blog post). Their discussion is quite tedious and personal, but at the heart of things I think Carrier is right: fine-tuning is evidence against God, for the reasons Carrier champions. I’ll explain.

I think Luke Barnes, who supports the fine-tuning argument, uses somewhat misleading terminology. What I take "fine-tuning" to be, and what I think most physicists take it to be, is the empirical finding that we live in a universe with laws of nature such that if the constants in these laws were altered slightly, the laws would describe a (different) universe in which life cannot evolve or survive. The constants in this sense seem to be "fine-tuned" to produce life. Let us call this fact about the laws of nature of the universe we actually live in "FT".

The fine-tuning argument is then the argument that this fact indicates that God exists. Now, in judging whether God exists or not, we are contrasting two hypotheses. It does atheists an injustice to say that they simply don't believe God exists; rather, they believe the world is natural. Barnes usefully provides a way to characterize this view: the atheist believes that all that exists is Lagrangian, meaning that it is described by one particular uniform, local set of laws of nature. Let us call this hypothesis "N", for "Natural". The question is then whether the data that fine-tuning holds (FT) supports the God hypothesis (G) or naturalism (N). The real question here is whether the data is more likely under G or under N.

I think Barnes confuses fine-tuning with a slightly but importantly different mathematical fact. This fact is that in the space of all possible natural (Lagrangian) universes, the ones bearing life are exceedingly rare. This is because life, as we know it, is a very complex phenomenon, and its evolution even more so. In order to "build" such a thing from the very simple, local, uniform building blocks that a Lagrangian provides, you need to get things just right. Thus, the Lagrangians that support life are very sparsely distributed among all possible Lagrangians, and even small deviations from them (changing one constant by a bit) will mean a universe that isn't life-bearing. (At least, that seems plausible; there is no way to actually calculate any of this.) Notice that this is a logical fact about the nature of Lagrangians (i.e. of hypothetical natural universes). It makes no sense to ask what the likelihood is that we will observe it, or what the likelihood of something is given it, just as it makes no sense to ask what the likelihood is that we will observe "1+1=2", or what the likelihood of something is "given" that "1+1=2" (since "1+1=2" is true regardless of what else we consider "given").

Now, what Carrier essentially argues is that we should be very careful to distinguish the finding that there is Life (let’s call it “L”) from the finding that there is fine-tuning (FT). He rightly claims, and Barnes agrees, that given that there is life in the universe, and given that naturalism holds, the probability that we will find fine-tuning is 1; P(FT|L,N)=1. This is because the few hypothetical Lagrangians that do support life are fine-tuned. In contrast, given that there is life and that the God hypothesis is true, the probability of fine-tuning is lower than 1, since God could have created life without fine-tuning; P(FT|L,G)<1. It follows from this that the evidence of fine-tuning supports atheism – the fact that we find ourselves in a fine-tuned universe lowers the probability of God. I think in this Carrier is right.

Theists in contrast often argue that if we just consider fine-tuning on its own, then it is more likely under theism than under atheism. This is because the probability of fine-tuning given atheism is very low, since the probability of life under atheism is very low, since most Lagrangians don’t support life; P(FT|N) is low. In contrast, the probability of fine-tuning under God is supposedly fairly high, since God wants to create life and he might as well do it with uniform laws; so P(FT|G) is high since P(L|G)=1.

This argument is problematic in that fine-tuning is a separate fact from life, and is only relevant in those universes that have life. So we can't just write P(FT|N); to maintain clarity, we have to take each piece of data on its own and write P(FT|L,N), and similarly P(FT|L,G). And Carrier is still right: the new data FT is certain under naturalism, so that P(FT,L|N) = P(FT|L,N) x P(L|N) = P(L|N), whereas it is less than certain under the God hypothesis, so that P(FT,L|G) = P(FT|L,G) x P(L|G) < P(L|G). The new information FT therefore actually lowers the probability of the God hypothesis.
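To see the structure of this point numerically, here is a minimal Bayesian sketch in Python. The specific numbers (the prior on G after conditioning on L, and the value assumed for P(FT|L,G)) are made up purely for illustration; only the direction of the update matters.

```python
# A minimal numerical sketch of the Bayesian point above.
# The numbers are illustrative assumptions, not claims from
# physics or theology; only the structure of the update matters.

def posterior(prior_g, lik_g, lik_n):
    """Posterior probability of G after evidence with likelihood lik_g
    under G and lik_n under N (where N gets prior 1 - prior_g)."""
    prior_n = 1.0 - prior_g
    return (prior_g * lik_g) / (prior_g * lik_g + prior_n * lik_n)

# Some prior for G after conditioning on L (life exists);
# the exact value is arbitrary for this illustration.
p_g_given_l = 0.5

# Now condition on FT as well:
#   P(FT | L, N) = 1   (every life-bearing Lagrangian is fine-tuned)
#   P(FT | L, G) < 1   (God could have made life without fine-tuning)
p_ft_given_l_n = 1.0
p_ft_given_l_g = 0.7  # any value strictly below 1 gives the same direction

p_g_given_l_ft = posterior(p_g_given_l, p_ft_given_l_g, p_ft_given_l_n)
print(p_g_given_l, "->", round(p_g_given_l_ft, 3))  # 0.5 -> ~0.412: FT lowers P(G)
```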

Now, this isn't quite Barnes' argument. Barnes instead essentially argues that since life is rare within the space of all natural (Lagrangian) universes, the fact that we find it in our universe indicates that the process that chose which Lagrangian to instantiate was highly biased towards choosing life-bearing Lagrangians. Implicitly, of course, this implies that God chose which Lagrangian to instantiate.

Note that this amounts to what I will call the "argument from life", namely that life is much more likely under God than under naturalism: P(L|G)=1, whereas P(L|N) is very small. And this is exactly where the more usual theist argument leaves us. Having established that Carrier is right that the finding that our universe is fine-tuned (FT, given L) supports N, we are still left with the question of whether L does. So how can the atheist reply to the argument from life? He has two replies.

First, one can note that the specific God hypothesis the theist is working with is already carefully selected to fit the data that there is life. There are lots of other gods we can think of that wouldn't create life. So the fact that P(L|G)=1 isn't really saying much. To be fair, we should really consider all possible gods, and while there is no way to calculate the likelihood of life under that hypothesis, it stands to reason that it would be very low too. Or, if we decide to limit ourselves to just the life-permitting gods, then we might as well limit ourselves to just the life-permitting natural universes, in which case P(L|N)=1 too.

In Barnes' variant of the argument, this amounts to saying that even though a natural Lagrangian-selecting process that chooses a life-bearing Lagrangian seems unlikely, a divine one that does so is also unlikely. (Technically, the atheist here doesn't accept that there is a process that chooses the Lagrangian; rather, there simply is a particular Lagrangian. So what Barnes really shows is that if you believe in naturalism, then life is surprising (assuming that Lagrangian space is indeed as sparse as is assumed); and the atheist replies that it's surprising if you believe in God, too.)

This objection is closely related to the fact that one can't really do rational probabilistic analysis unless one knows beforehand how to divide up the landscape of possibilities. The answers you get from a probabilistic analysis, especially one involving infinities such as the values of the constants in the Lagrangians or the possible types of deities, will depend on how you divide up the infinite range of possibilities. This is part of the reason why I said above that we can't really calculate how common life-bearing naturalistic universes are among all naturalistic universes.
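A toy example of this measure problem: suppose (purely hypothetically) that a single constant c is "life-permitting" only in a narrow window, and ask how probable that window is. The answer depends entirely on which measure we put over the range of c, and physics does not tell us which one to use. The numbers below are invented for illustration.

```python
# A toy illustration of the measure problem: the "probability of a
# life-permitting constant" depends on the (arbitrary) choice of measure.

import math

lo, hi = 1e-3, 1e3         # hypothetical allowed range of the constant c
window = (0.1, 0.2)        # hypothetical "life-permitting" window

# Measure 1: uniform in c
p_uniform = (window[1] - window[0]) / (hi - lo)

# Measure 2: uniform in log(c)
p_log = (math.log(window[1]) - math.log(window[0])) / (math.log(hi) - math.log(lo))

print(f"uniform in c:      {p_uniform:.2e}")   # ~1e-4
print(f"uniform in log(c): {p_log:.2e}")       # ~5e-2, hundreds of times larger
```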

Secondly, one can object to Barnes' (or the more usual) argument by rejecting Barnes' characterization of naturalism. I said above that he effectively defines it as maintaining that there is one Lagrangian, implying that there is one uniform, local, simple set of laws of nature. But quantum physics seems to suggest otherwise. A big part of our current understanding of the laws of nature involves the idea that some of the "constants" in our laws didn't start out that way, but rather had a range of possible values and "froze" at the values we see (this is called "spontaneous symmetry breaking"). This occurs in a quantum theory, and one of the leading interpretations of quantum theory (the leading one in quantum cosmology, I think), the Many Worlds Interpretation, holds that whenever there are multiple possibilities, all of them are realized, each in a separate parallel universe. Thus, instead of reality consisting of the laws of nature we have in our universe, with their current values of the constants, contemporary physics suggests that reality actually consists of a multiverse with numerous parallel universes, each with its own "constants" of nature.

This is hardly well-established science; it's just an interpretation of current science (although one I tend to believe, for reasons unrelated to the fine-tuning argument). If one adopts something like this view, then one is led to define naturalism not as the existence of one Lagrangian but rather of a plethora of Lagrangians describing parallel universes, perhaps even an infinite variety of all possible Lagrangians. In such a multiverse, the probability of there being a Lagrangian universe with life in it is 1; P(L|N)=1.

We have therefore reached the stage where both under naturalism (in the multiverse sense) and under theism the probability of life is 1; so the argument from life fails.

Now the question becomes: which is more likely, the multiverse or God? That's yet another argument to be had, but I'll simply note that I think the multiverse is strongly suggested by well-established physics, whereas God is a childish, anthropomorphic (in the "mind of a human", not "body of a human", sense), metaphysically incoherent (when the so-called "theologian's God" is meant), and, in short, highly unlikely hypothesis. At any rate, this question bears little relation to the question of whether fine-tuning implies that God exists, which, as I argued above, it does not.

Strong Emergence = Holistic Physics

In January (2015), Marko Vojinovic wrote a two-part attack on reductionism over at Scientia Salon (Part I, Part II). Based on his reasoning, I’d like to offer a new definition of strong emergence as “holistic physics”.  (Well, perhaps not that new; regardless…)

The idea is that any full description of the underlying-level dynamics must either itself refer to the emergent concept (strong emergence) or refer only to the lower-level concepts that the emergent concept reduces to (weak emergence).

Let’s consider a physical system. It is described at some level of description by a certain physical theory, let’s call it the effective theory. There is also a more detailed description, let’s call it the underlying theory, so that when the details of these underlying dynamics are summarized in a certain manner you get the effective theory. For example, the behavior of gas in a canister might be described by the ideal gas law (the effective theory), while this equation in turn can be derived from the equations of Newtonian mechanics (the underlying theory) that apply to each molecule.

For now, let's assume both the effective and the underlying theories work; they are not in error. We'll address errors in a moment.

If the underlying theory is mechanical, in the sense that it only discusses small parts interacting with other small parts (such as molecules interacting with other molecules), then we can say we have weak emergence: the "higher-level" behavior of the effective theory is reducible to the "lower-level" behavior of the parts. For example, we can define "pressure" as a concept in the effective theory, as a certain statistical property of the velocities and masses of the gas molecules. If the movement of the molecules can be described by an underlying mechanical theory, a theory that only takes into account the interactions of molecules with each other, then we can calculate everything in the lower theory and then "summarize" it the right way to see what the result of this calculation means in terms of "pressure". In this sense, talk of "pressure" has been reduced to talk of molecules.

If, however, the underlying theory is holistic, in the sense that the small parts it talks about also interact with quantities that are summaries of the small parts, i.e. with the concepts that the effective theory talks about, then we can say that we have strong emergence. For example, if the interaction of molecules in the underlying theory also refers to pressure (instead of just to other molecules), then pressure acts as a strongly emergent property: you cannot reduce talk about it to "lower levels", since the lower level already includes talk about it.
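Here is a schematic sketch of that distinction in toy Python; the "physics" (the force laws and the pressure-like summary) is invented purely for illustration. In the mechanical case each particle's update refers only to other particles, while in the holistic case the update rule explicitly refers to the effective-level summary.

```python
# Toy contrast between a mechanical and a holistic underlying theory.
# The dynamics is made up; only the structure of the update rules matters.

def summary_pressure(vs):
    # An effective-level concept, defined as a statistical summary
    # of the particle-level state.
    return sum(v * v for v in vs) / len(vs)

def step_mechanical(xs, vs, dt=0.01):
    # Weak emergence: each particle's force comes only from its
    # pairwise interactions with the other particles.
    forces = [sum(xj - xi for j, xj in enumerate(xs) if j != i)
              for i, xi in enumerate(xs)]
    vs = [v + dt * f for v, f in zip(vs, forces)]
    xs = [x + dt * v for x, v in zip(xs, vs)]
    return xs, vs

def step_holistic(xs, vs, dt=0.01):
    # Strong emergence ("holistic physics"): the particle-level update
    # rule itself refers to the emergent concept (the pressure summary).
    p = summary_pressure(vs)
    vs = [v + dt * (1.0 - p) * v for v in vs]
    xs = [x + dt * v for x, v in zip(xs, vs)]
    return xs, vs
```

In the first rule, anything defined as a summary (like pressure) can be computed after the fact; in the second, the summary is part of the dynamics itself and cannot be eliminated from the description.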

In the Real World

All indications are that physics is multiply mechanical: it is mechanical at various levels, not just the fundamental one. In other words, there is only weak emergence, but there is weak emergence at many levels: nuclei emerge from quarks; atoms emerge from nuclei and electrons; solids from atoms; and so on. In our investigations, we have never established a holistic scientific theory, a theory whose lower-level dynamics refers to higher-level entities. And we have, on numerous occasions, seen reductive success: we were able to calculate, from underlying theories, aspects of effective theories or even entire effective theories.

Now, in his original piece Marko argued for strong emergence by shifting the burden of proof onto those disputing it. But a mechanical theory is simpler (as he seems to agree), so it is more likely a priori; and reduction is empirically successful, so it is more likely a posteriori. (Reductionism has shown empirical success by deriving higher-level theories or aspects thereof, and by consistently finding that the underlying theories are mechanical.) Thus "weak emergence" is well established, and the burden of proof is now firmly on those wishing to overthrow it.

A Note On Errors

Why ignore errors? Because they are not philosophically interesting. If the effective theory is correct but the underlying one is wrong, then all we have is a mistaken underlying theory. If the small parts it talks about do exist, a correct description of their dynamics can always be given, and that constitutes the correct underlying theory (which, however, need not be mechanical!). If the small parts it talks of don't actually exist, then either some others exist and we'll settle for them, or else no small parts exist, in which case we can just call this "effective" theory the fundamental theory: a theory that has no underlying theory.

If the underlying theory is correct but the effective theory is wrong, then we have just miscalculated what the sums over the underlying theory say. It’s also possible we wrongly identified the summaries with concepts taken from other domains (e.g. that “pressure” as defined statistically is not what our pressure-gauge measures), but again this is not a very interesting question as all we need to do is to define properly what these new concepts are in order to see what the underlying theory says about them.

And finally, if both underlying theory and effective theory are wrong then we just have a mess of errors from which nothing much can be gleaned.

In all cases, the errors have nothing to do with emergence. Emergence relates to how things do behave, not to how things don’t behave.

A Note on the Original Argument

In Part I, Marko attacked reductionism with three examples. First, he noted that the Standard Model of cosmology cannot possibly be reduced to the Standard Model of particle physics, because the latter does not include any dark matter while the former does. While correct, this simply indicates that one model is mistaken: the reason that the Standard Model of particle physics does not yield the Standard Model of cosmology is that the Standard Model of particle physics is wrong! That is not an indication that the actual dynamics of the particles are determined by higher-level concepts, such as whether or not they are near a sun. One cannot conclude from an error in a model that the correct model will show strong emergence.

As his second example, Marko noted that the Standard Model of elementary particles with massless neutrinos fails to correspond to the standard model of the sun. While true, this merely indicates a failure of the Standard Model, which has since been corrected (neutrinos apparently have mass!). It has nothing to do with emergence, which is all about correct theories. The failure of the zero-mass Standard Model did indeed indicate that the effective sun-model did not reduce to it, but it did so in a philosophically boring way: it said nothing about whether the sun model reduces to the corrected Standard Model, or more generally about whether the sun model reduces to any underlying theory.

His third example is more interesting, in that he complains that one cannot explain the direction of time by appeal to the dynamical laws alone; one needs to make another assumption, one about the initial conditions. That's not an issue of errors, at least. But again, his true statement has no implication for emergence. The initial conditions are set at the underlying level, at the level of each and every particle. This state at the underlying level then leads to a certain phenomenon at the higher-level description, which we call the directionality of time (e.g. the increase of entropy with time). But that's just standard, weak, emergence. There is no indication that the dynamics of the particles refers to the arrow of time; the dynamics is always mechanistic, referring only to the particle-level description. Thus, not only is there no strong emergence here, there is an explicit case of weak emergence. Just as the sun (supposedly) emerges from a particular initial condition (a stellar gas cloud) in the corrected particle Standard Model, so that the sun-model is reduced to the Standard Model, so too does the arrow of time demonstrably emerge from a particular initial condition, so that the arrow of time actually is reduced to the dynamical laws. It's one example, out of many, of successful reduction.
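The point can be made with a deliberately crude toy model (not Marko's example, and not real mechanics): the microscopic rule below is time-symmetric (an unbiased random walk), yet a special low-entropy initial condition is enough to produce a coarse-grained entropy that only goes up, an "arrow of time" that weakly emerges from the particle-level description plus the initial condition.

```python
# A toy "arrow of time": time-symmetric particle-level dynamics plus a
# special (low-entropy) initial condition yields increasing coarse-grained
# entropy.  The model is invented for illustration only.

import math
import random

random.seed(0)
N, L, STEPS = 2000, 100, 2000

# Special initial condition: every particle starts in the left half of the box.
positions = [random.randint(0, L // 2 - 1) for _ in range(N)]

def coarse_entropy(positions):
    # Two-bin mixing entropy: how evenly the particles fill the two halves.
    left = sum(1 for x in positions if x < L // 2) / len(positions)
    return -sum(p * math.log(p) for p in (left, 1.0 - left) if p > 0)

for t in range(STEPS + 1):
    if t % 500 == 0:
        print(t, round(coarse_entropy(positions), 3))
    # Time-symmetric microscopic rule: each particle takes an unbiased step.
    positions = [min(L - 1, max(0, x + random.choice((-1, 1)))) for x in positions]
```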

In Part II, Marko maintains that

“given two sets of axioms, describing the effective and the [underlying] theory, one cannot simply claim that the effective theory a priori must be reducible to the [underlying] theory.”

I think Marko here mistakes the meta-scientific theory that says "in our world, there is only weak emergence", which follows from all of our science as well as from parsimony, for the logical claim that "reduction must hold as a metaphysical principle". I agree one cannot simply claim that the effective theory must be reducible, but one can claim a priori that it is more likely that there is one underlying mechanistic level (i.e. a "fundamental theory") and that all higher-level effects emerge from it, and one can claim a posteriori that weak emergence is overwhelmingly scientifically established.

Marko also raises a few other arguments in Part II, based on Gödel's theorem. He notes that there will always be true statements that one cannot prove from a given (underlying) theory (this stems from Gödel's theorem). While true, this again has no bearing on emergence. For one thing, we're discussing what's true here, not what is finitely provable. Secondly, there is no reason to expect that the unprovable statements will lead to holistic behavior of the particles described by the underlying theory, i.e. there is no reason to connect incompleteness to holism.

As his final argument, he notes that even if we accept an ultimate "theory of everything", there would be incalculable results from it. Again true, and again not relevant. In his example, he imagines there are six "gods" determined by this theory, and that their actions are incalculable. But if the "theory of everything" is a fundamental mechanistic theory, then the actions of these gods, and hence all of what occurs, are weakly emergent, even though they cannot be calculated. Whereas if the "theory of everything" refers to the overall brain-states of these gods (say), rather than just to the fundamental particles and so on, then the gods are strongly emergent phenomena. Whether there is weak or strong emergence has nothing to do with the incalculable nature of these "gods".

Reduction in Two Easy Steps

Over at his Scientia Salon, philosopher Massimo Pigliucci wrote a piece on the disunity of science, discussing favorably some arguments against a unified scientific view (favoring instead a fragmented worldview, where each domain is covered by its own theory). The discussion really revolves around reduction: are high-level domains, such as (say) economics, reducible to lower-level domains, such as (say) psychology? Ultimately, the question is whether fundamental physics is the "general science" that underlies everything and presents a single unified nature, with all other sciences (including other branches of physics) being just "special sciences" interested in sub-domains of this general science. On this view, all domains and disciplines reduce to physics. This is the Unity of Science view that Pigliucci seems opposed to.

I'm on the side of reduction. What are the arguments against it? Well, first off, let's clarify that no one is disputing "that all things in the universe are made of the same substance [e.g. quarks]" and that "moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc." Everyone agrees that this type of reduction, ontological reduction, is true. The arguments instead are aimed at theoretical reduction, which is roughly the ability to reduce high-level concepts and laws to lower-level ones. Putting arguments from authority aside, Pigliucci raises a few arguments against theoretical reduction:

(1) The Inductive Argument Against Reduction: “the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes [against reduction]”

However, this argument is based on the false premise that if reduction is true, then reductive foundations for a science would be easier to find than new high-level sciences. This premise simply does not follow from reduction. Instead, reduction entails that:

(A) As science progresses, more and more examples of the successful use of reduction will be developed. This prediction is borne out by things like the calculation of the proton's mass from fundamental particle physics, the identification of temperature with the molecules' average kinetic energy, the identification of (some) chemical bonds with quantum electron-sharing, and so on.

(B) As science progresses, no contradiction will be found between the predictions of the lower-level theories and the higher-level ones. For example, it won't be found that the proton should weigh X according to fundamental physics yet weighs Y in nuclear physics; it won't be found that a reaction should proceed at a certain rate according to physics yet proceeds in a different way according to chemistry. Clearly, the success of this prediction is manifest.

Thus the inductive argument against reduction is very wrong-headed, misunderstanding what reduction predicts and ignoring the real induction in its favor.

(2) What would reduction even look like?

Pigliucci further maintains that we reductivists are bluffing; we don’t really even know what reduction could possibly look like. “if one were to call up the epistemic bluff the physicists would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics.”

This is again false – we know in general terms how this reduction takes place (chemistry is the physics of how atoms bond into molecules and move; biology is the chemistry of how numerous bio-molecules react; psychology is the biology of how organisms feel and think; and so on). The only caveat here is that consciousness is somewhat problematic; the mind-body issue aside, however, the picture of how reduction proceeds is clear enough (even if vague and not at all actually achieved, of course) to make this objection moot.

(3) Cartwright’s disjointed theories

Supposing that all theories are only approximately true phenomenological descriptions (something most scientists would agree to), Pigliucci somehow concludes that therefore "science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application."

But the fact that some theories apply only in some cases does not imply that they are not part of a bigger theory that applies in all these cases. There is no case being made against reduction here: reduction is perfectly comfortable with having multiple phenomenological theories, as long as they all reduce to fundamental physics. It is even comfortable with there being an infinite series of "more fundamental" physics, as long as each theory reduces in turn to an even more fundamental theory.

What is Reduction?

I was prompted to write this post because one cannot leave long (or many) comments over at Scientia Salon. What I wanted to say there is what reduction actually is. Reduction, as it is meant by those who actually believe in it, is something like "Physics + Weak Emergence".

Reduction = Physics + Weak Emergence

By “Physics” I mean that what ultimately exists is described by fundamental physics – things like “atoms and void”, “quarks and leptons”, and so on.

By “Weak Emergence” I mean that high-level concepts are arbitrarily defined, and then used to analyze the lower-level descriptions. When this is done, it is revealed that the high-level phenomena that the high-level concepts describe actually exist. This is rather abstract, so consider a simple example: temperature in a gas canister. The gas molecules can be fully described at the low, microscopic level by things like the molecules’ position and velocity. “Temperature” is then defined to be their average kinetic energy. Doing the math, one can show from the microscopic state that the gas indeed has a certain temperature.
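Here is a minimal sketch of that calculation, with a made-up microscopic state (the mass, velocities, and sample size below are invented for illustration; the only real input is the standard kinetic-theory relation T = m<v^2>/(3k_B), which converts the average kinetic energy to kelvin).

```python
# A minimal sketch of the reduction just described: "temperature" is
# defined as a function of nothing but molecule-level quantities, then
# computed from a made-up microscopic state.

import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(mass, velocities):
    """Kinetic temperature of a monatomic gas: <KE> = (3/2) k_B T,
    so T = m <v^2> / (3 k_B), with v the 3D velocity of each molecule."""
    mean_v2 = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / len(velocities)
    return mass * mean_v2 / (3.0 * K_B)

# A fake microscopic state: helium-like atoms with toy random velocities.
random.seed(0)
m_he = 6.64e-27  # kg
vels = [(random.gauss(0, 1100), random.gauss(0, 1100), random.gauss(0, 1100))
        for _ in range(100_000)]
print(round(temperature(m_he, vels)))  # roughly 580 K for this toy distribution
```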

In this way the temperature is "reduced" to lower-level concepts like the molecules' speed and mass. But the concept of "temperature" was defined by us; it isn't to be found in the microscopic physics or state!

For this reason, Democritus said “Sweet exists by convention, bitter by convention, colour by convention; atoms and Void [alone] exist in reality”. The idea isn’t that temperature doesn’t exist in reality, however, but rather that we choose to consider nature in terms of temperature by arbitrary choice, by “convention”.