Reduction in Two Easy Steps

Over at his Scientia Salon, philosopher Massimo Pigliucci wrote a piece on the disunity of science, discussing favorably some arguments against a unified scientific view (favoring instead a fragmented worldview, where each domain is covered by its own theory). The discussion really revolves around reduction – are high-level domains, such as (say) economics, reducible to lower-level domains, such as (say) psychology? Ultimately, the question is whether fundamental physics is the “general science” that underlies everything and presents a single unified nature, with all other sciences (including other branches of physics) being just “special sciences” interested in sub-domains of this general science. All domains and disciplines therefore reduce to physics. This is the Unity of Science view that Pigliucci seems opposed to.

I’m on the side of reduction. What are the arguments against it? Well, first off let’s clarify that no one is disputing “that all things in the universe are made of the same substance [e.g. quarks]” and that “moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc.” Everyone agrees that this type of reduction, ontological reduction, is true. The arguments instead are aimed at theoretical reduction, which is roughly the ability to reduce high-level concepts and laws to lower-level ones. Putting arguments from authority to the side, Pigliucci raises a few arguments against theoretical reduction:

(1) The Inductive Argument Against Reduction: “the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes [against reduction]”

However, this argument is based on the false premise that, if reduction is true, reductive foundations for a science would be easier to find than new high-level sciences. That premise simply does not follow from reduction. Instead, reduction entails that:

(A) As science progresses more and more examples of successful use of reduction will be developed. This prediction is borne out by things like the calculation of the proton’s mass from fundamental particle physics, the identification of temperature with the molecules’ average kinetic energy, the identification of (some) chemical bonds with quantum electron-sharing, and so on.

(B) As science progresses, no contradiction will be found between the predictions of the lower-level theories and the higher-level ones. For example, it won’t be found that the proton should weigh X according to fundamental physics yet weighs Y in nuclear physics; it won’t be found that a reaction should proceed at a certain rate according to physics yet that it proceeds in a different way according to chemistry. Clearly, the success of this prediction is manifest.

Thus the inductive argument against reduction is very wrong-headed, misunderstanding what reduction predicts and ignoring the real induction in its favor.

(2) What would reduction even look like?

Pigliucci further maintains that we reductivists are bluffing; we don’t really even know what reduction could possibly look like. “if one were to call up the epistemic bluff the physicists would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics.”

This is again false – we know in general terms how this reduction takes place (chemistry is the physics of how atoms bond into molecules and move; biology is the chemistry of how numerous bio-molecules react; psychology is the biology of how organisms feel and think; and so on). The only caveat here is that consciousness is somewhat problematic; the mind-body issue aside, however, the picture of how reduction proceeds is clear enough (even if vague and not at all actually achieved, of course) to make this objection moot.

(3) Cartwright’s disjointed theories

Supposing that all theories are only approximately-true phenomenological descriptions (something most scientists would agree to), Pigliucci somehow concludes that therefore “science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.”

But the fact that some theories apply only in some cases does not imply that they are not part of a bigger theory that applies in all these cases. There is no case being made against reduction here – reduction is perfectly comfortable with having multiple phenomenological theories, as long as they all reduce to the fundamental physics. It is even comfortable with there being an infinite series of “more fundamental physics”, as long as each theory reduces in turn to an even-more fundamental theory.

What is Reduction?

I was prompted to write this post because one cannot post long or frequent comments over at Scientia Salon. What I wanted to say there concerns what reduction actually is. Reduction, as meant by those who actually believe in it, is something like “Physics + Weak Emergence”.

Reduction = Physics + Weak Emergence

By “Physics” I mean that what ultimately exists is described by fundamental physics – things like “atoms and void”, “quarks and leptons”, and so on.

By “Weak Emergence” I mean that high-level concepts are arbitrarily defined, and then used to analyze the lower-level descriptions. When this is done, it is revealed that the high-level phenomena that the high-level concepts describe actually exist. This is rather abstract, so consider a simple example: temperature in a gas canister. The gas molecules can be fully described at the low, microscopic level by things like the molecules’ position and velocity. “Temperature” is then defined to be their average kinetic energy. Doing the math, one can show from the microscopic state that the gas indeed has a certain temperature.

In this way the temperature is “reduced” to the lower-level concepts like the molecules’ speed and mass. But the concept of “temperature” was defined by us, it isn’t to be found in the microscopic physics or state!

For this reason, Democritus said “Sweet exists by convention, bitter by convention, colour by convention; atoms and Void [alone] exist in reality”. The idea isn’t that temperature doesn’t exist in reality, however, but rather that we choose to consider nature in terms of temperature by arbitrary choice, by “convention”.
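The temperature example above can be sketched in a few lines of code. This is a toy illustration of my own, assuming a monatomic ideal gas (where the average kinetic energy per molecule is (3/2)·k_B·T) and using made-up molecular masses and speeds:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(masses, speeds):
    """Deduce the high-level 'temperature' from the low-level state.

    For a monatomic ideal gas, <(1/2) m v^2> = (3/2) k_B T,
    so T = (2/3) * <kinetic energy> / k_B.
    """
    kinetic = [0.5 * m * v ** 2 for m, v in zip(masses, speeds)]
    return (2.0 / 3.0) * (sum(kinetic) / len(kinetic)) / K_B

# A made-up microscopic state: 1000 helium-like atoms (~6.6e-27 kg)
# with speeds scattered around 1300 m/s.
random.seed(0)
masses = [6.6e-27] * 1000
speeds = [random.gauss(1300, 200) for _ in masses]

# The microscopic description mentions no 'temperature', yet the
# temperature can be computed from it.
print(f"T = {temperature(masses, speeds):.0f} K")
```

Nothing in `masses` and `speeds` mentions temperature; the concept enters only through our definition, which is exactly the “convention” in Democritus’ sense.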

117 thoughts on “Reduction in Two Easy Steps”

  1. This is a bit of a chicken vs. egg argument. We know the complex chicken arises from the simple egg, and so must complex reality arise from some simple form. The problem is that the reductionistic, measurement-based methods used, which result in assuming physics is the egg, really only give us the bones of the chicken, the hard patterns left, when all the soft tissue of context and feedback, not to mention the inherent dynamics being reduced to static measures, are distilled away.
    For one thing, contextual reality is far more thermodynamic than temporally linear. Keep in mind that entropy is those energies seeking their own thermal equilibrium, not something imposed by our convention. Time also emerges as an effect of this activity: as form evolves, future becomes past. The linear narrative of sequence is in fact our imposition and only really apparent in hindsight.
    In fact, eastern philosophies, which are contextual, as western ones are object-oriented, view the past as being in front, since both are observed, and the future behind, since both are hidden. This is a contextual paradigm, rather than one in which the object moves against context and thus forward into the future.
    In this view, thermodynamics is more fundamental, because the observer is just one of the particles, not the temporal point of reference moving through its context, from one event to the next.
    So it is not just atoms and the void, it’s nodes and the network. If we look at it from the left side of our brain, the nodes seem to predominate and if we look at it from the right side of our brain, the network seems to predominate.

  2. Reply to David Ottlinger (since I can’t comment on SS):

    “You (and Coel) are equivocating on whether you mean logical or causal entailment. I see it happening moment to moment. “Reductionists maintain that high-level concepts are manifested because of the underlying dynamics” Causal. “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.” Logical.”

    OK, now I admit to being puzzled. The two statements there seem to me to be equivalent. Can you explain the distinction between those two?

  3. Hi Coel, I was thinking of posting this when I saw your comment above. It may be relevant. Roughly causal=ontological, logical=epistemic. See if this helps.

    My guess is that one mistake that the reductionists commenting at SciSal were making was to confuse reality with our theories about it. That’s forgivable, given that our only contact with the microscopic world is through theory. We believe that the world is made of myriads of parts of relatively few kinds that combine to make somewhat fewer parts of rather more kinds, and so on up to macroscopic bodies. We believe that the basic parts have simple interactions with one another and it’s these interactions that fix the behaviour of the more complex parts and ultimately the macroscopic bodies. And that’s the reductionist’s intuition.

    We think we can account for the dynamics of the basic parts and their interactions through differential equations involving space, time, and a small number of quantifiable elementary properties like mass, charge, spin, etc. It’s with this theory that the trouble starts. We can’t always show that the properties and behaviour manifested at higher levels are fixed by the lower-level interactions. The former should be logical consequences of our theory of the latter. So we are obliged to offer mathematical proofs. In some cases we can’t do this at the moment—we lack the mathematical techniques. An alternative is to calculate what the equations say happens in specific circumstances. This is partial and second best but can be convincing. The discussion at SciSal got sidetracked by the word ‘simulation’, which I’d prefer not to use. All we are doing is calculating the solution functions of the equations at discrete points rather than deriving analytic expressions which we would subsequently need to evaluate.

    Unfortunately, in some cases the known methods of doing these calculations on the available computers just take far too long to be practicable. Another issue is that just to get to a mathematical system amenable to numerical solution we sometimes have to make approximations in the theory: omit terms we think are small, etc. Just to get to the pendulum equation we have to set sin(x)≅x. Sometimes we can’t assess the error that this introduces. Hence we lack proof or demonstration that the basic level accounts completely for the higher level. This is one of the ways in which theoretical reduction can fail. I suspect it explains why it’s believed that the theory of chemistry has not been reduced to the theory of physics. No doubt completely accounting for biological phenomena in terms of chemistry is a much greater problem than even this. And when we get to the human sciences—those that have to take into account the mind/brain—well…

    I see this as an epistemic issue—we just don’t know for sure that these reductions go through—and this provides a small space on which the philosophical thesis of the disunity of the sciences can be built. A philosopher can erect a skyscraper on the tiniest of plots. I think we have to own up to these gaps. But to emphasise the gaps, or worse, talk about fundamental disunity, is to ignore the overwhelming consilience of the sciences. They fit together beautifully, as you illustrate in paragraph (B).

    On the other hand, even if we could overcome all these problems of proving reductive completeness, we would still want to do the sciences in their own languages, because that is the way we understand them. This is just to recognise our cognitive limitations, I think. So the sciences will continue to look and feel rather different. No one is suggesting that ‘out there’ there must be extra ‘effects’ to complete these gaps. Rather the problem is ‘in here’, with the means we have at our disposal to grasp reality. It’s an epistemic thing.
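The pendulum remark can be made concrete. The following toy sketch (my own, not anything from the discussion) integrates the full equation θ″ = −(g/L)·sin θ at discrete points and compares the result with the analytic solution of the linearised equation obtained by setting sin(x)≅x; the approximation error is negligible for small swings but grows large for big ones:

```python
import math

def simulate(theta0, steps=100_000, dt=1e-4, g=9.81, L=1.0):
    """Integrate theta'' = -(g/L) sin(theta), starting from rest at
    theta0, with a simple semi-implicit Euler scheme."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
    return theta

def small_angle(theta0, t, g=9.81, L=1.0):
    """Analytic solution of the linearised equation theta'' = -(g/L) theta."""
    return theta0 * math.cos(math.sqrt(g / L) * t)

t_end = 100_000 * 1e-4  # 10 seconds of simulated time
for theta0 in (0.05, 1.0):  # initial angles in radians
    err = abs(simulate(theta0) - small_angle(theta0, t_end))
    print(f"theta0 = {theta0}: |full - small-angle| = {err:.4f} rad")
```

For the small initial angle the two answers stay close after ten seconds; for the large one they drift far apart, because the period error introduced by sin(x)≅x accumulates over time, exactly the kind of error that is hard to assess in advance.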

    • Hi David,
      The claim of reductionism always has been and is one of “in principle”. I agree that:

      — if we know the lower-level only approximately then we can go wrong in predicting the higher levels.

      — even if we were to know the lower-level exactly (and we never can know that we do) then predicting the higher level may be totally impractical and thus unachievable in practice.

      — even if we got round the above two, the resulting reductionist account may be unwieldy and close to useless, and higher-level heuristics might be much more user-friendly and useful.

      But, none of the above negate the basic claim of reductionism (in the supervenience sense, that the higher-level properties are the product of the lower level).

      • “But, none of the above negate the basic claim of reductionism (in the supervenience sense, that the higher-level properties are the product of the lower level).”

        Yes, I agree. Because ‘the above’ are epistemic issues concerned with our abilities to work out the logical consequences of our ideas, or to apply them to noisy data, or to make sense of a mountain of statements. Whereas the supervenience claim is an act of metaphysical faith about nature. Chalk and cheese. The fact that, despite some failures there have been many, many successes in upward explanation from physics to chemistry and thence to biology, leads us to keep the faith, both in realism and supervenience.

    • “But to emphasise the gaps, or worse, talk about fundamental disunity is to ignore the overwhelming consilience of the sciences. They fit together beautifully”

      Amen brother. Well put.

  4. First off, my last comment on SciSal took on your example of the gas canister, so here it is again:
    Panpsychist,

    “Consider a canister of gas.” I thought we were doing cheetahs but ok.
    “The physical, microscopic, description does not talk about temperature, and that concept doesn’t logically follow from it.” Well yes it does in the relevant way. Temperature is one of the few things that does seem to reduce neatly (I’m not an expert though, I can’t be sure). You can define temperatures in terms of energy states. So we can identify being 40 degrees with being in a certain thermodynamic state. That is reduction. Any instance of temperature description can be exchanged for completely physical description according to rules. Given these rules, “bridge laws” in the philosophical vernacular, being in thermodynamic state A *logically* entails being at temperature T. Just like given F=ma, a body’s having force of 1 newton and an acceleration of 1m/s *logically* entails its weighing one kilogram (if I got my units right). I think you have the notion that reductionism requires that the ordinary language concepts must become embedded in the physics or that in running a simulation ordinary language concepts would somehow appear in the simulation. Not at all. What matters is we have rules of replacement so we can reduce one kind of description to the other.

    “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.”
    Well what you seem to have in mind by “calculation” is computing one kind of description by bridge laws, where you put, say, mental description in and get physical description out; that is philosophical reductionism pure and simple. Anti-reductionists, like me, think we have reasons to think such calculations impossible. Aravis quite correctly pointed out you haven’t addressed any of them.

    On Aravis “pinning” views on you:
    When you say that natural kind terms like “tooth” and even “beauty” are definable in purely physical terms you are literally giving the definition of type reductionism. There just is no ambiguity on this point. I’m sorry to be blunt but if you continue to maintain that you are not a type reductionist or maintain a supervenience view, you are simply proving you do not know what these words mean. Aravis understood you perfectly well. You (and Coel) are equivocating on whether you mean logical or causal entailment. I see it happening moment to moment. “Reductionists maintain that high-level concepts are manifested because of the underlying dynamics” Causal. “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.” Logical. The frustration sets in because you, Coel, DM and the rest come in and declare that philosophers are using terms arbitrarily, mistakenly or just in unhelpful ways when it is exactly the philosophers who have spent great time and energy trying to frame and answer these questions. Many scientists seem to reflexively think they have the answers merely because they are scientists, yet in discussion many of the scientists here make basic errors that philosophy diagnosed some time ago. They demand that we should suddenly reframe the debate in their improvised framework. I am going to choose the carefully articulated and examined framework every time. One of the traditional roles for philosophy is teaching people they know less than they thought. I certainly see a need for that here.

    Coel,
    You didn’t really get a chance to respond; if you want to comment on Panpsychist’s blog I will see it there. Otherwise I’m sure Groundhog Day will come again. 🙂

  5. Coel,

    “Can you explain the distinction between those two?”
    Sure. The physical properties entail, let’s say, the mental properties causally insofar as two physically indistinguishable systems would likewise be mentally indistinguishable. For instance two brains which were completely physically indistinguishable would have the same beliefs, desires, thoughts etc. Interestingly causal entailment is consistent with the idea that we could have a complete physical description of a system but that description would tell us nothing about that system’s mental properties. Reductionists claim that a complete physical description would logically imply the entire mental description as well. This is the further claim of logical entailment. In this way all description can be reduced to physical description according to rules. Anti-reductionists deny that such reduction is possible even in principle based on a number of arguments (multiple realizability and irreducibility of normativity and teleology among them).

    This is where you generally bring up simulation. I argued across my posts at SciSal that doesn’t get you there.

    • Hi David,

      “For instance two brains which were completely physically indistinguishable would have the same beliefs, desires, thoughts etc. Interestingly causal entailment is consistent with the idea that we could have a complete physical description of a system but that description would tell us nothing about that system’s mental properties.”

      I would assert that if that first sentence held, then the beliefs and thoughts are (in principle) computable from the physical description. Therefore, the physical description would indeed “tell us” (= give us all the information we need to deduce) the mental properties.

      “Reductionists claim that a complete physical description would logically imply the entire mental description as well. This is the further claim of logical entailment.”

      I can see that one can distinguish the two concepts (causal entailment and logical entailment), but I don’t see how you could have the former without the latter.

      “In this way all description can be reduced to physical description according to rules.”

      “Reduced to” physical description in the sense that the higher-level description can be deduced from the lower level description. That does not mean that the high-level description is part of the low-level description.

      E.g. A low-level description could proceed:
      Starling 1 is at location (coordinates 1)
      Starling 2 is at location (coordinates 2)
      Starling 3 is …

      Starling 5000 …

      The high-level description would be: There is a flock of starlings in that field. That high-level description is not part of the low-level description, but it can be deduced from the low-level description.

      “Anti-reductionists deny that such reduction is possible even in principle based on a number of arguments (multiple realizability and irreducibility of normativity and teleology among them).”

      I’m still baffled why multiple realisability is supposed to be an argument against reduction (of the sort that I’m advocating). The high-level descriptions *always* entail a loss of information. Thus you can go from that list of starling locations to the high-level statement about the flock. But you cannot go from the high-level statement to the low-level statement. The high-level description is multiply realisable from any number of different low-level configurations of starlings.

      This seems to me obvious, for any conception of reductionism, so why do people consider it a refutation of (some sorts of?) reductionism?
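The starling example can be put in code. The “flock” predicate below is an arbitrary illustrative bridge rule of my own (at least 50 birds, all within 100 m of their centroid), not a real ornithological definition; the point is that the low-to-high map is many-to-one, so it cannot be inverted:

```python
def is_flock(positions, min_birds=50, radius=100.0):
    """A toy bridge rule: a 'flock' is at least min_birds birds,
    all within `radius` of their centroid."""
    if len(positions) < min_birds:
        return False
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return all((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
               for x, y in positions)

# Two *different* low-level states (different coordinate lists)...
state_a = [(i % 10, i // 10) for i in range(100)]          # a 10x10 grid
state_b = [(50 + i % 5, 50 + i // 5) for i in range(100)]  # a 5x20 block

# ...realise the *same* high-level fact: "there is a flock".
print(is_flock(state_a), is_flock(state_b))  # True True
```

Going from either state to “flock” loses information: from the single value `True` there is no way to recover which list of coordinates produced it, which is just the multiple realisability described above.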

      “This is where you generally bring up simulation. I argued across my posts at SciSal that doesn’t get you there.”

      I would have replied, had it not been for the comment limits. But, since I can do so here:

      “However it would not, allowing that it is a *physical* simulation, describe an animal, a cheetah, a mammal, a predator, anything beautiful, savage or frightening. It would describe the cheetah only in so far as it is a physical thing. Biological, aesthetic, and many other phenomena will be missing from the simulation.”

      I’d disagree! The biological, aesthetic, and other stuff are indeed fully there in the simulation. I don’t see why they would not be.

      If I write a simulation of starlings, listing 5000 of them and their locations, then the “flock” is indeed right there in the simulation. If you looked at a visualisation of the simulation, there would be the flock.

      • Coel,
        Ok great. I’m not going to go through the arguments against reductionism, just because I’m tired and I don’t have to. But I do want to make a few points. When you say: “I would assert that if that first sentence held, then the beliefs and thoughts are (in principle) computable from the physical description. Therefore, the physical description would indeed “tell us” (= give us all the information we need to deduce) the mental properties.” you have identified yourself as a reductionist in the *philosophical* sense. When Massimo, Aravis or I argue against reductionism, we are arguing against the reductionism you yourself hold. You may think they are good arguments, you may think they are terrible arguments, but they are arguments about your view. If you think we are not arguing against your view, you have misunderstood us, because what we are arguing against is exactly what you yourself just wrote a few inches above what you are now (hopefully) reading. When you say “I can see that one can distinguish the two concepts (causal entailment and logical entailment), but I don’t see how you could have the former without the latter” you have identified yourself as *not* having a mere supervenience or emergence view, because supervenience and emergence views just are those views which claim that lower-level descriptions causally but not logically entail higher-level ones.

        I really hope that in future you own your view as a reductionist and see that whatever the merits or demerits of the views put forward they are not aimed at straw men.

  6. I think Aravis confused the issue a little by using “ontological reductionism” differently than Massimo. But I think I actually agree with what he’s trying to say.

    The type/token thing confuses me a bit, and I’d like to hear people’s opinions of my take.

    A token is what some people call an “instance” (I think “instance” is a better term). In low-level theories like particle theories, the objects of the theory are essentially considered fungible or “identical”, because a particle is defined as a separate thing based on how it behaves. So two electrons are fungible in that we would never expect them to act any differently in the same situation as any other.

    In higher-level theories, we deal with processes that are less and less fungible as the complexity of these aggregate objects increases. Anxiety and pancreases are never physically identical, and we’d never expect instances of them to act identically.

    What this means is that higher-level theories are really theories of types. And what Aravis calls “special-science laws” are really “special-science strong tendencies”.

    None of this affects supervenience in any way, but it does affect our ability to reduce the theories of the special sciences to theories at lower levels.

  7. Tokens are indeed individual instances of general types. There are sentence types like “The cat sat on the mat.” Here are two tokens of that type: “The cat sat on the mat.” “The cat sat on the mat.” The tokens are the things that sit between the quotation marks (physical marks on your computer screen). The type is the general sentence both express. There are all kinds of types of course but some of the important ones are natural kind types like “Cheetah” or “liver” or “claw” etc. The tokens are then individual cheetahs, livers, and claws. Now supervenience views often hold that, for instance, all mental events are token identical to some physical event (it’s more complicated than that now but I’m going to ignore that). So my (token) belief that I am typing is identical to some physical aspect of me (presumably something to do with my brain). *Type* reduction holds that further, the type “believing that I am typing” reduces to some physical description, again presumably brain states. So when Panpsychist says he thinks “tooth” and “Cheetah” etc can be defined in physical terms he is avowing type reductionism. There are reasons to think such reduction impossible.

  8. Coel,

    “I can see that one can distinguish the two concepts (causal entailment and logical entailment), but I don’t see how you could have the former without the latter.”

    Strictly speaking ‘entailment’ is a logical concept. By ‘causal entailment’ I guess you mean the idea that effect follows cause in the material world. In contrast ‘logical entailment’ is a relation between (sets of) sentences. So in a world without minds, language, and sentences, say our world before Hom. Sap. appeared on the scene, there can be causation without logical entailment.

    This is close to the distinction I tried to bring out above, between stuff and our ideas about stuff.

  9. David,

    I’m wondering if it’s helpful here to bring in notions of reduction from the philosophy of mind? I think it was Aravis who started down this track. Massimo’s post was about theory reduction. I think we can usefully discuss this by restricting ourselves to theories outside the human sciences. So we leave out psychology, sociology, economics, etc, and concentrate on theories about non mindful entities. I think he would apply his thesis of disunity to this subset. I suggest this wholly in the interest of keeping things as uncomplicated as possible.

  10. David,
    If you like you can take out the words “mental” and “physical” and replace them with “chemical” and “physical” or “biological” and “chemical” or “economic” and “psychological” or even “economic” and “physical” because it all runs the exact same way for the points I was making.

  11. David,

    Hmmm. It’s just that when you say,

    “So when Panpsychist says he thinks “tooth” and “Cheetah” etc can be defined in physical terms he is avowing type reductionism. There are reasons to think such reduction impossible.”

    you seem to be saying that physical things can’t be defined in terms of physical things because this amounts to type reduction, and this is thought to be impossible, and I start to wonder if something has gone wrong. That physical things can’t be defined in terms of physical things seems a pretty radical thesis to me. Engineers do it all the time. Or is it that ‘tooth’ and ‘cheetah’ denote biological things and we are back with theory reduction?

  12. David,
    “That physical things can’t be defined in terms of physical things seems a pretty radical thesis to me. Engineers do it all the time.”
    Come now. You don’t think I’m denying engineering, do you? Yes, physics can describe cheetahs. But when it describes them, it describes them in the sense that they are no different from the proverbial bucket of crap. If you drop a bucket of crap or a cheetah off a tall building, both fall, etc. But it doesn’t seem to describe them as cheetahs. A biological description would: it might give its genus, species, habitat, adaptations, etc. The question is whether all that information, vital to biology, can be reduced to physical description. That would be an example of successful type reductionism. It’s also hard to even imagine what it would look like, or why we suppose it to exist. The trouble is cheetahs aren’t just physical things; they are biological things, chemical things, aesthetic things, etc.

    Here are a bunch of types I doubt an army of engineers can define in purely physical terms: “Cheetah”, “Hemoglobin”, “Table”, “Politician”.

  13. David O: when you say “physical terms” you seem to mean “in terms of physics” (rather than biology, chemistry, etc.). Maybe that’s causing confusion? Biology and chemistry are both physical sciences which use physical descriptions.

  14. The more I think about it, the more the type/token distinction seems to me to cloud the issue. In terms of theories, “type” is just a way of capturing the fact that two processes don’t need to be identical in order to be categorized together.

    A description of a particular brain experiencing anxiety is possible (theoretically) at the level of physics, but it’s not a description of anxiety. And a description of all the possible configurations of processes that we’d call anxiety wouldn’t tell us anything about anxiety, because it would not point to what it is about those processes that would lead us to call them all “anxiety”. Higher-level theories are, in a sense, reductive, because they ignore a huge number of physical differences between two things that we’d say are of the same “type”.

  15. Asher,
    “A description of a particular brain experiencing anxiety is possible (theoretically) at the level of physics, but it’s not a description of anxiety. And a description of all the possible configurations of processes that we’d call anxiety wouldn’t tell us anything about anxiety, because it would not point to what it is about those processes that would lead us to call them all “anxiety”.”
    I agree but reductionists typically deny this.

  16. Hi all, thanks for clarifying things considerably. I concede that I’ve been misunderstanding Type Reduction. But I still fail to see why you would hold that reduction entails all the things you claim it does.

    “I think you have the notion that reductionism requires that the ordinary language concepts must become embedded in the physics or that in running a simulation ordinary language concepts would somehow appear in the simulation. Not at all. What matters is we have rules of replacement so we can reduce one kind of description to the other.”

    I can subscribe to this sort of type reduction. The problem is that you then seem to pile onto it theses that are not part of it, and are indeed contrary to it. For example,

    “You (and Coel) are equivocating on whether you mean logical or causal entailment. I see it happening moment to moment. “Reductionists maintain that high-level concepts are manifested because of the underlying dynamics” Causal. “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.” Logical. ”

    Why should I commit to one sort of entailment rather than another? I will use the above terminology, so that reduction is achieved by the low-level theory + bridge rules. Assuming the low-level underlying theory indeed holds, it follows both that the low-level description is “complete” in the sense that it contains the information needed to calculate the higher-level concepts in accordance with the bridge rules and that the higher-level concepts actually exist (manifest) because the underlying dynamics yields their existence when analyzed in this way. The gas’ temperature actually is x because the molecules’ mean kinetic energy is x, and the microscopic state description is “complete” in the sense that it contains all the information needed to calculate the temperature.
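    That bridge rule can even be sketched numerically. Below is a toy illustration (assuming an ideal monatomic gas; the particle masses and velocities are invented for the example) of how the “complete” microscopic state yields the higher-level quantity:

```python
# Toy bridge rule: compute temperature from a microscopic state description.
# Assumes an ideal monatomic gas, where <E_kinetic> = (3/2) * k_B * T.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(masses, velocities):
    """Higher-level 'temperature' calculated from per-particle data."""
    kinetic = [0.5 * m * (vx * vx + vy * vy + vz * vz)
               for m, (vx, vy, vz) in zip(masses, velocities)]
    mean_ke = sum(kinetic) / len(kinetic)
    return (2.0 / 3.0) * mean_ke / k_B

# The "complete" low-level description: masses (kg) and velocities (m/s).
m = [6.63e-26] * 3  # three argon atoms
v = [(400.0, 0.0, 0.0), (0.0, -350.0, 0.0), (0.0, 0.0, 420.0)]
T = temperature(m, v)  # the higher-level concept, read off via the bridge rule
```

    The microscopic state contains no variable called “temperature”; the bridge rule is what licenses calling the computed number by that name.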

    Another example:

    “Reductionists claim that a complete physical description would logically imply the entire [biological] description as well. … In this way all description can be reduced to physical description according to rules. Anti-reductionist deny that such reduction is possible even in principle based on a number of arguments (multiple realizability and irreducability of normativity and teleology among them).” [changed mental to biological, as per discussion above]

    Reductionists claim that a complete physical description plus the bridge rules would logically imply the entire biological description. I will assume this is what you meant, as it fits better with the second sentence. But note that this then perfectly allows for multiple realizability – provided the bridge rules are such that they can be fulfilled in several ways. If, for example, we define “computing” as we do, in terms of relations of states, then the same computation can be realized in multiple ways; in other words, the bridge rules would imply that very different systems can be doing the same computation.
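    That point can be made concrete with a toy example (the example and function names are mine, invented for illustration): two internally very different procedures count as the same computation, because the bridge rule identifies a computation by its input–output relation rather than by its internal realization.

```python
# Two different realizations of the same computation: sorting a list.
# The "bridge rule" identifies the computation by its input/output
# relation, so both realizations count as doing the same thing.

def bubble_sort(xs):
    """Realization 1: repeated pairwise swaps."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def counting_sort(xs):
    """Realization 2: tallying values (small non-negative ints only)."""
    out = []
    for v in range(max(xs) + 1):
        out.extend([v] * xs.count(v))
    return out

data = [3, 1, 2, 1, 0]
same = bubble_sort(data) == counting_sort(data)  # same computation, twice
```

    Internally the two procedures share almost nothing; it is only at the level of the defining relation that they are “the same”.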

    (I’m not concerned about arguments from normativity or teleology (or for that matter intentionality), since I don’t think such things really exist. There is only pseudo-normativity, pseudo-teleology, pseudo-intentionality, and so on – and I don’t think it’s difficult to fit those within a reductive framework.)

    Now another point,

    “Yes physics can describe Cheetahs. But when it describes them, it describes them in the sense that they are no different from the proverbial bucket of crap. If you drop a bucket of crap or a Cheetah off a tall building both fall etc. But it doesn’t seem to describe them as Cheetahs….The trouble is Cheetahs arent just physical things they biological things, chemical things, aesthetic things etc.”

    “A description of a particular brain experiencing anxiety is possible (theoretically) at the level of physics, but it’s not a description of anxiety. And a description of all the possible configurations of processes that we’d call anxiety wouldn’t tell us anything about anxiety, because it would not point to what it is about those processes that would lead us to call them all “anxiety”.”

    Are we arguing about Essentialism here? There is an Essence to being a cheetah, that physics can’t capture? I’d hope we’ve moved past this. There is no “elan vital”, we just choose to call some collections of molecules “alive”.

    In the same way, I would maintain that there is nothing like “as Cheetahs”; there are just things we choose to call cheetahs. We generally describe them in biological terms, but are free to add aesthetic descriptions and so on as we like; these are just arbitrary definitions – better yet, arbitrary concepts – we choose to use to carve up the world.

    A low-level description of all possible configurations of particles that we’ll call “cheetah” (or “anxiety”) won’t be what we mean by a description of a cheetah (or anxiety). It would merely be what cashing out our description in physical terms implies. The rules of replacement allow us to replace one description with another, but the meaning of the concept remains at the higher level!
    Cashing out in terms of microscopic configurations all the states we call “temperature” won’t really tell us anything about what temperature is; it will just be a mess. Temperature would still be the higher-level concept, not the underlying conditions that manifest it. If you cash a high-level concept out into physical descriptions, you won’t be left with anything informative – with anything that says why these very complex physical conditions are “cheetahs” (say). To see why these definitions are meaningful or useful, and to understand what they mean, you need higher levels of analysis – that’s why we do higher levels of analysis, because looking at the lower level is extremely unenlightening.

    • “Are we arguing about Essentialism here?”

      No – except in the mundane sense that a theoretical description of anxiety must ignore a lot of the physical details that a physics-level description needs, in order to classify two instances of anxiety as the same thing. You could call that keeping what is “essential” about the physical processes with respect to anxiety, but it’s not big-E Essentialism.

    • “Why should I commit to one sort of entailment rather than another?”
      Because they are *different* views. Surely you would agree that specifying your views without ambiguity is a good thing. Also, I’ve argued (I’m not going to go back over it just now) that it is exactly this ambiguity which allows you to assert reductionist stances at some times and take them back at others. That is a vicious equivocation.

      “Are we arguing about Essentialism here? There is an Essence to being a cheetah, that physics can’t capture?”
      Of course not, just the regular information of biology that reductionism can’t capture.
      “In the same way, I would maintain that there is nothing like “as Cheetahs”; there are just things we choose to call cheetahs.”
      The term “Cheetah” may be a human construct and in some sense conventional but by no means arbitrary. We are talking about a lot of science here, like all of field biology.
      “There is only pseudo-normativity, pseudo-teleology, pseudo-intentionality, and so on ”
      To be honest, I find it hard to take this view seriously. If faced with giving up reductionism or giving up the idea that I have the thought that I am sitting at a computer, I’m going to give up reductionism every time. Why would I be more committed to the former than the latter?

    • By the way – what you’re saying sounds a lot like a “causal completeness” view of physicalism, which says that there’s nothing causal happening at the higher level that isn’t captured by the lower level. We can’t see the high-order properties of temperature from a molecular description, but there are no extra causal “forces” (or whatever) at the higher level beyond the local causes in the lower-level description.

      Massimo called this idea into doubt by saying that causality “disappears” at the level of fundamental physics, but I don’t know whether this presents a problem when looked at from the standpoint of a “complete” computational model. If it doesn’t present a problem, it’s just a matter of recognizing that what we call “causes” are just “patterns of behavior” that are just as computable. If it *does* present a problem, then causality itself is a higher-level property like temperature.

      In any case, a “causal completeness” view is really a supervenience view that denies strong emergence. This is the kind of view that people like Terrence Deacon seem to hold (and Coel as well).

      If all that is true, then saying that this kind of view “implies type-physicalism” is probably a misrepresentation. Aravis seems to be saying that people are *calling* themselves Supervenience Physicalists, but their actual views are those of Type Physicalists.

  17. Any sort of reduction means editing out something and that something will be part of a feedback loop somewhere else. You can extract a linear function out of inherently cyclical processes, just like you can draw a straight line on the surface of the planet.
    Just don’t take it too far.

  18. David,

    Knowledge of genus, species, habitat, adaptations, etc, all contribute to our knowledge of cheetahs. This is very broad, and will mostly be expressed in non-technical English. One can imagine an ever-expanding Wikipedia page, including ‘famous cheetahs in captivity’ no doubt. I agree that translating this into talk of cells, say, except where appropriate—cheetahs may have muscle cell adaptations for sprinting—let alone molecules, again except where appropriate—they may have variant haemoglobin for improved oxygen transport—is out of the question. Some of it, not all, forms part of the biological theory of cheetahs. Some of this, again not all, will form my personal concept of the Cheetah (I’m not sure about the notion of a public concept of the beast). Lastly, and most narrowly, some of us will want to say that there must be information, presumably encoded in physical form in our heads, and almost certainly not readily expressible in ordinary language, that enables us to recognise a cheetah in front of us, with nil conscious cogitation. All the material written and spoken about cheetahs goes by the board, though some of it may have contributed to our getting into a state whereby we can recognise a cheetah. Our intuition is that it’s the physics of light, the chemistry of photoreceptors, the electro-chemistry of neurons, and so on, that’s relevant here. I think it’s fair to say that this is a reductionistic view, but it’s not at all clear to me under what philosophical heading we should put it.

  19. David Ottlinger,

    ““Why should I commit to one sort of entailment rather than another?”
    Because they are *different* views. Surely you would agree that specifying your views without ambiguity is a good thing.”

    But trying to shoehorn a view into a dichotomy that doesn’t fit it only confuses things. Reduction is about applying the concepts to the underlying level; this has aspects of logical entailment and aspects of causal entailment.

    * The temperature of the gas logically/mathematically follows from its microscopic state.
    * The temperature of the gas is caused by the underlying microscopic state.
    * The macroscopic temperature, felt by a thermostat, causes the later temperature to appear.
    * The microscopic temperature causes the macroscopic temperature of the thermostat which causes the appearance of the later temperature.
    * The concept of temperature doesn’t logically follow from the microscopic description.
    * The information needed to calculate temperature is already present in the microscopic description, and in this sense is “in it” already.
    and so on

    There are all kinds of complicated causal and logical entailments all following from the same model of reduction. Focusing on these entailments only obscures the model of reduction that lies behind them. Instead, we should focus on justifying or attacking the reductive model itself – that the higher-level concepts are defined in terms of lower-level concepts.

    “The term “Cheetah” may be a human construct and in some sense conventional but by no means arbitrary. We are talking about a lot of science here, like all of field biology.”

    ‘Arbitrary’ is a bit of hyperbole here. So is ‘convention’. The point is to communicate that the concepts we use are up to us. Of course we have good reasons for choosing these concepts; this is not in dispute.

    “If faced with giving up reductionism or giving up the idea that I have the thought that I am sitting at a computer I’m going to give up reductionism every time. Why would i be more committed to the former than the latter?”

    This is not the choice you have to make. The choice is in how to interpret the idea you have that you are sitting at a computer. In this I follow Tyler Durden’s dictum,

    “You are not special. You are not a beautiful or unique snowflake. You’re the same decaying organic matter as everything else.”

    A thermostat can be described as “aiming” to keep a certain temperature. It clearly doesn’t “really” have teleology, but it can be described in these terms. It has pseudo-teleology. Well, we are not special. We are machines just like the thermostat. Our “teleology” may be much more complicated, but we ultimately have the same pseudo-teleology. To say otherwise is to say that we are above the laws of physics. It is not only incredibly vain, but also in contradiction to the laws of physics and against Occam’s razor.

    The same goes for normativity and intentionality.

    • A few closing thoughts:
      “But trying to shoehorn a view into a dichotomy that doesn’t fit it only confuses things.”
      This is not an artificial dichotomy, it’s an actual conceptual distinction with real consequences for what your view entails. I’ve already argued that point though, so I’m not going to go back over it again.

      “Our “teleology” may be much more complicated, but we ultimately have the same pseudo-teleology. To say otherwise, is to say that we are above the laws of physics.”
      This assertion begs the question. Why should we suppose “real” teleology violates the laws of physics? (I imagine you would say because we have to be reductionist-that’s where the question is begged.)

      With that I am off. I expect I will see you at SciSal.

      • “This is not an artificial dichotomy, it’s an actual conceptual distinction with real consequences for what your view entails.”

        Sorry, I just don’t see how it applies.

        “This assertion begs the question. Why should we suppose “real” teleology violates the laws of physics? (I imagine you would say because we have to be reductionist-that’s where the question is begged.)”

        I don’t think one needs to get into the topic of reduction to show that “real” teleology would violate the laws of physics. But perhaps I’m wrong. At any rate, this is a whole different discussion. 🙂 Suffice it to say that since I don’t consider “real” teleology to be compatible with the laws of physics, I don’t consider it plausible; and that’s on top of the complexity and vanity involved. I hope you can see why those would be good reasons not to accept teleology, even if you don’t agree that these reasons hold.

        “With that I am off. I expect I will see you at SciSal.”

        Occasionally. 🙂 Thanks for stopping by, I found the conversation enlightening.

  20. “You’re the same decaying organic matter as everything else.”
    Form recedes into the past, while energy proceeds into the future.
    Simplicity, like the egg, is not an initial state, but part of an eternal cycle of creation and dissolution.

  21. Asher,

    “Massimo called this idea into doubt by saying that causality “disappears” at the level of fundamental physics”

    Yeah, I’m with you on this – Massimo is right that causality isn’t part of fundamental physics, but this actually doesn’t affect the validity of the thesis of causal closure.

    “If all that is true, then saying that this kind of view “implies type-physicalism” is probably a misrepresentation.”

    This I now tend to disagree with. The fact that the high-level concepts have no causal powers not inherent in the lower-level dynamics, which I’m absolutely behind, doesn’t change the fact that the higher-level concepts are defined in terms of lower-level concepts. Temperature is affected only by lower-level dynamics, but it is also defined in terms of microscopic velocity distributions; it exhibits both causal completeness/closure and type reduction.

    Of course, this type reduction doesn’t imply the extra theses it is accused of, such as committing to only one type of entailment, or not supporting multiple realizability, and so on.

  22. Hi David,

    “If you think think we are not arguing against your view, you have misunderstood us, because what we are arguing against is exactly what you yourself just wrote a few inches above what you are now (hopefully) reading.”

    Well perhaps, then, it would be useful for you to explain why Aravis’s arguments are actually arguments against anything I’ve said. Specifically, I’ll hold to:

    “Reductionists maintain that high-level concepts are manifested because of the underlying dynamics” And:
    “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.”

    Now, the bird-by-bird description of a flock of starlings does not contain the concept “flock”, but it does allow one to calculate everything about that flock.

    Aravis’s main arguments seem to be:

    1) Many different low-level descriptions could lead to an identical higher-level description (“there is a flock of starlings in that field”), and hence multiple realisability. I agree entirely. But I don’t see why that is incompatible with either of those two statements above.

    2) Type Reduction, as I understand it (and please correct me if I’m wrong), requires every concept present in the higher-level description to be present in the lower-level description. This is violated in the above example, since the high-level concept “flock” is not part of the low-level description. I agree that it isn’t. But, again, I don’t see how that is incompatible with either of the above two statements.

    The claim is not that the concept “flock” is present in the lower level, the point is that the lower level gives all the information necessary to reproduce the higher-level behaviour. Thus, one could write a computer program in which the only entities that the programmer encoded were starlings. The output of that program, however, would manifest “flock”.
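    A minimal sketch of such a program (the rules and parameters here are invented for illustration; real starling dynamics are far richer): each simulated bird reacts only to its nearby neighbours, yet the population drifts into the clustered pattern an observer would call a “flock”.

```python
import random

# Bird-level rules only: each bird nudges toward the average position of
# the birds near it. No "flock" entity is encoded anywhere in the program.
random.seed(0)
birds = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def step(birds, radius=40.0, pull=0.05):
    new = []
    for (x, y) in birds:
        near = [(bx, by) for (bx, by) in birds
                if (bx - x) ** 2 + (by - y) ** 2 < radius ** 2]
        cx = sum(b[0] for b in near) / len(near)
        cy = sum(b[1] for b in near) / len(near)
        new.append((x + pull * (cx - x), y + pull * (cy - y)))
    return new

def spread(birds):
    """A crude higher-level observable: the overall extent of the birds."""
    xs = [b[0] for b in birds]
    ys = [b[1] for b in birds]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

before = spread(birds)
for _ in range(50):
    birds = step(birds)
after = spread(birds)  # the birds have drawn together: "flock" manifests
```

    The program never mentions “flock”; the clustering is only visible once an observer applies that higher-level concept to the output.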

    • Coel,
      “Well perhaps, then, it would be useful for you to explain why Aravis’s arguments are actually arguments against anything I’ve said.”
      Probably, but to be honest I’m not so inclined at the moment. I will join Aravis in suggesting that you take a look at the SEP, or maybe look at Nagel or Carnap, or get a primer on philosophy of science. Then you’d be able to join the conversation without these stumbling blocks.

      Last thought:
      “Type Reduction, as I understand it (and please correct me if I’m wrong), requires every concept present in the higher-level description to be present in the lower-level description.”
      Well, not quite. It does commit one to saying that uses of higher-order concepts are in principle *replaceable* without loss of information. But this reduced description would not actually make use of higher-order concepts.

      I’m sure we will bump in to each other on SciSal in the future. Much left to discuss.

  23. Hi All,

    “A description of a particular brain experiencing anxiety is possible (theoretically) at the level of physics, but it’s not a description of anxiety. …”

    Yes, in a sense, it is a description of anxiety, or rather of one instance of “anxiety”. In the same way a bird-by-bird list of starlings is a description of a “flock”. That description is not presented in terms of the concept “flock”, but it would allow you to reproduce a flock.

    “And a description of all the possible configurations of processes that we’d call anxiety wouldn’t tell us anything about anxiety, because it would not point to what it is about those processes that would lead us to call them all “anxiety”.”

    I’d disagree. That set of configurations that amount to “anxiety” would tell you a great deal about “anxiety”. You could produce a rather good delineation of the set of manifestations of anxiety. But if you want to know “what it is” about them “that would lead *us* to *call* them all “anxiety””, then you’d also need the low-level description of “us” and our motivations for calling it “anxiety”. Thus, you’ve only given me a low-level description of one part of the system.

    • Perhaps my phrasing was poor. Or perhaps I’m just wrong.

      If you’re describing a flock at the starling level – and keeping in mind that there is nothing causal happening at the level of the flock – you will see that each starling is reacting (causally) to other starlings within a certain neighborhood. And in fact, a computer simulation’s causal mechanism would almost certainly involve starlings reacting to their immediate neighbors. So you do see patterns of causal interactions that are relevant to “flockness”. But the causality is “starling causality”. It’s not the most fundamental level.

      Now describe each starling at the level of cells. Now describe each cell at the level of molecules. Etc., Etc., down to whatever level you’re calling fundamental. At some point, “flock” isn’t discernible in the description. There are a bunch of very, very similar things that look entirely different at higher levels of description.

      “I’d disagree. That set of configurations that amount to “anxiety” would tell you a great deal about “anxiety”. You could produce a rather good delineation of the set of manifestations of anxiety. But, if you want to know “what it is” about them “that would lead *us* to *call* them all “anxiety”.”, then you’d also need the low-level description of “us” and our motivations for calling it “anxiety”. Thus, you’ve only given me a low-level description of one part of the system.”

      I’d like to add to that that the set of manifestations of anxiety isn’t the concept of anxiety. If you want to know what “anxiety” is, you can’t just read that off from that set. You might be able to abstract a useful concept of anxiety from the set, but that would be an abstraction – the set isn’t the concept, it is what cashing out the concept implies.

      Too much is being made of the idea that one can “replace” speech at a high level with speech at a lower level; while technically true, this does not change the fact that the concept is at a higher level!

      We can define “triangle” as the shape formed by the intersection of three straight lines; I trust all would agree this is a case of reduction. The set of all possible triangles is not this definition, however; the set is “cashing out” the definition. We can replace all talk of “triangle” with talk of “shape formed by three lines”, but this changes nothing – the “shape formed by three lines” is still a high-level concept, not a concept at the lower level (which contains concepts like “line” or “dot”). If we look at the low-level description – at a list of all the line-triplets in that set of all triangles, for example – we won’t see triangles. We’ll see a mess. We could not logically derive or deduce what a triangle is from that list; but we could analyze it for patterns and abstract concepts out of it, such as the concept of a triangle. And that’s what reduction is all about – finding the patterns that carve up nature well, that reduce a mountain of data (all lines in the set) to simple descriptions (“ah, that’s just the set of all possible triangles”).

  24. Guys,

    “Causal entailment” is not a philosophical term of art. If you Google it with site:plato.stanford.edu you won’t get any hits, and an unconstrained search produces only 2,600 hits. If you simply mean causation, please say so. Sorry to bang on about this, but part of the problem is that we are drowning in misunderstood terminology.

  25. Hi all. Just arrived here, via SS. I’ve read only some of the comments above, so sorry if I’ve missed anything significant.

    I partly agree with what David Ottlinger has been saying to Coel, though I would put things rather differently. I for one don’t find the talk of “types” and “tokens” helpful. I hope Coel will find my approach (largely eschewing philosophical terms) more acceptable!

    Coel, let’s take your starling/flock case. You can’t deduce the proposition about the flock purely from the listed propositions about starlings, because those propositions don’t mention the word “flock”. You would need in addition some definition or “bridge theory” to provide the relationship between starlings and flocks. When you yourself go from the starling-facts to the flock-fact, you are employing your human, non-formulaic judgement and linguistic competence to jump from one to the other. Presumably a specific definition or rule could be formulated which would provide a rough mapping from arrangements of birds to flocks. But this is a relatively straightforward case, and even this case isn’t as easy as it might sound.

    Now consider a putative complete simulation of the world at the atomic level. Assume, just for the sake of argument, that the simulation is complete enough to perfectly recreate (in simulation) the behaviour of every atom in the world. The simulated atoms in a simulated wet person would behave just like the real atoms. We could say (in a sense) that the simulated man had simulated wetness and simulated mental states. In that sense the simulation would be complete. But, if you looked at the data in the computer, you would only see variables representing atoms (and other low-level properties). You wouldn’t see any variables representing wetness, desires, etc. For the computer to produce a macroscopic statement about the simulation (e.g. “the man is wet and wants to dry himself”), you would need additional software to translate from low-level language to high-level language. And such software could not be based on formulaic rules or “bridge theories”. To be able to produce all the high-level descriptions a human could, the software would require an advanced AI capable of making human-like, non-formulaic, fuzzy judgements.

    In short, I would agree with David that complete bridge theories are not possible, and for some things (like mental states) they’re not possible at all. Unlike David, I don’t think multiple-realizability per se is a problem. You could (in the absence of other problems) have a many-to-one mapping from low-level to high-level properties. The problem, as I see it, is that different models (and their associated concepts) developed for different uses, and there’s no particular reason why they should map neatly onto each other. Daniel Dennett divides our talk about the world into three types (he calls them “stances”): physical, design and intentional. The last of these includes our talk about beliefs and desires. Translation is problematic enough even for many concepts within the physical stance. But the three stances constitute such different types of language (different types of modelling) that I’m not sure we can even talk about translating between them. (The AI I described above would be not so much translating as making fresh judgements about the world.)

    I hope that’s not too unclear. It’s always difficult to put these things into words.

    • “For the computer to produce a macroscopic statement about the simulation (e.g. “the man is wet and wants to dry himself”), you would need additional software to translate from low-level language to high-level language. And such software could not be based on formulaic rules or “bridge theories”. To be able to produce all the high-level descriptions a human could, the software would require an advanced AI capable of making human-like, non-formulaic, fuzzy judgements.”

      I don’t think fuzzy thought is an issue here. It is perfectly fine for a concept to be fuzzy, or to be defined in non-linguistic ways (e.g. as a pattern in an artificial neural network), and so on. The question is whether the concept is about the lower level, so that “wetness” is about lower-level qualities in the simulation, or whether…, well, whether it’s more complicated than that.

      I wish someone could give a demonstrably irreducible concept. Not an argument from incredulity, but a clear example. Temperature and triangle have been given as good examples of reductive concepts; what are good examples of irreducible concepts?

      • Hi Panpsychist,

        Thanks for replying. Please excuse me if I stick to addressing Coel for now. The first thing is to clarify what we mean, so we can stop talking at cross-purposes. Since you and Coel may not mean the same thing, I’d rather address just one of you at a time.

        “The question is whether the concept is about the lower level…”

        Is that the question? Various different questions and claims are being raised, and it’s far from clear what they mean and whether they’re equivalent.

    • Hi Richard,
      I agree with your analysis. As I see it, if you want to produce a higher-level description in terms of higher-level concepts such as “flock”, then you do need to specify the meaning of the concepts such as “flock”. Those concepts are not part of the lower-level description, and thus you need to specify them.

      However, if you have defined the higher-level concept, then: the lower-level simulation can then be seen to manifest the higher-level concept, and the lower-level simulation can tell you everything about the behaviour of the higher-level concept.

      Thus, once we’ve defined the concept “flock”, there is nothing about the flock-level behaviour that cannot be reproduced by a simulation solely of bird-level behaviour.

      If the complaint is that one cannot deduce the *definition* of the concept “flock” from the lower-level description that does not contain that definition, then I agree that you cannot.
      Cheers, Coel.

  26. P.S. Just to clarify, Coel. Much of my previous comment was arguing against the possibility of bridge theories. But I don’t think you’ve even invoked bridge theories to get from low-level descriptions to high-level ones. You seem to think we can deduce high-level descriptions from low-level descriptions alone. It’s clearly not possible strictly to deduce high-level descriptions from low-level ones alone, because high-level descriptions employ concepts that aren’t in the low-level ones, and you can’t deduce a conclusion about X from premises that don’t mention X. I suppose it’s easy to miss this, because we often use the word “deduce” rather informally, and allow ourselves the luxury of implicitly bringing something more to the table than was in the premises. Here you’ve made use of implicit background knowledge as to what constitutes a flock. If you tried to state this knowledge in the form of a formula, it would be an attempt at a bridge theory. Whether we talk about a bridge theory or not, you certainly need something more than the starling premises to get you to the flock conclusion.

    In the flock example you’ve explicitly used the words “deduce” and “description”. Such talk about deducing descriptions goes beyond the supervenience thesis. Supervenience only says: no high-level difference without a low-level difference. To claim that we can deduce high-level descriptions from low-level ones goes much further. (I accept the former but not the latter.)

    When I put it in terms of strict deduction of descriptions, it may feel to you that I’m saying something stronger than you wanted to claim. Indeed, most of the time you haven’t mentioned “deduction” and “descriptions”. Much of your language has been ambiguous, and I think that through the use of ambiguous language you’ve slid from the modest supervenience thesis into expressing some sort of theoretical reductionism, without realizing it.

    • Hi Richard,
      I think we may be clarifying where people disagree.

      I agree that we cannot arrive at *definitions* of high-level concepts from the low-level descriptions alone.

      I assert that we can arrive at *descriptions* of high-level concepts, given (1) complete low-level description, plus (2) *definitions* of the terms used in the high-level description.

      Thus, if I were to take a bird-level simulation of starlings, and add to that a definition of the term “flock” — (let’s say, region of space bounded by surfaces where the bird/volume density falls by a factor 3, or whatever) — then with that combination I can then tell you everything about flock behaviour.
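      A minimal sketch of that scheme (all names and numbers here are illustrative stand-ins, not anything from the discussion): the low-level description is just a list of bird positions, and the supplied definition of “flock” is a toy grid-cell density rule.

      ```python
      # Toy sketch: (1) a complete low-level description (a list of bird
      # positions), plus (2) a supplied *definition* of "flock": here the
      # illustrative rule "grid cells whose bird count exceeds 3x the mean".
      from collections import Counter

      def flock_cells(bird_positions, cell_size=10.0, factor=3.0):
          """Deduce a high-level 'flock' description from bird positions,
          given the density-threshold definition as an input."""
          cells = Counter((int(x // cell_size), int(y // cell_size))
                          for x, y in bird_positions)
          mean_count = sum(cells.values()) / len(cells)
          # The definition is an input: the low-level data never mentions "flock".
          return {cell for cell, n in cells.items() if n > factor * mean_count}

      # Low-level description: scattered birds plus one dense cluster.
      birds = [(float(i * 17 % 100), float(i * 31 % 100)) for i in range(20)]
      birds += [(52.0 + 0.1 * i, 52.0 + 0.1 * i) for i in range(30)]  # cluster
      print(flock_cells(birds))  # the dense cell counts as a "flock": {(5, 5)}
      ```

      The point of the sketch is only that the word “flock” enters through the definition supplied as an input, never through the position data itself.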

      But that’s just saying we need to define the terms we use in the high-level description.

      What the low-level description alone could do is give you an output in which the bird/volume density varied significantly.

      Whether you then wish to put a name to those density enhancements is up to you. Whether you then wish to neglect the low-level, bird-by-bird description and describe things instead in terms of the high-level concept that you’ve just defined is then a matter of pragmatics and utility.
      Cheers, Coel.

  27. Hi Coel,

    Thanks for your reply.

    You wrote: I assert that we can arrive at *descriptions* of high-level concepts, given (1) complete low-level description, plus (2) *definitions* of the terms used in the high-level description.

    OK. So you are definitely talking about deriving descriptions. “Arrive at” is rather vague. Are you talking about strict deduction? Unless you say otherwise, I will take you to be making this claim:

    C1. High-level descriptions can in principle be strictly deduced from a combination of low-level descriptions and definitions.

    I will take a definition to be any formula (any set of premises) that can be used in conjunction with low-level descriptions to deduce high-level descriptions. In this context, it’s usually called a “bridge theory”.

    As I wrote above, I don’t accept that bridge theories are generally possible. In some relatively simple cases we may come up with approximations, such as an approximate definition of a “flock”. But in other cases they’re not possible at all. In fact, even the flock case is much more problematic than you seem to think.

    (let’s say, region of space bounded by surfaces where the bird/volume density falls by a factor 3, or whatever)

    That’s a very poor definition of a flock, and will give you results that differ significantly from ordinary language use. A flock is not just a random collection of birds occupying a volume of space. Imagine two flocks passing through each other. They don’t become a single flock as they pass through the same volume of space. Ordinary words generally do not lend themselves to precise formulaic definitions. That’s why dictionary definitions are rarely precise. If you impose a simplistic precise definition onto a complex imprecise concept, then you’ve changed the concept.

    But this is small beer compared with the problems you have when you try to link more distant concepts. How are you possibly going to come up with general definitions that get you from arrangements of atoms to tables, chairs and people, let alone beliefs? As I wrote earlier, I think the idea of a definition or bridge theory from physical descriptions to mental descriptions is particularly hopeless, because our physical descriptions and our mental descriptions are based on radically different types of modelling. But don’t take my word for it. Try coming up with the outline of some definitions that would get you from a set of atomic descriptions (or even higher-level physical descriptions) to this mental description: “Richard is a moral anti-realist”.

    I would also recommend that you step back and ask why you even want to make this sort of claim. Isn’t it enough to make the supervenience claim? Perhaps the problem is that you don’t think the supervenience claim is clear enough. Would it help to put it this way: the way things are at high levels is fixed by the way things are at the lowest level. Why do you need to supplement this with a useless claim about a supposed hypothetical (but practically impossible) way of deriving higher-level descriptions?

    • Hi Richard,

      “I will take you to be making this claim: C1. High-level descriptions can in principle be strictly deduced from a combination of low-level descriptions and definitions.”

      Yes, that’s my claim. But note how it is phrased. The definitions of the high-level concepts are *inputs* into my scheme. We construct and define high-level concepts if we find them useful. Note that I am not saying that you can arrive at *definitions* of high-level concepts from the low-level description.

      “In some relatively simple cases we may come up with approximations, such as an approximate definition of a “flock”.”

      Hold on, the *definition* of the high-level concept is not something I need to “come up with”. I explicitly take that as an input. *If* I have such a definition, and a complete low-level description, *then* I can arrive at high-level *description*.

      “That’s a very poor definition of a flock …”

      Agreed, it is. I just used it as an example of how, if *given* that definition (and given also a complete bird-level description), I could then calculate a complete description of defined-that-way-flock behaviour.

      “… and will give you results that differ significantly from ordinary language use.”

      My above claim said little about the “definitions” that I would input into my scheme. In particular, I did not demand that the definition be neat or mathematical or precise. The definition could instead be a family resemblance argument about “things English-speakers tend to call a flock”.

      Of course that’s a woolly definition, and thus applying that in my scheme might give woolly results. But that’s a result of the woolliness of the high-level concept, and is thus a “feature”, not a “bug” of the reductionist scheme I claimed (C1 above).

      “How are you possibly going to come up with general definitions that get you from arrangements of atoms to tables, chairs and people, let alone beliefs?”

      Not my problem! 🙂 My only claim is that *given* a definition of high-level Concept C, one can deduce descriptions of defined-that-way Concept C.

      I would agree entirely that many high-level concepts are fuzzy!

      [By the way, this seems to me an example of how, whenever I say something about reductionism on SS, it gets interpreted as a stronger idea than I’ve actually stated, and I’m immediately asked to defend all sorts of ideas beyond my actual claims.

      That’s not meant as a criticism of you, since you’ve been clear and explicit, and by doing that it has helped me understand where the criticisms are coming from. But it shows that even a sympathetic reader can misinterpret my intent, and thus shows how much misunderstanding there might be with an unsympathetic reader such as Aravis or David Ottlinger.]

      • Hi again, Coel.

        “Note that I am not saying that you can arrive at *definitions* of high-level concepts from the low-level description.”

        OK.

        "The definition could instead be a family resemblance argument about “things English-speakers tend to call a flock”… Of course that’s a woolly definition…"

        It’s not even a proper definition. You could give just the same sort of pseudo-definition for any word: “X is what English speakers mean by X”. What you need for deduction is a formula that enables you to calculate whether a given arrangement of birds constitutes a flock. I guess you could make the formula “fuzzy” by making it probabilistic or pseudo-random. But, as I’ve said, I’m not denying that in this simple case you could come up with a formula that would approximate to normal language use.

        “Not my problem! 🙂 My only claim is that *given* a definition of high-level Concept C, one can deduce descriptions of defined-that-way Concept C.”

        It is your problem if the definitions you need are not possible in principle, which is what I’m arguing for the more difficult cases. I don’t think I made that clear enough. If I’m right, you might as well be saying “Given an odd number divisible by 2…”

        I want you to take a critical look at your assumption that such definitions and deductions are possible in principle. I’m suggesting that you think carefully about what form they might take, paying particular attention to the physical–>mental case. When you do that, you might start to see the problems, and start to question your assumption.

        Perhaps you think such deductions must be possible in principle for the supervenience thesis to be true. That’s not the case.

        Anyway, even if you still don’t agree with me, I hope I’ve helped you to state your position more clearly the next time the subject comes up. I’ll leave it there.

        All the best,
        Richard.

    • Hi Richard,

      ” What you need for deduction is a formula that enables you to calculate whether a given arrangement of birds constitutes a flock.”

      Well no, I don’t see that I need any restriction on the definition of these high-level concepts at all. My stance is, “if you want to know about X, then you tell me what X is, and my low-level simulation will tell you about X”.

      If the definition of X is vague and woolly, then any replies I give will be necessarily vague and woolly. If the definition of X is too vague or nonsensical, then my reply will be “of course I can’t make sense of that, but that’s a problem with your definition of X, not with my concept of reductionism”.

      Of course one could also say that if one really wants to make sense of X and understand X, then one needs to understand how it is implemented at the lower level, and for that one needs a specific enough account. But I don’t see that I need a mathematical or formulaic account of X; a family resemblance account of X might be fine. In that case no single lower-level implementation will be possible.

      “It is your problem if the definitions you need are not possible in principle, …”

      But it’s not me who needs the definitions! My claim C1 was all about the idea that if someone wants to know about *their* concept X, then they define X for me, and the low-level simulation will then reproduce and describe X.

      “Anyway, even if you still don’t agree with me, I hope I’ve helped you to state your position more clearly the next time the subject comes up.”

      Yes, it’s certainly helped me understand how people interpret these claims — thanks, very helpful. Obviously I need to write a whole SS article on this!

      • “My claim C1 was all about the idea that if someone wants to know about *their* concept X, then they define X for me, and the low-level simulation will then reproduce and describe X.”

        It never occurred to me anyone would interpret C1 that way.

      • Hi again, Coel. After a good night’s sleep, I couldn’t resist coming back for another go, as I feel we may be getting closer to understanding each other, and it would be satisfying if we could finally do so. If you’re still willing, please read on.

        First, would it be correct to say that you only mean to take the supervenience position (roughly, the way things are at higher levels is fixed by the way things are at the lowest level), and you’re just engaging in an additional way of talking about this position?

        “My claim C1 was all about the idea that if someone wants to know about *their* concept X, then they define X for me, and the low-level simulation will then reproduce and describe X.”

        The problem I have with this is that we are talking about ordinary concepts, like “flock”, “belief”, etc. So it doesn’t make sense to talk about “their” concept. These are everyone’s concepts, yours as well as your interlocutor’s. (No doubt there are slight variations from person to person, but we can ignore such slight variations here.) So let me replace your word “their” with the word “the”.

        Now, suppose your interlocutor wants to know about beliefs. He wants you to program the atomic-level simulation to describe people’s beliefs. However, he can’t give you a definition of beliefs in terms of atoms, so you say you can’t do it. But now suppose that–as I’m arguing–no one can possibly define beliefs in terms of atoms. It’s impossible in principle. Then you would have to say that the simulation cannot possibly be given a definition that would enable it to describe people’s beliefs. Moreover, I say that in general macroscopic concepts cannot be defined in terms of atoms. What’s the point in you saying that the simulation can give macroscopic descriptions, provided someone gives you definitions, if no such definitions are possible in principle? (True, you didn’t mention macroscopic descriptions, but surely that was what you had in mind.)

      • P.S. Further clarification.

        Earlier I claimed that a suitable AI could (in principle) get from atomic-level descriptions to macroscopic ones. So I’m not saying that such a move is impossible in principle. What I’m saying is that such a move cannot be made by a process of deduction using definitions (or bridge theories). It would have to be a different sort of algorithm. I suspect this is the crucial point on which we differ. You think the move from atomic-level descriptions to macroscopic ones can be made by deduction using definitions, and I say it requires a different sort of algorithm.

    • Hi Richard,

      “First, would it be correct to say that you only mean to take the supervenience position …”

      Yes, I think so. It seems (from trying to make sense of all the discussions on SS) that when physicists say “reductionism” they mean what philosophers call “supervenience”. Supervenience is not a term used by scientists, and coming from a scientific perspective it is natural for me to use the term “reductionism”.

      “So it doesn’t make sense to talk about “their” concept. These are everyone’s concepts, yours as well as your interlocutor’s.”

      OK, agreed. And in terms of high-level concepts the world is often fuzzy and messy. That’s just the way the world is. I don’t see that as a flaw in the supervenience/reductionism perspective. (No-one is claiming that you can write down a complete account of the “causes of World War One” in a page of mathematical equations.)

      “He wants you to program the atomic-level simulation to describe people’s beliefs. However, he can’t give you a definition of beliefs in terms of atoms, so you say you can’t do it.”

      But, I would not require the definition “in terms of atoms”. The definition can be in any terms. Or, at least, in any terms that are manifest in my simulation.

      Let’s presume I have a perfect atom-level simulation of a human. We would agree that that simulation would now manifest all high-level properties and behaviour of the human. You can then define “belief” in any terms you like that can be related to that simulation, including all the high-level ones.

      As a comparison, above I defined “flock” in terms of a bird/volume density. Leaving aside whether that is a good definition, the point is that “bird/volume density” is not a concept that I have at the individual-bird level, it is an emergent property of a collection of birds. Yet, I can readily define “flock” in terms of that higher-level emergent property, since I can relate it to my simulation.

      Thus I would only have a problem if the interlocutor could not give me any definition of the term “belief” in any terms, even including all the high-level ones. If that can’t be done then I’d suggest that the problem is with that concept, not with the reductionist conception.

      Cheers, Coel.

      • Hi Coel,

        “But, I would not require the definition “in terms of atoms”. The definition can be in any terms. Or, at least, in any terms that are manifest in my simulation.”

        We’ve stipulated that this is an atomic-level simulation, so the only things manifest in it are atoms! As I said in my very first comment: “But if you looked at the data in the computer, you would only see variables representing atoms (and other low-level properties). You wouldn’t see any variables representing wetness, desires, etc.” If you want the program to print out descriptions of macroscopic objects and beliefs, then you need to define those concepts in atomic-level terms, or have a chain of definitions of intermediates working up all the way from atoms to beliefs. (But I’m saying this isn’t possible. Definitions won’t get you from atoms to macroscopic objects and beliefs. A different approach is needed.)

    • Hi Richard,

      “We’ve stipulated that this is an atomic-level simulation, so the only things manifest in it are atoms!”

      No! The only things *specified* in it are atoms, but all emergent, higher-level phenomena are *manifest*, and thus I am able to define higher-level concepts in terms of other higher-level concepts.

      For example, my bird-level simulation does not contain the concept “bird number density” (it has only lists of the locations of individual birds), but if someone comes along and defines the concept “bird number density”, then I can (from my simulation) describe everything about “bird number density” as so defined.

      “As I said in my very first comment: “But if you looked at the data in the computer, you would only see variables representing atoms (and other low-level properties). You wouldn’t see any variables representing wetness, desires, etc.””

      Agreed, there would be no variables for “desire”, just as in my bird-level simulation there would be no variables for “bird density”. But, the simulation would manifest these properties (that being the central doctrine of supervenience, that if we completely reproduce the low level then the high level is ipso facto reproduced).

      “If you want the program to print out descriptions of macroscopic objects and beliefs, then you need to define those concepts in atomic-level terms, or have a chain of definitions of intermediates working up all the way from atoms to beliefs. (But I’m saying this isn’t possible. Definitions won’t get you from atoms to macroscopic objects and beliefs. A different approach is needed.)”

      Well this, perhaps, is our central disagreement. Let me ask this: can you tell me what your concepts (“wet”, “desire”) mean in the real world? Can you give a way of relating them to real-world behaviour? (Even if only by saying “this location is wet, that location is not wet”, etc.)?

      If you can do that then you can relate your concept to my simulation in exactly the same way! After all, ex hypothesi, my simulation contains everything that is in the real world, and exhibits all the behaviour and emergent phenomena that the real world does.

      Thus, however you define your high-level concept, you can apply it to my simulation in exactly the same way that you do to the world. If the concept is vague and woolly as applied to the real world, then it will give a vague and woolly result applied to the simulation, but that’s just a “feature” of that definition, not a flaw in my reductionist simulation.

      The only way you could not apply your definition to my simulation is if you had no idea how to apply it to the real world, in which case — again — the problem here is with the definition.
      Cheers, Coel.

    • Hi David,

      I don’t like the ways that “ontological reductionism” is usually defined, including your way. So let me give my position in my own words, and I won’t violently object if you call me an ontological reductionist.

      The most complete and precise models of reality we have are those of fundamental physics. That’s what makes them fundamental. They get the closest to the way things are. But they’re not useful for most of our purposes. So most of the time we use higher-level models, which are less precise, but tell us about the world in a way that’s useful to us. All our different models are modelling the same reality. So nothing magically “emerges” out of nowhere “at higher levels”. We just have various different ways of looking at the world, using different concepts. Higher-level concepts don’t appear in lower-level models because they’re not useful there. The fundamental regularities of reality are fairly simple. But those simple regularities work out in such a way as to produce complex arrangements, and we can only effectively model those arrangements by imposing reifying concepts on them, i.e. treating them as objects, substances, etc.

      Our most fundamental models are as yet incomplete (and perhaps will always be so). But what gap remains is not filled by any sort of dualistic or supernatural tinkering.

      How’s that?

  28. Hi Coel,

    While waiting for a reply to my last comment I’ve been re-reading the thread, including some comments I skipped last time. I think it will be helpful if I summarise the situation as I see it. It’s taken me a good while to write this, so please give it very careful attention. It should sort out much of the confusion between us.

    It’s quite clear that you are making two claims, which I’ll call “the supervenience claim” and “the deduction claim”. David Ottlinger used the terms “causal entailment” and “logical entailment”, which I think correspond. But I prefer my terms, as they’re closer to the language that you yourself have used. Specifically, you’ve used the word “deduce”, and I want to focus on that word.

    You wrote: “I can see that one can distinguish the two concepts (causal entailment and logical entailment), but I don’t see how you could have the former without the latter.”

    I guess that, if the deduction claim followed so directly from the supervenience claim that no reasonable person could hold one but not the other, then it might be reasonable to treat them as effectively equivalent. But that’s not the case. There are reasonable people (including me) who say that the latter doesn’t follow from the former, and that the former is true but the latter is false.

    You wrote: “I would assert that if that first sentence held, then the beliefs and thoughts are (in principle) computable from the physical description. Therefore, the physical description would indeed “tell us” (= give us all the information we need to deduce) the mental properties.”

    The expression “tell us” is vague, as your scare quotes suggest. There’s a loose sense in which I would agree that the physical description would “tell us” the mental properties. But that’s not a sense that can be based on deduction. So, when you clarify “tell us” by explaining that you mean deduction, I must reject your claim.

    I suspect what’s going on is this. You have the intuition that the low-level descriptions must in some sense “tell us” the high-level facts. And you can’t make any sense of this except in terms of deduction, so you’ve jumped to the conclusion that high-level descriptions must follow deductively from low-level descriptions. I’m saying that, yes, they follow in a loose sense, but not in a deductive sense. I’m claiming that the deductions you’re appealing to are impossible in principle. That probably seems absurd if you can’t see any alternative to deduction as a way of getting from low-level descriptions to high-level ones. So that’s why I’ve been talking about a hypothetical non-deductive AI that can “tell us” the higher level facts in a looser sense and by a different process.

    In fact, you’ve already implicitly accepted that your original claim is not strictly correct. Lower-level descriptions don’t give us all the information we need to deduce higher-level descriptions, because they don’t give us the information we need about the relationship between low-level and high-level concepts. You’ve tried to plug that gap by invoking definitions of higher-level concepts in terms of lower-level ones. And I’ve been saying that that won’t work. Definitions cannot do all the work you want them to do. My hypothetical AI would have the information it needs about the relationship between low-level and high-level concepts. But that information could not all be in the form of definitions, and the AI could not proceed purely by deduction.

    We humans have the ability to get from very low-level sensory data to the high-level descriptions that we speak. A cat passes in front of me, light from it falls on my photo-receptors, those photo-receptors generate representations of some sort (which we can think of as descriptions), and on the basis of those low-level descriptions my cognitive processes generate the high-level description, “There’s a cat”. In that sense, low-level descriptions are constantly “telling us” what we need to form high-level descriptions. But of course our brains bring a lot of pre-existing knowledge to the table about how to get from those low-level descriptions to the high-level descriptions. That knowledge is mostly not stored in the form of propositions, and the inferences we make using that knowledge are mostly not deductive ones. My hypothetical AI is doing broadly the same sort of things that humans do. We humans mostly do it by subconscious, non-deductive processes, only occasionally bringing to bear our faculty of deductive reasoning.

    I haven’t yet made any serious argument to support my claim that the deductions you’re invoking are impossible in principle. I’m just trying to get you to see that this is a serious claim, deserving of attention. So far, I think, the claim has probably seemed so absurd to you that you can’t even believe that I mean what I say! So you’ve been looking for other interpretations of my words. Please read my lips: I’m asserting that what you claim can be done by deduction cannot be done by deduction.

    A further problem is that you keep focusing on your starling-to-flock example, and assuming that what’s true for that case is true for the more difficult cases. In the starling-to-flock case there could be an (approximate) definition that would allow deduction of (approximate) flock-descriptions from starling-descriptions. But it doesn’t follow that the same is true for the atomic-to-macroscopic or physical-to-mental cases. I’m saying that it’s not true for those cases. But it’s hard for me to make that point when you keep your attention on the starling-to-flock case. As long as you focus on that case, your deduction claim seems so obviously true that you can’t believe I’m really denying it.

    I hope that helps.

    P.S. I’ve just seen your latest post, and I’ll respond to that in due course. I think you’re probably right that what we’re discussing now is a central point in our disagreement, and I haven’t mentioned that point in this comment. But I still think this comment deserves careful attention.

    • Hi Richard,

      “In fact, you’ve already implicitly accepted that your original claim is not strictly correct. Lower-level descriptions don’t give us all the information we need to deduce higher-level descriptions, because they don’t give us the information we need about the relationship between low-level and high-level concepts.”

      I think we now need to discuss the terms “describe”, “deduce” and “define”!

      First, let me distinguish between descriptions *of* the high-level phenomena and descriptions *using* high-level concepts. Take the first of those to start.

      If I have a complete low-level simulation, then that simulation manifests high-level phenomena. Thus that simulation *is* a *description* of the high-level phenomena, a description of the high-level phenomena in entirely low-level terms. It is such because, if you want to see how the high-level phenomenon behaves, you have only to look at my simulation to see.

      But, descriptions in low-level terms only are usually voluminous, unwieldy and impractical. Thus, what we want is a compactified and more useful description, which focusses on the aspects we’re interested in. That is, we want a compactified, high-level description in terms of high-level concepts.

      Now, let me admit at this stage that my low-level simulation does not tell you which high-level concepts to adopt. It depends on what matters to you and which features you want brought out in the high-level concept. The high-level concept thus focusses on those and dispenses with all the stuff that is irrelevant for your purpose, thus giving you a much more compact and useful description (e.g. “flock of starlings” as opposed to a list of 5000 individuals).

      So, if you want to compactify the low-level-simulation description into a high-level concept that brings out the salient features and discards the rest, then you have to tell me what you want! That is, you need to *define* the high-level concept for me.

      Thus, if you want to know about “bird number density” then you should give me the definition of that concept and I can then *deduce* everything about “bird number density” from (1) my low-level simulation, plus (2) the definition you’ve given me.

      Similarly, if you want to know about “wetness” then you tell me what you mean by that term and my simulation can then *deduce* the behaviour of “wetness” from (1) my low-level simulation and (2) the definition you’ve given me.

      But, everything, obviously, depends on that definition. The above runs into trouble if you can’t give me a definition that can be related to features of the simulation. But, realising that the simulation manifests all high-level phenomena, that would only be the case if you also cannot give me a definition that relates to features of the real world. After all (ex hypothesi), my simulation has perfectly replicated everything in the real world.

      If your reply is that some concepts are poorly understood, and thus we have little chance of producing a definition that is well-specified enough to get very far in relating it to the simulation, then I would agree — but that is not a drawback for the reductionist scheme, it’s simply that we haven’t sorted out what question we’re asking.

      So, to summarise, perhaps what you are saying is that we cannot deduce high-level *definitions* from the low-level simulation. Whereas, I am replying that I agree, we cannot, but we can deduce *descriptions* of the high level, if we’re first agreed what question we are asking, and thus have agreed what the *definitions* of the concepts actually are.

      So, in briefer summary, reductionism cannot tell us what questions to ask, but if we do sort out an actual question (“definition”), then reductionism can (by “deduction”) tell us the answer (“description”).

      If, though, what one is trying to achieve is arriving at *definitions* by deduction from the low level, then I agree that that is misconceived.

  29. Hi Coel,

    “For example, my bird-level simulation does not contain the concept “bird number density” (all has only lists of locations of individual birds), but if someone comes along and defines the concept “bird number density”, then I can (from my simulation) describe everything about “bird number density” as so defined.”

    I think you’re putting the cart before the horse here. Why are you interested in bird number density? The subject of bird number density came up because you were looking for a definition of “flock”, and you defined “flock” in terms of bird number density. The original goal was to have a definition of “flock” which would make it possible to deduce descriptions of flocks from the locations of birds. In effect, you’ve now broken this down into two definitions:

    1. You’ve defined “flock” in terms of bird number density.
    2. You’ve defined “bird number density” in terms of bird locations.

    You could combine these to give you a single definition, of “flock” in terms of bird locations. But either way you need a set of definitions that gets you all the way from the variables stored in the simulation (bird locations) to the term you want to include in your descriptions (“flock”).
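    The chaining can be made concrete with a minimal sketch (function names and the density-threshold rule are hypothetical stand-ins, not anything either of us has committed to):

    ```python
    # Hypothetical stand-ins illustrating the chain of definitions: "flock"
    # defined via bird number density, and density via bird locations, so
    # that composing them defines "flock" directly in the stored variables.

    def bird_number_density(locations, volume):
        # Definition 2: "bird number density" in terms of bird locations.
        return len(locations) / volume

    def is_flock(locations, volume, threshold=0.5):
        # Definition 1: "flock" in terms of bird number density.
        return bird_number_density(locations, volume) > threshold

    def is_flock_from_locations(locations, volume, threshold=0.5):
        # The combined single definition: "flock" in terms of locations alone.
        return len(locations) / volume > threshold

    cluster = [(1.0, 2.0)] * 40
    print(is_flock(cluster, volume=50.0))                 # True
    print(is_flock_from_locations(cluster, volume=50.0))  # True
    ```

    Either route ends at the same place: a rule that takes you from the simulation’s stored variables all the way to the word “flock”.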

    In the atomic-level simulation, the variables stored are atom locations. So, if you want descriptions that use the word “flock”, you need definitions that get you all the way from atom locations to the word “flock”.
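
    As an aside for readers who like concrete toys, the two-step chain of definitions above (from bird locations to bird number density to “flock”) can be sketched in a few lines of code. Everything here (the grid cell size, the density threshold, the coordinates) is an invented illustration, not anything from the discussion:

    ```python
    # Sketch of the two-step definition chain: "flock" is defined via
    # "bird number density", which is defined via bird locations.
    # Cell size and threshold are illustrative assumptions.
    from collections import Counter

    def bird_density(locations, cell=10.0):
        """Step 2: define 'bird number density' from raw bird locations,
        as a count of birds per grid cell."""
        return Counter((int(x // cell), int(y // cell)) for x, y in locations)

    def flock_cells(locations, cell=10.0, min_density=3):
        """Step 1: define 'flock' in terms of bird number density:
        any cell whose density meets an (assumed) threshold."""
        density = bird_density(locations, cell)
        return {c for c, n in density.items() if n >= min_density}

    # Three clustered birds plus one straggler.
    birds = [(1, 1), (2, 3), (4, 2), (95, 90)]
    print(flock_cells(birds))  # prints {(0, 0)}: the dense cell counts as a flock
    ```

    Combining the two functions, as suggested above, gives a single “definition” taking you all the way from bird locations to “flock”.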

    With regard to the second part of your comment, I’ve indirectly addressed that in my long comment. You are presupposing that the only way to get to high-level descriptions is by means of definitions. And I’m denying that. For example, I deny that our brains hold definitions of all the words we use. They hold the information we need to use those words, in a broad sense of “information”. But that information is mostly not in the form of definitions. By “definition” here I mean any formula that tells us how to use a word correctly. The linguistic information in our heads is mostly not in the form of formulas. (If you wonder what other form it could take, think about neural networks.)

    Sometimes we memorise a definition of a word, such as one that we find in a dictionary. If it’s an unfamiliar word, we may for a while recall that definition each time we want to use or understand the word. But, once we become fully familiar with the word, we are able to use it automatically, with no need for a definition. And for the vast majority of our words, we have never learnt a definition. It’s tempting to think there must be definitions stored in our heads. But we should resist that temptation. Visualising our heads as being filled with definitions, propositions and formulas is, I believe, the source of much confusion in philosophy, and possibly in linguistics too.

    • P.S. To avoid a confusion that’s arisen before… When I say “you need definitions”, I’m not demanding that you give me those definitions here. I mean that such definitions would be needed in order for the given description to be deduced.

    • Hi Richard,
      Main reply up above, but on the specific points here:

      “So, if you want descriptions that use the word “flock”, you need definitions that get you all the way from atom locations to the word “flock”.”

      Agreed. But this is just as much an issue for the real-world definition of “flock”. After all, in the real world, all there is is atoms! So, if you want to use the word “flock” about any aspect of the real world, you need to have some way of relating it to atoms.

      Now, you might, of course, relate it to particular large-scale groupings and patterns of atoms, and the behaviour of those patterns; however, I could do that just as readily with my simulation, which necessarily contains the same groupings and patterns.

      ” The linguistic information in our heads is mostly not in the form of formulas. (If you wonder what other form it could take, think about neural networks.)”

      No problem! Again, I am not fussy about the form of the “definitions”. If they come as vague and woolly concepts or as “family resemblances” then fine, I can cope with those. I just apply them to the simulation exactly as one would to the real world. Any woolliness in the result can then be blamed on the woolliness of the concept!

      If you give me the “definition” in terms of a trained neural-network, then again, no problemo! I just attach the simulation output to the neural-network input.

      [This, of course, is a common approach in physics. If I have a particle physics or astrophysics database that is vast and unwieldy, and I want to learn something from it, say search for particular events, then I can search it using a pattern-recognition neural-network. As above, the neural network or the “definition” is essentially asking the question, and the simulation is then telling you the answer.]
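
      A minimal sketch of what “attaching the simulation output to the neural-network input” could look like, with the network standing in for the “definition”. The weights, bias, and feature values below are all invented purely for illustration:

      ```python
      # The 'definition' is a toy network: a single hard-threshold unit
      # with made-up weights. The simulation supplies its inputs.

      def definition_network(features, weights=(1.0, -0.5), bias=-2.0):
          """A toy 'definition' of a high-level concept: fires (returns True)
          when the weighted simulation features cross a threshold."""
          activation = sum(w * f for w, f in zip(weights, features)) + bias
          return activation > 0

      # 'Simulation output' reduced to per-frame feature vectors
      # (e.g. local density, spread) -- purely illustrative numbers.
      simulation_frames = [(5.0, 1.0), (1.0, 4.0), (3.0, 0.5)]
      labels = [definition_network(f) for f in simulation_frames]
      print(labels)  # prints [True, False, True]
      ```

      The network here is asking the question, and the simulation is telling you the answer, in the sense described in the bracketed remark above.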

  30. I wrote: “So, if you want descriptions that use the word “flock”, you need definitions that get you all the way from atom locations to the word “flock”.”

    You replied: “Agreed.”

    Thank goodness we finally got that sorted out. 😉 I think there’s no need to continue with our discussion of the simulation, because this is the point I’ve been trying to get you to accept.

    Let me now present my argument in a formal way, so there’s no more confusion.

    P1. To deduce true descriptions that use the word “flock”, starting from descriptions of atom locations, you need definitions that get you all the way from atom locations to the word “flock”. (That’s what you’ve just agreed to.)
    P2. It’s impossible in principle for there to be general* definitions that get you all the way from atom locations to the word “flock”.
    P3. Therefore, it’s impossible in principle to generally deduce true descriptions that use the word “flock”, starting from descriptions of atom locations.
    P4. Therefore, not all true macroscopic descriptions can in principle be deduced from descriptions of atom locations.
    P5. P4 contradicts your reductionist position.
    C. Therefore, your reductionist position is false.

    I expect you’ll reject P2. But do you have any other objections to this argument? I’m not sure about P5, because you only claimed that mental descriptions could be deduced from physical descriptions. But I’ve assumed that your reductionism extends to thinking that all macroscopic descriptions can be deduced from microscopic (e.g. atomic) descriptions.

    (* I’ve included the word “general” because it’s no use just specifying that one particular arrangement of atoms constitutes a flock. The definition needs to work generally across all arrangements of atoms that are appropriately described as flocks.)

    I’ll also make a similar argument, relating to mental descriptions:

    P1′. To deduce true mental descriptions from physical descriptions, you need definitions that get you all the way from physical terms to mental terms.
    P2′. It’s impossible in principle for there to be general definitions that get you all the way from physical terms to mental terms.
    P3′. Therefore, it’s impossible in principle to generally deduce mental descriptions from physical descriptions.
    P4′. P3′ contradicts your deduction claim.
    C. Therefore your deduction claim is false.

    Again, I expect you’ll reject P2′. But do you have any other objections?

    Of course P2 and P2′ are the central matters of dispute between us, so you may well say that this doesn’t settle much. But my main objective has been to clarify what our respective positions are, and where we disagree.

    • Hi Richard,
      Correct, I do reject P2!

      “P2. It’s impossible in principle for there to be general* definitions that get you all the way from atom locations to the word “flock”.”

      Counter-argument:

      1) Take any concept that has some relation to the real world.
      2) The real world is an arrangement of atoms (or other low-level entities).
      3) Given 1 and 2, that concept can be related to an arrangement of atoms.

      Thus, P2 is false for any concept that is about the real world. Note, though (as in previous comments) that I’m using a fairly relaxed interpretation of “definition”.

  31. Something I missed…

    “If you give me the “definition” in terms of a trained neural-network, then again, no problemo! I just attach the simulation output to the neural-network input.”

    To speak of a “definition” (even in scare quotes) suggests that there is a separate thing for each word. That’s not what I’m talking about. But I can (in principle) give you the entire AI, which will make its own choices about what words to use, and you can attach that to the simulation. That’s just the AI scenario I’ve been talking about since my very first comment!!!

    But the point I’ve also been making is that such an AI would not be working by deduction. Therefore, this doesn’t save your claim that it’s possible to deduce the descriptions in question. As I said in my long comment, it’s possible to derive such descriptions in a looser sense. Drop your words “deduce” and “definition”, and we might be able to agree.

    • Hi Richard,

      “But the point I’ve also been making is that such an AI would not be working by deduction.”

      If by “deduction” one means any logical processing of information, and given that any neural network can be emulated by a Turing machine, why isn’t your AI neural network “working by deduction”?

  32. Hi Coel,

    “Note, though (as in previous comments) that I’m using a fairly relaxed interpretation of “definition”.”

    If you define “definition” such that it ceases to be a proposition or formula, then it can no longer support your claim of being able to deduce descriptions using definitions, and then it has no relevance to my argument. You can’t deduce from something that is not a proposition or formula. Please think about whether your proposed sense of a word is adequate for the task you’re putting it to.

    “If by “deduction” one means any logical processing of information, and given that any neural network can be emulated by a Turing machine, why isn’t your AI neural network “working by deduction”?”

    But that’s not what we’ve been using “deduction” to mean. I checked that you meant “strict deduction”. If you now define “deduction” so broadly as to include the AI processes that I’m talking about, then your original deduction claim collapses to my position!

    I see no merit whatsoever in calling a physical neural network “deductive”. To do so undermines a valuable distinction that we need to be able make. After all, sometimes humans engage in what we would normally call deduction. But we can hardly say such a thing if we call everything the human brain does “deduction”.

    In the case of a software neural network, the processor executes the instructions of the emulator software. And that’s certainly what I would call “formulaic”, if not “deductive”. But that’s irrelevant to the logic of the neural network itself. That logic is the same whether it’s implemented in hardware or software.

    In the final analysis it doesn’t matter what we call things, as long as we all fully understand what’s being said, no one’s being misled, and no fallacies of equivocation are being committed. If you accepted that the only way to get from physical descriptions to mental descriptions was by the sort of AI I’m talking about, and if you fully understood the implications of that, then you wouldn’t want to call it “deductive”, because you would understand that that would mislead your readers. (If your readers knew you were talking about an AI and also fully understood the implications, then it wouldn’t matter what word you used. But we can safely assume that won’t be the case.)

  33. Hi Coel,

    I think it’s time to wrap this up. At least I’m now clear about your position. My conclusion is that you are making a deduction claim that goes significantly further than the supervenience claim, and you should be clear about that. I for one reject your deduction claim, and I think quite a few other people will too.

    All the best,
    Richard.

  34. Hi Coel,

    I should make it a rule to wait till the next day before replying, because sleeping on things always helps me get them clearer. I realised I need to add something further to our discussion of neural networks.

    You probably had in mind that in principle you could describe the entire logic of the AI in premises. You could then deduce mental descriptions from physical descriptions plus those AI premises.

    But in the same sense, you could deduce any mental description from the premise “1 + 1 = 2”! All you need to deduce the description “Richard is a moral anti-realist” from the premise “1 + 1 = 2” is to be given the premise “If 1 + 1 = 2, Richard is a moral anti-realist”. Allowing yourself to draw on unlimited unmentioned premises makes your deduction claim meaningless.

    And let me remind you that your claim mentioned information: “the physical description would indeed “tell us” (= give us all the information we need to deduce) the mental properties.” The more you rely on information from extra premises, the more untrue it becomes that the physical descriptions “give us all the information we need”.

    • Hi Richard,

      “Allowing yourself to draw on unlimited unmentioned premises makes your deduction claim meaningless.”

      I should clarify that I’m not trying to sneak in extra premises here. All I’m saying is this. If you come along and say: “From your low-level simulation, tell me about high-level concept X”, then it is entirely fair for me to reply, “sure, but first you need to tell me what you mean by X”.

      Now, if you reply, “X is any input which causes this neural network to give output state “red””, then that is entirely ok. In that sense I’m entirely ok with you giving a definition of the high-level concept in terms of a neural network.

  35. Hi Richard,
    I’m certainly not stuck on the word “deduction”. I’ll happily re-state without using it, and instead talk about “processing information”.

    If I have a complete low-level simulation, and if someone hands me a “definition” of a high-level phenomenon, such that I can relate it to my low-level simulation (and note that, since my simulation manifests everything that the real world manifests, this can be done to the same extent that it can be done regarding the real world), then I can process the information (simulation plus definition) to arrive at a description of the high-level phenomenon.

    The “definition” can be vague, but that is a limitation deriving from the definition, not from the reductionist program.

  36. “I’m certainly not stuck on the word “deduction”. I’ll happily re-state without using it, and instead talk about ‘processing information'”

    So now all you need to do is replace “reduction” with “modifying information” and you’re there!

  37. Hi Coel,

    Given your willingness to drop any mention of “deduction”, I’ve come round to the view that you really are only trying to make the supervenience claim. But for some reason you’re not satisfied with the way other people express that claim. You think you can express it better. But you’re mistaken. You’ve tried to express it in terms of how descriptions could be generated (in principle), but in so doing, you’ve built into your version some assumptions about how descriptions can be generated. That makes it go further than supervenience, and makes it unacceptable to people who don’t accept your assumptions. As other people have suggested, you’ve entangled ontology (the way things are) with epistemology (here, how descriptions can be generated).

    Life would be much simpler if you just stuck to expressing the supervenience claim in one of the usual ways. Why don’t you like the version I offered earlier: the way things are at high levels is fixed by the way things are at the lowest level? There’s room for improvement there, but bringing epistemology into it is always going to mean that you’re going beyond just supervenience, and making a more controversial claim.

    I’m sorry I said that we could agree if you dropped the word “deduction”. I think there is nothing to be said of the sort you want.

    • Hi Richard,

      “I’ve come round to the view that you really are only trying to make the supervenience claim.”

      Yes, I think that also!

      “But for some reason you’re not satisfied with the way other people express that claim. … Why don’t you like the version I offered earlier: the way things are at high levels is fixed by the way things are at the lowest level?”

      No, I’m entirely happy with that.

      “… bringing epistemology into it is always going to mean that you’re going beyond just supervenience, …”

      The reason I’m bringing epistemology into it is that I’m addressing the consequences of supervenience for science. Science is all about the “generating descriptions” and uses supervenience as a tool.

      Just agreeing on supervenience and then stopping is uninteresting. What I’m interested in doing is agreeing on supervenience and then seeing what follows from that for the “unity of science” and for methods of science. After all, this sort of supervenience (or “reductionism”, as scientists call it) is a powerful tool of science.

      “… in so doing, you’ve built into your version some assumptions about how descriptions can be generated. That makes it go further than supervenience, …”

      That I’m not sure I accept. All I’m trying to do is to bring out the consequences of supervenience for epistemology.

  38. Hi Coel,

    I’ve been thinking about how to make my views clearer. Let me elaborate on the hypothetical simulation-plus-AI system that I’ve been talking about from the start. It doesn’t use “definitions”, but it has the background information it needs (including linguistic information) to infer high-level facts and mental facts from atomic-level facts. This is broadly the same sort of background information that humans acquire over their lifetime, and which allows us to infer high-level facts and mental facts from lower-level sensory data. Note that I’m talking about inference from evidence, not deduction.

    It might be instructive to think about what evidence the AI would use to infer facts about mental states. I say that looking inside people’s heads would be of limited use. The AI would mainly use the same sort of evidence that we would. If I want to know what you believe, I look at the evidence of what you’ve said and done. That’s what the AI would do too. Since we’re talking “in principle” we can help ourselves to an AI that is even powerful enough to gain evidence of what a person would have said and done in counterfactual circumstances. It can duplicate a simulated person in a side-simulation where it simulates him under a wide variety of circumstances designed to provide more evidence of his beliefs and desires, e.g. having him questioned.

    We can also help ourselves to an AI that is a much better judge of evidence than a human being, though I would deny that there can be a perfect judge of evidence. And since it knows everything at the atomic level, it has extraordinarily detailed evidence about everything going on. It can produce an extraordinarily thorough set of high-level descriptions. But I don’t think we can make any sort of strict absolute statement like: it can pick out every description in the set of all possible true high-level descriptions. The AI isn’t perfect, and that’s not even a well-defined set. Once you get away from strict deduction, and into the messy business of judging the evidence, things get fuzzy. In fact, you recognised that high-level concepts are fuzzy, but I’m not sure you fully appreciated the significance of that fact.

    So, if you accept all that, does it help you make any useful claim to add to the supervenience claim? I don’t really think so. But if you want to say something like I’ve just said, go ahead.

    • Hi Richard,

      “In fact, you recognised that high-level concepts are fuzzy, but I’m not sure you fully appreciated the significance of that fact.”

      Yes, I do appreciate the significance of that fact. If high-level concepts are fuzzy it means there won’t be neat and easy definitions of them. That means you can’t ask the reductionist simulation to give a neat and simple output about that concept. But, I do not regard that as a flaw of reductionism, I regard it as a feature of the fuzziness of high-level concepts.

      All I’m saying is that *if* you can define a high-level concept, and to the extent that you *can* define that high-level concept, then the low-level simulation gives you all the information needed to describe that concept.

      “So, if you accept all that, does it help you make any useful claim to add to the supervenience claim?”

      I am not trying to add anything to supervenience! I’m merely trying to use supervenience as a tool in science.

  39. Hi Richard, I have a great deal of sympathy with what you say, especially with regard to the everyday world in which words seem to fit things like baggy sweaters. But I get no sense of any ontological commitment from your reply. It’s models (made of words) all the way down. This seems to me a kind of linguistic idealism.

    Elsewhere, though, you seem to allow the existence of atoms (and presumably electrons). The next step is to allow the existence of aggregates of atoms of the same kind packed together in a regular three-dimensional lattice. Such an aggregate can be of macroscopic size. Now real crystals have somewhat irregular boundaries and internal defects such as holes and dislocations and impurities. But I’m not going to try to describe all this. Even in a real crystal there will be greater or lesser regions of homogeneity.

    Given this conception of how a bit of reality might be, the physicist applies his theory of how atoms and electrons interact and comes up with a prediction of how fast electrons will propagate through the lattice under the influence of an electric field. In other words, he has a theoretical estimate of the resistivity of the material, a macroscopically measurable property. Perhaps it’s even possible to estimate the error in the prediction due to ignoring those defects. If we get good agreement between theory and experiment, it seems reasonable to me to say that we have ‘reduced’ or at least accounted for electrical resistivity by means of our theory of atoms and electrons. And this tends to enhance our ontological commitment to those entities. In fact, I find it hard to draw a clear distinction between ontology and epistemology here. Our commitment to the entities, if we are realists, involves a commitment to a theory of their behaviour. They are, as it were, what they do.

  40. Hi David,

    It’s a relief to hear that someone has appreciated my comments. I was beginning to think I’d been wasting my time.

    Let me reassure you that I accept the existence of atoms, crystals, people and all the other usual things. I’m not insane! 😉 I don’t know much about crystals, but I have no objection to anything you said about them or about science.

    Perhaps you take me for an instrumentalist, saying that science only gives us models, and not truth-approximating descriptions of the world. I think that’s a false distinction. Good models and truth-approximating descriptions of the world are (roughly speaking) the same thing.

    I have a problem with many of the usual philosophical terms and categories, so please don’t expect me to label myself either a “scientific realist” or a “scientific anti-realist”. If you put a gun to my head and forced me to choose one or the other, I would choose realist, as the least misleading. But I think that term is ill-defined and often used unhelpfully. It looks to me as if at least some realists are invoking the same false distinction as anti-realists, and coming down on the other side of that divide. I reject the distinction. I suspect that you are one of those realists who does perceive this distinction, and so my somewhat anti-realist sounding comments have led you to put me on the other side of it from you, while also noting some apparent inconsistencies like my acceptance of atoms.

    I take a Wittgensteinian view of language, and like Wittgenstein I see philosophers often using language that he called “idling” or “going on holiday”, particularly on more metaphysical subjects. Philosophers sometimes employ language in a context where it can no longer have its usual meaning, and it ceases to mean anything much at all. So they end up making false distinctions.

    When you asked me earlier whether I thought macroscopic objects were made of microscopic parts, I was tempted to answer, “Of course. I’m not ignorant of cells, atoms, etc.” Taking your words in the ordinary way, your question would only make sense if addressed to someone who you suspect has had little or no exposure to science. Of course, I knew that you were speaking in a special philosophical mode, in which words don’t have their normal senses. But I don’t think you’ve given them any other meaningful sense, so I find your question meaningless. Your language was “idling”, to use Wittgenstein’s term.

    I hope that’s given you something to chew on, and I’d welcome your thoughts on the matter. If you have the time, I’d recommend reading something about Wittgenstein (his later period), though his own book “Philosophical Investigations” is hard going.

    • I’ve been following and benefitting from your comments too, by the way.

      I’ve engaged in a fairly lengthy and detailed discussion with Coel as well, with similar results. I suspect that he will not modify the way he expresses his ideas in a way that would allow them to be more clearly understood.

      • Hi Asher,

        “I suspect that he will not modify the way he expresses his ideas in a way that would allow them to be more clearly understood.”

        The way I express the ideas is in line with how scientists would express and understand the ideas. There seems to be an attitude among philosophers that if scientists and philosophers think about things in different ways, then the scientist is always wrong and the philosopher will always have thought it through better and more clearly.

        Personally I find philosophers very unclear in much that they say!

      • None of your points rebut the fact that if you want to be understood, you have to make yourself understood. This isn’t a matter of science v. philosophy — saying “deduction” means “processing information” – and waiting until you’re ten miles into the conversation to say so – is counter-productive in just about any context I can think of. And it’s not in line with how many scientists who want to be understood express themselves. I read a lot of scientists who are interested in being part of the philosophical conversation, and the good ones make an effort to really understand and engage with the extant philosophical ideas.

        So I’d say again – if you want philosophers to understand your awesome ideas, it’s on you to make them clear, rigorous and coherent regardless of whether you use philosophical or scientific language to do it. If you don’t care whether the people on SS understand you, you’re doing fine and you don’t need to change a thing.

      • Hi Asher,

        ” saying “deduction” means “processing information” – and waiting until you’re ten miles into the conversation to say so – is counter-productive …”

        Oxford Dictionaries says that “deduce” means “Arrive at (a fact or a conclusion) by reasoning”. I think that’s near enough to my meaning (that, given the low-level description, and a definition of X, I could “arrive at by reasoning” (aka “processing information”) a description of X).

  41. Hi Coel,

    After a long and frustrating discussion, it seems we’re back to square one. It’s time I stopped hitting my head against brick walls, so I won’t be discussing philosophy with you again.

    All the best,
    Richard.

    • Hi Richard,
      OK, no problem. This discussion was interesting to me in seeing how people interpret what I say. I’m still rather baffled, though, as to what the objection is to supervenience and using that as a tool in science (which is what scientists do a lot).
      Cheers, Coel.

      • Hi Coel,

        “I’m still rather baffled, though, as to what the objection is to supervenience and using that as a tool in science (which is what scientists do a lot)”

        [I hold my head in my hands and start rocking backwards and forwards. Please, Lord, make it stop. I can’t take any more.]

        I’m baffled as to how anyone with the intelligence and education to be a physicist can be so incapable of paying attention. No one has been objecting to supervenience or to anything that scientists do. For the umpteenth time, the claim that’s being disputed is your claim about how high-level descriptions could be generated in principle. (Note: in principle. Not what scientists actually do.)

        The level of equivocation has reached extraordinary proportions in your more recent comments. I’m only claiming supervenience…and I’m making a claim about the consequences too…but I’m only claiming supervenience…and I’m making a claim about the consequences too. Sigh.

        This isn’t specifically a matter of philosophy. It’s a matter of attentive reading, careful thinking and clear expression. I’m sure those traits must be needed in physics. So why can’t you employ them here? Unless you can start doing so, I predict you will continue to piss off one interlocutor after another.

        It’s even crossed my mind that you’re an extremely sophisticated troll, leading us on and laughing at the poor schmucks trying to make sense out of your equivocations and non sequiturs. But that’s probably just me being paranoid. 😉

    • Hi Richard,

      “I’m sure those traits must be needed in physics. So why can’t you employ them here?”

      And you’re 100% convinced that the lack of communication and understanding is 100% my fault and 0% yours?

      After all, as I see it I’m making a very clear and straightforward claim that is close to trivially right, and I’m baffled as to what you’re disputing.

      “The level of equivocation has reached extraordinary proportions in your more recent comments. I’m only claiming supervenience…and I’m making a claim about the consequences too…but I’m only claiming supervenience…and I’m making a claim about the consequences too. Sigh.”

      Exactly. I’m stating consequences which I see as logically entailed by supervenience. But I’m not claiming any stronger version of “reductionism” than supervenience. But, from supervenience, some consequences follow.

      Let me first quote your definition of supervenience: “Why don’t you like the version I offered earlier: the way things are at high levels is fixed by the way things are at the lowest level?”

      I do like it. It’s fine. Now, if one has a complete description of the low level, and if “the way things are at high levels is fixed by the way things are at the lowest level” then one has all the information about how things are at the high level. Agreed? Or are we already in dispute?

      Thus, we have a complete *low*-level description of all phenomena, including all the high-level ones.

      Now, suppose we were to *agree* on a method of translating that low-level description into other terms. For example, we could agree on a *definition* of some term “X” such that we agreed on how to translate the low-level description into a description in terms of X.

      My claim: *IF* we had agreed on that definition, *then* we could re-write the low-level description in terms of X.

      As I said, it doesn’t seem to me to be an extravagant claim, and I’m still baffled as to why you disagree.

  42. Richard,

    (1) I asked about microscopic parts because I think a case can be made for saying that atoms and molecules are not parts, at least in the usual everyday sense. Macroscopic parts clearly are parts. A finger is part of a hand in a perfectly familiar sense even if we can’t quite say where the finger ends and the rest of the hand begins, or say in very great detail what we mean by ‘finger’ in the first place. And fingers are of the same stuff as hands. But molecules and atoms are rather different. They don’t seem to have any definite position or shape or indeed any of the familiar macro-world properties, apart perhaps from mass and electric charge. We can’t talk about them in ordinary language at all—we resort to pretty abstruse mathematics. What I find deeply mysterious is the way the macro-world of the senses and ordinary understandings shades into this abstract stuff. Related to this is my sympathy for a commenter back at SciSal who was ticked off by Massimo for admitting his discomfort with the assertion that Newtonian mechanics is plain false. I’m not sure truth and falsehood are the right categories for evaluating physical theories. But I don’t want to fall into instrumentalism either.

    (2) You didn’t respond to my suggestion that the macro-world property of electrical resistivity finds a reductive explanation in the theory of atoms and electrons. Likewise Panpsychist’s original example of temperature. Do you agree that these are exceptions to your thesis that reduction generally fails (for reasons to do with the nature of language?), and if so, what makes them exceptional?

  43. Coel,

    Not far above you said,

    I am not trying to add anything to supervenience! I’m merely trying to use supervenience as a tool in science.

    Actually, from a philosopher’s point of view, you do want to add something to supervenience. For a philosopher, supervenience is quite a commitment. After all, he thinks, things might have been otherwise! But it’s merely where a physicist begins. Supervenience just says, There’s macro stuff and micro stuff and what happens to the latter fixes what happens to the former, or, There can’t be a change in the macro stuff without a change in the micro stuff. Big deal! Any scientist is going to want to elaborate on this substantially, with a theory of the micro stuff and an account of how it relates to the macro. That’s what you’re adding, as the philosopher sees it. This is how we move from the ontological to the epistemic. My view, I suspect, is yours, that the former is pretty uninteresting without the latter, and separating them is a bogus distinction.

    • Hi David,
      You’re right that I do want to go beyond just stating supervenience, by then using it as a tool for understanding the world and for doing science — that, of course, is why scientists are interested in the concept. Though, for this thread, I’m trying not to make any claim that isn’t logically entailed by a supervenience.

  44. I think both Richard and I would say that pretty well nothing at all strictly follows from supervenience. It’s a sort of metaphysical ‘pattern’ that may be instantiated in some pair of sets of entities or properties. Even then we don’t get very far. What follows from saying that temperature supervenes on mean molecular kinetic energy? Only that the temperature can’t change without mmke changing. But that’s all it says. It doesn’t say one is proportional to the other. That’s a statement within a specific theory that happens to meet the supervenience ‘pattern’.

    • Hi David,
      I agree that not much follows strictly from supervenience, but a couple of things do. Given:

      “the way things are at high levels is fixed by the way things are at the lowest level”

      Then, if you have a complete description of the low level, then you have a complete account of the high level (even if that account is still in terms of the low level).

      Also, if you fully simulate the low level, then the simulation manifests the high level.

      To me those two are trite and obvious, following from supervenience, but I seem to get into trouble when I state them.

      • Hi Coel,

        I couldn’t resist coming back again, because your latest statement of your position is more acceptable, and makes more room for agreement, as long as you don’t later go back to your old statements. And also I feel I may have misled you, and would like to correct myself.

        The good news is that you’ve eliminated any mention of “deduction” or “all the information”. The bad news is that your statements are ambiguous. On a stronger interpretation they say too much, and I would reject them. On a weaker interpretation they are indeed “trite and obvious, following from supervenience”. But then they’re really no more than restatements of supervenience, and you should state clearly that that’s all they are.

        By being ambiguous you can convince yourself (a) that you’re saying something significant (over and above supervenience), and at the same time (b) that you’re saying something so obvious that it can’t be denied. It’s the ambiguity that allows you to feel you can have it both ways. But, if you’re really having it both ways, you’re equivocating. On the other hand, if you’re unequivocal about the fact that this is just a trite restatement of supervenience, you should say so, or better still find a way of expressing yourself unambiguously.

        You quoted me: “the way things are at high levels is fixed by the way things are at the lowest level”

        I was trying to make this as simple as possible, but I ended up being misleading. There isn’t really a “way things are at the high levels” and a “way things are at the lowest level”. There is only one way things are. But there are high-level and low-level ways of describing the way things are. “High level” and “low-level” are properties of descriptions (or models).

        Your statements exhibit a similar problem to mine, but worse.

        Your #1: “Then, if you have a complete description of the low level, then you have a complete account of the high level (even if that account is still in terms of the low level).”

        Surely, an “account” is a description. What is an “account of the high level” if it isn’t a high-level description, in high-level terms? But then your remark in brackets implies that you’re talking about something written in low-level terms. This seems to be a contradiction.

        Your #2: “Also, if you fully simulate the low level, then the simulation manifests the high level.”

        Again, what does “manifests the high level” mean? It sounds like you’re referring to high-level descriptions. But there’s no such thing in the simulation.

        Let me try some alternatives:
        1. A complete low-level description of the way things are is a complete description of the way things are.
        2. A complete low-level simulation of the way things are is a complete simulation of the way things are.
        3. What can truthfully be said in high-level descriptions is fixed by the way things are as described by a complete low-level description. (Note: “is fixed by” is not the same as “can be deduced from”.)

        Will those satisfy you?

        The thing is, these are all just supervenience stated in different ways. If you prefer to state them these ways, that’s fine. But then don’t go creating the impression that you’re saying something more than supervenience.

        *** PLEASE READ THE ABOVE VERY CAREFULLY, AND AT LEAST TWICE, BEFORE REPLYING. ***
        (I have to say this, because I feel you haven’t read me carefully enough in the past.)

      • I think this clarifies what’s going on very well. I’m going to add a few things, in the hope that it clarifies things a little further.

        Surely, an “account” is a description. What is an “account of the high level” if it isn’t a high-level description, in high-level terms? But then your remark in brackets implies that you’re talking about something written in low-level terms. This seems to be a contradiction.

        One way to think about this is comparing what you can learn from examining the simulation code with what you can learn from running the simulation. In a simulation the “description” is the code, not the results of running the code. The description is the set of low-level laws themselves, not the behaviors they produce.

        Again, what does “manifests the high level” mean? It sounds like you’re referring to high-level descriptions. But there’s no such thing in the simulation.

        This comes back to the same thing. My suspicion is that what “manifests the high level” means is that the high-level *behavior* appears in the simulation. But there is no description of that high-level behavior in the code.

        My own vacillation on this issue comes from the fact that Richard’s hypothetical AI, observing the simulation, might find high-level entities and statistical regularities in the behavior of those high-level entities; and that these statistical regularities and entities might essentially be what the “laws” (as Aravis called them) of the special sciences are.

        But, as Richard pointed out, these are not “deductions” or “logical entailments” of the low-level laws, because the AI is looking at the simulation, not the code, and its conclusions come from the simulation’s behavior, not its descriptions.

      • Hi Asher,

        I’m afraid you’ve complicated things a bit.

        “In a simulation the “description” is the code, not the results of running the code. The description is the set of low-level laws themselves, not the behaviors they produce.”

        Well, the descriptions I’ve been talking about from the start are representations of the state of the world, or of the simulated world. In the simulation the state of the world is represented by the data, i.e. variables representing locations of atoms (etc). In short, I’ve been talking about the variable data (“the results of running the code”), not the code.

        Yes, fixed general laws are a type of description too. And the real world’s low-level laws are encoded in the simulation program code (though they’re not the whole of the code). But, in mentioning those, you’ve introduced a kind of description that I haven’t discussed, and ignored (even denied) the type that I did! I’m afraid that might cause confusion. I would prefer to avoid the subject of laws for now.

      • Richard –

        I had a feeling you would say that ;). I’ll let it drop for now, except to say that I – perhaps like Coel – am a bit confused about what sorts of description you are talking about. When I said the AI “observes” the simulation, I was assuming that its inputs were the state variables (registers/memory) of the simulation rather than the executing code.

      • Hi Asher,

        “When I said the AI “observes” the simulation, I was assuming that its inputs were the state variables (registers/memory) of the simulation rather than the executing code.”

        Yes, that’s what I’ve meant all along. But your last comment seemed to identify “descriptions” with the executable code, not the state variables. Perhaps I misunderstood you.

      • The way I’ve been thinking about it, the code – at the lowest level – is a fundamental “theory”. I see higher level theories as regularities in the behavior of state variables. I don’t believe that any theories beyond fundamental ones are “in the simulation”. My sense is that we mostly agree.
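[A toy illustration of the code-versus-run distinction being discussed here — my own example, not one from the thread. In Conway’s Game of Life, the “description” (the code) contains only a low-level rule: each cell lives or dies by counting neighbours. Nothing in it mentions a “glider”. Yet an observer watching the state variables finds a stable high-level entity that translates itself across the grid — a regularity present in the behaviour, not in the description.]

```python
from collections import Counter

def step(cells):
    """One low-level update of Conway's Game of Life; cells is a set of (x, y).
    A cell is alive next step if it has 3 live neighbours, or 2 and is alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# A "glider" -- but note the code above knows nothing of gliders.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)
# After four low-level steps the same shape reappears, shifted by (1, 1):
# a high-level regularity visible only by observing the run, not by reading step().
```

The point, on this sketch: Richard’s hypothetical AI watching the state variables could discover “gliders” and laws about their motion, but those are not sentences deducible from the text of `step()` alone.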

  45. Coel,

    Then, if you have a complete description of the low level, then you have a complete account of the high level (even if that account is still in terms of the low level).

    Not quite. Here’s an example that brings out the ontic/epistemic distinction. Let’s accept that the distribution of molecular speeds in a gas fixes its temperature. That doesn’t get us very far. Suppose we know the speed distribution of the molecules. A friendly Laplacian demon just told us. We now know the micro state. But we don’t know the temperature. All that supervenience tells us is that it’s determinate. To know the temperature we have to use the bridge rule, mmke = 3kT/2. That’s an additional piece of knowledge beyond mere supervenience and the micro-state. Richard’s point, if I have understood him, is that nice, simple bridge rules like the above are not always available.

    • Hi David,

      “To know the temperature we have to use the bridge rule, mmke=3kT/2. That’s an additional piece of knowledge beyond mere supervenience, and the micro-state.”

      Yes, I agree. That “bridge rule” is what I’m calling the “definition” of temperature.

      I agree that the low-level account of the gas does not contain the concept “temperature”. One cannot deduce the concept “temperature” from that low-level account, and thus one cannot arrive at the value of temperature from that low-level account alone.

      However, if we agree on the *definition* of temperature as an additional input, then *given* that definition we then have in the micro-state all the information needed to calculate the value of temperature. (I would say “… to deduce” the value of temperature, but I’ll get jumped on for using that word.)

      In a similar way, *if* we can *define* a high-level concept, *then* our low-level account gives us all the information needed to “translate” our account into the language of that high-level concept — but only to the extent that we *can* define that high-level concept (obviously, if we can’t define it, or can only do so partially and poorly, then we’re stuck).

      “Richard’s point, if I have understood him, is that nice, simple bridge rules like the above are not always available.”

      Yes, I agree entirely. As I’ve said repeatedly, if we lack such definitions of such concepts, then we’re stuck.
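[A concrete sketch of the exchange above — my illustration, not from the thread. The micro-state (molecular masses and speeds) never mentions “temperature”; the bridge rule mmke = 3kT/2 is the agreed definition that, once added, lets the value of T be calculated from the micro-state alone.]

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_microstate(mass_kg, speeds_m_per_s):
    """Read a temperature off the micro-state, *given* the bridge rule
    mean molecular kinetic energy (mmke) = (3/2) k T."""
    kinetic_energies = [0.5 * mass_kg * v ** 2 for v in speeds_m_per_s]
    mmke = sum(kinetic_energies) / len(kinetic_energies)
    return 2.0 * mmke / (3.0 * K_B)  # invert mmke = 3kT/2

# The list of speeds is the demon's gift; the function embodies the
# additional piece of knowledge (the definition) that turns it into T.
```

Without `temperature_from_microstate` — i.e. without the agreed definition — the speed list fixes the temperature but does not tell us what it is; with it, the calculation is mechanical.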

  46. Hi Richard,
    I’ve been tied up with work, so am only getting round to replying:

    “On a weaker interpretation they are indeed “trite and obvious, following from supervenience”. But then they’re really no more than restatements of supervenience, and you should state clearly that that’s all they are.”

    I am indeed trying to limit myself to supervenience. I am also trying to draw out logical consequences of supervenience, without going beyond supervenience (but I’ll try to avoid any of that for this reply).

    “There isn’t really a “way things are at the high levels” and a “way things are at the lowest level”. There is only one way things are. But there are high-level and low-level ways of describing the way things are.”

    OK, agreed.

    “Your #1: “Then, if you have a complete description of the low level, then you have a complete account of the high level (even if that account is still in terms of the low level).” Surely, an “account” is a description. What is an “account of the high level” if it isn’t a high-level description, in high-level terms? But then your remark in brackets implies that you’re talking about something written in low-level terms. This seems to be a contradiction.”

    First, I’m a realist in the sense that a “hurricane” really does exist. It exists as a collection of atoms, a particular pattern of atoms. In the same way a starling and a molecule “exist”.

    If we say that a hurricane or a bird “exists”, and if we say that they are both high-level emergent phenomena, are we then saying that those high-level phenomena exist, independently of any description of them? I’d say “yes”, though perhaps this is semantics. But, the normal meaning of “exists” would include a bird as existing.

    By “a complete account of the high level (even if that account is still in terms of the low level)” what I meant is that we have a complete account of the hurricane or of the bird (= “of the high level”), even if that account is in the form of a list of atom locations (= “in terms of the low level”).

    I guess one could retain the term “high level” only for *descriptions*, but that seems to me a bit weird. The phenomenon of a bird flying around does seem to me “real”, and it is present in the complete low-level account, even if not “described” in those high-level terms.

    “Your #2: “Also, if you fully simulate the low level, then the simulation manifests the high level.” Again, what does “manifests the high level” mean?”

    I meant that the phenomenon to which one could give the high-level description “bird flying around” is present in the low-level simulation.

    (To me the idea that a high-level phenomenon is only “manifest” if a human comes along and describes it sounds like wondering whether a tree falling in a forest makes a sound if no-one is around to hear it, to which my physicist’s answer is “yes, of course”).

    “1. A complete low-level description of the way things are is a complete description of the way things are.
    2. A complete low-level simulation of the way things are is a complete simulation of the way things are.
    3. What can truthfully be said in high-level descriptions is fixed by the way things are as described by a complete low-level description. (Note: “is fixed by” is not the same as “can be deduced from”.)
    Will those satisfy you?”

    Yes. I agree with all of those, and agree that they are just stating supervenience in different ways.

    • Hi Coel,

      I’m afraid the illusion of progress was just that, an illusion, because you’re back to equivocating.

      Given your clarifications, the two statements you made a couple of comments ago (which I labelled “Your #1” and “Your #2”) turn out to be just ambiguous ways of stating supervenience. That’s reasonably OK, except it’s annoying that you insist on stating supervenience ambiguously when philosophers have already worked out how to say it unambiguously.

      But then you imply that you are going to return to equivocation, by writing the following:

      “I am also trying to draw out logical consequences of supervenience, without going beyond supervenience (but I’ll try to avoid any of that for this reply).”

      See, a couple of comments ago you implied that Your #1 and Your #2 were all you were trying to say, and that those were your “consequences of supervenience”. But now you imply that you have some further “consequences of supervenience” that go beyond those.

      In my last comment to you, I wrote: “…your latest statement of your position is more acceptable, and makes more room for agreement, as long as you don’t later go back to your old statements.” But it seems you are indeed planning to go back to your old statements.

      And you don’t seem able to comprehend the basic logical point that anything which doesn’t “go beyond supervenience” will just be a restatement of supervenience in another form. In effect you’re contradicting yourself, saying simultaneously that you do want to go beyond supervenience itself (to a consequence of supervenience) and that you don’t.

      So it’s back to square one yet again. I feel a fool for banging my head against the same brick wall again, after I said I was going to stop. Sigh.

      • Hi Richard,

        “I’m afraid the illusion of progress was just that, an illusion, because you’re back to equivocating.”

        Can I assure you that this is just as frustrating for me! And, no, I really don’t think it is all my fault for being confused or not understanding.

        First, there are *all* *sorts* of consequences of supervenience. That’s why physicists want to use it as a tool. But using it as a tool is not the same as adopting additional doctrines that go beyond supervenience.

        You seem to want to stick with a particular wording of “supervenience” and are strongly resistant to any attempt to then *use* the concept. It’s as if we’d created a pristine shovel, and then you’re objecting to me wanting to dig a hole with it.

        “And you don’t seem able to comprehend the basic logical point that anything which doesn’t “go beyond supervenience” will just be a restatement of supervenience in another form.”

        This exemplifies the whole discussion! The absolute confidence that it is me at fault and that I’m not “able to comprehend” very basic points.

        Because, from my point of view, it is you who is not grasping that elementary point. What I am doing is indeed nothing more than making “a restatement of supervenience in another form”. That is exactly what I’ve been trying to explain to you all along! That is why bringing out logical consequences of supervenience is NOT going beyond supervenience.

        “In effect you’re contradicting yourself, saying simultaneously that you do want to go beyond supervenience itself (to a consequence of supervenience) and that you don’t.”

        Are you really unable to think of a charitable interpretation of that? Namely, that stating a *logical* *consequence* of supervenience is not adding anything (no extra axioms or doctrines) beyond supervenience?

        But, bringing out *consequences* of supervenience is a step towards *using* the concept as a *tool*. Effectively I’m sticking the shovel into the earth, to howls of outrage from the philosophers.

        Really, I AM NOT GOING BEYOND SUPERVENIENCE in the sense of adding extra axioms or doctrines beyond what is entailed by supervenience, but I *am* seeking logical *consequences* of supervenience because those make supervenience into a useful tool.

        It’s as if we’d agreed a whole lot of axioms of maths, and we’d agreed that 2 + 2 = 4 — and then I go on to say that, given those axioms, 4 – 2 = 2, and you immediately howl that I’m adding extra stuff in going beyond what we’d agreed.

        I’ve explained this all about six times, and I really am totally and utterly baffled about what you’re not grasping about what seems to me a very easy concept.

        Now, if you just want to stick with the precisely stated philosophical doctrine of supervenience, and then not even consider what that means for science and how science is done, then ok, I do accept that plenty of people are not interested in science — but please understand that doing science will involve *implementing* such concepts and that is going to involve having them re-stated in different ways!

  47. Coel,

    I would submit that you (and I) are committed to two theses in addition to supervenience.

    Firstly, you are committed to a particular supervenience. The argument originally was about the reduction of fields of study, and if you maintain that thesis (as I do) then I believe this is a stronger thesis than mere supervenience. For example, consider some high-level concept Qeture, which is fixed by the low-level concept “velocity distribution”, but each time Qeture emerges it is fixed in a different way; one time it is the average velocity, another time it’s the second molecule’s momentum, and so on, with no rhyme or reason. Now suppose Qeture will actually be a useful concept in some higher-level science. I would suggest that in that case reduction will fail, since there won’t really be a useful way to reduce Qeture to lower-level concepts.

    My own thesis is that there are no such useful concepts; that the high-level concepts can be reduced to relatively simple lower-level ones; that higher fields of knowledge are indeed “special sciences”, focusing on particular dynamics or physical structures, so that the bridge rules do, in fact, exist.

    Correct me if I’m wrong, but you seem to subscribe to something like this thesis too. Or do you believe useful high-level concepts are fixed by lower-level ones, but in a way that doesn’t necessarily imply different fields of science can be reduced to others in this way?

    Secondly, I would maintain supervenience implies type reduction in the weak sense defined above, “What matters is we have rules of replacement so we can reduce one kind of description to the other”. Given definitions that provide “rules of replacement”, one can reduce higher-level concepts to lower-level ones. Roughly speaking.

    This is not a thesis on top of supervenience, however, and of course does not entail that the lower-level “reduced” descriptions would be intelligible, or that this lowers the level of the concept, nor that multiple realizability is a no-go (on the contrary).

    Finally, I would note that I’m NOT a reductionist when it comes to the mind-body problem. I think no first-person description can be reduced to a third-person one. Rather, I maintain that we should augment the third-person description provided by physics with psycho-physical “bridge laws”, e.g. Tononi’s Integrated Information Theory, in order to provide the structure needed to support first-person as well as third-person descriptions of physical systems. Thus fields like human psychology do require more than just definitions.

    • Hi Panpsychist:

      “each time Qeture emerges it is fixed in a different way; one time it is the average velocity, another time it’s the second molecule’s momentum, and so on, with no rhyme or reason.”

      If there is no rhyme or reason to it then I’d regard it as pretty much an undefined concept. I’d be amazed if an undefined concept could ever be useful.

      “… that the high-level concepts can be reduced to relatively simple lower-level ones;”

      I would not agree. I don’t see any reason why the reduction would always be simple; indeed, most of the time it will be so ludicrously complex as to be uselessly unwieldy and effectively unknowable.

      For example, if we consider the concept: “Causes of the First World War”, there is no way we’re going to get any “simple” reduction to any lower level.

      As another example, we’re not going to reduce “cheetah chasing an antelope” to an atom-level description without a “bridge law” that involves the entire contingent history of the evolution of those life forms. That’s about the least simple thing imaginable!

      • “If there is no rhyme or reason to it then I’d regard it as pretty much an undefined concept. ”
        It would have no finite definition. It could very well have a precise infinite definition….

        “I would not agree. I don’t see any reason why the reduction would always be simple, indeed most of the time it will be so ludicrously complex as to be uselessly unwieldy and effectively unknowable.”

        …. isn’t that the very kind of “undefined concept” we were just discussing? I’d argue that the bridge-laws aren’t that complex, that they employ concepts that are useful and hence not too unwieldy or ill-defined.

        “For example, if we consider the concept: “Causes of the First World War”, there is no way we’re going to get any “simple” reduction to any lower level.”

        There is certainly no way to get a “simple” reduction to the level of atoms, say. But I’d posit we should be able to reduce the causes of the First World War to things like people’s desires and ideologies, for example, which is a reduction. We could reduce history to fields such as economics, military theory, human psychology, and so on.

        “As another example, we’re not going to reduce “cheetah chasing an antelope” to an atom-level description without a “bridge law” that involves the entire contingent history of the evolution of those life forms. That’s about the least simple thing imaginable!”

        Agreed. But the point is that we should be able to reduce “cheetah chasing an antelope” to lower concepts such as e.g. “an animal”, “pursuing with intent to catch”, and so on. These too can be cashed out as lower-level patterns, until eventually you end up with the monstrously and exponentially complicated cashing-out of the high-level concept in terms of the lowest level and basic relations. My thesis is that something like this must be true if any specific kind of supervenience is true, and hence that supervenience implies a “type reduction” of this sort.

    • Hi panpsychist,

      “It would have no finite definition. It could very well have a precise infinite definition….”

      OK, but such a definition would be useless for practical purposes. Indeed, even “in principle” such a definition could not be used.

      “I’d argue that the bridge-laws aren’t that complex, that they employ concepts that are useful and hence not too unwieldy or ill-defined.”

      I would say that often such “bridge laws” (not a term I particularly like) would be too complicated, too unwieldy and too unknown for actual use. I am certainly *not* arguing that there are always relatively simple bridge laws, nor that a high-level concept can always be described in a relatively simple way in terms of low-level concepts.

      “But I’d posit we should be able to reduce causes of the first world war to things like people’s desires and idealogies, for example, which is a reduction”

      But even that is not going to be “simple”. One could write a book-length account of it and it would still be partial. I agree that such a “reduction” is in-principle possible. I still maintain that very often such reductions will be hopelessly impractical and undoable (because the linkages will be too complex, voluminous and unwieldy).

      • Coel,

        I admit I am confused by your position. How, then, do you see the relation between physics and chemistry, say? Or, more generally, the general idea of the reduction of the sciences (e.g., as a caricature: physics -> chemistry -> biology -> neuroscience -> psychology -> sociology -> economics -> history)?

        My position is that “bridge-laws” exist translating the basic concepts of each science into lower levels, so that e.g. chemistry is based on an identification of “atoms” from physics, biology is based on identifying cells, organisms, genetic codes, and so on, which are essentially chemistry, and so on. These “bridge-laws” are actually very rough and vague “definitions”, but nevertheless on the whole the sciences are based on identifying things and dynamics at the lower levels. I thought this was roughly your position as well.

        Hi Panpsychist,

        My position is what philosophers would call “supervenience physicalism”. (Much miscommunication on SS, it seems to me, has occurred because physicists use the term “reductionism” to mean what philosophers call “supervenience physicalism”, whereas to philosophers “reductionism” usually means something stronger).

        On the topic of “bridge laws” I’m slightly wary of answering, because this is again a philosophers’ term and concept that is not used by physicists. For that reason I’m not clear of all the connotations of it.

        The word “law” seems to me to imply something relatively clear, well-specified and concise — something that one could write unambiguously in a paragraph or so of text and/or equations.

        I do not agree that there will always be “bridge laws” in that sense, indeed often there will not be.

        If, though, we’re allowing such “bridge laws” to be “very rough and vague” then, ok, but then I’m not sure that the claim of “bridge laws” has much substance, and personally I would not use that term.

      • This is what the whole “theoretical” vs “ontological” reduction thing is about.

        Bridge laws are about reducing *theories* to other theories. In all of the philosophical literature I’ve read, bridge laws are not vague in any way. They are meant to make the actual reduction of a higher-level theory to a lower-level one possible. And there is no really good evidence that these exist in non-trivial cases.

        Ontological reductionism is at least weakly implied by supervenience (in Pigliucci’s account, anyway — Aravis seems to have a different take on what ontological reduction is). Supervenience plus rejection of strong emergentism (which in positive terms could also be called an acceptance of the “causal completeness of fundamental physics”) would almost surely imply ontological reductionism.

      • Hi Asher,

        Given your explanation of “bridge laws” I agree that in general they will not exist, though they might in a few specific cases.

        On “ontological reductionism”, I’m again a bit wary owing to lacking full understanding of the connotations of the term.

        I would say that a limited number of types of fundamental particle exist, and that higher-level entities are “patterns” of the lower-level entities.

        Thus atoms are patterns of nucleons and electrons (plus bosons binding them together). Molecules are patterns of atoms. Cells are patterns of molecules. etc.

        The higher-level entities such as “cells” most certainly “exist” (under any sensible definition of the term “exist”), but exist as patterns of lower-level entities. In the same way, waves and hurricanes exist as patterns of matter.

        (I’m not sure whether that counts as ontological emergence or as epistemological emergence.)

      • Coel –

        Although philosophy people often talk about existence, it really comes down to whether hurricanes (for example) have any causal efficacy that is not accounted for by the fundamental entities, whatever they may be. That’s what I mean by “causal completeness”. The idea is that higher-level phenomena emerge, and there is no problem with saying that higher-level patterns “exist” – but there is no such thing (in this view) as “top-down causation” — there is only causation at the fundamental level. Which amounts to a rejection of “strong emergence”.

        I think that with respect to emergence, the ontological/epistemological distinction can be pretty confused and confusing. In my perhaps oversimplified view, claiming ontological emergence is just saying that yes, in a world with no thinking beings, there still exist processes that are patterned behaviors involving multiple causal entities.

        If you think about it, this is where the confusion really seems to arise. A physical simulation of a whirlpool presumably needs nothing but fundamental causes programmed in to exhibit the behavior. But the vortex itself – how it behaves as a conglomerate phenomenon – seems to us to be “new” and not “contained by” the descriptions of the fundamental forces. You could say that the vortex behavior is “necessitated by” or even “entailed by” the fundamental descriptions, because when you run the simulation, there’s the vortex. But we seem to be tempted to say that something is arising that is not “described by” those fundamental descriptions. And when we “describe” a vortex, we use other conglomerate properties to say how it will act (things like the “viscosity” of the fluid).
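        A toy stand-in for the whirlpool simulation (my own sketch, not anything from the thread): Conway’s Game of Life. The update rule is purely “fundamental” – it mentions only single cells and their immediate neighbours – yet running it produces a “glider”, a conglomerate pattern that travels diagonally, something the cell-level rule nowhere describes.

        ```python
        from collections import Counter

        def step(live):
            """One generation of Life; `live` is a set of (x, y) live cells."""
            # Count how many live neighbours each cell has.
            counts = Counter((x + dx, y + dy)
                             for (x, y) in live
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            return {c for c, n in counts.items()
                    if n == 3 or (n == 2 and c in live)}

        glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
        state = glider
        for _ in range(4):
            state = step(state)

        # The conglomerate pattern reappears one cell down and to the right –
        # a fact about the glider, stated nowhere in the cell-level rule.
        assert state == {(x + 1, y + 1) for (x, y) in glider}
        ```

        Talk of “the glider’s speed” (one cell per four generations) plays the role of the fluid-dynamics-style conglomerate description here: entailed by the fundamental rule, but couched in terms the rule itself never uses.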

        We seem to want to say that, in addition to needing a “fundamental theory” (the set of fundamental descriptions our simulation runs on), we need a theory of the patterned, conglomerate behavior itself (in this case, a “fluid dynamics” theory).

  48. Coel,

    “My position is what philosophers would call “supervenience physicalism””

    I would maintain that you are committed to more than that – to a particular, uniform, kind of supervenience.

    “If, though, we’re allowing such “bridge laws” to be “very rough and vague” then, ok, but then I’m not sure that the claim of “bridge laws” has much substance, and personally I would not use that term.”

    Let’s use the examples you raise a bit below,

    “Thus atoms are patterns of nucleons and electrons (plus bosons binding them together). Molecules are patterns of atoms. Cells are patterns of molecules. etc.”

    We decide which patterns to call an “atom”, and such definitions are often somewhat vague. This is an example of what I called a “bridge law” above – a somewhat vague concept (try arguing to a mathematician that it’s rigorous!) which “bridges” the gap between nuclear physics and chemistry. I’d say it has plenty of substance and usefulness. We can switch terms back to “definitions” if the term “bridge law” is inappropriate here, but the point is that we have here a reduction of one type – atoms in chemistry – to patterns in the dynamics and constitution of a lower level – patterns of nucleons and pions and their strong interactions in nuclear physics.

    Now note that the higher-level concepts are not simply determined here by the lower level, but are determined in a particular, uniform, way. We are saying that the way atoms (well, atomic nuclei) behave always reduces to the way nucleons behave, and always in the same uniform way. So we are committed to more than supervenience in the abstract; we are committing to a particular kind of supervenience holding uniformly.

    Also note that there is a type-reduction here, in the weak sense defined above – talk using the higher-level term (atom) can be replaced by talk using the lower-level ones (dynamics of nucleons). (Also note this doesn’t at all change the “level” of what an atom is – the concept is still about the same atoms, even if we now understand them as patterns of lower-level dynamics and constituents.)

    The same goes for molecules, cells, and so on.

    [Edited penultimate paragraph.]

    • Well, going from nucleons+electrons to atoms, and from atoms to molecules, seems to be the easy stuff. Yes, I can imagine that you could write down reasonably succinct and specific “bridge laws” regarding those.

      But, when you get to a much higher level I’d suggest that the “bridge laws” are no longer nearly simple, specific and concise enough to be called “bridge laws”. If we take concepts such as “tree” or “climate” or “poem” or “nation” or “leather” I’d suggest that the concept breaks down.

      • “But, when you get to a much higher level I’d suggest that the “bridge laws” are no longer nearly simple, specific and concise enough to be called “bridge laws”. If we take concepts such as “tree” or “climate” or “poem” or “nation” or “leather” I’d suggest that the concept breaks down.”

        Take your “tree”. At the most basic level, a “tree” is defined by pointing to trees and saying “things like this”. Even that is a definition of sorts; even that is a pattern that can be identified by neural networks. As knowledge of biology advances, however, the definition of “tree” changes, and it becomes defined in terms of its constituents, (evolutionary) origins, and so on. These new definitions build up a reduction of “tree” into lower-level concepts, in that trees are constituted by lower-level things (cells, DNA, proteins…) and the dynamics of trees are due to the dynamics of these constituents. (At the same time, “tree” is also defined in terms of its role in the ecosystem and so on; but roles in ecosystems are due to how trees behave rather than vice versa, so this is still a reductive picture.)
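        To make the “things like this” point concrete, here is a toy illustration of my own (a nearest-exemplar classifier standing in for the neural network; the feature vectors are made up – say, height in metres and trunk width in cm – and teach nothing about real botany). The point is only that an ostensive definition is itself a pattern a machine can pick up from pointed-at examples.

        ```python
        import math

        # Hypothetical exemplars: the things we point at when saying
        # "tree" or "bush" – (height_m, trunk_width_cm) pairs.
        exemplars = {
            "tree": [(12.0, 40.0), (20.0, 60.0), (8.0, 25.0)],
            "bush": [(1.5, 4.0), (2.0, 6.0), (1.0, 3.0)],
        }

        def classify(sample):
            """Label a sample by its nearest pointed-at exemplar."""
            return min(
                (math.dist(sample, ex), label)
                for label, exs in exemplars.items() for ex in exs
            )[1]

        print(classify((15.0, 50.0)))  # nearest to the "tree" exemplars
        print(classify((1.2, 5.0)))    # nearest to the "bush" exemplars
        ```

        A classifier like this can disagree with a refined scientific definition in exactly the way described below – the borderline cases get re-carved – without that threatening the reductive picture.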

        The new, scientific, definitions may clash with the old ones; someone might think, for example, that something is a tree and a biologist would say that “actually, that’s a bush”. This isn’t a problem. The scientific definitions are just another (more precise) way to carve up the world. We carve things according to the dynamics and constitution of their constituents, as that makes studying them easier.

        I see no reason to hold that at some level reduction fails – that for some field of science, the objects of study are no longer patterns of lower-level objects.

        That is my position. What I fail to understand is what alternative you are offering. If a “tree” cannot be reduced to lower-level descriptions – then what is a mature science of trees about? Is it the study of objects that have no common structure or dynamics?! The study of things that are not made up of lower-level things?! I don’t understand your picture of how the domains of science fit together.

      • “At the most basic level, a “tree” is defined by pointing to trees and saying “things like this”. Even that is a definition of sorts, even that is a pattern that can be identified by neural networks.”

        I agree entirely, and indeed that’s exactly what I’ve been arguing for most of this thread.

        However, what I am saying is that a definition along the lines of “things like this” is not a “bridge law”. Now, it may be that I don’t properly understand what is meant by the concept “bridge law”, but as I understand it the vaguer definitions of the sort that you outline would not qualify.
