About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Sunday, December 30, 2012

Is there a problem of counterfactual philosophers?

by Massimo Pigliucci

This is my last report from the annual meeting of the American Philosophical Association. I had no idea what this session was going to be about, but the title was intriguing enough to make me skip a parallel one on Elliott Sober’s (very interesting, I read and reviewed it) book, Did Darwin Write the Origin Backwards?. Anyway, the session I actually attended was chaired by Jeff Sebo (NYU), the speaker was Nathan Ballantyne (Fordham), and the commentator was Adam Rosenfeld (Stony Brook). Warning: the below is going to be weird, and may even at times sound like a Monty Python version of what a philosophy talk is like...

Ballantyne began by pointing out that plenty of people who chose other professions could instead have done philosophy. And there are, of course, philosophers who died earlier than they might have, especially those who died young, and who could have contributed more to philosophical investigations.

We can think of plenty of epistemic counterfactuals, such as “if the student had studied for the exam he would have known the answer to the exam’s question.” The issue in which the author is interested is that in a number of nearby possible worlds some counterfactual philosopher might have come up with a “defeater” for certain arguments, that is with rational reasons to abandon a given position, reasons that are not available to us because the counterfactual in question is, well, counter-factual...

The problem, as stated by Ballantyne, is this: if a group of methodologically-friendly counterfactual philosophers (e.g., analytic philosophers, if you are an analytic philosopher) had scrutinized my best arguments for some proposition P and then shared their thoughts, I would very likely have defeaters for believing P. As a consequence, we should take seriously the possibility that some of our philosophical beliefs are not, in fact, rational or defensible. (I must say, I believe the latter even in the absence of counterfactual philosophers from nearby possible worlds...)

Ballantyne then entertained — in true philosophical fashion — a number of objections and replies to his central thesis. Here is a flavor, from his handout at the talk:

Objection: counterfactual philosophers might offer us defeaters, but they might not. So it’s doubtful that they would. Then why grant that they very likely would?

Reply: be imaginative.

Objection: wouldn’t methodologically-friendly counterfactual philosophers just agree with us?

Reply: it’s doubtful that the relevant methodological commitments alone guarantee agreement.

Objection: don’t the counterfactual philosophers cancel out?

Reply: consider a case where two sources deliver conflicting epistemic counterfactuals.

Objection: but I have conclusive, knockdown arguments.

Reply: maybe, but you’ll also need reasons to think they are.

And so on. Some of Ballantyne’s objection-reply pairs are a bit more complex, but you get the gist. Should we take this as a serious problem, or as a textbook example of how silly philosophy can get? Good question, and I’m inclined toward a middle ground view.

I can see Ballantyne’s logic, an example of thinking along the lines of possible-worlds logic, which has played a major role in certain quarters of analytic philosophy throughout the 20th century. There is an analog of this problem in philosophy of science, sometimes referred to as the problem of unconceived alternatives. It is related to the issue of the underdetermination of theory by data, a staple of antirealism about scientific theories, and even of some science studies-type critiques of science.

For instance, Andrew Pickering famously proposed in his Constructing Quarks that fundamental physics could have taken a significantly different route from the one it actually took during the 20th century, resulting in a picture of the world where the conceptual construct “quark” was not needed. Most physicists would likely react with outrage to this suggestion, and I am not endorsing it here, but it is a possibility, the likelihood of which depends on how necessary the role of certain theoretical entities really is in our views of the world. And of course it also depends on historical counterfactuals about what a different fundamental physics community might or might not have come up with.

The point is that this isn’t a silly possibility to be ruled out without engagement. Indeed, the issue — in philosophy — is related to my contention that philosophical investigations proceed by exploring the logical space relevant to a particular issue. Philosophy makes progress, in an important sense, by exploring spaces of logical possibilities, excluding certain options as inadequate, and working toward refining more promising ones. Think, for instance, how the original version of utilitarianism in ethics, proposed by Jeremy Bentham, was much less tenable than the more sophisticated views put forth by John Stuart Mill, and how those in turn have been significantly augmented by modern consequentialists like Peter Singer (again, this is not an endorsement of utilitarianism on my part, since I am more sympathetic to virtue ethics, but rather a good example of the general principle).

So, if a physicist says that there couldn’t possibly be a fundamental physics without quarks she is making a strong statement that the available empirical evidence constrains the logical space of possible physical theories so that one simply cannot avoid quarks in any good theory about the basic constituents of matter. Similarly, a philosopher who replies to Ballantyne along the lines of one of the objections above (say, counterfactual philosophers would likely agree with our current arguments and positions) is saying that modern philosophy has explored the fruitful regions of logical space concerning a number of issues to a high degree of accuracy, which is a position significantly more doubtful than the strong position held by our imaginary physicist concerning quarks. (I hasten to say that this conclusion is not arrived at because physics is a hard science and philosophy is fluff, but because the logical space in which philosophers typically move is much less constrained by empirical data than the space of theories about the actual world that physicists navigate. This could even be elaborated into an argument that philosophy is therefore much more difficult than physics, but I won’t go that far...)

All of the above said, one cannot possibly leave a session like the one I attended without having a nagging sensation that one has just witnessed an example of what Ladyman and Ross disparagingly refer to as “neo-Scholasticism.” You’ve got to admire the ingenuity of the argument, but at the end of the day you are left with the question: so what? Yes, counterfactual philosophers might have come up with defeaters for some positions that we think are solid, but guess what: counterfactual philosophers don’t exist, and we don’t have any way to access their counter-arguments, so all we can do is keep doing what we always do: rely on actual philosophers to challenge our arguments, and respond to the best of our ability to the objections real people actually throw at us. Philosophy, like science, is a human enterprise, and it advances within the constraints imposed by human epistemic limits. That said, as Ballantyne admonished, it’s always a good idea to remind ourselves that maybe a defeater (or a better theory in science) hasn’t been found not because it doesn’t exist, but simply because we have not (yet) stumbled upon it.

Saturday, December 29, 2012

Science and metaphysics

by Massimo Pigliucci

Afternoon time at the annual meeting of the American Philosophical Association. I’m following the session on science and metaphysics, chaired by Shamik Dasgupta (Princeton). The featured speakers are Steven French (Leeds-UK), James Ladyman (Bristol-UK), and Jonathan Schaffer (Rutgers-New Brunswick). I have developed a keen interest in this topic of late, though as an observer and commentator, not a direct participant in the discussion. Let’s see what transpires today. A note of warning: what follows isn't for the (metaphysically) faint of heart, and it does require at least some familiarity with fundamental physics.

We started with French on enabling eliminativism, or what he called taking a Viking approach to metaphysics. (The reference to Vikings is meant to evoke an attitude of plundering what one needs and leaving the rest; less violently, this is a view of metaphysics as helping itself to a varied toolbox.) French wishes to reject the claim made by others (for instance, Ladyman) that a priori metaphysics should be discontinued. However, he does agree with critics that metaphysics should take science seriously.

The problem French is concerned with, then, is how to relate the scientific to the ontological understanding of the world. Two examples he cited were realism about wave functions and the kind of ontic structural realism favored by Ladyman and his colleague Ross.

Ontic structural realism comes in at least two varieties: eliminativist (we should eliminate objects entirely from our metaphysics, particles are actually "nodes" in the structure of the world) and non-eliminativist (which retains a "thin" version of objects, via the relations of the underlying structure).

French went on to talk about three tools for the metaphysician: dependence, monism, and an account of truth making.

Dependence. The idea is that, for instance, particles are "dependent" for their existence on the underlying structure of the world. A dependent object is one whose features are derivative on something else. In this sense, eliminativism looks viable: one could in principle "eliminate" (ontologically) elementary particles by cashing out their features in terms of the features of the underlying structure, effectively doing away with the objects themselves.

The basic idea, to put it as French did, is that "if it is of the essence, or nature or constitution of X that it exists only if Y exists, so that X is dependent on Y in the right sort of way, then X can be eliminated in favor of Y + structure."

As French acknowledged, however (though he didn't seem sufficiently worried about it, in my opinion), the eliminativist still needs to provide an account of how we recover the observable properties of objects above the level of fundamental structure.

Monism. This is the (old) idea that the world is made of one kind of fundamental stuff, a view recently termed "blobjectivism" (everything reduces to a fundamental blob). As French put it, this is saying that yes, electrons, for instance, have charges, but there really are no electrons, there is just the blob (that is, the structure).

A number of concerns have been raised against monism, and French commented on a few. For instance, monism can't capture permutations in state space, to which the monist responds that monistic structure includes permutation invariance. This, however, strikes me as borderline begging the question, since the monist can always use a catch-all "it's already in the structure" response to any criticism. But how do we know that the blob really does embody this much explanatory power?

Truthmakers. French endorses something called Cameronian truthmaker theory, according to which < X exists > might be made true by something other than X. Therefore, the explanation goes, < X exists > might be true according to theory T without X being an ontological commitment of T.

Perhaps this will be made clearer by looking at one of the objections to this account of truth making: the critic can reasonably ask how it is possible that there appear to be things like tables, chairs, particles, etc., if these things don't actually exist. French's response is that one just needs to piggyback on the relevant physics, though it isn't at all settled that "the relevant physics" actually says that tables, chairs, and particles don't exist in the strong eliminativist sense of the term (as opposed to, say, existing as spatio-temporal patterns of a certain kind, accessible at the relevant level of analysis).

Next we moved to Ladyman, on "between eliminativism and monism: the radical middle ground." He acknowledged that structural realism is accused by some of indulging in mystery mongering, but Ladyman responded (correctly, I think) that it is physics that threw up stuff —  like fundamental relations and structure — that doesn't fit with classical metaphysical concepts, and the metaphysician now has to make some sense of the new situation.

Ladyman disagrees with French's eliminativism about objects, suggesting that taking structure seriously doesn't require doing away with objects. The idea is that there actually are different versions of structuralism, which depend on how fundamental relations are taken to be. Ladyman also disagrees with the following speaker, Schaffer, who is an eliminativist about relations, giving ontological priority to one object and its intrinsic properties (monism). Ladyman's (and his colleague Ross') position is summarized as one of being non-eliminativist about metaphysically "thin" individuals, giving ontological priority to relational structures.

One of the crucial questions here is whether there is a fundamental level to reality, and whether consequently there is a unidirectional ontological dependence between levels of reality. Ladyman denies a unidirectional dependence. For instance, particles and their state depend on each other (that is, one cannot exist without the other), the interdependence being symmetrical. The same goes for mathematical objects and their relations, for instance the natural numbers and their relations.

As for the existence of a fundamental level, we have an intuition that there must be one, partly because the reductionist program has been successful in science. However, Ladyman thinks that the latest physics has rendered that expectation problematic. Things have gotten messier in fundamental physics of late, not less so. Consequently, for Ladyman the issue of a fundamental level is an open question, which therefore should not be built into one's metaphysical system — at least not until physicists settle the matter.

Are elementary quantum particles individuals? Well, one needs to be clear on what one means by individual, and also on the relation between the concept of individuality and that of object. This is a question that is related to that old chestnut of metaphysics, the principle of identity of indiscernibles (which establishes a difference between individuals — which are not identical, and therefore discernible — and mere objects). However, Ladyman collapses individuals into objects, which is why he is happy to say that — compatibly with quantum mechanics — quantum particles are indeed objects. The idea is that particles are intrinsically indiscernible, but they are (weakly) discernible in virtue of their spatio-temporal locality. 

Ladyman, incidentally, is of course aware of the quantum principle of non-locality, which makes the idea of precisely individuated particles problematic. But he doesn't think that non-locality licenses a generic holism according to which there is only one big blob in the world; rather, individuality can be recovered by thinking in terms of a locally confined holism. Again, that strikes me as sensible in terms of the physics (as I understand it), and it helps recover a (thin, as he puts it) sense in which there are objects in the world.

Finally, we got to Schaffer, who argued against ontic structural realism of the type proposed by either French or Ladyman. He wants to defend the more classical view of monism instead. He claimed that that is the actual metaphysical picture that emerges from current interpretations of quantum mechanics and general relativity.

His view is that different mathematical models — both in q.m. and in g.r. — are best thought of as just being different notations related by permutations, corresponding to a metaphysical unity. In a sense, these different mathematical notations "collapse" into a unified picture of the world.

Schaffer's way to cash out his project is by using the (in)famous Ramsey sentences, which are sentences that do away with labels, not being concerned with specific individuals. Now, one can write the Ramsey sentences corresponding to the equations of general relativity, which according to the author yields a picture of the type that has been entertained since at least Aristotle: things come first, relations are derivative (i.e., one cannot have structures or relations without things that are structured or related). If this is right, of course, the ideas that there are only structures (eliminativism à la French) or that structures are ontologically prior to objects (Ladyman) are incorrect.
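For readers unfamiliar with the device: a Ramsey sentence takes a theory formulated with theoretical terms and replaces each such term with an existentially bound variable, keeping only the structural claims. Schematically (my illustration, not Schaffer's notation):

```latex
% A theory T formulated with theoretical terms \tau_1, \dots, \tau_n:
%   T(\tau_1, \dots, \tau_n)
% Its Ramsey sentence existentially generalizes over those terms,
% discarding the labels and preserving only the structure:
\exists x_1 \dots \exists x_n \; T(x_1, \dots, x_n)
```

Schaffer's point is then about what, if anything, bears the structure that survives this translation.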

So, Schaffer thinks of Ramsey sentences as describing structural properties, which he takes to be the first step toward monism. Second, says Schaffer, what distinguishes abstract structures from the one describing the universe is that something bears those structures. That something is suggested to be the largest thing we can think fits the job, that is, the universe as a whole. He calls this picture monistic structural realism: there is a cosmos (the whole), characterized by parts that bear out the structures qualitatively described by the Ramsey translation of standard physical theories like relativity and quantum mechanics. Note that this is monism because — thanks to the Ramsey translation — the parts are interchangeable, related by the mathematical permutations mentioned above.

Okay, is your head spinning by now? This is admittedly complicated stuff, which is why I added explanatory links to a number of the concepts deployed by the three speakers. I found the session fascinating, as it gave me a feel for the current status of discussions in metaphysics, particularly, of course, as far as it concerns the increasingly dominant idea of structural realism in its various flavors. Notice too that none of the participants engaged in what Ladyman and Ross (in their Every Thing Must Go, about which I have already commented) somewhat derisively labeled "neo-Scholasticism": the entire discussion took seriously what comes out of physics, with all participants conceptualizing metaphysics as the task of making sense of the broad picture of the world that science keeps uncovering. That seems to me to be the right way of doing metaphysics, and one that may (indeed should!) appeal even to scientists.

The philosophy of genetic drift

by Massimo Pigliucci

This morning I am following a session on genetic drift at the American Philosophical Association meetings in Atlanta. It is chaired by Tyler Curtain (University of North Carolina-Chapel Hill), the speaker is Charles Pence (Notre Dame), and the commenters are Lindley Darden (Maryland-College Park) and Lindsay Craig (Idaho). [Note: I’ve written myself about this concept, for instance in chapter 1 of Making Sense of Evolution. Check also these papers in the journal Philosophy & Theory in Biology: Matthen and Millstein et al.]

The title of Charles' talk was "It's ok to call genetic drift a force," a position — I should state right at the beginning — with which I actually disagree. Let the fun begin! Drift has always been an interesting and conceptually confusing issue in evolutionary biology, and of course it plays a crucial role in mathematical population genetic theory. Drift has to do with stochastic generation-to-generation sampling of gametes in populations. The strength of drift is inversely proportional to population size, which also means that its effect is antagonistic to that of natural selection (whose efficacy is directly proportional to population size).
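To make the population-size dependence concrete, here is a minimal Wright-Fisher-style sketch (my own illustration, not something presented at the session): each generation the allele frequency is resampled by drawing 2N gametes from the current frequency, so the per-generation sampling variance, p(1-p)/2N, shrinks as N grows.

```python
import random

def drift_trajectory(n_individuals, p0=0.5, generations=50, seed=1):
    """Wright-Fisher drift: each generation, resample 2N gametes
    from the current allele frequency p."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n_individuals))
        p = copies / (2 * n_individuals)
    return p

def spread(final_freqs, p0=0.5):
    """Mean squared deviation of final frequencies from the start."""
    return sum((p - p0) ** 2 for p in final_freqs) / len(final_freqs)

small = [drift_trajectory(10, seed=s) for s in range(100)]
large = [drift_trajectory(500, seed=s) for s in range(100)]
print(spread(small) > spread(large))  # True: drift is stronger when N is small
```

Note that as N goes to infinity the sampling variance vanishes, which is exactly the "infinite population size" idealization mentioned below.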

Charles pointed out that one popular interpretation of drift among philosophers is "whatever causes fail to differentiate based on fitness." The standard example is someone being struck by lightning, the resulting death clearly having nothing to do with that individual's fitness. I'm pretty sure this is not what population geneticists mean by drift. If that were the case, a mass extinction caused by an asteroid (that is, a cause that has nothing to do with individual fitness) would also count as drift. Indeed, discussions of drift — even among biologists — often seem to confuse a number of phenomena that have little to do with each other, other than the very generic property of being "random."

What about the force interpretation then? This is originally due to Elliott Sober (1984), who developed a conceptual model of the Hardy-Weinberg equilibrium in population genetics based on an analogy with Newtonian forces. H-W is a simple equation that describes the genotypic frequencies in a population where no evolutionary processes are at work: no selection, no mutation, no migration, no assortative (i.e., non-random) mating, and infinite population size (which implies no drift).
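For readers who want the equation itself: with two alleles at frequencies p and q = 1 - p, Hardy-Weinberg predicts genotype frequencies p², 2pq, and q², and random mating alone leaves the allele frequency unchanged from one generation to the next. A quick numerical check (my illustration):

```python
def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) at Hardy-Weinberg equilibrium."""
    q = 1 - p
    return p ** 2, 2 * p * q, q ** 2

def next_allele_freq(freq_AA, freq_Aa):
    """Allele frequency of A after one round of random mating:
    AA homozygotes contribute two A copies, heterozygotes one."""
    return freq_AA + freq_Aa / 2

AA, Aa, aa = hardy_weinberg(0.7)
print(round(AA, 2), round(Aa, 2), round(aa, 2))  # 0.49 0.42 0.09
print(round(next_allele_freq(AA, Aa), 10))       # 0.7: nothing evolves
```

This "nothing happens" baseline is what the later discussion of null models and inertial states turns on.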

The force interpretation is connected to the (also problematic, see Making Sense of Evolution, chapter 8) concept of adaptive landscape in evolutionary theory. This is a way to visualize the relationship between allelic frequencies and selection: the latter will move populations "upwards" (i.e., toward higher fitness) on any slope in the landscape, while drift will tend to shift populations randomly around the landscape.

The controversy about thinking of drift as a force began in 2002 with a paper by Matthen and Ariew, followed by another one by Brandon in 2006. The basic point was that drift inherently does not have a direction, and therefore cannot be analogized to a force in the physical (Newtonian) sense. As a result, the force metaphor fails.

Stephens (2004) claimed that drift does have direction, since it drives populations toward less and less heterozygosity (or more and more homozygosity). Charles didn't buy this, and he is right. Stephens is redefining "direction" for his own purposes, as heterozygosity does not appear on the adaptive landscape, making Stephens' response entirely artificial and not consonant with accepted population genetic theory.
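For what it's worth, the standard result Stephens leans on is that expected heterozygosity under pure drift decays geometrically, H_t = H_0(1 - 1/2N)^t. A quick sketch (my own illustration) of why that "direction" is toward homozygosity, and faster in small populations:

```python
def expected_heterozygosity(h0, n_individuals, generations):
    """Expected heterozygosity after t generations of pure drift:
    H_t = H_0 * (1 - 1/(2N)) ** t."""
    return h0 * (1 - 1 / (2 * n_individuals)) ** generations

# Heterozygosity only ever declines in expectation, faster in small populations
print(expected_heterozygosity(0.5, 10, 50) < expected_heterozygosity(0.5, 1000, 50))  # True
print(expected_heterozygosity(0.5, 10, 50) < 0.5)                                     # True
```

A monotonic statistical tendency, however, is not the same thing as a direction in the state space that the adaptive landscape depicts, which is the author's point against Stephens.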

Filler (2009) thinks that drift is a force because it has a mathematically specific magnitude and can unify a wide array of seemingly disparate phenomena. Another bad answer, I think (and, again, Charles also had problems with this). First off, forces don't just have magnitude, they also have direction, which, again, is not the case for drift. Sober was very clear on this, since he wanted to think of evolutionary "forces" as vectors that can be combined or subtracted. Second, it seems that if one follows Filler far too many things will begin to count as "forces" that neither physicists nor biologists would recognize as such.

Charles' idea is to turn to the physicists and see whether there are interesting analogs of drift in the physical world. His chosen example was Brownian motion, the random movement of small objects like dust particles. Brownian motion is well understood and mathematically rigorously described. Charles claimed that the equation for Brownian motion "looks" like the equation for a stochastic force, which makes it legitimate to translate the approach to drift.

But I'm pretty sure that physicists themselves don't think of Brownian motion as a force. Having a mathematical description of stochastic effects (which we do have, both for Brownian motion and for drift — and by the way, the two look very different!) is not the same as having established that the thing one is modeling is a force. Indeed, Charles granted that one could push back on his suggestion, and reject that either drift or Brownian motion are forces. I'm inclined to take that route.
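The formal contrast is easy to display. A Brownian increment has constant variance regardless of the particle's position, whereas the Wright-Fisher sampling variance depends on the current allele frequency and vanishes at fixation. A minimal sketch (my own illustration, not Charles' equations):

```python
import random

def brownian_step(x, sigma, rng):
    """One Brownian increment: Gaussian, variance sigma**2, independent of x."""
    return x + rng.gauss(0, sigma)

def drift_step_variance(p, n_individuals):
    """Per-generation variance of the allele-frequency change under
    Wright-Fisher drift: it depends on the current frequency p."""
    return p * (1 - p) / (2 * n_individuals)

x = brownian_step(0.0, 1.0, random.Random(0))  # step variance is 1 wherever x is
print(drift_step_variance(0.5, 100))  # 0.00125, largest at intermediate frequencies
print(drift_step_variance(1.0, 100))  # 0.0, no further change once the allele is fixed
```

So even at the level of the stochastic terms, the two models are not interchangeable, which supports the skepticism about the analogy.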

A second set of objections to the idea of drift as a force (other than it doesn't have direction) is concerned with the use of null models, or inertial states, in scientific theorizing. H-W is supposed to describe what happens when nothing happens, so to speak, in populations of organisms. According to Brandon, however, drift is inherent in biological populations, so that drift is the inertial state itself, not one of the "forces" that move populations away from such state.

Charles countered that for a Newtonian system gravity could also be considered "constitutive," the way Brandon thinks of drift, but that would be weird. Charles also objected that it is no good to argue that one could consider Newtonian bodies in isolation from the rest of the universe, because similar idealizations can be invoked for drift, most famously the above-mentioned assumption of infinite population size. This is an interesting point, but I think the broader issue here is the very usefulness of null models in science in general, and in biology in particular (I am skeptical of their use, at least as far as inherently statistical problems of the kind dealt with by organismal biology are concerned, see chapter 10 of Making Sense).

Broadly speaking, one of the commentators (Darden) questioned the very benefit of treating drift as a force, considering that obviously biologists have been able to model drift using rigorous mathematical models that simply do not require a force interpretation. Indeed, not even selection can always be modeled as a vector with intensity and direction: neither the case of stabilizing selection nor that of disruptive selection fit easily in that mold, because in both instances selection acts to (respectively) decrease or increase a trait's variance, not its mean. Moreover, as I pointed out in the discussion, assortative mating is also very difficult to conceptualize as a vector with directionality, which makes the whole attempt at thinking of evolutionary "forces" ever more muddled and not particularly useful. Darden's more specific point was that while it is easy to think of natural selection as a mechanism, it is hard to think of drift as a mechanism (indeed, she outright denied that it is one), which again casts doubt on what there is to gain from thinking of drift as a force. The second commentator (Craig) also questioned the usefulness of the force metaphor for drift, even if defensible along the lines outlined by Pence and others.
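A toy numerical example (mine, not Darden's) of the point about stabilizing selection: if the survivors are simply those individuals closest to the population mean, the trait variance drops while the mean barely moves, so there is no sensible direction vector to assign to the episode of selection.

```python
import random

def stabilizing_selection(population, keep_fraction=0.5):
    """Toy stabilizing selection: survivors are the individuals whose
    trait values lie closest to the population mean."""
    center = sum(population) / len(population)
    ranked = sorted(population, key=lambda x: abs(x - center))
    return ranked[: int(len(population) * keep_fraction)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
before = [rng.gauss(10, 2) for _ in range(1000)]
after = stabilizing_selection(before)

print(variance(after) < variance(before))     # True: the variance shrinks
print(abs(mean(after) - mean(before)) < 0.2)  # True: the mean barely moves
```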

Even more broadly, haven't physicists themselves moved away from talk of forces? I mean, let's not forget that Newtonian mechanics is only an approximation of relativity theory, and that "forces" in physics are actually interpreted in terms of fields and associated particles (as in the recently much discussed Higgs field and particle). Are we going to reinterpret this whole debate in terms of biological fields of some sort? Isn't it time that biologists (and philosophers of biology) let go of their physics envy (or their envy of philosophy of physics)?

Friday, December 28, 2012

Metaethical antirealism, evolution and genetic determinism

by Massimo Pigliucci

The Friday afternoon session of the American Philosophical Association meeting from which I am blogging actually had at least three events of interest to philosophers of science: one on race in population genetics, one on laws in the life sciences, and one on the strange combination of (metaethical) antirealism, evolution and genetic determinism. As is clear from the title of this post, I opted for the latter... It featured three speakers: Michael Deem (University of Notre Dame), Melinda Hall (Vanderbilt), and Daniel Demetriou (Minnesota-Morris).

Deem went first, on "de-horning the Darwinian dilemma for realist theories of value" (no slides, darn it!). The point of the talk was to challenge two claims put forth by Sharon Street: a) that the normative realist cannot provide a scientifically acceptable account of the relation between evolutionary forces acting on our evaluative judgments and the normative facts realists think exist; b) that the "adaptive link account" provides a better explanation of this relation than any realist tracking account. (Note: much of this text is from the handout distributed by Deem.)

The alleged dilemma consists in this: by hypothesis, evolutionary forces have played a significant role in shaping our moral evaluative attitudes. If so, how is the moral realist to make sense of that hypothesis while holding on to moral realism? Taking the first horn, the realist could deny any relation between evolution and evaluative judgments. But this would mean either embracing skepticism about evaluative judgments or holding that evolved normative judgments coincidentally align with moral facts, neither option being palatable to the moral realist.

The second horn leads the realist to accept the link with evolution. But this means that s/he would have to claim that tracking normative truths is somehow biologically adaptive, a position that is hard to defend on scientific grounds.

According to Street there are two positions available here: the tracking account (TA) says that we grasp normative facts because doing so in the past has augmented our ancestors' fitness. The adaptive link account (ALA) says that we make certain evaluative judgments because these judgments forged adaptive links between the responses of our ancestors and the environments in which they lived. Note that the difference between TA and ALA is that the first talks of normative facts, the latter of evaluative judgments.

Street prefers ALA on the grounds that it is more parsimonious and clear, and that it sheds more light on the phenomenon to be explained (i.e., the existence of evaluative judgments). Deem doesn't think this is a good idea, because within the ALA evaluative judgments play a role analogous to hard-wired adaptations in other animals, which seems implausible; and because it is mysterious why selection would favor evaluative judgments.

Deem then went on to propose a modified ALA: humans possess certain evaluative tendencies because these tendencies forged adaptive links between the responses of our ancestors and their environments. Note that the difference between standard ALA and realist ALA is that the first one talks of evaluative judgments, the latter of evaluative tendencies. (This distinction makes perfect sense to me: judgments are the result, at least in part, of reflection; tendencies can be thought of as instinctual reactions or propensities. So, for instance, humans have both, while other primates only — as far as we know — possess propensities, but are incapable of judgments.)

To put it in his own words, Deem claims that "the realist can show that his/her position is compatible with evolutionary biology and can provide an account of the relation between the evolutionary forces that shaped human evaluative attitudes and independent normative facts. ... [However] it seems evolutionary theory underdetermines the choice between realism and antirealism in metaethics."

Okay, I take it that Deem's idea is to reject the suggestion that evolution makes it unnecessary to resort to the realist idea that there are normative facts. Perhaps so, in a way similar to that in which an evolutionary account of our abilities at mathematical reasoning wouldn't exclude the possibility of mathematical realism ("Platonism"). But one needs a positive reason to contemplate an objective ontological status for moral truths, and I think the case for that is far less compelling than the analogous case for mathematical objects (one of the reasons being that while mathematical abstractions truly seem to be universal, moral truths would still apply only to certain kinds of social organisms capable of self-reflection).

Melinda Hall talked about "untangling genetic determinism: the case of genetic abortion" (another talk without slides, or even a handout!). She is interested in abortion in cases where medical evidence predicts that the infant will be severely disabled. Given such information, is it moral to terminate the pregnancy ("genetic" abortion, a type of negative genetic selection) or, on the contrary, is it moral to continue it?

The basic idea seems to be that genetic abortion is conceptually linked to genetic determinism, i.e., an overemphasis on the importance of genetic factors in development. In turn, Hall argued, the decision to terminate pregnancies in such cases contributes to stigmatizing the disabled community, as well as to reducing the social resources devoted to it.

Disability has both a social and a biological component, and if a lot of the negative effects of disabilities on life quality are the result of social construction, then the main issue is social and not biological. Disability advocates claim that it is problematic to make a single trait (the disability, whatever it is) become an overriding criterion on the basis of which to make the decision to abort.

There is thus apparently a tension — which Hall sought to defuse — between the usually pro-choice attitude of disability advocates and the restriction on the mother's reproductive rights if one objects to "genetic abortion."

A reasonable (I think) worry is that "gene mania," i.e., the quest for purely or largely biological explanations for human behavior, may encourage the search for simplistic solutions to problems that are in reality complex and in good part social-environmental. My own worry about Hall and some of her colleagues' approach, however, is the opposite danger that disability advocates may seriously underestimate the biological basis of disabilities, which may in turn lead to an equally problematic tendency to reject medical preventive solutions. (Indeed, Hall at one point made the parenthetical comment that disabilities may not be a "problem" at all. I think that's willful rejection of the painful reality in which many human beings live.)

Hall went on to invoke the nightmarish social scenario depicted in the scifi movie Gattaca. I don't object to using scifi scenarios as evocative thought experiments, but of course there is a huge disanalogy between the situation in Gattaca and the issue of disabilities. Gattaca's "inferiors" were actually normal human beings, pitted against genetically enhanced ones. Disabled people are, in a very important sense, the mirror image of the movie's enhanced humans, since they lack one or another species-normal functionality.

Though Hall qualified this, disability advocates apparently worry that "negative genetic selection" may nurture a societal attitude that it may one day be possible to eliminate disability, which somehow could turn into decreased social support for disabled people. Frankly, I think that's an egregious example of non-sequitur, and moreover it flies in the face of the empirical evidence that Western societies at least have significantly increased allocation of resources to the disabled (see, for instance, the Americans with Disabilities Act).

This whole discussion seems to be predicated on an (unstated and, I think, indefensible) equivalency or near-equivalency between the moral status of a fetus who is likely to develop into a disabled person and that person him/herself. As the commentator for the paper (Daniel Moseley, UNC-Chapel Hill) pointed out, it is hard to see what is morally wrong in parents' decision to abort a fetus that has a high likelihood — based on the best medical evidence available — to develop a disability that would be hard to live with, regardless of whatever support society will provide (as it ought to) to the disabled person resulting from that pregnancy, should the parents decide not to abort.

Finally, Daniel Demetriou spoke about "fundamental moral disagreement, antirealism, and honor." (Yay! Slides!!) He took on Doris and Plakias' argument that moral realism predicts fundamental moral agreement (analogously, say, to agreement about mathematical or scientific facts). However, empirically there is plenty of evidence for moral disagreements, for instance in the case of the "culture of honor" among whites in the American South. This is turned by Doris and Plakias into an argument against moral realism (i.e., there are fundamental disagreements about moral norms because there is no objective fact of the matter about moral norms).

There are indeed interesting data showing that white Southerners respond more violently to insult and aggression. The alleged explanation is that these people inherited (culturally, not genetically) a culture of honor, which comes from their pastoral ancestors. More broadly, an honor culture according to some authors is likely adaptive in pastoralist social environments, where goods are easily stolen and a reputation for prompt and violent reaction may function as an effective deterrent (as opposed to, say, the situation in agricultural societies, where goods like crops are not easily stolen).

Interestingly, African pastoralists, as well as pastoralists in Sardinia and in Crete, consider raiding from other livestock owners a way to prove their honor as young men. The same goes for the Scottish highlands, again highlighting the connection between honor and violence.

Demetriou, however, is not convinced by this account, raising a number of objections, including the fact that pastoralist societies are still concerned with fairness, as in the concept of fair fighting. Fairness in fighting would not be a good deterrent against aggression, contra the above thesis. Moreover, there are several honor cultures that are not in fact violent. Instead, Demetriou put forth a "competition ethic account" of honor, where honor has to do with social reputation.

Metaethically, Demetriou agreed that honor really is different from the liberal ethics of welfare, favoring prestige instead. Similarly, liberalism favors cooperative principles, while honor ethics favors competition. So for Demetriou the honor outlook is much more fundamentally different from the liberal ethos than even the story based on the effectiveness of violence would suggest.

However, the author concluded, moral realism has no problem with the divergence between liberalism and honor, since it is possible to accommodate the difference invoking pluralism of a realist sort. Well, yes, though it seems to me that this strategy is capable of accommodating pretty much any set of data demonstrating empirical divergence of ethical systems... Moreover, one of Demetriou's comments toward the end was a bit confusing. He wondered why a white Southerner who has grown up in an honor culture couldn't "wake up" to a liberal approach, perhaps (his examples) after watching the right movie or reading the right book. But wait, that seems to imply no pluralism at all, but rather a situation in which the person steeped in the honor culture was simply wrong and realized, under proper conditions, that he was so. That, of course, may be, but it is a very different defense of realism against the empirically driven antirealist argument. Which one is it? Actual pluralism, or the idea that there is one correct moral system and some people are simply in error about it?

Overall this felt like a somewhat disjointed session, particularly because the second talk had hardly anything at all to do with antirealism, while neither the first nor especially the last talk had much to do with genetic determinism. But such is the way of many APA sessions, and each of the three talks did raise interesting questions about the relationship between ethics and science. It has been pretty uncontroversial for a while among moral philosophers that their discipline (just like every other branch of philosophy, I would argue) had better take seriously the best scientific evidence relevant to whatever philosophical issues are under discussion. The much more interesting and thorny question is what exactly the implications of the science are for ethical and even metaethical positions, as well as — conversely — what the implications of our ethical theories are for the way science itself is conducted and scientific advice is implemented in our society.

Philosophers and climate change

by Massimo Pigliucci

It's that time of year: the period between Christmas and New Year's Eve, when for some bizarre reason the American Philosophical Association has its annual meeting. This year it's in Atlanta, and I made it down here to see what may be on offer for a philosopher of science. This first post is about how philosophers see climate change, at least as reflected in an APA session chaired by Eric Winsberg, of the University of South Florida. The two speakers were Elisabeth Lloyd (Indiana) and Wendy Parker (Ohio), with critical commentary by Kevin Elliott (South Carolina).

The first talk was by Lloyd, who began by addressing the claim — by climate scientists — that the robustness of their models is a good reason to be confident in the results of said models. Broadly, however, philosophers of science do not consider robustness per se to be confirmatory. To put it simply, models could be robust and wrong.

Still, Lloyd argued that robustness is an indicator that a model is, in fact, more likely to be true. She began by referring to a point made by some theoretical ecologists: good models will predict the same outcome in spite of being built on different specific assumptions.

Lloyd stressed that there are different concepts of robustness. One is that of measurement robustness, the most famous example of which is the estimation, based on as many as 13 different methods, of Avogadro's number. The concept used by Lloyd, however, is one of model robustness, which deals with the causal structure of the various climate models. The focus, then, shifts from the outcome (measurement robustness) to the internal structure of the models themselves.

Climate models are a way of articulating theory, because the equations that describe atmospheric dynamics are not analytically solvable. Lloyd went into some detail concerning how these models are actually built, pointing out how often predictions of crucial variables (like global mean surface temperature) are the result of six to a dozen different models, incorporating a range of parameter values. A set of models, or model "type," is characterized by a common core of causal processes underlying the phenomena to be modeled. An interesting example is that when climate models do not include greenhouse gases (but are limited to, say, solar and volcanic effects), they are indeed robust, as a set, but their output does not match with the available empirical data.

The point is that if a model set includes greenhouse gases as a core causal component, all models in the set produce empirically accurate estimates of the output variables — that is, the set is robust — for a range of parameter values of the other components of the model. Moreover, it is the case that individual parameter values in a given model within the set are themselves supported by empirical evidence. The result is a strong inference to the best explanation supporting particular models within a given causal core set.

While I find Lloyd's analysis convincing, it seems to me that it arrives, in a somewhat roundabout way, at the conclusion that it isn't robustness per se that should generate confidence in a model (or set of models), but rather robustness together with multiple lines of evidence pointing toward the empirical adequacy both of the outcome of the model and of its specific parameter settings.

Wendy Parker gave the second talk, tackling the role of computer simulations as a form of theoretical practice in science. She referred to Fox Keller, according to whom describing simulations as "computer experiments" qualitatively changes the landscape of what counts as theorizing in science. Parker is interested in the sense in which simulation outcomes can be thought of as "observations," and the models themselves as observing instruments.

She began with a description of numerical weather predictions, which started in the 1950s, before modern digital computers. The data anchoring the analysis of weather models today are produced by satellites and local stations. While the models are set up as regular grids on the territory, the data are of course not comparably evenly spread. Forecasters then use various methods of "data assimilation," which take into account not only the available empirical data, but also the previous forecasts for a given area. The goal is to achieve a best estimate of the state of the system, a better fit than the previous forecast alone would provide.
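The logic of such an update can be illustrated with a toy scalar example (my own sketch, not anything presented in the talk): given a prior forecast and a new observation, each with an error variance, the assimilated estimate is a variance-weighted average of the two. This is the simplest relative of the far more elaborate schemes, like 4DVAR, used in practice.

```python
# Toy scalar data assimilation: combine a prior forecast with a new
# observation, weighting each by the inverse of its error variance.
# Real schemes such as 4DVAR operate on huge state vectors and also
# exploit the model dynamics; this is a one-dimensional illustration only.

def assimilate(forecast, var_f, observation, var_o):
    """Return the variance-weighted estimate and its (reduced) variance."""
    gain = var_f / (var_f + var_o)          # how much to trust the observation
    analysis = forecast + gain * (observation - forecast)
    var_a = (1 - gain) * var_f              # more certain than either input alone
    return analysis, var_a

# Forecast says 15.0 C (variance 4.0); a station reports 17.0 C (variance 1.0).
estimate, uncertainty = assimilate(15.0, 4.0, 17.0, 1.0)
print(round(estimate, 2), round(uncertainty, 2))  # → 16.6 0.8
```

Note how the result leans toward the observation (whose variance is smaller) while retaining information from the forecast, and how the resulting uncertainty is lower than that of either source, which is the whole point of assimilation.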

The resulting sequence of constantly updated weather snapshots was soon seized upon by climate scientists to help bridge the gap between weather and climate. This process of re-analysis of weather data, integrated by additional empirical data not originally taken into account by the forecasters, is now a common practice of data assimilation in climate science (the example discussed by Parker in detail is that of a procedure known as 4DVAR).

The point is that re-analysis data sets are simulation outputs, which however are treated as observational data — though some researchers keep them distinct from actual observational data by referring to them in published papers as "reference data" or some such locution. The problem begins when some climate scientists think of assimilation models themselves as "observing instruments" in the absence of actual measurement instruments on the ground. (Interestingly, there are documented cases of assimilation models "seeing" atmospheric phenomena that were not registered by sparsely functioning ground instruments and that were later confirmed by satellite imagery.)

Parker wants to reject the claim that models should be thought of as observing instruments, while she is sympathetic to the conceptualization of simulation outcomes as "observations" of atmospheric processes.

Her objection to thinking of assimilation models as observing instruments is that, although they are informative and, indirectly, empirical (because at some previous iteration empirical data did enter into them), they are not "backward-looking" as true observations are (i.e., you don't "observe" something that hasn't happened yet), and so are best thought of as predictions.

Parker's argument for nonetheless considering simulation outcomes as observations is that they are empirical (indirectly), backward-looking (partially, because model assimilation also uses observations made at times subsequent to the initial model projections), and informative. That is, they fulfill all three criteria that she laid out for something to count as an observing or measuring procedure. Here Parker is building on van Fraassen's view of measuring as "locating in logical space."

While I enjoyed Parker's talk, in the end I was not convinced. To begin with, we are left with the seemingly contradictory conclusion that assimilation models are not observation instruments, and yet produce observations. Second, van Fraassen's idea was meant to apply to measurement, not observation. Parker acknowledged that van Fraassen distinguishes the two, but she treated them as effectively the same. Lastly, it is not clear what hinges on making the distinction that Parker is pushing, and indeed quite a bit of confusion may arise from blurring too much the distinction between actual (that is, empirical) observations and simulation outcomes. Still, the underlying issue of the status of simulations (and their quantitative outputs) as theoretical tools in science remains interesting.

The session was engaging — regardless of one's agreement or not about specific claims made by the participants — because it showcased some of the philosophical dimensions of ongoing and controversial scientific research. It is epistemologically interesting, as Lloyd did, to reflect on the role of different conceptualizations of robustness in modeling; and it is thought provoking, as Parker did, to explore the roles of computer simulations at the interface between theory and observation in science. Who knows, even climate scientists themselves may find something to bring home (both in their practice and in their public advocacy, which was commented upon by Lloyd) from this sort of philosophical analysis!

Thursday, December 27, 2012

A brief detour through The Twilight Zone

by Michael De Dora

The end of another year and the start of a new one is quickly approaching, which means we are getting closer to one of my favorite annual events: The Twilight Zone marathon! As it has done for years now, the channel SyFy will air non-stop episodes of Rod Serling’s classic television show on Monday, Dec. 31 and Tuesday, Jan. 1. You can find more information, including a full schedule, on the SyFy website.

For most people I meet, The Twilight Zone (1959-1964) was an entertaining if not groundbreaking science fiction anthology series that set up extraordinarily hypothetical situations and ended with a twist few viewers saw coming. But for a devoted fan such as myself, the show did much more than just amuse its viewers: it explored the nature of the human condition. Through his storytelling, Serling (creator, executive producer, and often script writer) depicted the many flaws of the human mind and the wretched behaviors these produce – irrationality, authoritarianism, xenophobia, narcissism, and more – and portrayed what could happen if they were not recognized and dealt with head-on. 

As Star Trek creator Gene Roddenberry put it, “No one could know Serling, or view or read his work, without recognizing his deep affection for humanity … and his determination to enlarge our horizons by giving us a better understanding of ourselves.” I would argue Serling wanted to do more than just enlarge our horizons: he wanted to motivate us to improve the world. Indeed, he once quipped that, “If you want to prove that God is not dead, first prove that man is alive.” I would also argue that a person cannot fully understand Serling’s passion without watching several episodes of The Twilight Zone.

As an inducement to readers of Rationally Speaking, what follows are brief summaries of ten episodes that in my mind best represent this passion, all of which will air during the marathon.

Death’s Head Revisited (Dec. 31, 10 a.m.)

Aired during the proceedings of the Adolf Eichmann trial, this episode features former SS Captain Gunter Luntze returning to the remains of the Dachau concentration camp to recall his time as commandant during World War II. While there, Luntze is haunted by the ghosts of the Jewish people whom he tortured and murdered. The conversation between Luntze and the ghosts covers many of the arguments made by Eichmann and other Nazi officers.

The episode concludes with Serling’s response to the question, “Dachau … why does it stand? Why do we keep it standing?” I find it so gripping that I will quote it in full:

There is an answer to the doctor’s question. All the Dachaus must remain standing. The Dachaus, the Belsens, the Buchenwalds, the Auschwitzes – all of them. They must remain standing because they are a monument to a moment in time when some men decided to turn the Earth into a graveyard. Into it they shoveled all of their reason, their logic, their knowledge, but worst of all, their conscience. And the moment we forget this, the moment we cease to be haunted by its remembrance, then we become gravediggers. Something to dwell on and remember, not only in the Twilight Zone but wherever men walk God’s Earth.

Will the Real Martian Please Stand Up? (Dec. 31, 7 p.m.)

On a cold, snowy night, two police officers receive word that a mysterious flying object has crashed in the woods. Upon reaching the site of the crash, the officers discover footprints that lead across the street to a diner. They follow the trail and find a group of bus passengers waiting out a storm. When the officers arrive, they ask the bus driver how many people were on the bus. He responds: six. Yet there are now seven passengers in the diner. What gives?

The officers employ their best detective skills in trying to determine which of the passengers is the alien. The surprising and enjoyable series of twists that end the episode underline two points. First, sometimes the people you think are most likely to be guilty are least likely, and vice versa. And second, sometimes assumptions, when accepted without question, too narrowly limit the focus of our scrutiny. 

I Am the Night, Color Me Black (Jan. 1, 12:30 a.m.)

Serling reportedly wrote this episode in response to the assassination of President John F. Kennedy, which took place roughly four months prior. The plot in no way makes this fact obvious: in a small town, a man is set to be executed for a crime he most likely did not commit, while nonetheless the town’s inhabitants are looking forward to – indeed, celebrating – his public hanging. 

But as the day progresses, the town is gripped by an eerie and seemingly unexplainable phenomenon: despite the fact that the sun should be rising, the sky continues to darken. Is this occurrence related to real-world goings-on? Or is there something else at play? You'll have to watch to find out.

I Shot an Arrow Into the Air (Jan. 1, 6:30 a.m.)

With the space race fully in motion, Serling often addressed the wide-ranging possibilities of what humans might encounter beyond planet Earth. In this episode, he introduces viewers to Arrow One, the first manned flight into space. 

Flash forward and the spacecraft has crash-landed on what appears to be a barren planet. Several astronauts are dead, and soon after the remaining astronauts begin to fight. The twist ending aside, this episode raises an obvious and deep question: how might humans behave toward one another if there were little to no expectation that they would ever be accountable to society again?

Nick of Time (Jan. 1, 4:30 p.m.)

In one of two episodes to star William Shatner, a husband and wife on a road trip suffer a problem with their car and are forced to stop in a small, unknown town and have it repaired. They decide to grab lunch at a local diner, where each table is equipped with a one-cent fortune machine. Shatner’s character proceeds to play. 

He quickly becomes obsessed with the machine’s seemingly accurate answers, and continues to drop in pennies despite his wife’s protests. Yet does the machine really have the ability to understand Shatner’s questions, or is Shatner being duped?

A Stop at Willoughby (Jan. 1, 7 p.m.)

Thus begins a string of four of my favorite episodes – and one that Serling claimed was a favorite of his. Gart Williams is an ad executive in the city who increasingly feels the stresses of a demanding job and a wife bent on riches. On his train commute home, Williams begins to dream of a stop named Willoughby, a utopian town “where a man can live his life full measure.” 

This episode was one of Serling’s favorites because it was personal. It dealt explicitly with the kind of work life becoming more common in that era – long hours in a competitive office environment featuring intense mental pressures – and its effects on life at home (Serling worked in television in Hollywood, and eventually retired to upstate New York). But I think there is another angle to this story: perhaps, for some people, imaginary life truly is better than the real thing. 

The Monsters Are Due on Maple Street (Jan. 1, 7:30 p.m.)

It is a clear and normal day on Maple Street, when suddenly a dark shadow passes over, followed by a loud roar and a bright flash of light. Every house on the block loses power, and every car stops working. As the community speculates on what could have happened, they start to believe it must be aliens attacking Earth. A number of strange things begin to occur, and suddenly every person on the block is accusing the other of being an alien. 

This episode, which has been used in classrooms to explore with students ideas like critical thinking and intolerance, closes with another poignant Serling narration:

The tools of conquest do not necessarily come with bombs, and explosions, and fallout. There are weapons that are simply thoughts, attitudes, prejudices, to be found only in the minds of men. For the record, prejudices can kill and suspicion can destroy. And a thoughtless, frightened search for a scapegoat has a fallout all its own for the children and the children yet unborn. And the pity of it is, that these things cannot be confined to the Twilight Zone.

Howling Man (Jan. 1, 8 p.m.)

David Ellington is an American citizen touring the countryside in Europe who gets separated from his group and caught in a thunderstorm. Desperate for shelter, he stops in what appears to be a church and begs for help. The inhabitants demand he leave at once, but a weary and sick Ellington collapses.

As Ellington recovers, he begins to hear howling. He demands to know the source of this noise, threatening that if the inhabitants do not tell him, he will go to the police. For your sake, I won’t describe any more of the story here, but let’s just say the moral I take from this one is that once again, reason loses and evil escapes humanity’s firm grasp. 

Time Enough at Last (Jan. 1, 8:30 p.m.)

This is one of four episodes to feature Burgess Meredith, who plays Henry Bemis, a bookworm interested only in reading. This focus draws criticism from both his co-workers and his loved ones, who try to convince Bemis that there is more to life than reading. 

While most people focus on this episode’s twist at the end, I think there is a deeper angle at play. Have you ever thought that life would be better if everyone else just disappeared and left you alone to your own devices? That’s precisely what Bemis thought – that is, until he happened to take a brief detour through The Twilight Zone.

The Obsolete Man (Jan. 1, 10 p.m.)

This episode represents perhaps Serling’s most direct attack on authoritarianism and censorship. It features Burgess Meredith again, this time as Romney Wordsworth, a religious librarian set to be euthanized because the state has found his work and views “obsolete.” 

The episode highlights the dangers of declaring any individual member of society unworthy of basic rights, with Serling flipping the focus from people to the state in his closing narration: "Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man, that state is obsolete."


In case you cannot catch this television marathon, The Twilight Zone is streaming on Netflix. And since there are 156 episodes of the original series, those who enjoy the above offerings might also take pleasure in many other episodes. In case you’re looking for a decent starting place, here are ten other episodes I highly recommend: The Eye of the Beholder, Where Is Everybody?, The Shelter, To Serve Man, People Are Alike All Over, The Little People, Nothing in the Dark, A Nice Place to Visit, The Brain Center at Whipple’s, and The Silence.

Enjoy, and have a happy new year!

Monday, December 24, 2012

The (complicated) relationship between math and logic

by Massimo Pigliucci

Ever since reading my first book on the philosophy of mathematics I've gotten more and more interested in the relationship between math and logic (not to mention, of course, in the idea of mathematical Platonism). That relationship is charmingly explored in the highly entertaining Logicomix, featuring Bertrand Russell as the main hero of an unusual comic book adventure.

More recently, I read two interesting short essays on the topic, one by mathematician Peter Cameron, the other by Sharon Berry, a philosophy PhD student at Harvard. Both essays date from 2010, but good writing is always stimulating, so let's take a look (besides, this isn't a field in which progress is made at lightning speed, exactly).

According to Cameron, there are two roles that logic plays in mathematics. The first deals with providing the foundations on which the mathematical enterprise is built. As he puts it: “No mathematician ever writes out a long complicated argument by going back to the notation and formalism of logic; but every mathematician must have the confidence that she could do so if it were demanded.” The second role is played by logic as a branch of mathematics, on the same level as, say, number theory. Here, according to Cameron (and who am I to disagree) logic “develops by using the common culture of mathematics, and makes its own rather important contributions to this culture.”

Of course, you just can't talk about this stuff without — sooner rather than later — stumbling upon Gödel's famous "incompleteness" theorems (there are two of 'em), but before getting there Cameron reminds his readers of the difference (in math) between a sound and a complete system: "A system is sound if all its theorems are logically valid [i.e., are true on every interpretation], and is complete if all its logically valid formulae are theorems." Cameron says that a common misunderstanding of Gödel's results is that he proved that no mathematical system can be sound and complete at the same time. On the contrary, Cameron reminds his readers that two important systems have been proven to be both sound and complete: propositional (Boolean) logic and first-order logic. The latter is particularly important because it is the type of formalism in which most (but not all!) math is actually framed.

So, to recap our progress so far: Cameron tells us that the relationship between logic and math is not along the lines of one being a branch of the other, exactly. Rather, certain logical systems can be deployed inside mathematics, while others are in an interesting sense outside of it, meaning that they provide (logical) justification for math. At least, that’s if I understand Cameron correctly, I’d be happy to be shown the error of my ways if need be.

Berry’s essay approaches the issue of the relationship between logic and math in a more comprehensive and systematic manner. She states that the answer to the question depends (surprise, surprise!) on what one means by “logic.” In particular, she provides a useful classification of five meanings one might have in mind when using the word:

1. First order logic (see above).
2. Fully general principles of good reasoning.
3. A collection of fully general principles which a person could learn and apply.
4. Principles of good reasoning that aren’t ontologically committal.
5. Principles of good reasoning that no sane person could doubt.

[We will not get into a discussion of what constitutes sanity insofar as possibility #5 is concerned. Another time, perhaps.]

Berry’s first point is that we know that it is not possible to program a computer to produce all and only the truths of number theory, but it is possible to program such a computer to produce all the truths of first order logic. Which means that math is not the same as logic if one understands the latter to be the first order stuff (option #1 above). Berry immediately adds that if we make the further assumption that human reasoning can be modeled in a computer program (which, personally, I don’t think has been established), then logic doesn’t capture all of math in cases #3 and #5 above either.

What about #2, then? To quote Berry: “If by ‘logic’ you just mean ... fully general principles of reasoning that would be generally valid (whether or not one could pack all of these principles into some finite human brain) — then we have no reason to think that math isn’t logic.” I am very sympathetic to this broader reading of logic, but we are still left with option #4 above.

Apropos of that Berry reminds her readers that standard math is reducible to set theory, and that the latter in turn has been shown to be reducible to second-order logic, thus implying that mathematics is, after all, a branch of logic. [I will leave it to the reader to dig into Berry’s issue about “ontological commitments,” which hinges on the relationship between the ontology of abstract objects and the inferential rules we deploy while using said objects. It makes for light reading after dinner...]

Berry’s conclusion is that “it is fully possible to say ... that math is the study of ‘logic’ in the sense of generally valid patterns of reasoning. However, if you say this, you must then admit that ‘logic’ is not finitely axiomatizable [because of Gödel’s theorems!], and there are logical truths which are not provable from the obvious via obvious steps (indeed, plausibly ones which we can never know about). ... What Incompleteness shows [then] is that not all logical truths can be gotten from the ones that we know about.” You’ve got to love this stuff! Or am I just weird?