About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Thursday, November 29, 2012

Odds again: Bayes made usable


by Ian Pollock

[Note: this post assumes basic familiarity with probability math, and also presupposes a subjectivist view in philosophy of probability.]

Readers of this blog, and of others a few Erdős numbers (Massimo numbers?) away from it, will by now be used to having Bayes’ theorem hammered into their heads all the time, as the Great Equation of Power and the Timeless Secret of the Universe.

I suspect that I am not the only one who has occasionally felt somewhat disingenuous when harping on Bayes. Even though I do actually think it’s the secret of the universe, memorizing the formula is liable to become little more than a signal of in-group identity (along the lines of being able to recite the Nicene Creed or the roster of Local Sports Team), unless people know what it means, and how to sometimes actually maybe possibly use it.

When I talk about “using” Bayes’ theorem, I have a different picture in mind than you may think. I do not necessarily mean a textbook problem with all the needed information clearly specified and the relevant numbers handed to you. What I tend to think of instead are problems like:
“The car in front of me just swerved halfway into my lane. How likely is the driver to be drunk?”
These underspecified problems are the meat of day-to-day probability judgments.

But let’s look at Bayes’ theorem as traditionally presented:

P(H|E) = P(H)•P(E|H) / ( P(E|H)•P(H) + P(E|¬H)•P(¬H) )

[Terminology: P(_) stands for “probability of _,” H stands for “hypothesis,” E stands for “evidence,” the vertical bar stands for “given,” e.g., P(E|H) is the “probability of E given that H is true”, and finally ¬ means “not.”]

This formula is hideous on at least two levels:

First, it has too many terms (some repeating) and too many operations. You end up performing 2 or 3 multiplications, 1 addition, 1 subtraction ( P(¬H) = 1 - P(H) ) and 1 division in order to get the answer. This is not conducive to doing the arithmetic in your head in real time, unless you are unusually good at arithmetic and have good fluid memory (neither of which applies to me).

Second, and perhaps most importantly, it is conceptually opaque. You do not see the structure of reasoning when you look at Bayes’ theorem in that form; all you see is a porridge of symbols. The “prior” that Bayesians are always harping on about, P(H), appears three separate times, once in the numerator and twice in the denominator, all tangled up with P(E|H) and P(E|¬H) — the “evidence terms.” Granted, the denominator is really just an expansion of P(E), which makes it a bit less opaque. But you can rarely calculate P(E) without doing the expansion.

Notice that when we speak of using Bayes’ theorem we are speaking of modifying (1) prior judgment in the light of (2) evidence to arrive at (3) a new judgment. Ideally, we would like a formula that looks more like:

posterior = prior [operation] evidence

Well, here is Bayes’ theorem in odds form:

O(H|E) = O(H) • P(E|H) / P(E|¬H)

As you can see, it consists of only one division and one multiplication. And lo, O(H) is just the prior odds, and the ratio P(E|H)/P(E|¬H) corresponds to “evidential strength,” although the literature usually calls it a likelihood ratio or a Bayes factor.
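
To make the contrast concrete, here is a minimal Python sketch of both forms (the function and variable names are mine, purely for illustration):

def posterior_prob(prior, p_e_given_h, p_e_given_not_h):
    # Traditional form: returns P(H|E) as a probability.
    numerator = prior * p_e_given_h
    denominator = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return numerator / denominator

def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    # Odds form: prior odds times the likelihood ratio (Bayes factor).
    return prior_odds * (p_e_given_h / p_e_given_not_h)

The two agree, but the odds form is one multiplication and one division, as promised; the traditional form is where all the arithmetic hides.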

If you’re not used to how odds work, now would be a good time to check out my old article on them, in which for some inscrutable reason I didn’t get round to talking about their advantage in re: Bayes’ theorem. The rest of this article assumes you are moderately comfortable with odds talk.

Let’s see how Bayes works with an example.

In the classic 1957 film “12 Angry Men” (one of my favorites), a young man is accused of killing his father. One of the pieces of evidence brought against him is the fact that he was identified by a store clerk as having recently purchased a switchblade knife with an unusual handle, and the same kind of knife had been found on the body (wiped of fingerprints). See a nice clip here of the jurors debating the relevance of this piece of evidence.

At first, the unusual character of the knife led the jurors to believe that it was, if not one of a kind, at least very rare. But they are led by the touch of Henry Fonda’s cool hand to modify that assessment and consider the knife a much more commonplace one than they had thought. One of the hawkish jurors then asks petulantly: “Maybe there are ten knives like that, so what?” So what indeed.

We are interested in estimating the odds that the boy is guilty, given that he had purchased a knife the same as the one found at the murder scene — O(guilty|knife). Let us assume that it is certain that the boy did indeed purchase the knife as the store clerk said (actually a very charitable interpretation in the prosecution’s favor).

The first thing we need to think about is our prior. This represents what we think the chance is that the boy committed the murder, before the knife evidence is considered at all. Different people will have different priors, but let us suppose that enough evidence had been presented at trial already to make you consider him 20% likely to be guilty, or odds of 1:4 in favor: O(guilty) = 1:4.

We still need to know two more things.

First, P(knife|guilty) — assuming the boy is guilty, how likely is the knife evidence?
Well, it is not beyond the realm of possibility that the boy could have stabbed his father and disposed of the knife altogether, so even if he is guilty, there is no guarantee of seeing the knife. However, since we know he did buy an identical knife, it is not very surprising to see it at the crime scene if he is guilty. Let us estimate this probability as P(knife|guilty) = 0.6.

We also need to know P(knife|¬guilty) — assuming the boy is innocent, how likely is the knife evidence?

If (as the jurors at first seem to assume) there is only one knife in the whole world that looks like the murder weapon, and we know that the boy bought it, then the only plausible way it could have been the murder weapon and yet the boy be innocent, is if somebody else acquired it from the boy, and then used it to kill the boy’s father. One can understand the hawkish jurors’ impatience with this “possibility.” It requires not only that the boy somehow lost possession of the knife, but that somebody else (coincidentally?) wanted to use it to kill his father in particular. This rates a very low probability, let us say 1000:1 against or P(knife|¬guilty) = 0.001.

Now we have everything we need to figure out the odds of the boy being guilty, given this evidence. We already have the prior — 4:1 against or 1:4 in favor. The “evidential strength” is just the ratio of P(knife|guilty)/P(knife|¬guilty) = 0.6/0.001 = 600. We just multiply the prior by the evidence:

O(guilty|knife) = (1:4)*600 = 150:1 in favor of guilt.

So far so good, although the three numbers involved can all be quibbled with. But here is where Henry Fonda’s duplicate knife becomes important. It does not really change the top part of the evidence ratio: P(knife|guilty) is about the same. But suddenly that factor of 1000 that was making the boy look so guilty is going to drop, because now we know that the killer had access to lots of identical knives, not just the defendant’s. Now it looks like P(knife|¬guilty) is just the fraction of all knives available in the victim’s neighborhood that look like the murder weapon. We can guess that this is something like 1 in 10. So the evidence ratio becomes 0.6/0.1 = 6, and we multiply by the prior to get

O(guilty|knife) = (1:4)*6 = 3:2 in favor of guilt.

Thus, what Fonda showed is that although the knife is evidence of the boy’s guilt, it is much weaker evidence than the jurors had been led to believe. We do not convict criminals at odds of 3:2, or at least, we ought not to.
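
For those who want to check the arithmetic, here is the same update run through the posterior_odds sketch from earlier (all numbers are the rough estimates from the text):

prior_odds = 1 / 4   # 1:4 in favor of guilt

rare_knife = posterior_odds(prior_odds, 0.6, 0.001)   # 150.0, i.e., 150:1 in favor
common_knife = posterior_odds(prior_odds, 0.6, 0.1)   # 1.5, i.e., 3:2 in favor

Henry Fonda’s demonstration amounts to moving P(knife|¬guilty) from 0.001 up to 0.1, which knocks two orders of magnitude off the evidence ratio while leaving the prior untouched.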

To address one objection I anticipate: yes, many of the numbers above are very rough guesses. Wherever possible, they should be improved upon by more objective data. But in my defense, notice how mapping out the underlying structure of the reasoning directs inquiry to where it needs to go, rather than to irrelevancies. You can challenge the prior I chose of 4:1 against guilt, by saying that the other evidence presented at trial makes him look a lot more guilty than that. You can challenge the drop in the evidence ratio by checking exactly how many of these knives are sold in nearby shops. These are exactly the questions juries should be thinking about.

Meanwhile, other questions, when seen in a Bayesian light, are obviously non-starters. A bigoted juror in the movie makes much of the boy’s poor background, as if that ought to weigh heavily in favor of his guilt. Unfortunately, while his fellow jurors express their disgust at this man’s prejudice, they fail to notice the obvious silliness of the underlying logic in this case. For if the boy is more likely to commit a crime by virtue of living in a bad neighborhood, so too are all the other people in the neighborhood, leaving the boy’s relative chances of having committed this particular crime approximately the same as they would have been if he had lived in a good neighborhood. Likewise, it is not much good emphasizing the victim’s bad relationship with his son, when he had bad relations with innumerable others.

To recap what we did in our example: we had a prior judgment about how likely the boy was to be guilty, not considering the knife evidence. Then, we considered the evidential strength of the knife evidence, which can be summarized with the phrase: “how much more likely was the evidence if he was guilty, than if he was innocent?”

This way of thinking about uncertainty, while normatively correct, departs from how humans automatically reason about these things in two important ways.

First, it gives equal weight to evidence and to prior. This is important because people constantly forget all about their priors as soon as they see evidence confirming a hypothesis. “I just met Sally. She is very adventurous, a real adrenaline junkie. Is Sally more likely to be a skydiving instructor, or an accountant?” Most people will answer that Sally is probably a skydiving instructor, forgetting that although all skydiving instructors are surely adventurous, there are way more accountants than skydiving instructors (and some accountants are adventurous too). The skeptical community usually sums up the insight that priors matter as much as evidence with Carl Sagan’s excellent slogan “extraordinary claims require extraordinary evidence,” although they sometimes display a woeful lack of inclination to generalize this principle beyond Bigfoot.

Second, it emphasizes that what matters is not that evidence be consistent with some hypothesis, but that it be more likely if the hypothesis is true than if it is false. This has the side effect of emphasizing the non-binary nature of evidence. Amanda Knox acted oddly (for example, doing a handstand) after the murder of her roommate Meredith Kercher, of which the prosecution made much hay. The question we now know to ask is, “How much more likely is a person to act oddly after the murder of their friend if they are guilty, as opposed to if they are innocent?”

Um... a little more likely? Maybe twice as likely, at most? Possibly even less likely, as a guilty person might be more careful not to stand out... If this is evidence of guilt at all, it is extremely weak and ambiguous evidence, an evidence ratio of close to 1.

Most of us will not serve on many juries, but the same logic applies, rather famously, to medical tests of various kinds. If I go in for random screening against bowel cancer, and test positive, I am liable to assume that I almost certainly have the disease. However, the questions that really need to be asked at this point are: (a) what’s the base rate in the population (aka, prior) and (b) how much more likely is a positive test if I have the disease than if I don’t?

Wikipedia tells us that Fecal Occult Blood screening for bowel cancer has 67% sensitivity (67% of people with the disease test positive) and 91% specificity (9% of people without the disease test positive anyway). This means the evidential strength of a positive test is P(pos_test|cancer)/P(pos_test|¬cancer) = 67/9 = 7. So whatever the prior odds were, multiply them by ~10. [1]
The base rate for bowel cancer looks to be about 54 per 100,000 or around 2000:1 against, so O(cancer|pos_test) = (1:2000)*10 = 1:200 in favor = 200:1 against. As you can see, a positive test is cause for concern, but not panic. You probably don’t have the disease. In fact, you didn’t even need to look up the incidence in this case - all you needed to do was realize that unless 1 in 10 people in your reference class have bowel cancer (surely not!), your odds of having it are less than 50:50.
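
In terms of the earlier sketch, and using the text’s numbers (recall that footnote [1] cheerfully rounds the Bayes factor of roughly 7 up to 10):

sensitivity = 0.67       # P(pos_test|cancer)
false_pos_rate = 0.09    # P(pos_test|¬cancer), i.e., 1 - specificity
prior_odds = 1 / 2000    # base rate of about 54 per 100,000

post = posterior_odds(prior_odds, sensitivity, false_pos_rate)
# post is about 0.0037, i.e., roughly 1:270 in favor: slightly longer odds
# against than the rounded 1:200 above, but the moral is identical.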

I hope that this reformulation of Bayes, mathematically trivial as it is, serves you as well as it now serves me. Even if you don’t actually calculate (hard to do in the messiness of the real world), knowing how it works is, I think, very epistemically salutary.
_______

[1] 7=10 in guerrilla arithmetic. We spit on your bourgeois Peano axioms.

Monday, November 26, 2012

Politics and science literacy


[Image: Marco Rubio]
by Massimo Pigliucci

It’s time to pick on my friend Phil Plait, the Bad Astronomer, a little bit. Knowing him, I am sure he will take my remarks as a friendly challenge for all of us to improve the way we think about certain issues, and I am of course extending him an open invitation to respond or comment on this blog post whenever and in whatever fashion he feels appropriate.

So, what’s my beef with Phil? It’s about a post he published recently on the infamous interview with Florida Senator Marco Rubio (apparently, and regrettably, a rising star in the Republican firmament) conducted by GQ magazine. In the interview, Rubio was asked his opinion about the age of the earth — something that has unfortunately become a standard litmus test for Republican Presidential hopefuls since the party’s well documented turn away from reality. Predictably, Rubio’s response was that he wasn’t a scientist, but “whether the earth was created in 7 days, or 7 actual eras, I’m not sure we’ll ever be able to answer that. It’s one of the great mysteries.”

Phil, like any reasonable person, was outraged that a prominent public figure, a potential Presidential candidate no less, could be so obtusely equivocal about a basic scientific fact. (Of course, whether Rubio really is that much of a simpleton or whether he was simply pandering to the half of the US population that denies basic scientific facts is another matter altogether.)

Phil’s (and my own, for that matter) reaction, however, was partly — and I think correctly — chastised by Daniel Engber in a follow-up article published in Slate (the same magazine that hosted Phil’s initial essay). Engber cited another well known American politician as saying this:

“My belief is that the story that the Bible tells about God creating this magnificent earth on which we live — that is essentially true, that is fundamentally true. Now, whether it happened exactly as we might understand it reading the text of the Bible, that I don’t presume to know.”

Care to guess who this second politician is? None other than our esteemed current President, Barack Obama, speaking in 2008 as a Presidential candidate. The point that Engber was trying to make, I think, is neither that Phil Plait should have been aware of the Obama quote when skewering Rubio, nor that Rubio and Obama are therefore to be considered on the same intellectual level. After all, Phil is not a political commentator, and Rubio has been a constant supporter of the teaching of creationism while Obama has expressly said that he believes in evolution. The point, rather, is that we all have ideological blinders, and that as a consequence we are sometimes a bit too quick in using strong language to condemn our opponents while turning a blind eye when our allies say something remarkably similar.

But what really caught Engber’s attention was the broader picture Phil painted from the Rubio quote. Here is Phil, commenting on Rubio’s position that esoteric issues like the age of the earth have no bearing on the status of the American economy and how to improve it: “Perhaps Senator Rubio is unaware that science — and its sisters engineering and technology — are actually the very foundation of our country’s economy? All of our industry, all of our technology, everything that keeps our country functioning at all can be traced back to scientific research and a scientific understanding of the universe.” [Italics in the original.]

But that is simply not the case, as Engber points out in his commentary: “Lots of basic scientific questions have no bearing whatsoever on the nation’s short-term economic growth. ... Lots of scientific questions don’t matter all that much when it comes [even] to other scientific questions. It’s possible — and quite common — for scientists to plug away at research projects without explicit knowledge of what’s happening in other fields. And when a bedrock principle does need to be adjusted — a not-so-unusual occurrence, as it turns out — the edifice of scholarship doesn’t crumble into dust. DVD players still operate. Nuclear plants don’t shut down.”

My experience both as a scientist and as a philosopher of science tells me that Engber is right on the mark. When I was a practicing evolutionary biologist, I constantly had to write grant proposals for the National Science Foundation to keep my lab going and my graduate students and postdocs reasonably fed. At one point NSF started asking for a layperson’s statement of the proposed research, with the admirable goal of making the basic ideas available to the general public, who after all was footing the bill. NSF now also asks for a statement of broader impact, where the Principal Investigator has to explain why taxpayers should be paying for the usually highly esoteric research being proposed — often to the tune of hundreds of thousands, or even millions, of dollars per year. Here is where things get funny: I noticed that my colleagues and I were all stumped, and resorted to vague statements about the “long term implications” of basic research for scientific applications, eventually (way, way down the line) leading to potential applications concerning human health, the quality of the environment, and so on. But if pressed, we would have been hard put to elaborate on exactly how, say, studying the mating patterns of tropical butterflies, or the genetic structure of a species of small flowering plants, could plausibly be related to cures for cancer or any other kind of improvement to human life.

Indeed, on the rare occasions in which scientists are pressed on these matters they resort to the worst kind of evidence: anecdotes instead of rigorously quantified surveys of the connections between basic and applied research. Moreover, these anecdotes are often somewhat historically incorrect, since most scientists don’t actually have either the time or the inclination to read serious scholarly research in the history of their own field. So the last resort becomes something like, “well, this [i.e., my] topic of research is intrinsically interesting,” which means little more than that the person in question finds it fascinating and wants funding for it.

To leave no room for misunderstanding: I do think that a healthy society ought to fund basic scientific research, just as it ought to fund the arts and the humanities. And I do think that there are (often vague, serendipitous) connections between basic and applied research (I am also perfectly aware of the porous boundary between the two categories). But I think that a lot of scientists are far too casual in their justification of why the public should pay for their specific, often very expensive, and almost always not particularly useful (to the public) research. We keep forgetting that publicly financed science is a rather novel (mostly post-WWII) luxury that has come to sustain a great part of the academy — just ask Galileo how he had to earn his living (by pandering to fickle princes all over Italy, as well as by selling his perfected version of the telescope to the Venetians, for war-related uses). It is dangerous to take this situation for granted, and it is dishonest to pretend that it all directly benefits the millions of people who foot the bill while having no clue as to what we do in our laboratories.

There is a deeper philosophical reason why Engber is right and why people like Phil and myself ought to be more cautious with our outrage at the cutting of scientific budgets or at politicians’ opportunistic uttering of scientific nonsense to gather supporters and votes. Knowledge in general, and scientific knowledge in particular, is not like an edifice with foundations — a common but misleading metaphor. If it were, then it would indeed be the case that, as Phil so strongly stated, everything is connected to everything else, so that ignoring, denying, or replacing one piece of the building would likely create fractures all over the place.

But that’s not how it works. Rather, to use philosopher W.V.O. Quine’s apt metaphor, knowledge is more like a web, with some threads being stronger or more interconnected than others. (Interestingly, the largest database of scholarly papers available to date is called the Web of Knowledge, though I doubt the name is a knowing wink to Quine.) If you see science as a web of statements, observations, experiments, and theories, then it becomes perfectly clear why Engber is right in pointing out that quite a bit of independence exists between different parts of the web, and how even relatively major chunks of said web can be demolished and replaced without the whole thing crumbling. There really is next to no connection between someone’s opinions about the age of the earth and that person’s grasp of the state and causes of a country’s economy. (Just as, to use another example from Engber’s article, there is little relationship between Francis Collins’ philosophically naive beliefs about Christianity and his undoubted abilities as a scientist and current head of the NIH. If one bought into the “everything is tightly connected to everything else” view of science, the effectiveness of figures like Collins would amount to an unexplained miracle, so to speak.)

Still, there is an important point on which Phil is absolutely correct and which I think Engber underestimates. What is “chilling” and disturbing about people like Rubio (but not people like Obama) is that they have embraced a general philosophy of rejecting evidence and reason whenever it is ideologically or politically convenient. That is what is highly dangerous. Quite frankly, I’m comfortable having a born again Christian leading the NIH, as long as he doesn’t start funding prayer-based medicine. I’m even ok — in a regrettable, chagrined way — with politicians being preposterously ambiguous about the age of the earth, as long as they then turn around (as Obama, but not Rubio, did) and recognize the real and present danger posed by climate change. Indeed, the real problem isn’t Rubio, or even the evidence-avoiding Republican party. The problem is that half of the American population keeps voting for these clowns, in the process jeopardizing the entire world’s future. But that is a different topic, rooted in broader failures, failures of which the scientific and science education communities are not entirely innocent either.

Saturday, November 24, 2012

Rationally Speaking podcast: Live! John Shook on Philosophy of Religion

Massimo and Julia visit Indianapolis for a heated debate, in this live episode of Rationally Speaking. At a symposium organized by the Center for Inquiry (CFI), they join up with John Shook, Director of Education and Senior Research Fellow at the CFI, and the author of more than a dozen books on philosophy and religion.

Sparks fly as the three debate questions like: Should science-promoting organizations, like the National Center for Science Education, claim publicly that science is compatible with religion? And is philosophy incapable of telling us anything about the world?

John's pick: "Meaning and Value in a Secular Age: Why Eupraxsophy Matters—The Writings of Paul Kurtz."

Friday, November 23, 2012

On the “problem” of altruism

by Massimo Pigliucci

[It's that time again, Massimo goes on vacation! As a result, we are running "encore" presentations of some of the best essays posted at Rationally Speaking. Enjoy, we'll be back with new material soon!]

[Originally posted on August 8, 2006]

Some people who read this blog regularly seem to think that I use it primarily as a soap box to declare my ideas to the world, feedback be damned. Well, I'm sure there is some of that in every blog, or for that matter in any editorial-style writing. However, for me writing is actually a major way of thinking. I literally think while I write, in the sense that writing helps me clarify (to myself first) what I think about a certain topic and why. This isn't surprising, given good evidence from the literature on pedagogy that the best way to learn something is to either do it or to explain it to someone else.

That said, let me get to the topic of this entry, altruism. Altruism has bugged a lot of people, from theologians to philosophers, to scientists. And it has bugged me for a long time. Although I am an atheist, I grew up with a Catholic education, and I certainly consider myself a moral person who tries to do the right thing within the limits of human nature. The problem, of course, is that to figure out what “the right thing” is in many circumstances isn't so easy. (Readers are also referred to a previous entry on the multiple philosophical threads that make up my view of ethics.)

Altruism is a “problem” because one needs to explain where it comes from, if in fact it exists at all (depending on how one defines the term), and how far it should go in regulating our moral behavior. Biologists have pretty much concluded that there are two types of “altruism” in the natural world: kin and reciprocal. Kin altruism is the helping behavior we display toward our close relatives, especially but not exclusively our offspring. It is explained in terms of actually increasing our genetic fitness, because it helps pass (some) copies of our genes to the next generation. As the famous British geneticist J.B.S. Haldane once quipped: “I will die for two brothers or eight cousins.” Reciprocal altruism occurs in social groups, where most animals seem to adopt a “tit-for-tat” strategy: I’ll be nice to you as long as you are nice to me; if you start being nasty, I’ll retaliate. Reciprocal altruism can be “diffuse,” meaning not based simply on one-to-one direct reciprocation, because most complex social animals have a social memory (humans call it “reputation”) that encourages members of the group to be nice or cooperative in general, lest they be shunned by the rest of the group.
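
(For readers who like to see mechanisms spelled out, here is a minimal Python sketch of tit-for-tat in an iterated prisoner’s dilemma; the payoff values are the standard textbook ones, and the function names are my own illustrative choices, not anything from the altruism literature itself.)

# Payoffs for (my_move, their_move); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate on the first move; afterwards, copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

Two tit-for-tat players rack up the mutual-cooperation payoff every round; against a defector, tit-for-tat loses only the first round and retaliates thereafter.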

Biologists have shown the existence and workings of kin and reciprocal altruism both theoretically (with elegant mathematical models based on game or optimization theory) and empirically (e.g., with research documenting the highly unusual social behavior of colonial insects through kin selection, or of vampire bats through reciprocal altruism). But what about humans?

As regular readers of this blog know, I'm perfectly aware of the naturalistic fallacy, the idea that one cannot automatically derive an “ought” from an “is,” as David Hume put it. This would seem to preclude adopting the idea that the basis of human altruism is a combination of kin and reciprocal altruism (which, incidentally, do not really qualify as “altruistic” in the strict sense of the word, because the agent derives a benefit, either immediately or in the long run). Nonetheless, it seems to me that if we are claiming that there are additional forms of altruism that are typically human, then the burden of proof is on those making the claim (divine revelation, as usual, is barred from the arena, since it isn't an argument at all).

The best attempt I've seen to reconcile what is known as biological altruism (i.e., kin and reciprocal) and psychological altruism (what we all feel or claim to feel at the intra-personal level) is the book by Sober and Wilson (a philosopher and a biologist respectively), Unto Others. In it they make the argument that there is no contradiction in having genuinely altruistic feelings (at the psychological level) that result in behaviors that are compatible with biological “altruism.” Take the case of our behavior toward our children: (within limits) parents sacrifice resources for them to ensure their survival, and parents “feel good” and selfless while doing so, even though clearly they derive a (subconscious) biological advantage from such behavior. Of course, there are exceptions of people who engage in apparently truly selfless behavior, just as there are cases (not just among humans) of naturally homosexual individuals (obviously a biological disadvantage). But remember that biological, and a fortiori social scientific, theories never aspire to explain more than the general trend, certainly not the behavior of every single individual.

Of course, philosophers from Aristotle to Kant (and beyond) have given all of this quite a bit of thought, and it would require a book to get into the details. The bottom line for me, however, is that the more I think about it, the more it seems to me that kin and reciprocal altruism are not just the only two types that apply to the biological world, they are also the only two flavors that are rationally defensible. I'm perfectly aware that this begins to sound like Ayn Rand, and if that's the case, so be it. (However, I still have little respect for Rand as a “philosopher” -- on account of her amateurish approach -- and even less for her as a human being, at least based on reports of her nasty personal behavior, not necessarily congruent with her own teachings. Then again, to dismiss someone's ideas solely on the ground of her character is to commit the classic genetic fallacy, so we ought to distinguish “the sin from the sinner,” as they say in some religious circles).

Anyway, to avoid the naturalistic fallacy one has to come up with a rational defense of the position that kin and reciprocal altruism are all that one needs to live a moral life. And, frankly, it seems to me that this isn't difficult. Few people would argue against taking care of one's offspring or close relatives, but not many of us are prepared to sacrifice everything for them either – a balance typical of kin altruism. Yes, there are rare cases of mothers sacrificing their lives for their children (and if the children are sufficiently numerous, this makes straight biological sense, see the quote above from Haldane), but the much more common dynamic is one of parent-offspring conflict, in which the older the children become the less the parents are willing to invest resources in them. And why should they? Nobody has ever come up with a good argument for why a complete negation of one's own interests is, in fact, somehow the moral thing to do.

Analogously, (diffuse) reciprocal altruism is what makes the world go round. I am nice to my friends because they are nice to me; should they turn nasty, after a while I would let them go. I contribute to National Public Radio because I get both a direct benefit (I listen to it) and an indirect one (I think intelligent public information makes for a better world). While at the moment I don't need financial or medical assistance, I contribute to charities because of the indirect benefits they bring (a better and more just world means more stability and prosperity for everybody). And so on. It would be hard to make a case that I should give up all my resources in order to help, say, the resolution of the Arab-Israeli conflicts (assuming that such an outcome is even theoretically possible). Again, why would such an extreme degree of altruism be moral, since it would deny my own ability to function in the world? I am certainly not more intrinsically valuable than any other human being, but I ain't less valuable either (from a moral perspective, not measured as practical contributions to society). Incidentally, this is part of what makes suicide bombings immoral.

So, it seems to me that the burden of proof is on those who claim that true altruism is morally superior to the kin and reciprocal varieties. These people seem to think in terms of group advantage (it's for the good of society), but they fail to recognize that they need to make a case for why the group is more important than the individual – from the individual's perspective. This represents a fairly big change of my positions from what they were years ago, and it has taken time to get here, but I don't see a way out of this conclusion (again, outside of unsubstantiated divine commandments). Indeed, I don't think it is a bad conclusion at all, because if taken seriously it would bring people to strive for a reasonable balance among one's own needs, those of one's offspring, and those of the rest of society. Hard to think of a better world, really...

Wednesday, November 21, 2012

Happiness, the data

by Massimo Pigliucci

[It's that time again, Massimo goes on vacation! As a result, we are running "encore" presentations of some of the best essays posted at Rationally Speaking. Enjoy, we'll be back with new material soon!]

[Originally posted on July 19, 2006]

Interesting article by Jennifer Senior last week in New York magazine, reporting on recent research on happiness. Seems like psychology is at least in part turning from the study of how to be less miserable (a la Freud) to a more positive approach aimed at helping people improve their lives (it’s called “positive psychology,” and it’s the latest rage in academic departments and on bookshelves at Barnes & Noble).

One of the concepts discussed in the article is the difference between people who try to optimize their choices and those who go for what economists and biologists (and now psychologists) call “satisficing.” If you are an optimizer, you are after the best possible solution to a problem, be that an engineering puzzle, choosing a car, or finding a mate. If you are a satisficer, however, you’ll establish certain criteria that have to be met, and then stop your search at the first acceptable solution (or car, or mate) that fulfills those requirements.

The trade-off between the two strategies is well known: an optimizing search can in theory find the best solution allowed by the laws of the universe, but it could also take a time equivalent to the age of the universe itself to find it! And satisficing doesn’t mean settling for the lowest common denominator: one’s bar can be set pretty high, but the point is that you stop the search (and save energy and time) as soon as that bar has been reached by an available solution.
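
(In code, the difference between the two strategies is just a stopping rule; here is a minimal sketch, with the scoring function and threshold left as made-up placeholders, purely to illustrate:)

def optimize(candidates, score):
    # Examine every option and return the best one: exhaustive search.
    return max(candidates, key=score)

def satisfice(candidates, score, threshold):
    # Return the first option that clears the bar, and stop searching there.
    for candidate in candidates:
        if score(candidate) >= threshold:
            return candidate
    return None  # nothing met the bar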

What does this have to do with happiness? Turns out that optimizers are more unhappy than satisficers, because the latter can stop worrying and enjoy what they’ve got, while the former will keep searching forever, or will settle for something (or someone) out of necessity and yet feel like they could have gotten a better outcome had they continued the search (as in “the neighbor’s grass is always greener,” or “look for the one person who is your soul mate,” and similar nonsense). Moreover, the difference between the two groups is most striking when there are many choices: contrary to what most people seem to think (witness the American obsession with health plans that allow unlimited choice of doctors), too many choices have a paralyzing effect, and start a perennial chain of counterfactual thinking (“had I gone with the other brand of cereal I would have been happier”) that increases frustration and diminishes happiness.

One more note from the article in question: apparently, there is something in common between the experiences of having children and living in New York City. In both cases, people are less happy than people who, respectively, don't have children or don't live in the Big Apple (the study didn't address the particularly unfortunate lot that has both conditions -- in the interest of full disclosure, I have a child, and I am about to move to NYC). The researchers readily found out why this is: despite loud protestations to the contrary, having children or living in New York City is a pain in the neck, because it results in countless daily irritations. Why, then, do people who have children or live in “the” city (as it is known here on Long Island) insist that they wouldn't have it any other way? (Of course, in the case of having offspring there is that little matter of the biological imperative, but we'll leave that aside.) Psychologists have found that in both cases people experience occasional “transcendental” moments, the “whoa” effect, if you will. For example, your child calling on your cell phone to ask you to explain to her the meaning of the word “existentialism” (it has happened to me), or witnessing a sunset over the Brooklyn Bridge (seen that one too). Those rare moments, for most people, are worth the daily crap they have to endure as a result of their choices. Just like drug addicts, we live for the occasional high; it doesn't make us very happy overall, but the rest of the world be damned if we'll let it go!

Monday, November 19, 2012

Tricks of the brain

by Massimo Pigliucci

[It's that time again, Massimo goes on vacation! As a result, we are running "encore" presentations of some of the best essays posted at Rationally Speaking. Enjoy, we'll be back with new material soon!]

[Originally posted on May 18, 2006]

If you think your brain is an objective processor of data about the world, capable of reaching objective, unbiased conclusions, think again. And if you want to really worry about it, read a nicely written little booklet by Cordelia Fine, A Mind of Its Own: How Your Brain Distorts and Deceives. Our brain can be vain, emotional, deluded, pigheaded, secretive, and bigoted, all of which are words appearing in the chapter titles of Fine's book.

For example, consider vanity. In an experiment with male college students (psychologists' favored animal subjects), one group was told they had performed exceedingly well on a test of manual dexterity, while another was told they did pretty badly – except that the evaluations were assigned randomly to the two groups. When prompted for explanations, students who had to provide them immediately were at a bit of a loss, but those who had a few days to think about the experience had managed to concoct all sorts of seemingly logical (but in fact bogus) reasons for their performance. It seems that our brains are great storytellers indeed, especially about themselves.

Being emotional has a bad reputation, unless you like English movies set in the Victorian age, but it turns out that emotions often come to our rescue. Another experiment reported by Fine concerns subjects who were asked to bet on different decks of cards, some of which were biased to occasionally yield high losses while others were more benign. The statistical structure was too complex to be worked out without actually calculating the odds, and yet subjects developed an intuitive feeling for which decks to avoid. Interestingly, the experimenters were able to show that the subjects responded emotionally (heightened skin conductance) to the bad decks even before they began to act on their intuitions about the game. It seems that an unconscious “fear of the bad deck” was the brain's first response. Perhaps we should seriously entertain what our emotional intuitions tell us before dismissing them as “irrational.”

A deluded brain, you say? Indeed, just consider another experiment in which people were asked a rather simple question: are you happy with your social life? Generally, subjects answer in the positive, and can provide “evidence” that this is in fact the case. But now ask the same question slightly differently: are you un-happy with your social life? Turns out that most respondents admit to unhappiness, and can as easily provide supporting evidence from their recent experiences. The possibilities for manipulating the public through polls and advertisements are endless. And, of course, have been exploited for a long time.

Wanna know how pigheaded your brain can be? Easily done, again through one of those cunning psychological experiments perpetrated by scientists who seem to derive an unholy degree of pleasure from showing the rest of us how embarrassing it can be to be human (as Kurt Vonnegut wrote in Hocus Pocus). For example, it isn't particularly surprising that explicitly negative headlines in a newspaper will cast a shadow on someone's reputation. What is a bit more surprising is that an innuendo, say a headline ending with a question mark, has a similar effect. And even more disturbingly, someone's reputation (and likelihood of, say, winning an election) can be affected even by a positive headline that actually denies the reality of the charges. Apparently, our pigheaded brains remember the part of the headline mentioning the charge, but not the little yet crucial negation that accompanies it!

In what sense are human brains “secretive”? Fine briefly reviews evidence that poses the disquieting question of who or what really is in charge “up there.” We are all familiar with the phenomenon by which repeated tasks that initially require our conscious attention (like driving) become more and more automated while control is delegated to unconscious processing. But the famous “tap your finger” experiment by Benjamin Libet is a window into the possibility that we might routinely be much less in control than we think. Libet asked volunteers to spontaneously decide when to tap a finger, then measured what was going on in terms of electrical potentials inside their bodies and brains. Not only did he detect a “readiness potential,” i.e., increased activity in the brain before the muscles were actually activated, but he found that this potential occurred about one third of a second before the volunteers were aware of their decision to move the finger! Apparently, the decision to engage in the action came from somewhere in the unconscious part of the brain, and was made apparent to the conscious mind only after the causal chain leading to the action itself had already started. Again, who's in charge here?

If all of this hasn't convinced you to question your brain's motives and reliability, the final chapter of Fine's book deals with bigotry, and how difficult it is to get rid of. Studies show that if one “primes” the brain (i.e., uses words or symbols connected to a particular concept, like mother, or race) with neutral words, the effect is different depending on whether one is prejudiced on that particular issue or not. So, for example, a racist primed with neutral words about black people will react negatively, while a non-racist will not. However, if the priming is done with negative words, or if the subject is tired, then even non-racists are subject to accept racial prejudices. This goes a long way toward explaining how difficult it is to maintain non-biased opinions when under a barrage of emotionally-charged messages in the media, and presumably also while we are stressed, or simply tired, by our own daily affairs. Moreover, psychologists have discovered that will power is in very limited supply, so that if you spend a lot of mental energy, say, avoiding to overeat and trying to follow a healthy life style, your guard may be too low to protect yourself against ideological assaults that would require a fresh and vigilant mind to be detected. Not a pretty trade-off, if you ask me.

Fine's message isn't that we shouldn't trust our brains – after all, we have no choice! Rather, the idea is that by knowing about our natural tendencies toward biased thinking we will be better able to maintain a healthy dose of skepticism about ourselves and others. The brain is the most crucial of our organs; it is a pity that most of us don't bother to read even a short and sensible manual for its proper care and usage.

Friday, November 16, 2012

The problem with baptisms


by Michael De Dora

Last week I received an invitation to a baptism. Usually mail of this sort would not merit enough consideration for an essay on a blog devoted to philosophical and scientific discussion. You might even consider it a normal part of life. Indeed, this was at least the 10th invitation of this sort I’ve received from relatives over the past couple of years, and I expect to start receiving them from friends in the near future.

Yet this time around, things were different. While I have accepted some invitations in the past, my living situation has often prevented me from even considering most of them. However, this latest baptism is being held at a time when I would be able to go. But I am not going. Given that my decision to decline has drawn questioning, and that I plan to continue not attending baptisms going forward, I think it's worth explaining my position in a public forum. This will give both others and me the opportunity to make sense of this surprisingly heated issue.

Before moving forward, let me paint a quick picture of what goes on at these gatherings, at least in my experience. The baptisms to which I am invited typically take place in a Christian church, usually Roman Catholic, somewhere on Long Island, New York. Attendees dress in their nicest clothes and gather at the selected church to watch a (supposedly) holy man lead a religious ceremony, some more sectarian than others, which concludes with the crowd rejoicing over a newborn being blessed. Depending on your religious beliefs, you might think God or Jesus Christ is present. When the proceedings conclude, everyone heads to a local restaurant for a celebratory meal. 

This might sound innocuous to most Americans, but I think there are a couple of significant problems that I can best illustrate by considering some common questions I receive regarding my opposition:
  • Are you so rabid about your atheism that you would be offended attending a religious ceremony?
  • Are you saying parents shouldn’t make decisions for their kids? 
  • Isn’t baptism just one small, meaningless ceremony? 
  • Why would you sit out a family event? What do you think you’re accomplishing? Isn’t that being intolerant?

Allow me to take these one-by-one. 

Simply put, no, I have not been personally offended when attending religious ceremonies such as baptisms. I do find them a waste of my time – usually I sit and read the Bible in an attempt to pull some education from the lengthy service – but I am rarely offended. That said, to focus on my experience is to miss the point. The reason I am uncomfortable with baptisms is not that I am personally offended; it's that I am offended by what is happening to the child.

To me, a baptism represents, at least in part, a parent forcing his or her religious heritage on a child unable to approve or reject the gesture. It labels a baby with a certain religious affiliation, and enters him or her into that religion, or else puts him or her on the path toward that religion. My presence at a baptism condones the practice of parents basing their child's beliefs on their own. But as a person who values freedom of conscience, I reject in full the idea of parents passing their religious beliefs onto their children by default. I believe we should not label or push a child regarding religion — atheist, Christian, Muslim, or anything else — until he or she can make up his or her mind about the matter. I believe parents should provide their children a neutral and informative perspective rather than an indoctrinating and closed-minded one.

In this sense, baptisms categorically differ from other religious ceremonies that I have attended and will continue to attend. For instance, several of my friends have been married in churches, through religious ceremonies. I have attended each one and will continue doing so. Why? Because they are two grown adults deciding they, and only they, want to get married in a religious ceremony at a church. While I would certainly choose a different setting for myself, at least they have thought about it, and consented to the final decision (unless, of course, their family has coerced their decision, in which case I say shame on the family). 

But back to baptisms. Some people have responded to my previous line of argument by stating that, “you know, parents need to make decisions for their children. They don't have a choice.” In a certain respect, I agree. Clearly parents need to look after their children, and ensure their safety, health, and happiness. A good and generally agreeable example is that a parent has to make decisions regarding a child's dietary habits (though I say generally because clearly parents do not have an absolute right to instruct their children as they wish, an issue I'd like to take up in another essay). But a baptism has nothing to do with the direct safety, health, and happiness of the baby. A baptism is the act of deciding for a child something that is irrelevant to the child's immediate well-being. It is not akin to telling your child to eat his or her greens; it is an effort to plan and control the development of the child's beliefs and values, especially regarding religion.

Which brings me to the next question, regarding baptisms being a single and small instance of parents' intrusion. I admit this, and do not believe baptisms in themselves represent a severe or pressing moral problem. You might even point out that children in Islamic societies undergo far worse methods of indoctrination, and I would agree. However, the fact that these experiences differ in degree doesn't make any of them good or desirable. Nor does it alter the fact that these experiences are all part of a broader landscape of behaviors in which parents press their religious beliefs onto children without giving them a chance to think things through from an objective standpoint. Some rituals might be worse than others, yes, and it would be short-sighted to pick on only one specific behavior at the expense of others. But I reject them all; I just happen to think that rejecting baptisms in particular is a worthy focus in the U.S., since they are often the beginning of a lifelong trend for the child.

In regard to my family and friends, I think it’s misleading to claim that sitting out a baptism is akin to sitting out a family event. To me, a family event is one where the family comes together to celebrate the family. A baptism is an event focused almost solely on religion. There is no other reason to gather on the day of a baptism but to celebrate the child’s induction into a certain religion. Hence, I am not missing a family event; I am missing a religious event attended by my family. There is a difference.

Even so, I do not sit out baptisms to offend my family members (or friends), nor should my absence necessarily offend them. I wrote above that the focus of this discussion should not be on my feelings as a secular person. Nor should it be on my feelings for family members and other loved ones, which are unquestionably strong but irrelevant here – I value my family members and friends, and try to spend as much time with them as possible. The focus here should be on the child. And in my estimation, the child is being wronged.

In closing, I would argue that the simple act of sitting out baptisms does actually serve a purpose. Historically speaking, one of the only ways that tradition has ever changed is when certain people stand up and proclaim, “wait a minute; something isn’t right here; we have some issues with what’s being practiced.” This allows other people who might share in this dissent to see some safe ground on which to plant their feet. Perhaps, as a result of my actions, one of my relatives (doubtful) or friends (more likely) will find the courage to not baptize their child. At least I can hope so.

Even if I fail to convince anyone else, I see no reason why my secular and others’ religious opinions cannot coexist within a framework of tolerance. Certainly I am disagreeing with a long-practiced tradition. But I am not organizing anti-baptism protests outside of churches, or lobbying for laws banning baptisms. I am simply stating that I would prefer not to attend baptisms because I consider them a harmful practice, or at least part of a broader spectrum of harmful practices. How is that intolerant? Tolerance doesn’t mean going with the flow or keeping one’s mouth shut for the sake of tradition. Tolerance means being respectful toward others. There’s nothing intolerant about sitting out a religious ceremony that is contrary to your values, or providing honest answers when asked a question about your decision. If anything, it is the vocal criticism of my right to not attend such ceremonies that betrays a degree of intolerance.

Tuesday, November 13, 2012

A better way to do “studies,” perhaps


[Image: Lehman College]
by Massimo Pigliucci

This semester I’ve been running a graduate level seminar at the City University of New York, on the difference between philosophy of science and science studies. The latter is a broad and somewhat vaguely defined term that includes (certain kinds of) sociology of science, postmodern criticism of science, and feminist epistemology. It’s the stuff of the (in)famous science wars of the 1990s (think Sokal affair, or perhaps this most recent disgraceful episode).

I told my students upfront that my sympathies tend to be with analytic philosophy of science, as opposed to continental-inspired science studies. But also that I realize that there must be some fire behind that much science studies smoke, and that I am certainly aware that there is a significant bit of exaggeration and silliness going on in philosophy of science circles as well (like in pretty much any academic, no, make that human, activity). So the point of the seminar was to look at the primary literature and sift out the good kernels from the background mud (and mud slinging).

But this post isn’t about that, specifically. Rather, it is more broadly about the academic phenomenon of “X Studies,” where X is an increasing number of things, which includes but is by no means limited to: gender, women, African-American, Asian, Italian-American, Latino, Puerto Rican, disability, obesity, and so on; and, of course, science. Before proceeding, let me make clear that I am not about to pass judgment on the academic quality of these programs, neither in terms of the scholarship of the faculty involved nor in terms of the courses being taught. I simply have neither the expertise nor the experience necessary to do so. And of course my (qualified) skepticism of science studies in particular (where I do have both the scholarly expertise and the teaching practice) cannot be generalized to other fields of the Studies family.

Rather, what I am wondering is whether particular implementations of the Studies model are the best way to achieve the stated goals of these programs. The basic idea behind Studies is to provide room in the academy for scholarship and teaching that represent and cater to traditionally underrepresented and under-served groups: ethnic minorities, women, people with disabilities, etc. Now, there are at least two ways of setting up these programs to pursue said goals. One way is to create separate administrative units — usually departments — on a par with traditional departments like Anthropology, English, History, Philosophy, and the like. A second way is to house the programs in a classic department, the choice of which depends on the specific type of Studies one is considering (e.g., if the focus is primarily on comparative literature, English may be the most appropriate home; if we are talking cultural history, a History department; if epistemology, a Philosophy department; and so on). The CUNY campus where I teach has actually adopted both models, depending — I assume — on the history and relative impact of each individual Studies program (you can see the complete list here).

There are advantages and disadvantages in both cases. One clear advantage of setting up independent administrative units is that Studies programs are inherently interdisciplinary animals, and as such will always be somewhat constrained within the confines of a single classic department. Then again, higher-level administrators at most American campuses talk the good talk when it comes to interdisciplinarity, but rarely walk the corresponding walk. As a result, independent mini-departments may end up being significantly under-resourced in terms of faculty, administrative assistance, and so on.

But in my mind there is a more compelling reason to house Studies programs within traditional departments, while at the same time treating them as true interdisciplinary units: diversity. Bear with me for a moment, because this is going to sound somewhat ironic. Remember, a pivotal idea behind these programs is to allow the exploration, both in terms of scholarship and of teaching, of areas that are not well served by traditional academia, because they pertain to historically undervalued minorities. But the reality of a number of Studies programs is that they end up creating a circle of like-minded faculty teaching to like-minded students. I don’t have quantitative nationwide data (anyone out there? Let’s do some decent crowdsourcing, shall we?), but I have been at a number of universities and have met a number of students and colleagues in these programs. It is relatively rare for, say, women’s studies not to be taught almost exclusively by women to women, or for Italian-American studies not to be taught by Italian and Italian-American faculty to Italian-American students, and so on. Yes, there are exceptions, but that’s what they are: exceptions. So the likely result is precisely the opposite of the one sought: instead of bringing more diversity of opinions and more interdisciplinarity to campus, one may end up creating a number of isolated islands that inevitably begin to be looked at with suspicion by other faculty and students.

The way out of this, I think, is to take the second route mentioned above and install Studies programs, qua interdisciplinary programs, within broader classic departments. That way the faculty teaching in Studies programs will be in contact with colleagues who (collectively) have a broader swath of interests, and will likely also have to teach more general courses than just those focused on the Studies approach. That in turn would have the additional benefit of allowing Studies-focused faculty to attract a broader sample of students (those taking introductory courses in, say, history or philosophy) to their area of interest. Likewise, students taking Studies courses as part of their major or minor will also have to take a broader range of courses, as specified by the particular department’s learning objectives and degree requirements. Everybody wins, so to speak. Moreover, integrating Studies programs in this manner will help them transition from a small, low-budget niche exercise in need of protection into a welcome and even necessary addition to a liberal arts education.

Thursday, November 08, 2012

Consciousness and the Internet


by Massimo Pigliucci

Here is an interesting statistic: if we multiply the (approximate) number of computers currently present on planet Earth by the (approximate) number of transistors contained in each of those computers, we get 10^18, which is three orders of magnitude larger than the number of synapses in a typical human brain. This naturally prompted Slate magazine’s Dan Falk to ask whether the Internet is about to “wake up,” i.e., achieve something similar to human consciousness. He sought answers from neuroscientist Christof Koch, scifi writer Robert Sawyer, philosopher Dan Dennett, and cosmologist Sean Carroll. I think it’s worth commenting on what three of these four had to say about the question (I will skip Sawyer, partly because what he said to Falk was along the lines of Koch’s response, partly because I think scifi writers are creatively interesting, but do not have actual expertise in the matter at hand).
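
For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch. The specific figures (roughly a billion computers, a billion transistors per computer, and 10^15 synapses per brain) are order-of-magnitude assumptions chosen to match the comparison above, not precise measurements.

```python
import math

# Back-of-the-envelope arithmetic for the transistor/synapse comparison.
# All three figures are rough order-of-magnitude assumptions.
computers_on_earth = 1e9        # assumed: ~a billion computers worldwide
transistors_per_computer = 1e9  # assumed: ~a billion transistors each
synapses_per_brain = 1e15       # assumed: ~10^15 synapses in a human brain

total_transistors = computers_on_earth * transistors_per_computer  # 10^18

# How many orders of magnitude separate the two counts?
gap = math.log10(total_transistors / synapses_per_brain)

print(f"Total transistors: {total_transistors:.0e}")     # 1e+18
print(f"Orders of magnitude above synapses: {gap:.0f}")  # 3
```

Of course, nothing in this little calculation bears on whether raw counts of switching elements are the right quantity to compare in the first place, which is exactly the issue discussed below.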

Koch thinks that the awakening of the Internet is a serious possibility, basing his judgment on the degree of complexity of the computer network (hence the comparison between the number of transistors and the number of synaptic connections mentioned above). Koch realizes that brains and computer networks are made of entirely different things, but says that that’s not an obstacle to consciousness as long as “the level of complexity is great enough.” I have always found that to be a strange argument, popular as it is among some scientists and a number of philosophers. If complexity is all it takes, then shouldn’t ecosystems be conscious? (And before you ask, no, I don’t believe in the so-called Gaia hypothesis, which I consider a piece of new-agey fluff.)

In the interview, Koch continued: “certainly by any measure [the Internet is] a very, very complex system. Could it be conscious? In principle, yes it can.” And, pray, which principle would that be? I have started to notice that a number of people prone to speculations at the border between science and science fiction, or between science and metaphysics, are quick to invoke the “in principle” argument. When pressed, though, they do not seem able to articulate exactly which principle they are referring to. Rather, the phrase seems to mean something along the lines of “I can’t think of a reason why not,” which at best is an argument from ignorance.

Koch went on speculating anyway: “Even today it might ‘feel like something’ to be the Internet,” he said, without a shred of evidence, or even a suggestion of how one could possibly know that. He even commented on the possible “psychology” of the ‘net: “It may not have any of the survival instincts that we have ... It did not evolve in a world ‘red in tooth and claw,’ to use Schopenhauer’s famous expression.” Actually, that wasn’t Schopenhauer’s expression (the phrase traces back to Alfred, Lord Tennyson’s 1850 poem In Memoriam), but at least we have an admission of the fact that psychologies are traits that evolved.

And talk about wild speculation: in the same interview Koch told Slate that he thinks consciousness is “a fundamental property of the universe,” on a par with energy, mass, and space. Now let’s remember that, so far, we have precisely one known example of a conscious species in the entire universe. A rather flimsy basis on which to build claims of necessity on a cosmic scale, no?

Dennett, to his credit, was much more cautious than Koch in the interview, highlighting the fact that the architecture of the Internet is very different from the architecture of the human brain. It would seem like an obvious point, but I guess it’s worth underscoring: even on a functionalist view of the mind-brain relationship, it can’t be just about overall complexity; it has to be a particular type of complexity. Still, I don’t think Dennett distanced himself enough from Koch’s optimism:

“I agree with Koch that the Internet has the potential to serve as the physical basis for a planetary mind — it’s the right kind of stuff with the right sort of connectivity ... [But the difference in architecture] makes it unlikely in the extreme that it would have any sort of consciousness.”

The right kind of stuff with the right sort of connectivity? How so? According to which well-established principle of neuroscience or philosophy of mind? When we talk about “stuff” in this context we need to be careful. Either Dennett doesn’t think that the substrate matters (in which case there can’t be any talk of right or wrong stuff), or he thinks it does. In the latter case, we need positive arguments for why replacing biologically functional carbon-based connections with silicon-based ones would retain the functionality of the system. I am agnostic on this point, but one cannot simply assume it to be the case.

More broadly, I am inclined to think that the substrate does, in fact, matter, though there may be a variety of substrates that would do the job (if they are connected in the right way). My position stems from a degree of skepticism at the idea that minding is just a type of computing, analogous to what goes on inside electronic machines. Yes, if one defines “computing” very broadly (in terms, for instance, of universal Turing machines), then minding is a type of computing. But so is pretty much everything else in the universe, which means that the concept isn’t particularly useful for the problem at hand.

I have mentioned in other writings John Searle’s analogy (he of the Chinese room thought experiment) between consciousness as a biological process and photosynthesis. One can indeed simulate every single reaction that takes place during photosynthesis, all the way down to the quantum effects regulating electron transport. But at the end of the simulation one doesn’t get the thing that biological organisms get out of photosynthesis: sugar. That’s because there is an important distinction between a physical system and a simulation of a physical system.

My experience has been, however, that a number of people don’t find Searle’s analogy compelling (usually because they are trapped in the “it’s a computation” mindset, apparently without realizing that photosynthesis also is “computable”), so let’s try another one. How about life itself? I am no vitalist, of course, but I do think there is a qualitative difference between animate and inanimate systems, which is precisely the problem that people interested in the origin of life are trying to solve (and haven’t solved yet). Now, we know enough about chemistry and biochemistry to be pretty confident that life as we know it simply could not have evolved using radically different chemical substrates (say, inert gases, to take the extreme example) instead of carbon. That’s because carbon has a number of (chemically speaking) unusual characteristics that make it extremely versatile for use by biological systems. It may be that life could have evolved using different chemistries (silicon is the alternative most frequently brought up), but there is ample room for skepticism based on our knowledge of the much more stringent limitations of non-carbon chemistry.

It is in this non-mysterian sense that, I think, substrate does matter to every biological phenomenon. And since consciousness is, until proven otherwise, a biological phenomenon, I don’t see why it would be an exception. To insist a priori that it is in fact exceptional is, ironically, to endorse a type of dualism: mind is radically different from brain matter, though not quite in the way Descartes thought.

As it turns out, cosmologist Sean Carroll was the most reasonable of the bunch interviewed by Falk at Slate. As he put it: “There’s nothing stopping the Internet from having the computational capacity of a conscious brain, but that’s a long way from actually being conscious ... Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness. ... I don’t think it’s at all likely.” Thank you, Sean! Indeed, let us stress the point once more: neither complexity per se nor computational ability on its own explains consciousness. Yes, conscious brains are complex, and they are capable of computation, but they are clearly capable of something else (feeling what it is like to be an organism of a particular type), and we still don’t have a good grasp of what is missing from our account of consciousness to explain that something else. The quest continues...