About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Thursday, September 29, 2011

Michael's Picks

by Michael De Dora

* The New York Times details the anti-abortion movement’s current fight to restrict reproductive rights across the country.

* You might think it’s a stretch to say that we can predict whether a person leans left or right by looking at their brain scan, but Andrea Kuszewski doesn’t necessarily think so.

* You’ve probably heard of the new movie Moneyball, which stars Brad Pitt as Oakland Athletics General Manager Billy Beane. Here’s the story behind the movie. It essentially represents skepticism as applied to baseball.

* A couple weeks ago, I wrote about the United States’ use of drones to carry out strikes on suspected militants in areas of the world where the U.S. is not formally engaged in war, such as Pakistan. Two days after my essay, the New York Times reported that President Obama’s legal team is in the midst of a hotly contested debate on whether to expand the drone program to attack militants in Yemen and Somalia.

* The Guardian discusses Steven Pinker’s new book, in which the evolutionary psychologist argues that violence is on the decline.

* Twenty percent of Americans think God is guiding the economy, according to a new poll. I’m honestly surprised that number is so low.

* NYPD Commissioner Ray Kelly has ordered his police force, the largest in the United States, to stop arresting people caught with small amounts of marijuana.

* Two (admittedly morbid) Wikipedia pages that kept me busy while home sick this weekend: a list of unusual deaths, and a list of unsolved deaths.

Monday, September 26, 2011

The value of academic scholarship

by Massimo Pigliucci

How do academics justify to society what they are doing and why they should be paid for it? It’s a question that has become seriously relevant only since the 20th century and the rise of the professional academic, in both the sciences and the humanities (and especially after WWII for the sciences, which have seen a huge increase in their share of university and national budgets). Before then, much scholarship was done outside universities, and even within them it really didn’t cost much and was often paid for by a prince or other patron for their own amusement and aggrandizement. David Hume, one of the most influential modern philosophers, never held an academic post, and neither did Darwin. (Again, there are exceptions — just think of Kant and Newton — but that’s what they were, exceptions when compared to the modern version of the academy.)

Indeed, the very idea of a “university” got started in Europe in the 11th century (the first one on record was in Bologna, Italy, quickly followed by Paris, Montpellier, Oxford and Cambridge), and the term initially referred to a guild of itinerant teachers, not a place. By the end of the Middle Ages, however, it started to occur to various municipalities that it was good for business to attract the best teachers on the market, and that a relatively cheap way of doing so was to offer them shelter — both in physical form as a place in which to teach and study and in the more crucial one of (some) protection from Church authorities and their persecution mania (believe it or not, Thomas Aquinas’ writings were considered too hot for public consumption for many years).

Particularly because the phenomenon is so very recent, the question of why we should finance sciences and humanities with university posts and federal grants is a good one, and should not be brushed aside by academics. The latter attitude is too often the case, for instance whenever my colleagues in the sciences tell people that “basic research leads to important applications in unforeseeable ways,” or that whatever they happen to be doing is “intrinsically interesting.”

Let’s start with the first response. Yes, it is easy to come up with the historically more or less accurate anecdote that links basic research to some application relevant to the human condition, though of course only positive examples tend to be trotted out, while the interested parties willfully ignore the negative ones (basic research in atomic physics led directly to Hiroshima and Nagasaki, for instance). But the fact is that I have never actually seen a serious historical or sociological study of the serendipitous paths that lead from basic stuff to interesting applications (this article, focusing on math, does report on some more systematic attempts in that direction, but it still feels very much like cherry picking).

Yet, it seems that an evidence-based community such as the scientific one (the problem of applications doesn’t even arise in the humanities, obviously) would be interested in, and capable of, generating tons of relevant data. What gives? Could it be that the data is out there and it doesn’t actually back up the official party line? Possibly. More likely is that the overwhelming majority of scientists simply doesn’t give a damn about applications of their research (again, the issue isn’t really one that humanists are even confronted with; and besides, have you compared the budgets of a couple of typical philosophy and physics departments recently?). I certainly didn’t care about it when I was a practicing evolutionary biologist. I did what I did because I loved it, had fun doing it, and was lucky enough to be paid to do it (in part, the other parts being about teaching and service). Oh, yes, I too dutifully wrote my “impact statement” for my National Science Foundation grants, in which I allegedly explained why my research was so relevant to the public at large. But the truth is that everybody’s statement of that sort is pretty much the same: disingenuous, very short on details, and usually simply copied and pasted from one grant to another.

Which brings me to response number two: it’s intrinsically interesting. I never understood what one could possibly mean by that phrase other than “it is interesting to me,” which is rather circular as far as explanations go. Perhaps we could get scientists to agree that, say, research on the origin of life is “intrinsically” more interesting than the sequencing of yet another genome, or the study of yet another bizarre mating system in yet another obscure species of insects. But then one would expect much of the research (and funding!) to be focused on the origin of life question rather than on those other endeavours. And one would be stunned to discover that precisely the opposite happens. In fact, as John R. Platt, a biophysicist at the University of Chicago, famously wrote in an extremely cogent article on “strong inference” published in Science in 1964: “We speak piously of ... making small studies that will add another brick to the temple of science. Most such bricks just lie around the brickyard.”

There is a third way to show that what you do is worth the university paying for, one that is increasingly welcomed by bean counting administrators of all stripes — from NSF to your own Dean or Provost: impact factors. These days, in order to make a case for your tenure, promotion or continued funding, you need to show that your papers are being cited by others. Again, the game largely concerns the sciences, since scientific journals, scientific papers, and their consumers vastly outnumber those of the humanistic fields. (I can easily catch up with pretty much everything that gets published in philosophy of biology these days, but the same feat was simply impossible for any human being when my field was evolutionary biology — and the latter isn’t that large of a field compared to other areas of biology or science more broadly!)

The problem, of course — as pointed out by Tim Harford in the article mentioned above about mathematics — is that this solves precisely nothing, for a number of reasons. First, because impact factors, despite the fact that they are expressed as numbers, still reflect the qualitative and subjective judgment of people. Yes, these are fellow experts in the relevant scholarly community, which is certainly pertinent; but scientific communities tend to be small and insular, as well as prone to the usual human foibles (such as jumping on the latest bandwagon, citing papers by your friends and avoiding those of your foes like the plague, indulging in a comically absurd number of self-citations, etc.). Second, impact factors only measure the very short term popularity of particular papers, not the long term actual impact of the corresponding pieces of research. Perhaps that’s the best that can be done, but it really doesn’t seem even close to what we’d like. Third, no impact factor actually measures anything whatsoever to do with “impact” in the broadest, societal, sense of the word. Which brings us back to the original question: why should society give money to such enterprises, and at such rates?

The answer is prosaically obvious: because society gets a pretty decent bargain out of allowing bright minds to flourish in a relatively undisturbed environment. Academic careers are hard: you need to get through college, five to seven years of PhD, one, two, more often than not three postdocs, and seven more years of tenure track, all to land a stable job (undoubtedly a rare commodity, especially in the US!), a decent but certainly not handsome salary, and increasingly less appealing (but still good) benefits. Oh, and a certain amount of flexibility as to when and how much to work. (None of the above, of course, is guaranteed: the majority of PhD students do not find research positions in universities, period.) Trust me: nobody I know in the academy goes through the hell of the PhD, postdoc and tenure process just so that she can (maybe) land a permanent job with flex time. We all do it because we love it, because — like artists, writers, and musicians — we simply cannot conceive of doing anything else worthwhile in our lives. (Incidentally, the term “scientist” was coined by William Whewell, a philosopher, in the 19th century; it was in direct analogy to “artist” — an analogy that is more meaningful than most modern artists and scientists seem to realize.)

Passion is, after all, the same response one gets from non-scientific academics (who usually can’t fall back on the “what I do matters to society in practical ways” sort of defense). It’s also why civilized nations support (yes, even publicly!) the fine arts: scholarship and artistic creativity simply make our cities and countries much better places to live.

Of course, something tangible is (indeed, ought to be) required of academics in return. And this something is to be found in the other two areas (outside of scholarship) on which academics are judged by their peers and by university administrators (though it would be so much better if the latter simply confined themselves to, well, administration): teaching and service. And by service I don’t mean that largely (though not entirely) useless and mind-numbing type of “service” one does for one’s own institution (committee memberships, committee meetings, committees on committees, and the like). I mean service to the community, which comes in various forms, from writing books, articles and blogs aimed at the public, to giving talks, interviews, and so forth. Service, in my view, means taking seriously the idea of a public intellectual, an idea that would only increase the quality of social and political discourse in any country in which it is taken seriously.

What about teaching? Well, we (almost) all do it — unless you are so good at scholarship that the university will exempt you from doing it, a situation that I think is quintessentially oxymoronic (shouldn’t our best current scholars excite the next generations?). But do we do it well? Murray Sperber, in his Beer and Circus: How Big-Time College Sports Is Crippling Undergraduate Education, talks about the myth of the good researcher = good teacher, a myth propagated (again, curiously, without the backing of hard data) by both faculty and administrators. At least on the basis of my anecdotal evidence I am convinced (till data show otherwise) that the two sets of skills are orthogonal: one can be an excellent researcher and a lousy teacher, and vice versa, one can be an excellent teacher while being a lousy scholar (though, obviously, one cannot be a good teacher without understanding the material well).

Sure, you will hardly find faculty members at any university who are both lousy teachers and lousy scholars: why would anyone hire that sort of person? But you will find examples of the other three logical categories (with different admixtures of types depending on what college we are talking about), and I honestly have no idea what percentage of us falls into each of them. (We all, of course, think that we are above average teachers as well as above average scholars, but that sounds a lot like the sort of wishful thinking that goes on in Lake Wobegon, where all the women are strong, all the men are good looking, and all the children are above average...)

The way I see the bargain being struck between society and scholars these days is this: the scholar gets a decent, stable job, which allows her to pursue interests in her discipline, no matter how (apparently) arcane. Society, in return, gets a public intellectual who does actual service to the community (not the stuff that university administrators like to call “service”), as well as someone who takes her duty to teach the next generation seriously, which means honestly trying to do a good job at it, instead of looking for schemes to avoid it. Sound fair?

Sunday, September 25, 2011

New Rationally Speaking podcast: Fluff that works

In this episode we tackle the curious case of pseudoscience or mysticism that works, or seems to, at least some of the time.

From acupuncture to chiropractic, from yoga to meditation, what do we make of instances where something seems to have the desired effect for the wrong reasons (e.g., acupuncture), or might otherwise be a perfectly acceptable technique which happens to come intricately bundled with mysticism (e.g., yoga)?

Friday, September 23, 2011

An optimistic look at human nature

by Massimo Pigliucci

I’ve recently read Martha Nussbaum’s Not for Profit: Why Democracy Needs the Humanities. It is a manifesto in defense of critical thinking, the role of the humanities (alongside science) in liberal arts education, and the crucial contribution of the latter to an open democratic society. But that, for the most part, is not what this post is about.

Rather, I want to focus on a somewhat peripheral discussion that Nussbaum engages in, in chapter 3 of her book (entitled “Educating citizens: the moral [and anti-moral] emotions”). Nussbaum briefly relates three famous experiments demonstrating how easy it is to lead people to engage in bad behavior. The first experiment was conducted by Stanley Milgram (and has been repeated several times since). It’s the one where people were convinced to administer what they thought were increasingly painful electric shocks to “subjects” (in reality, confederates of the experimenters) who were allegedly being used to study the connection between learning and punishment. The results clearly showed that a figure of authority (a “doctor” in a white lab coat, for instance) can easily induce people to engage in what would normally be considered cruel behavior towards strangers. Milgram himself set out to do the experiments because he was interested in the question of what could have possibly led so many Germans to acquiesce and collaborate with the Nazi policies of extermination during World War II.

The second experiment mentioned by Nussbaum was conducted by Solomon Asch to explore the effects of conformity. In this case subjects were shown, for instance, images of lines of different lengths and were asked to make judgments about their relative lengths. Unbeknownst to them, a number of confederates were pretending to participate in the experiment, but in reality gave coordinated wrong answers to the questions. Astonishingly, a number of subjects began to agree with the confederates, even though it was very clear that they were agreeing to the wrong answer.

Finally, Nussbaum refers to Philip Zimbardo’s experiment on prison dynamics, during which subjects told to play the role of prisoners or prison guards in a correctional facility quickly began to behave as victims and oppressors respectively, with the first group passively accepting violence and the second one escalating their practices to include torture.

The typical interpretation of experiments such as those above is that people are easy to manipulate and that beneath a veneer of civility we can all be led to inflict pain (Milgram and Zimbardo), be willingly victimized (Zimbardo), or endorse obvious falsities (Asch). But Nussbaum turns our perspective around and argues that another way to look at exactly the same data is that it is relatively easy to avoid the above mentioned negative outcomes by paying attention to the structure of our society (and — which goes with the main topic of her book — to the way we educate our children to be full members of that society).

In particular, Nussbaum argues that three types of structure are pernicious because they are conducive to bad human behavior (though they are most certainly not its only determinants): lack of personal accountability; discouragement of dissent; and de-humanization.

Lack of accountability is what we see in action in the Milgram experiments, where people get to delegate moral responsibility to the authority (and notice that the authority there was a scientist, not a Nazi with a machine gun); discouragement of dissent is what happened during the Asch experiment, where people gave what they probably knew was the wrong answer because everyone else around them was doing the same (indeed, crucially, when the experiment was conducted allowing just one of the confederates to openly dissent, subjects were much less likely to adopt the groupthink attitude); finally, de-humanization is what characterized the Zimbardo protocol.

It should be easy to see at this point why Nussbaum links these structural issues to liberal arts education. At its best, teaching the humanities (and science) is precisely about encouraging students’ willingness to question authority (against Milgram-type effects), to speak out even when in a minority position (contra Asch), and to appreciate differences between genders and across cultures as quintessentially human (against Zimbardo).

Instead, we spend increasing amounts of time and money making sure that “no child is left behind” by having kids learn how to pass a standardized test that has little if any relation to the structural issues affecting human behavior in modern society.

Tuesday, September 20, 2011

Massimo's Picks

by Massimo Pigliucci

* Seven questions about science and skepticism, and how I answered them.

* In defense of naturalism. As a naturalist, I find this defense pretty darn unconvincing.

* Keynes: the sunny economist.

* Is Texas about to give us yet another dangerous dumb ass for President?

* New Atheism: Kitcher better than Dawkins?

* When superstition kills via mind-body connection.

* My Amazon review of Martha Nussbaum's Not for Profit: Why Democracy Needs the Humanities.

* When neuroscientists and philosophers of mind clash.

* Stop talking about evil and do something about evildoers.

* Michele Bachmann takes never-ending liberties with reality.

* Epicureanism: the most important underestimated engine of the Renaissance?

* My interview with Gelf Magazine about humanism and skepticism.

* Here is how low CNN has sunk. Not to mention the Tea Party. Let's kill the uninsured.

* Corporations as bad spouses.

Saturday, September 17, 2011

Comic books and counterfactuals

by Massimo Pigliucci

So, I had my first DragonCon experience during Labor Day weekend. I gave a couple of talks and participated in two discussion sessions for the skeptic and science tracks, and generally admired bizarre costumes and semi-naked people hanging around. (I also got to see William Shatner, Martin Landau, and Mark Sheppard, but that’s another story.)

One of the nice surprises of the conference was an evening session (8:30pm on a Sunday) on “Comics and Philosophy,” featuring a talk by Christopher Belanger, of the Institute for the History & Philosophy of Science & Technology at the University of Toronto. The talk specifically focused on “Counterfactual Cognition and Ethical Dilemmas: Lessons from Duncan The Wonder Dog,” and just in case you are wondering who Duncan The Wonder Dog is, I’ll spare you the google search.

Christopher entertained an audience of more than a hundred young people (some tattooed, some dressed in shall we say highly imaginative and unconventional ways) while trying to explain that comic books can function like thought experiments to explore the implications of counterfactual conditionals.

Christopher was exploiting a growing movement sometimes referred to as “... and Philosophy” in which academic philosophers write for the general public using pop culture as a vehicle. I have contributed to a few of these myself, particularly The Daily Show and Philosophy (on the Socratic method), and the forthcoming Sherlock Holmes and Philosophy (on logic and inference) and The Philosophy of the Big Bang Theory (on scientism). (Also check my student Leonard Finkelman’s contribution to Green Lantern and Philosophy. He also has one coming up on the complexities of the relationship between Superman and Lex Luthor.)

The point of events like the one at DragonCon and of the “...and Philosophy” books (there are several other series by other publishers, by the way) is to bring philosophy to the general public using a palatable and yet informative platform. And that’s where the trouble starts. It seems that most of my colleagues cannot be bothered to hide their contempt for such lowly degradations of their cherished discipline. Never mind that if philosophers (and other academics) insist on not talking to the public because they are too busy analyzing (for the thousandth time) every single phrase of every one of Kant’s minor works, very soon there won’t be an academic discipline of philosophy at all. That’s because academic departments do not exist to do scholarship, but to serve students. The scholarship is a perk one gets in exchange for the grueling career path that goes through an endless PhD, one or more postdocs, and seven years of tenure track. But make no mistake about it: it’s a perk, not a right, and most certainly not the raison d'être of academia.

This is true also for the sciences, though to a lesser extent because scientists typically bring in the other currency that administrators care about: hard cash from grants. Even so, when I was running a lab it was the same story: writing for a blog, organizing “Darwin Day,” writing books for the public and so on were activities looked upon with a mixture of amusement and disgust. Clearly, if someone spends time doing that sort of thing s/he cannot be that good a researcher, otherwise s/he would care much more about another grant proposal or published paper (never mind that about a third of papers published in primary science journals are never cited once, and that most of the rest are read only by a handful of the author’s close colleagues and friends).

Michael Shermer told the story of how Carl Sagan didn’t make it into the National Academy of Sciences because of the perception that he wasn’t a sufficiently productive scientist — even though the record shows that he was at least as productive as plenty of others who were in fact admitted into that august body. Things have surely improved a bit since, as shown for instance by the fact that Stephen Jay Gould was later welcomed into the NAS, despite a Sagan-like perception of him shared by many of his colleagues. It is also now the case that the NAS, the American Institute of Biological Sciences, the Society for the Study of Evolution and a number of other organizations have moved beyond just paying lip service to the idea of talking to the public and have actually started to take the concept seriously. The NAS publishes position papers and organizes workshops on issues such as climate change and evolution, the AIBS hosts regular workshops for teachers, and the SSE has instituted a permanent education committee that bestows an annual prize on scientists who make a contribution to the public understanding of evolution — it’s called the Stephen Jay Gould prize, and has so far been awarded to Genie Scott of the National Center for Science Education, Sean Carroll, and most recently Ken Miller.

Philosophers have been a bit slower to pick up on the idea that they need to talk to the public, but there are at least three good reasons to do it: a) it is the public that pays for most academic positions in university departments; b) the continued existence of philosophy as a professional academic discipline depends on people giving a damn about it; c) it is one of the goals of a field whose etymology traces back to the Greek term for “love of wisdom” to expand the circle of people capable of thinking philosophically.

Of course, the problem here may in large part be yet another consequence of American anti-intellectualism (other examples include the election of George W. and the popularity of Jersey Shore). I first noticed this when I realized that all three magazines of philosophy for the general public of which I am aware (Philosophy Now, The Philosopher’s Magazine, and Think) are published in England, despite the fact that the overwhelming majority of practicing philosophers is to be found in the US. And let’s not even get started on the fact that philosophers are regular guests (and sometimes hosts) of radio and TV programs throughout Europe, particularly in France and the UK (take that, Oprah!).

I keep being told that philosophy is a stuffy old field that cannot possibly interest the public, and yet my own regular philosophy meetup in New York is almost a thousand members strong — and we are neither the only nor the largest such group in the city! Events that I help organize with the Center for Inquiry and other local groups belonging to the Reasonable New York coalition, for instance on the nature of consciousness, on ethics for secular humanists, and an upcoming one on free will, regularly draw hundreds of paying participants, filling up whatever venue we set aside for them. And the Rationally Speaking podcast — which often deals with philosophical issues or at any rate adds a philosophical flavor to whatever Julia and I talk about — gets downloaded between 10 and 30 thousand times per episode. Other philosophy podcasts, like Philosophy Talk or Philosophy Bites, do even better.

So I guess I shouldn’t have been surprised at seeing more than a hundred young people huddled in a hotel conference room to listen to the connection between comics, counterfactuals and possible world scenarios. But it surely was a hell of a validating and entertaining way to spend an evening.

Thursday, September 15, 2011

Michael’s Picks

by Michael De Dora

* Paul O’Donoghue, a clinical psychologist and president of the Irish Skeptics Society, writes that advances in science demand an earlier introduction to ethics.

* Do statistics take the wonder out of sports? That’s the question Joe Posnanski, perhaps the best living baseball writer, considers in one of his recent blog posts.

* Every four years, the United States Conference of Catholic Bishops publishes a report on how Catholics should think about important political issues in light of church teachings. Yet most Catholics apparently ignore this seemingly fundamental document.

* Victims of sexual abuse by Catholic priests have accused Pope Benedict XVI, the Vatican secretary of state, and two other high-ranking Holy See officials of crimes against humanity, in a formal complaint to the international criminal court (ICC).

* A couple of weeks ago in New York City, Massimo and I participated in a panel discussion on secular ethics. Here is the full video.

* The Tennessean details how Jay Sekulow — best known for his legal work at Christian broadcaster Pat Robertson’s American Center for Law and Justice — and his family have made millions of dollars from their so-called “legal charities.”

* The Mississippi Supreme Court has ruled to allow a ballot initiative that would amend the state constitution so that, “The term ‘person’ or ‘persons’ shall include every human being from the moment of fertilization.” Voters will decide the issue in the Nov. 8 election.

Tuesday, September 13, 2011

Is chance in the map or the territory?

by Ian Pollock

[Related (and much more in-depth): Fifteen Arguments Against Finite Frequentism; Fifteen Arguments Against Hypothetical Frequentism; Frequentist versus Subjective View of Uncertainty.]

Stop me if you’ve heard this before: suppose I flip a coin, right now. I am not giving you any other information. What odds (or probability, if you prefer) do you assign that it will come up heads?

If you would happily say “Even” or “1 to 1” or “Fifty-fifty” or “probability 50%” — and you’re clear on WHY you would say this — then this post is not aimed at you, although it may pleasantly confirm your preexisting opinions as a Bayesian on probability. Bayesians, broadly, consider probability to be a measure of their state of knowledge about some proposition, so that different people with different knowledge may correctly quote different probabilities for the same proposition.

If you would say something along the lines of “The question is meaningless; probability only has meaning as the many-trials limit of frequency in a random experiment,” or perhaps “50%, but only given that a fair coin and fair flipping procedure is being used,” this post is aimed at you. I intend to try to talk you out of your frequentist view: the view that probability exists out there and is an objective property of certain physical systems, which we humans, merely fallibly, measure.

My broader aim is therefore to argue that “chance” is always and everywhere subjective — a result of the limitations of minds — rather than objective in the sense of actually existing in the outside world.

Random Experiments

What, exactly, is a random experiment?

The canonical example from every textbook is a coin flip that uses a fair coin and has a fair flipping procedure. “Fair coin” means, in effect, that the coin is not weighted or tampered with in such a way as to make it tend to land, say, tails. In this particular case, we can say a coin is fair if it is approximately cylindrical and has approximately uniform density.

How about a fair flipping procedure? Well, suppose that I were to flip a coin such that it made only one rotation, then landed in my hand again. That would be an unfair flipping procedure. A fair flipping procedure is not like that, in the sense that it’s … unpredictable? Sure, let’s go with that. (Feel free to try to formalize that idea in a non question-begging way, if you wish.)

Given these conditions, frequentists are usually comfortable talking about the probability of heads as being synonymous with the long-run frequency of heads, or sometimes the limit, as the number of trials approaches infinity, of the ratio of trials that come up heads to all trials. They are definitely not comfortable with talking about the probability of a single event — for example, the probability that Eugene will be late for work today. William Feller said: “There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines ‘out of infinitely many worlds one is selected at random...’ Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.”
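As a rough illustration of that long-run-frequency picture (my own sketch, not anything from Feller or from this post), here is a minimal Python simulation in which the running proportion of heads in a simulated fair-coin experiment settles toward 1/2 as the number of flips grows; the "coin" here is, of course, just a pseudo-random number generator, so treat it as an illustration under that assumption:

import random

def running_frequency(n_flips, seed=0):
    # Flip a simulated fair coin n_flips times and report the running
    # proportion of heads at a few checkpoints along the way.
    rng = random.Random(seed)
    heads = 0
    for i in range(1, n_flips + 1):
        heads += rng.random() < 0.5  # heads with probability 1/2
        if i in (10, 100, 1000, 10000, 100000):
            print(f"after {i:>6} flips: frequency of heads = {heads / i:.4f}")

running_frequency(100_000)

The frequentist reading of the printout is that the probability of heads just is whatever that ratio converges to; the Bayesian point made above is that the 50% figure is equally available before a single flip has been performed.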

The first, rather practical problem with this is that it excludes altogether many interesting questions to which the word “probability” would seem prima facie to apply. For example, I might wish to know the likelihood of a certain accident’s occurrence in an industrial process — an accident that has not occurred before. It seems that we are asking a real question when we ask how likely this is, and it seems we can reason about this likelihood mathematically. Why refuse to countenance that as a question of probability?

The second, much deeper problem is as follows (going back to coin flipping as an example): the fairness (i.e., unpredictability) of the flipping procedure is subjective — it depends on the state of knowledge of the person assigning probabilities. Some magicians, for example, are able to exert pretty good control over the outcome of a coin toss with a fairly large number of rotations, if they so choose. Let us suppose, for the sake of argument, that the substance of their trick has something to do with whether the coin starts out heads or tails before the flip. If so, then somebody who knows the magicians’ trick may be able to predict the outcome of a coin flip I am performing with decent accuracy — perhaps not 100%, but maybe 55 or 60%. Suppose that a person versed in such tricks is watching me perform what I think is a fair flipping procedure. That person actually knows, with better than chance accuracy, the outcome of each flip. Is it still a “fair flipping procedure?”

This problem is made even clearer by indulging in a little bit of thought experimentation. In principle, no matter how complicated I make the flipping procedure, a godlike Laplacian Calculator who sees every particle in the universe and can compute their past, present and future trajectories will always be able to predict the outcome of every coin flip with probability ~1. To such an entity, a “fair flipping procedure” is ridiculous — just compute the trajectories and you know the outcome!

Generalizing away from the coin flipping example, we can see that so-called “random experiments” are always less random for some agents than for others (and at a bare minimum, they are not random at all for the Laplacian Calculator), which undermines the supposedly objective basis of frequentism.

The apparent saviour: Argumentum ad quantum

“Ah!” you say, laughing now. “But you, my friend, are assuming determinism is true, whereas Quantum Mechanics has proven that determinism is false — objective chance exists! So your Laplacian Calculator objection fails altogether, being impossible, and the coin example fails if quantum randomness is involved, which it might be.”

Let’s review the reasons why Quantum Mechanics is held to imply the existence of objective chance (i.e., chance that does not depend on states of knowledge).

There are a variety of experiments that can be performed in order to bring out this intuition, but the simplest by far is simply to take a piece of radioactive material — say, uranium — and point a Geiger counter at it. The Geiger counter works because passing radiation makes an ionized (hence, electrically conductive) path in a gas between two electrodes that are at different voltages from each other. Electrical current flows through that ionized path, into a speaker, and makes a clicking sound, signaling the passage of an ionizing particle.

The ionizing radiation itself is caused by the radioactive decay of elements with unstable nuclei — for example, uranium. Uranium-238 has a half-life of 4.5 billion years, which means that in a given sample of U-238, about half of the atoms will have decayed after 4.5 billion years. However — and here is where the supposed objective randomness comes in — one can never say for any given atom when it will decay: now, or next Tuesday, or in 10 billion years. All one can say is that one would give odds of 1:1 (50% probability) that it will decay in the next 4.5 billion years (unfortunately, collecting on that bet is somewhat impractical).
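To make the half-life arithmetic concrete, here is a small sketch (my own illustration, assuming the standard exponential decay law, which the post does not spell out) of the probability that a single U-238 atom decays within a given time span:

HALF_LIFE_YEARS = 4.5e9  # approximate half-life of U-238, as quoted above

def decay_probability(t_years, half_life=HALF_LIFE_YEARS):
    # P(a given atom decays within t_years) = 1 - 2**(-t / half_life),
    # equivalently 1 - exp(-lambda * t) with lambda = ln(2) / half_life.
    return 1.0 - 2.0 ** (-t_years / half_life)

print(decay_probability(4.5e9))  # one half-life: 0.5, i.e., the 1:1 odds above
print(decay_probability(1.0e6))  # one million years: roughly 0.00015

The law tells you the odds for any time window you like, but it says nothing about which particular moment a given atom will "choose" — which is exactly the puzzle taken up below.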

For reasons that I do not have space to get into, the idea that the U-238 atom has some sort of physical “hidden variable” that would determine the time of its decay, if only we could measure this variable, is pretty much ruled out (by something called Bell’s theorem). So prima facie, it appears that nature itself is characterized by objective, irreducible randomness — therefore, chance is objective, therefore frequentism is saved!

There are at least two problems with this argument.

One is that the interpretation of the quantum mechanical experiments outlined above as exhibiting “objective chance” is highly controversial, although it is true that the above is the orthodox interpretation of QM (mainly due to historical accident, it must be said). Yudkowsky has done an excellent job of arguing for the fully deterministic Many Worlds Interpretation of QM, and so has Gary Drescher in “Good and Real,” so I am not going to try to recapitulate it. In essence, all you need in order to reject the standard interpretation above (usually called the Copenhagen interpretation) is to properly apply Ockham’s razor and guard against mind-brain dualism.

The most important problem with the argument that QM rescues frequentism, however, is that even given the existence of objective chance for the reasons outlined above, experiments that are actually characterized by such quantum randomness in anything higher than the tenth decimal place are incredibly rare.* In other words, even if we accept the quantum argument, this only rescues a very few experiments — essentially the experiments done by quantum physicists themselves — as being objectively random.

It’s worth clarifying what this means. It does not mean that QM doesn’t apply to everyday situations; on the contrary, QM (or whatever complete theory physicists eventually come up with) is supposed to apply without exception, always & everywhere. No, the issue is rather that for macro-scale systems, Quantum Mechanics is well-approximated by classical physics — for example, fully deterministic Newtonian mechanics and fully deterministic electromagnetic theory (that’s why those theories were developed, after all!). Macro-scale systems, in other words, are almost all effectively deterministic, even given the existence and (small) influence of quantum indeterminacy. This definitely applies to something as flatly Newtonian as a coin toss, or a roulette wheel, or a spinning die — let alone statistics in the social sciences.**

So, unless frequentists wish to have their interpretation of probability apply ONLY to the experiments of quantum mechanics — and even that only arguably — they had better revise their philosophical views of the nature of probability.

Chaos theory?

Another common way to try to sidestep the problems of frequentism is to say that many physical systems are truly unpredictable, according to chaos theory. Take, for example, the weather — surely, this is a chaotic system! If a butterfly flaps its wings there, why, we might have a tornado here.

The trouble is that chaos theory openly acknowledges its status as a phenomenon of states of knowledge rather than ontology. Strictly speaking, chaos refers to the fact that many in-principle deterministic systems are, in practice, unpredictable, because a tiny change in initial conditions — perhaps too small a change to measure — makes the difference between two or more highly distinct outcomes. The important point is that higher resolving power (better knowledge) reduces the chaotic nature of these phenomena (i.e., reduces the number of possibilities), while never eliminating chaos completely. Hence, although chaos theory chastens us by revealing that the prediction of many deterministic physical systems is a fool’s errand, our view of chance remains essentially the same — chance is still subjective in the sense of depending sensitively on our state of knowledge. The weather is a chaotic system, but it is less chaotic to a good weatherman than to an accountant!
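As a toy illustration of that sensitivity to initial conditions (my own sketch; the logistic map is a standard textbook example, not one the post uses), here are two perfectly deterministic trajectories that start one part in a million apart and end up bearing no resemblance to each other within a few dozen steps:

def logistic(x, r=4.0):
    # One step of the logistic map x -> r*x*(1-x), fully chaotic at r = 4.
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial conditions differing by one millionth
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {a:.6f}, b = {b:.6f}, |a - b| = {abs(a - b):.6f}")

Nothing in the map is random, yet an observer who only knows the initial condition to six decimal places loses all predictive power after a couple dozen iterations, while an observer with better resolution keeps it longer — which is the map-versus-territory point being made here.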

Some additional thoughts

Let’s return to the concept of objective chance. What does this phrase even MEAN, exactly?

Going back to the example of radioactive decay, how does the radioactive atom “decide” to decay at one time as opposed to another? We have already agreed that it does not have any physical guts which determine that it will decay at 15:34 on May 13, 2012. So are we saying that its decay is uncaused? Very well, but why is it un-caused to decay on May 13 as opposed to May 14? Your insistence that the decay is “objectively random” doesn’t seem to remove my confusion on this score.

The basic problem is that there is just no way of defining or thinking about “randomness” without reference to some entity that is trying to predict things. Please go ahead and attempt to do so! Imagine a universe with no conscious beings, then try to define “randomness” in terms that do not reference “knowing” or “predicting” or “calculating” or whatever.

Don’t get me wrong, I’m okay with the idea of unpredictability even in principle. Goodness knows, humans have epistemic limits, and some are doubtless insuperable forever. The problem is that unpredictability, randomness, chaos — whatever you want to call it, however unavoidable it is — is a map-level phenomenon, not a territory-level phenomenon.

So sure, you can tell me that there are some things that exist, but are un-mappable even in principle. But please don’t tell me that un-mappability itself is out there in the territory — that’s just flat out insane! You’re turning your own ignorance into ontology!

... And yet, that is exactly what the standard interpretations of QM say. But one person’s modus ponens is another’s modus tollens. Some would say “Standard interpretations of QM imply objective chance, therefore objective chance exists.” But it’s also possible to say “Standard interpretations of QM imply objective chance, but objective chance is gibberish, therefore the standard QM interpretations are incorrect.” ***

Hey, Einstein agreed with me, so I MUST be right!

To summarize:

> Frequentism would seem to be completely undermined by the fact that uncertainty is always subjective (i.e., less uncertain for agents with more knowledge).

> Quantum mechanics appears to offer an ‘out,’ by apparently endorsing objective chance against determinism.

> However, such an interpretation of QM is highly controversial.

> Also, even if that interpretation were accepted, the universe would still be deterministic enough to eliminate most of the classic questions of probability from a frequentist framework, making frequentism correct but almost useless.

> When considered carefully, the entire concept of ‘objective chance’ is highly suspicious, since (from a philosophical point of view) it turns epistemic limits into novel ontology, and (from a scientific point of view) it makes basic physics (e.g., radioactive decay) dependent on terminology (“unpredictable,” etc.) that is unavoidably mental in character.
_______________________

Footnotes:

* Okay, I admit it. I don’t actually know which decimal place. But it’s definitely not the first, second, or third.

** Likewise for free will. Even if objective chance would rescue contracausal free will (it wouldn’t), and even if objective chance actually existed (don’t take that bet!), the universe just would not be objectively chancy enough to make everyday decisions non-deterministic. Good thing free will isn’t undermined by determinism after all!

*** I wish I could give you the answer to the radioactive decay riddle now, but it’s not going to fit in one post — not even close.

Monday, September 12, 2011

New Rationally Speaking podcast: Women in Skepticism

No, this episode is not about "elevatorgate" or the Watson-Dawkins debacle, but we do use these recent (in)famous events as a springboard for a broader discussion of women in skepticism and science.

Is there a misogyny problem in the skeptic and atheist communities? Why aren't there more women involved in these communities? Also, Julia tells us about her own experience as a young woman skeptic.

Saturday, September 10, 2011

On ethics, part VII: the full picture

by Massimo Pigliucci

[This post belongs to an ongoing 7-part series on ethics in which Massimo explores and tries to clarify his own ideas about what is right and wrong, and why he thinks so. Part I is on meta-ethics; part II on consequentialism; part III on deontology; part IV on virtue ethics; part V on contractarianism; part VI on egalitarianism; and part VII, this one, on the full picture.]

Well, it’s time to bring this overly long series on ethics to an end, for now. The previous six posts have gathered a total of 390 comments at last count, and undoubtedly this post will add significantly to the total — a clear demonstration that moral philosophy is as popular and as controversial as always.

I sincerely hope that readers didn’t — despite my clear warnings — expect to find anything like an exhaustive treatment of the various aspects of ethics, nor to be served with my own original moral system emerging at the end of the series. This was simply an exercise in clarifying my thinking about something I care a lot about, and — as the motto of this blog says — to nudge truth to spring from argument amongst friends.

Nonetheless, I promised, and fully intend to deliver, some summary thoughts that have been shaped while doing the background readings for the series and then writing the individual entries. I tend to do much of my thinking while having discussions or writing (which for me is a time-delayed type of discussion), so this was the perfect medium to probe my own intuitions about moral philosophy. Here we go, then.

To begin with, I return to the opening essay, where I suggested that ethics is neither about absolute moral truths nor about relativism. The only sense I can make of the idea of absolute moral truths is in Platonic terms, similar to the way some mathematicians and philosophers of mathematics think of numbers, theorems and the like as having an ontological status independent of the human mind. Pythagoras’ theorem is, in a counter-intuitive and non-trivial sense, “out there.” But this can only mean that wherever conscious beings capable of abstract thought think along certain lines (i.e., about geometrical figures in plane geometry) they will have to agree that the theorem is true; certainly not in the sense that there is a non-physical realm where numbers and theorems happily while the time away.

Even so, the case for Platonism has certainly not been clinched for mathematics, and it looks even less promising for ethics. In other words, I agree with J.L. Mackie’s famous “argument from queerness” that “If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe.” Not impossible, but extraordinary claims require extraordinary evidence, you know.

As for relativism, I simply find it preposterous, despite the fact that it is actually becoming increasingly popular among both the general public and professional philosophers. I think something is missing when someone says that moral rules are of a kind with rules of etiquette (if you actually act on such belief society will treat you as a psychopath, and rightly so), or that committing or not committing genocide cannot be distinguished from preferring vanilla or chocolate (chocolate is the objectively obvious answer, by the way). Yes, there is a significant amount of spatial and temporal cultural variation in what people value and what they consider moral or not. But the extent of such variation has been greatly exaggerated (see also here), and flies in the face of both a large number of human universals and of studies showing that even other social primates seem to share our sense of right and wrong about certain actions (intuitively, since presumably they don’t do philosophy).

In order to steer away from both the Scylla of absolutism and the Charybdis of relativism, therefore, I am convinced that the best way to think of ethics is as a set of tools to think rationally and instrumentally about how to achieve a society that is as just as possible, where people can flourish (in their varied ways) as much as possible. (Yes, I know, people keep asking what counts as well being: you’ll find a thorough discussion here.) Of course someone will immediately object that no such moral system can be “compelling,” and I honestly have no idea what they mean by that. Obviously, morality isn’t as “compelling” as, say, gravity. But neither is mathematics. You are perfectly free to disagree with the Pythagorean theorem, though that simply means you don’t understand geometry. Similarly, you can shrug off the entire idea of ethical reasoning and simply keep watching out for number one. Be my guest, but I’ll think of you as a psychopath or a pathological egoist, and I won’t invite you for dinner.

Okay, now what about the six central themes of this series? We have looked at the three fundamental theories of ethics: consequentialism, deontology, and virtue ethics. We have also looked at the concept of justice from the point of view of various social contract theories as well as of several — remarkably diverse! — ideas of what counts as equality.

To begin with, an important distinction is to be made, as we have seen, between consequentialism and deontology on the one hand and virtue ethics on the other. The first two are answers to the question: what is the right thing to do? The latter is an answer to the question: what sort of life should I live? The two questions are different enough that it really isn’t entirely clear to me why virtue ethics is considered an alternative to the other two.

Nevertheless, among these three, several readers have correctly picked up on my (qualified) sympathies for virtue ethics, properly updated and without the obvious stench of elitism that accompanied Aristotle’s version (oh, and no slavery; oh, and equal consideration for women). There are several reasons for this. First off, I simply can’t get past the fact that there are serious objections to consequentialism, and particularly to its chief mode, utilitarianism. Yes, I’ve read utilitarians’ responses to classic problems like the one posed by the doctor who is considering cutting up a healthy person in order to save five dying people. But I just don’t find them convincing enough. Utilitarians are forced to twist themselves into logical pretzels to avoid the obvious implications of an ethical system that cares only and exclusively about consequences. Consequences are important, but they are not the only or final arbiter of a moral life.

Deontology does directly incorporate ideas about rights, which are notoriously difficult to digest for utilitarians, but it does so at a high price. Without having to go to the extremes of Kant (who, as I mentioned, once famously said that it is “better the whole people should perish” than that injustice be done — one wonders, injustice to whom?) it just seems that a set of inflexible rules, and even more so a single all-encompassing rule like the categorical imperative, is far too blunt a tool to deal effectively with the variety of human experience. No, I think that if we followed either utilitarianism or deontology we would far too often arrive at monstrous ethical decisions with which we simply wouldn’t be able to live.

Which of course leaves virtue ethics as the last man standing. This is not an unproblematic option, because of the variety and complexity of human ways to flourish, and because it is about character, not about which particular actions are right or wrong. But it does capture the idea that there is something common to all human beings (and possibly other relevantly similarly social creatures), that life is better when people are fair to each other, refrain from violence unless absolutely necessary, act with integrity, respect other people’s civil liberties, have access to education and health care, and can generally pursue their interests with the utmost degree of freedom compatible with everyone else doing the same.

But virtue ethics is not a theory of society, it is a theory of individual behavior within society. Which brings us to social contracts and the various forms of egalitarianism. I tend to be sympathetic to a higher degree of egalitarianism than is materialized in the current state of affairs in the United States, but unlike Rawls I am not convinced that income and wealth ought to be equal except under very strict circumstances. I do, however, find the current level of income/wealth inequality in the United States appalling and indefensible except by a relatively small but exceedingly vocal horde of libertarians, Randians and Teapartiers.

I do find Rawls’ concept of a veil of ignorance to be by far the best way to think about a social contract, especially in multicultural societies. I especially like Rawls’ idea (embodied in his two principles of justice) that civil liberties ought to take precedence over economic advantages (precisely the opposite of what currently happens in the US). But it is certainly the case that Rawls’ ideas apply only if a society is guided by certain types of liberal values that have predominated in Western societies and in some non-Western ones (e.g., Japan). If you are into the lure of theocracies or totalitarian regimes you will be largely unmoved by his thought experiment. I wager that you and your society will be so much the worse for it.

Getting back to egalitarianism, however, even if we stay away from income and wealth it is pretty clear that much of the world (US included) is far from being anywhere near a just society. We still do not have complete formal equality of civil rights (think gay marriage), and arguably we are far from actual equality in that department (think about the conditions of a number of minorities, as well as persistent degrees of discrimination against women). We may say that all citizens have equal rights in front of the law, but the practice is such that we keep imprisoning a good number of innocent poor and uneducated people, while robber barons keep crashing the world economy and getting away with golden parachutes. We think that we live in a democracy where every citizen has one vote, but in fact the US Supreme Court has legally allowed corporations to freely buy elections, and we have a Congress occupied by a large number of millionaires (all currently serving US Senators are in that category) who make laws favoring their ilk. Not to mention the arcane two-Senators-per-State system which effectively means that the voters of Wyoming (the least populous state) are almost 69 times better represented than the voters of California (the most populous state).

So, I guess in the end I find myself to be a virtue ethicist when it comes to personal morality, with strong Rawlsian leanings in the social sphere, who would allow a limited amount of income and wealth disparity but is uncompromising about civil liberties, equality of representation and equality within the justice system. This is far from being a logically tight, perfectly coherent approach to ethics, of course. But, as Walt Whitman famously put it: “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.”

Thursday, September 08, 2011

The ethics of drone warfare

by Michael De Dora

As you probably already know, the United States has increasingly relied on drones, or unmanned aerial vehicles, to carry out warfare in recent years. Drone attacks have been particularly popular under President Barack Obama’s administration. According to the New America Foundation, there were 43 drone attacks between January and October 2009 (right when Obama took office), compared to just 34 in all of 2008 (when George W. Bush was still in office). The Obama administration has shown no indication that it will halt their use.

The government’s increased reliance on drones has sparked public debate: Are drone strikes legal? Are they ethical? In my reading of various news and opinion articles on the issue, those who object to drones have most often made three arguments:

1. Drones violate domestic law. Many, or even most, drone strikes take place in Pakistan or other countries where the US has not declared war against a foreign state, but is instead working with local officials to root out terrorists under some “handshake agreement.” As such, many people feel drone strikes are an unjustified use of presidential and military power. US officials defend drone strikes on the grounds that they do not target a formal state, but a small group of people who have carried out attacks on domestic soil and plan to do so again. Thus, formal warfare laws do not apply (in other words: hey, it’s just the never-ending War on Terror).

2. Drones violate international law, which restricts when and how states can engage in armed conflict. Yet, as with domestic law, there is no conflict here between two formal states. Also, most drone strikes are carried out by the CIA, which, as a civilian agency and a noncombatant under international law, is not governed by the same laws of war that cover US military agencies.

3. Drones kill civilians. The Wall Street Journal, citing intelligence officials, reported that since Obama took office the CIA has used drones to kill 400 to 500 suspected militants, while only about 20 civilians have been killed. However, in 2009, Pakistani officials said the strikes had killed roughly 700 civilians and only 14 terrorist leaders. Meanwhile, a New America Foundation analysis of strikes in northwest Pakistan between 2004 and 2010 reports that they killed between 830 and 1,210 individuals, of whom 550 to 850 were militants, or about two-thirds of the total (see the quick arithmetic sketch below this list).
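
For those curious where the “about two-thirds” figure comes from, here is a minimal arithmetic sketch in Python, purely illustrative, using the New America Foundation ranges just quoted; the midpoints are my own simplification, not part of the original analysis.

    # Rough check of the "about two-thirds militants" figure, using the New America
    # Foundation ranges quoted above (northwest Pakistan, 2004-2010).
    total_killed = (830, 1210)   # reported range of total deaths
    militants = (550, 850)       # reported range of militant deaths
    total_mid = sum(total_killed) / 2    # 1020
    militant_mid = sum(militants) / 2    # 700
    print(f"militant share at the midpoints: {militant_mid / total_mid:.0%}")  # ~69%, i.e., roughly two-thirds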

These arguments are nuanced and complex. You can read more about the US government’s position and the various counter-arguments in this excellent article in the Wall Street Journal. But let us put these (and any discussion of just war theory) aside for a moment, for I think there is a more basic ethical point here.

Notice that the objections above do not inherently reject the use of unmanned drones. Instead, they focus on international law, domestic law, and the accuracy of drones. This raises an important question: are drone strikes inherently any more or less ethical than, say, manned aircraft strikes? Is there, or should there be, an ethical distinction between launching missiles from half a world away and sending fighter jets to carry out such an attack?

I have pondered these questions for several days now and have come to the tentative conclusion that there is no ethical distinction. In my view, the means by which war is carried out (by drone, jet, or missile launched from a nuclear sub) matter less than the pretenses under which war is being carried out in the first place. If an act of war violates domestic or international law, it does so regardless of whether the attack was carried out by a manned or unmanned aircraft. If an act of war kills civilians, one must ask whether civilians were intentionally or knowingly put at risk, or whether their deaths were collateral damage. But I have seen no indication that drones kill more civilians on average than manned strikes (your research is welcome). So why is there such an objection to, specifically, drone strikes?

In reading objections to drone use, I can’t help but feel an unspoken and lurking moral sentiment that drone use is wrong because it removes the human element of war. That is, people reject the use of drones because drones remove a pilot (or submarine crew) from harm’s way.

Consider these three passages. The first is from a story in the news outlet Christian Century:

With drones, operators sitting in front of computer monitors in Virginia and Nevada can target enemies halfway around the world. When their shift is done, drone operators retire to their suburban homes.

The second is from an essay in the Catholic magazine America:

Killing with drones is made easy for operators, who often work at great distances from the scene of attack. An Air Force ‘pilot’ may be in Nevada, while C.I.A. operatives are in Langley, Va., and others, including private contractors, are in Florida, Pakistan or Afghanistan. An operator may launch an attack from a trailer in Nevada viewing a computer monitor and using a joystick. The operators never see the persons they have killed. The pilot of a fighter jet flies over the place where the attack will occur and risks being shot down; a drone pilot never experiences the place where the attack occurs and knows he or she is in no personal danger. The operator can go home at the end of the shift.

The third is from an article on PBS.org:

Missile strikes launched from the comfort of Langley, Virginia, a half a world away from Waziristan, ... to critics, remain morally problematic.

On one hand, this seems backward to me. Drones actually remove a pilot or crew from harm’s way, and so they would seem a better way of carrying out war. Imagine being able to carry out attacks on highly dangerous terrorists and insurgents without having to put your own people at risk of death. This would seem desirable.

On the other hand, perhaps there is something to the idea that warfare made easier means warfare more often; that the more we remove the human element from one side of warfare, the more willing that side becomes to commit to warfare. This does not seem necessarily true, as warfare has not increased, and might even be decreasing, with advancing technology. But I am also not entirely sure it is a compelling argument against drone use. Rather, it seems an argument against any advance in military technology: from guns that allow troops to shoot from farther away, to planes that allow forces to drop bombs from higher altitudes, even to bulletproof vests that provide more safety to soldiers engaged in war.

But, as always, I offer my thoughts to the peer review of Rationally Speaking. What do you think?

Tuesday, September 06, 2011

Massimo's Picks

by Massimo Pigliucci

* The complexities of hacker ethics.

* Generation limbo: when graduating from college doesn't mean you've got a good job, or any job.

* Derek Parfit's quest for a universally true morality, and his somewhat quirky personality.

* Men don't read, and women are likely not too far behind. How the new America is getting stupider and stupider.

* Philosophy Talk, on schizophrenia.

* Montaigne, the inventor of the essay format, and how to live one's life.

* The moral complexity of interventionism, the impossibility of isolationism.

* Another thoughtful commentary on interventionism in Libya.

* Taking blame or credit for economic success is fishy business, philosopher says.

* The human brain has been shrinking: are we evolving toward Idiocracy?

* The philosophy of laughing at Hitler.

Saturday, September 03, 2011

On ethics, part VI: Egalitarianism

by Massimo Pigliucci

[This post is part of an ongoing series on ethics in which Massimo is exploring and trying to clarify his own ideas about what is right and wrong, and why he thinks so. Part I was on meta-ethics; part II on consequentialism; part III on deontology; part IV on virtue ethics; part V on contractarianism.]

We are getting close to the end of this multi-part series on ethics. Before I try to put everything together in the next post, I am going to briefly discuss egalitarianism, a view that is as important as it is controversial in contemporary moral philosophy. Depending on what one means by “equality” in this context, egalitarianism can describe moral philosophies as different as Rawls’ type of Kantian contractarianism, Nozick’s libertarianism, and Marxism. No, seriously.

The first obvious question about egalitarianism is: equality of what? For instance, in most modern democratic societies it is uncontroversial that citizens have an equal right to vote, or an equal right to justice. (Of course, both of these are true only in principle, considering that the rich can buy the best lawyers and even determine the composition of the Supreme Court, but that’s another story.) I doubt anyone would reasonably disagree with that sort of egalitarianism, except for despots, many men in a large part of the world (wherever women don’t have equal legal rights), and incurable aristocrats. So let’s move on.

For Rawls, as we have seen, egalitarianism gets fairly radical, since it extends to wealth as well as resources. While Rawls does concede that some inequality in wealth can be just, because it may be to everyone’s advantage, his principle of fair equality of opportunity is about as radical as egalitarianism gets (although not-so-radical affirmative action policies, for instance, go some way in that direction).

Then again, most democratic societies have no trouble with the milder idea of formal equality of opportunity, where public employment, education, and even private employment are regulated so as to guarantee equal access to individuals and groups. (This is distinct from, and less radical than, affirmative action, because it does not establish quotas.)

As I mentioned earlier, however, even libertarians can, surprisingly, be thought of as egalitarians. Modern libertarians are really followers of John Locke, who famously held that people are endowed with a set of fundamental “natural” rights, most notably to life, liberty, and property. I tend to agree with Jeremy Bentham, who famously referred to the idea of natural rights as “nonsense on stilts” (i.e., very, very tall nonsense). But for the purposes of this discussion, the point is that Lockean libertarians agree that all people are equal in terms of natural rights.

Marx, too, though not usually thought of as an egalitarian, can be interpreted as advocating some type of equality. When he famously said “from each according to his ability, to each according to his needs,” he was indeed advocating a principle of equal rights, though of course those rights are very different from the ones that Nozick and other libertarians would entertain.

So, we can see that different moral philosophies can be egalitarian about distinct criteria: a) opportunity (of jobs, of achieving a satisfying life, of pursuing happiness); b) income and wealth (the difference between the two being that the former is a flow, the latter a stock); c) resources (education, health care, etc.); d) civil liberties (including political representation and status under the law). There are more, of course, but these seem to cover most of what people actually care about.

There are several important points on which, presumably, even the most Rawlsian egalitarian would readily agree. Equality does not mean that society is morally bound to correct every instance of bad luck a person may suffer, nor that people are relieved of responsibility for mismanaging their own resources. For instance, it is humane to provide additional resources to help people who, by genetic lottery or accident, are disadvantaged physically or mentally. But it would be absurd to pretend that society is morally bound to keep pouring resources into those people so that they achieve the same level of employment, education, and happiness as the population’s average. (Don’t laugh: New York City is legally obliged to pay hundreds of thousands of dollars in tuition for some students who cannot graduate from high school, even though after all that effort and money they are still functionally illiterate.) By the same token, if people willfully and repeatedly squander their resources (e.g., in the case of addictions that lead to loss of income or jobs), surely it is not up to society to keep providing for them beyond basic sustenance and medical and psychological help.

There is also some interesting discussion concerning what exactly counts as a resource. Clearly, external things like education, access to health care, jobs, housing, and so on are resources, about which we can have discussions concerning the degree to which they should be equally accessible or distributed. But some egalitarians consider internal resources (i.e., personal talents and inclinations) fair game as well. Obviously, people will always (well, short of permanent genetic engineering of the human species) come with a variety of natural talents, which will immediately put some at an advantage and others at a disadvantage. But it seems to me that it goes beyond the pale (not to mention the limits of practicality) to say that society ought to redress inequalities in natural attributes.

A trickier problem, actually, comes when we consider the effects of so-called “unchosen luck” (like early socialization) or so-called “chosen luck” (one’s decisions later in life, in college, or on the job). Children cannot be responsible for their early socialization experiences, and yet we hold the resulting adults responsible for their choices, even though psychological research clearly shows that the two are far from being causally independent.

Yet another issue with egalitarianism arises when one asks to whom the principle applies. If we respond that we are concerned with equality within our own society, the obvious question is why not extend the principle to the whole world? Well, one may answer, because though that would be nice, we simply don’t have the resources, the political will, etc. to do much about the rest of the world, except in a slow and indirect way. But then egalitarianism risks being perceived either as too parochial (it’s all about our in-group) or as hopelessly quixotic (it’s about the world, man).

A related problem is posed by the consideration of equality among groups, not just individuals. What if we find evidence of inequality between ethnic groups, or between genders (if you can imagine that!)? Are such inequalities going to be eliminated by a focus on inequality at the individual level, or is there room in moral philosophy for group-level considerations? And let’s stay away from the thorny issue of animal rights, of course...

Finally, there is a risk that egalitarians may be mistaken about their central concern: that the issue shouldn’t be equality, but something else that often highly correlates with it. The classic example is the gap between rich and poor. Is the problem posed by inequality per se? If so, we could solve it by making everyone poor, but that would strike most people as absurd. Then perhaps the issue isn’t inequality, but the fact that, as George Orwell famously put it (cited in this very comprehensive article from the Stanford Encyclopedia of Philosophy), “a fat man eating quails while children are begging for bread is a disgusting sight.” Would it be all right if the fat men kept eating quails while the children were still poor but with enough bread to live?

Next: the full shebang...

Friday, September 02, 2011

Michael's Picks

by Michael De Dora

* The Pew Research Center has released a comprehensive new public opinion survey on the attitudes of Muslim-Americans. The findings might surprise many Americans.

* Last week I agreed with Pope Benedict XVI, who argued that ethics should play a major role in economic policy making. The Pope’s sentiment has now been echoed by Cardinal Angelo Bagnasco.

* A new study suggests El Niño may be to blame for nearly a quarter of recent global conflicts.

* “When it comes to the religious beliefs of our would-be presidents, we are a little squeamish about probing too aggressively,” writes Bill Keller in the New York Times.

* Scientific findings suggest that exercise could be a helpful prescription for depression, though there are caveats.

* Charles Blow highlights what he considers a growing crisis for American children, and criticizes politically right approaches that he says “ignore that reality at best and exacerbate it at worst.”

* Peter Nardi notes that psychics have a perfect record: of being wrong, that is.

* And lastly, two follow-ups on my recent essay on Florida’s law that requires drug tests for welfare applicants. First, 98 percent of welfare applicants have passed the drug test. Second, Adam Cohen has a compelling article on why this is bad policy.