by Ian Pollock


You must *quantify* epistemic states — specifically, levels of certainty — if you wish to think clearly. The reason why was summed up rather nicely by one of my personal historical heroes, James Clerk Maxwell:

*The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.*

In other words, even if everybody reasoned using classical logic *without* committing logical fallacies (fat chance), practical reasoning would still be impossible, because questions in real life just *never* deal in certainties.*
One of the more beautiful things to discover in this world is that there are *objective* rules for the manipulation of *subjective* certainties and uncertainties. Bayesian statisticians call these levels of uncertainty “probabilities.” (Frequentists... get confused at this point, on which I hope to write much more in the future.)**
One of the most unexpected beneficial side effects of habitually thinking probabilistically is that it makes you realize just how much you actually know. (This is probably the one skeptical conclusion that *doesn’t* deflate one’s ego.)
For example, suppose that I ask you a weird question like “What did Peter Singer (the philosopher) eat for breakfast on October 12, 2007?”

The standard answer to such questions, which in my experience is elevated almost to the level of a Kantian imperative among some traditional skeptics, is “I don’t know.”

*Whereof one cannot speak, thereof one must be silent.*
The problem with this is that you usually *do* know quite a lot. To illustrate, let’s consider my “Breakfast of Utilitarians” example.
To begin with, you know with near-certainty that Peter Singer didn’t eat anything that doesn’t actually exist — unicorn cheese, for example. Okay, but that’s trivial.

You also know with decent confidence that he didn’t eat anything actively poisonous — for example, fly agaric mushrooms. But that’s pretty obvious too.

Fine, now we’ve narrowed it down to non-poisonous foods that actually exist. You also may know that he is a champion of animal welfare, and a utilitarian vegan, so all or most animal products are out of the running. Now we’re getting somewhere.

Further, the man is Australian, and of European ancestry, which *ceteris paribus* makes various other world cuisines (e.g., Mexican, Finnish) somewhat less likely than not.
On the other hand, you might want to revise this last consideration if you’ve read “A Vegetarian Philosophy,” at the end of which he gives a recipe for Dal, an Indian dish. This suggests, if weakly, that he might have more international tastes.

Lastly (or is it?), the meal in question is breakfast, and people typically confine certain foods to specific meals. For this reason, tofu sausages are a fairly good bet relative to others, while onion soup is a fairly bad one. We could go on, if we wished...

The point is that if you cared enough, you could probably narrow Singer’s breakfast that day down to a sizeable, but not endless, list of possibilities, each weighted by its own likelihood. A probability distribution over Platonic breakfast-space, if you will. You may not be able to pick one specific food and say “He ate this!”, but you are *far* from wholly ignorant — you’ll know the best candidates. And this generalizes to almost all sensible propositions. Try it — it’s actually a rather fun exercise!
Of course, it’s still reasonable to say “I don’t know” as a quick gloss of the actual truth: “I have no special information on this question that you do not.” However, the problem with *thinking* “I don’t know” in the sense of full ignorance is that it allows you — intentionally or not — to sweep all your background knowledge under the rug and pretend to yourself that some question is perfectly uncertain. Background knowledge should always be the first thing to come to mind when considering a truth question. This helps you avoid mistakes like the base rate fallacy (and, more generally, fallacies in which you ignore your own priors), and allows for good decision-making under uncertainty.
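To see concretely how ignoring the base rate misleads, here is a minimal sketch with made-up numbers (a 1-in-1000 condition and a test with a 5% false-positive rate; both figures are illustrative assumptions, not from the post):

```python
def posterior(prior, p_true_pos, p_false_pos):
    """P(condition | positive test), by Bayes' theorem."""
    p_pos = prior * p_true_pos + (1 - prior) * p_false_pos
    return prior * p_true_pos / p_pos

# A 1-in-1000 condition, a perfectly sensitive test, a 5% false-positive rate:
print(round(posterior(0.001, 1.0, 0.05), 3))  # 0.02 — not the 0.95 many people guess
```

The prior (the base rate) dominates the answer; forgetting it is exactly the mistake described above.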
However, if humans wish to make a habit of thinking like this, it is often much more useful to forget about *probabilities* per se and use the mathematically equivalent concept of *odds*.
Let’s have a quick refresher on what “odds” are. We all know what a probability is (or at least, we’re familiar with the term!). Odds can be seen as ratios of probabilities. Just as we use P(A) for the “probability of A,” we may talk about O(A), the “odds of A” (where A is some apparently sensible proposition).

In terms of probabilities, O(A) = P(A)/P(~A). So, for example, if there is a 2/3 probability of rain tomorrow, then O(rain) = (2/3)/(1/3), which reduces to 2:1 (usually read “two to one in favour”). The “:” is basically just a division sign, so O(rain) can be stated as “2 to 1” or simply as “2.” Although odds can be expressed as ratios of probabilities, they are best understood on their own terms altogether. In this case, “odds of 2 to 1 in favour of rain tomorrow” means something like “days like this are followed by twice as many rainy days as non-rainy days, to the best of my knowledge.”
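The conversion in both directions is a one-liner each way; a quick sketch:

```python
def prob_to_odds(p):
    """O(A) = P(A) / P(~A): odds in favour of A."""
    return p / (1 - p)

def odds_to_prob(o):
    """Inverse map: P(A) = O(A) / (1 + O(A))."""
    return o / (1 + o)

print(round(prob_to_odds(2 / 3), 6))  # 2.0 — "two to one in favour"
print(round(odds_to_prob(2), 6))      # 0.666667
```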

Odds are even more familiar from the racetrack, where a bookie might give “10 to 1 on Longshot, to win.” What this means is that if the bookie is selling stakes for $5 each, then a single $5 stake will get you (10+1) × $5 = $55 if you win (i.e., a gain of $50 plus your $5 stake back), while a loss will simply lose you your $5 stake. (Of course, in order to make money, the bookie must think that the *real* odds on Longshot are even longer than 10 to 1.)
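The payout arithmetic can be sketched the same way (stake and odds as in the example above):

```python
def total_return(stake, odds_to_one):
    """Total returned on a winning bet at 'odds_to_one to 1': profit plus stake back."""
    return stake * (odds_to_one + 1)

print(total_return(5, 10))  # 55: $50 profit plus the $5 stake
```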
I advocate using odds rather than probabilities to quantify your epistemic states on *all sensible propositions*, for two main reasons:

*(1) Odds have the appropriate mental associations.*

Odds are associated in our minds with betting, which is an earthy activity in which irrationality might actually lose you your shirt; whereas probabilities are abstract and academic, probably associated with mathematics and statistics courses, and with Spock from Star Trek. The latter being the case, either you don’t get statistics at all (and the word “probability” just brings up memories of Spock being cold and emotionless), or you learned about probability in the context of wildly overspecified textbook problems, in which you had way more information handed to you than humans typically have in real-world situations.

By contrast, thinking in terms of odds and the racetrack forces you to let belief constrain anticipation — if you say you are 98% sure that Obama will win in 2012, that sounds to me suspiciously like “I really, really hope he’ll win,” whereas “5 to 1 in favour” leads to the obvious question: “Care to make it interesting?” Suddenly your wishful thinking needs to take a back seat to whether you can afford to lose this bet. (I think the advantages of this mode of thinking at least partly carry over, even if you don’t *actually* bet any money.)
Moreover, probabilities sound too precise, as though they have to be calculated rigorously or not at all. Stating a 95% probability makes me ask myself (and others ask me) “Why not 96% or 94%?” By contrast, “5 to 1” seems more acceptable as a tentative verbalization of a level of certainty, the arguments for which might not be readily quantifiable.

*(2) Odds map epistemic states to numbers in a way that makes sense.*

Alice the juror believes that Casey Anthony is guilty, with probability 90%. Bob the juror also believes she is guilty, with probability 99%. They seem to pretty much agree with each other, and yet...

If we switch over to odds, we find that Alice gives odds of 9:1 in favour of guilt, while Bob gives 99:1. This is more than an order-of-magnitude difference! Alice is actually *substantially* less convinced than Bob; they should still be arguing! Alice still entertains reasonable doubt — at this point, she should probably vote to acquit.
And tellingly, when Eve mentions that she is 100% certain of the defendant’s guilt, a quick conversion shows that she gives odds of 100:0, aka “infinity.” This means, if taken literally (which we should not actually do), that Eve should be willing to take a bet in which being proven right earns her a penny, while being proven wrong earns her unending torture. The fact that odds explode as mathematical objects when they try to map absolute certainty is a nice feature probabilities don’t have.***
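The three jurors' conversions can be run mechanically; a sketch, with the infinity special case mirroring Eve:

```python
def prob_to_odds(p):
    # Certainty (p == 1) maps to infinite odds — the "explosion" described above.
    return float("inf") if p == 1 else p / (1 - p)

for juror, p in [("Alice", 0.90), ("Bob", 0.99), ("Eve", 1.00)]:
    print(f"{juror}: {prob_to_odds(p):g}:1 in favour of guilt")
```

Probabilities 0.90 and 0.99 sit close together on the unit interval; as odds (9 vs. 99) the gap is obvious.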

In summary:

- Quantifying uncertainty about all sensible questions is a crucial cognitive tool, both for eliminating all-or-nothing thinking and for reminding us to *always* use our substantial background knowledge.
- Odds are more useful than probabilities for this purpose, because they have: more appropriate mental associations for most humans; good mathematical properties showing the folly of extreme cases (perfect certainty); and an intuitive relation to frequency that humans readily understand. Also, talking in odds will make you sound badass.

________

* “But what about questions like 1+1=2?” you ask? Remember, probability has to reference the fact that it is calculated in a fallible human mind. Maybe “1+1=2” is 100% correct as mathematics (I think it is), but there is still a chance that *I* can mistakenly think 1+1=2 (epistemology) — for example, because aliens are messing with my brain. So I have to assign a probability slightly less than 100% to it.
** Also, it is often a point of contention what sorts of propositions “probability” can be meaningfully applied to. For example, does it make sense to speak of probabilities where straightforward empirical evidence is lacking (e.g., “the probability that immaterial souls exist”)? Without wishing to get into this issue too deeply, I hold that this use of the word *does* make sense (provided *any* discourse about the existence or nonexistence of souls makes sense), since if we can discuss how likely souls are *at all*, we should be able to quantify our uncertainty in the same manner as for other questions.
*** If you use logarithms, you can get even nicer mathematical properties, but you lose all the intuitiveness.
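To make the footnote concrete, here is a sketch of one common convention (following Jaynes): log-odds in decibels, 10 times the base-10 log of the odds, which turns Bayesian updates into simple addition:

```python
import math

def log_odds_db(p):
    """Log-odds in decibels: 10 * log10(O(A)). Evidence adds rather than multiplies."""
    return 10 * math.log10(p / (1 - p))

print(round(log_odds_db(0.5), 1))    # 0.0 dB: even odds
print(round(log_odds_db(2 / 3), 1))  # ~3.0 dB: 2-to-1 in favour
```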

Thanks for the very nice post (and yay Bayesians!). Your second reason for using odds was surprisingly persuasive, although I've never found odds to be intuitive. At the least I might have to start supplementing with odds.


Very nice intro to odds. I use them all the time in my work, but it is interesting to note that they are in general use and readily understood in Anglo-Saxon countries but not so much in Latin countries: try speaking of odds in Italy, Spain or Brazil and you will probably not be as well understood as in Australia or the US. Brazilian or Italian racetracks use them of course, but they are not much used outside the domain of betting.

My main concern about Bayesian reasoning, however useful it is in practice, is the wide room it leaves for subjective prejudice. One usually tends to inflate the odds in favor of one's preferred outcomes, and this has been experimentally confirmed in a variety of experimental economics settings (see the works of Vernon Smith, Daniel Kahneman or Gerd Gigerenzer for examples).

My other remark is that odds refer to a PLURALITY of events, not to a single one. The odds that the next coin I toss will be heads (instead of tails) are 1:1, and its probability 0.5, but in fact the next coin will be exactly heads or exactly tails, with no room for any intermediate result. Even if the odds are 99 to 1 against my horse winning, in fact it may win next time (as it will do one out of every 100 times, on average). More importantly: no conceivable outcome on a particular occasion (heads or tails, win or lose) can disprove my estimation of the odds. Only making the attempt many times, and observing the frequencies, may tell me whether my assessment of the odds (or probabilities) was right; and thus we are back to frequentism, I'm afraid, as an objective basis for probability or odds assessment, and for generating empirically refutable statements about them.

Ian,

Re: '“But what about questions like 1+1=2?” you ask? Remember, probability has to reference the fact that it is calculated in a fallible human mind. Maybe “1+1=2” is 100% correct as mathematics (I think it is), but there is still a chance that I can mistakenly think 1+1=2 (epistemology) — for example, because aliens are messing with my brain. So I have to assign a probability slightly less than 100% to it'

In standard probability theory, as I am sure you know, necessarily true propositions (e.g. logical and mathematical truths and tautologies) are assigned 1 on the normalized probability scale. Conversely, necessarily false propositions are assigned a 0.

Now, you advise that one ought to assign necessarily true propositions something lower than 1, say, .99999, or something which approaches 1, because it is logically possible, though wildly improbable, that aliens, mad scientists, or evil demons, whatever, are causing us to be massively deceived into believing 1 + 1 = 2 is true when it is not. So, there is a logically possible world in which aliens are massively deceiving us.

But if 1 + 1 = 2 is a mathematical truth (and it is), then it is necessarily true. If it is necessarily true, then it is true in all possible worlds. If the world in which aliens massively deceive us is a possible world, then in that world it is true that 1 + 1 = 2. The world in which we are massively deceived by aliens is logically possible. Therefore, it follows that in that world 1 + 1 = 2 is true. So, even if *we are being massively deceived by aliens*, we must assign necessary truths a 1.

Of course one could fret that one has made an error in a rather long and complicated proof (e.g. Andrew Wiles' proof of Fermat's Last Theorem) and thus that the proof is no proof at all, in which case the mathematical proposition would not be a necessary truth. But this event really need not concern our assignment of degrees of belief to logical and mathematical truths and tautologies, and it nevertheless presents a different worry altogether than that which you presented.

Moreover, and more generally, all probability assignments must be constrained in light of innumerable logically possible alternatives. Take, for instance, the toss of a fair, six-sided die. The sample space of the experiment is {1, 2, 3, 4, 5, 6}. The probability of observing any one event is 1/6 only within the context of a ceteris paribus clause, since it is logically possible that the die could dematerialize, land on the edge, be transported away by aliens, turn into a marble, etc., etc. So, if you are not happy with the argument above (though, on pain of contradiction, you should be), you can assign 1 to the P(1 + 1 = 2) in light of a ceteris paribus clause. It seems to me you *must* do this when you explicate your odds approach to credences anyways, so why not do it here?

Eamon,

odds are not about necessarily true or necessarily propositions, and (also importantly though less obvious) they are not criteria for adjudicating truth either. They are just rules of decision (underline "decision") when facing uncertain (underline "uncertain") outcomes.

Sorry, my comment to Eamon omitted one word. It should begin with:

"odds are not about necessarily true or necessarily false propositions". I had omitted the word "false".

Hector,

I did not mention odds in my reply to Ian, so I wonder what engendered your comment. My comment concerned Ian's contention that one ought not to assign 1 to various mathematical and logical truths.

Eamon, it was the whole subject of this post, odds being one of the mathematical ways of representing probabilities or quantitative chances. Your remarks about necessary truths or necessary falsities are off topic. Ian's fancy imaginings about aliens messing with his brain we may discard for the moment. Besides, even if a statement happens to be totally true, we may not know it is. Some mathematical truths (e.g. Fermat's last theorem) remained conjectural for centuries until proved, and some still are there, as mere conjectures, even if they ultimately MUST be either true or false. As far as we are concerned, they are uncertain.

But odds are not about judging the truth of a logical or mathematical statement, but about making decisions in the face of uncertain situations, or at most for attributing a degree of probability or likelihood to a factual proposition that may or may not turn out to be true.

Hector,

I apologize. I *did* mention odds. But the mention was uncontroversial. Odds are constrained by the axioms of probability and the odds that a horse wins a race are also made within the context of ceteris paribus clauses as well. Namely, that the horses do not dematerialize, the horses are not abducted by aliens, etc.

Hector,

My initial comment was not at all off topic. It addressed an explicit assertion of Ian's. As for the decision theoretic utility of odds, I could care less at the moment. Odds are employed by some subjective probabilists in order to measure the degree of belief in the truth of propositions or the occurrence of events. So, in *that* sense my initial comment is germane.

Ian,

P.S. When I have more time, I will give you some reasons why subjectivists should not employ odds to measure personal degrees of belief despite their prima facie utility to that end.

OK, Eamon, in regard to that concept of probability as a "degree of belief". I do not buy into that meaning of the word, except for use in psychological studies about beliefs. I can conceive of measuring degrees of beliefs and then analyze the causes and effects of different degrees of belief, but I would not dream of calling a degree of subjective belief a "probability" that something occurs, or the "odds" of it happening, since it does not refer to things that may or may not objectively happen, but to states of mind. Those states of mind may cause decisions to be made, and actions carried out, of course.

We should be careful not to pretend we have good evidence for what the probabilities/odds of things are when we don't. Things that are simple or that we have studied considerably and are well quantified are amenable to being treated as you suggest but this leaves a lot out.

ReplyDeleteFor example, consider whether person P will do action A at time T. Now, if this is broad enough its not so difficult. Will John eat food today. If we know very much about John we can probably give a decent ball park odds for this. But venture out into more complex human phenomena and/or into people/things we don't know very well, like John's friend Jim who we've never met or a route I've not traveled before and our ignorance becomes so vast that if we try to assign probability/odds to these things we are simply pulling a number of out thin air.

So Ian, are you suggesting we all familiarize ourselves with actuarial tables and make daily life decisions according to them?


If, as I believe, we do most of our thinking on the less than fully conscious level, and do so by use of our own biological forms of predictive logic, we can't help but make predictions on an algorithmic scale of possible to probable rather than on an exponential scale of odds that go from simply possible to more and much more possible. We might make accurate yes or no decisions on a long term basis using odds, but for the short term, we need a better way to decide when possible becomes actionably probable.

What exactly are your odds that the Goldbach conjecture is true? (I admit up to four decimals).

Jesus,

Tough question. If one rejects the law of excluded middle when it comes to mathematical propositions, I suspect one cannot provide a cogent answer.

However, if the question were rephrased a bit ('What exactly are the odds that a constructive proof for Goldbach's conjecture will be given within a specified time period?'), then perhaps we can say something like 20:1 against. I am willing to bet $20 on a return of $1 that no proof will be on offer for Goldbach's conjecture within the next five years.

"...of European ancestry, which ceteris paribus makes various other world cuisines (e.g., Mexican, Finnish) somewhat less likely than not."

My background knowledge says that Finland is part of Europe.

However, as an example it otherwise holds true -- they do not usually consume British- or French-type breakfasts.

I submit this snark I recently read on the blog of an investment adviser by the name of John Hussman:

A Bayesian is someone who, vaguely expecting a horse, and glimpsing the tail of a donkey, concludes he has probably seen a mule.

More seriously, I deeply distrust any argumentation that essentially goes "we should always use this approach, and base all our reasoning on it". That never works out, not for Objectivists, not for Positivists, not for Solipsists, and thus I conclude Bayesianally that it is most probable to not work out for the idea of attaching odds to everything either. (What are the odds that it is useful to attach odds to everything?)

Ian,

Betting odds directly measure the ratio at which money changes hands in a bet and, at best, only indirectly measure degrees of belief. Many factors associated with betting odds restrict their utility in indirectly measuring uncertainty.

Epistemic agents may or may not have moral or religious qualms about betting. Epistemic agents are likely to possess differing attitudes toward risk. Given diminishing and increasing marginal utility of money, wealthier agents are apt to have a low aversion to risk and less wealthy agents are apt to have a high aversion to risk, resulting in the tendency of both groups to accept odds which overvalue and undervalue their credences in the events or propositions under consideration, respectively.

Considerations of risk are inextricably tied to matters of expected utility which lead to difficulties like the St. Petersburg paradox. In light of such difficulties, I would advise to drop attempts to cast talk of measuring uncertainty in terms of betting odds entirely.

Well, sometimes 1 + 1 = 10.

You know, there are 10 types of people in the world: those who know binary and those who don't. :-)

Anyway, to reinforce one thing Hector said above: I'm Brazilian, and while I do know, intellectually, what odds are, they have never been very intuitive to me. I got better with time, though.

Thanks to all for the comments! FYI, sometimes work & other commitments leave me less time to answer them than I would like, so things will sometimes be a bit... laggy.

@Hector: Alas, I think you're right about bias appearing in people's assessments of subjective probabilities. However, I like the alternatives even less.

As to the Bayesian/Frequentist jihad, I'm planning to write something about that in the fullness of time, so I will refrain (with some difficulty) from arguing the point now. :)

@Eamon: I had trouble following your reasoning on possible worlds and necessary truths, but I will take another run at it tomorrow. In practice I do think it's a good idea to have an implied ceteris paribus clause ("given gross model accuracy" or something) and work from there so that one does not have to mention the aliens too much. Saying 100% still gives me the willies, though.

As to your reasons for not using odds as a measure of degrees of belief, I originally had a paragraph on risk aversion, but decided the post was already too long. My approach is this: in doing a thought experiment involving betting (or, for that matter, actual betting), consider what amount of money would ruin you if you lost it, then make the "stake" maybe a hundredth part of that amount. Something significant but not ruinous.

@t-b: I definitely would never advocate assigning a probability to a statement without knowing at least roughly what the statement means and what its referents are. As for pulling a number out of thin air, it can happen that we have essentially zero knowledge on a question (ignorance prior). The point of my Peter Singer example is that this is extremely rare, however. Usually, if you even know what a proposition means, you've already got enough background knowledge to start seeing lumps in the distribution.

@Thameron: I know of a few people for whom that would be a significant improvement! ;)

@Baron: What do you mean by an algorithmic scale here?

@AlexSL: You are probably right - my trouble is that when I try to pin down exactly where odds are inapplicable (beyond the caveats above), I fail to do so convincingly. However, take all of this with a grain of salt. It is my new hammer, and boy does everything look like a nail.

"@Baron: What do you mean by an algorithmic scale here?"

That would be the strategic process that set up the optional choices available for whatever behavioral responses were to be "probably" involved.

>But if 1 + 1 = 2 is a mathematical truth (and it is), then it is necessarily true. If it is necessarily true, then it is true in all possible worlds. If the world in which aliens massively deceive us is a possible world, then in that world it is true that 1 + 1 = 2. The world in which we are massively deceived by aliens is logically possible. Therefore, it follows that in that world 1 + 1 = 2 is true. So, even if *we are being massively deceived by aliens*, we must assign necessary truths a 1.<

I am not looking to engage in another of those lengthy exchanges I've had recently, but I'll overcome my better judgment to offer my thoughts. Take them for what they're worth; perhaps Ian might expound on the issue better.

Eamon, I do not see an argument in your posts that a proposition that has a likelihood of 1 under a set of axioms, or even multiple sets of axioms, implies the axiom set, or at least the set of axiom sets, itself is "necessarily true," which (it seems to me) is what you would need to show to substantiate the first sentence I've quoted, since otherwise epistemic doubt that attaches to the axioms would then attach to propositions that the axioms attribute a probability of 1 to.

Additionally, I do not see how your implicit citation of the modal logics helps your argument. Possible-worlds semantics is just an interpretation of the formal operations the logics perform. It doesn't tell us that propositions it assigns the "necessarily true" value to are necessarily true independently of the logic's own epistemological status.

But how amazing it would be if someone managed to rebut the millennia-old problem of epistemic skepticism here in the comments section! Even epistemologists aren't so confident of their work. Perhaps it might help to persuade me if you were to show more expressly what the contradiction is that your first comment's last paragraph mentions, and discuss why the contradiction is of epistemological rather than just logical significance?

>Betting odds directly measure the ratio at which money changes hands in a bet and, at best, only indirectly measure degrees of belief. Many factors associated with betting odds restrict their utility in indirectly measuring uncertainty.<

"Betting" is just an interpretation; Ian's other interpretation, "'odds of 2 to 1 in favour of rain tomorrow' means something like 'days like this are followed by twice as many rainy days as non-rainy days, to the best of my knowledge'" seems less behavioral. Also I'm not sure why you cast your discussion in terms of the epistemic agent. It's been quite some time since I've had this material at all, but as I recall the modal epistemic agent knows all the logical consequences of her or his belief, and in any case there are many idealized assumptions for epistemic agents. Although Ian's reply here casts the issue more in terms of utility, so perhaps behavioral considerations are important for Ian's position.

What are the odds that, if you pick a natural number randomly, it is a multiple of 3? One third? But there are the SAME amount of natural numbers and of multiples of 3. Your guess about the odds depends on your HYPOTHESIS about how the numbers are ordered and on the statistical process by which you pick one instead of another. (For example, if you assume that the natural numbers are given in their 'natural' order (1, 2, 3...), then the odds seem to be 1/3; but if you assume that ALL the numbers are mixed in an infinite lottery drum and THEN you pick one, then the odds are 1/2).

The question, hence, is that in many cases we don't have ANY sensible idea about the 'underlying statistical process' of our pickings, so we don't have ANY reason to opt for some odds instead of others.
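The dependence on ordering can be checked numerically; a sketch, where `alternating` is one illustrative reordering of the naturals (multiples of 3 interleaved with non-multiples, so every natural still appears exactly once):

```python
def density_up_to(n, predicate):
    """Fraction of 1..n satisfying predicate (a prefix of the natural density)."""
    return sum(predicate(k) for k in range(1, n + 1)) / n

# Under the usual ordering 1, 2, 3, ... multiples of 3 have density 1/3:
print(round(density_up_to(100000, lambda k: k % 3 == 0), 3))  # 0.333

def alternating(n):
    """First n terms of a reordering of the naturals that alternates
    multiples of 3 with non-multiples; every natural appears eventually."""
    seq, mult, other = [], 3, 1
    for i in range(n):
        if i % 2 == 0:
            seq.append(mult)
            mult += 3
        else:
            seq.append(other)
            other += 1
            if other % 3 == 0:
                other += 1
    return seq

# Under this reordering, the very same set has density 1/2:
seq = alternating(100000)
print(sum(k % 3 == 0 for k in seq) / len(seq))  # 0.5
```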

Timothy,

Thank you for your reply. Don't fret, I really do not care to carry on an extended exchange either. I will simply offer the following and leave matters alone.

Re: 'But how amazing it would be if someone managed to rebut the millennia-old problem of epistemic skepticism here in the comments section! Even epistemologists aren't so confident of their work.'

I did not offer a refutation of external-world skepticism. I offered an argument as to why one ought to assign 1 to logical and mathematical propositions in the face of external-world skepticism. I could have provided more detail in the argument, sure, but as it stands it is sufficient. (Though I did not, I could have gone further and shown that insofar as Ian refuses to assign logical and mathematical truths 1 [and logical and mathematical falsehoods 0], he finds himself employing a probability calculus which is a straightforward extension of a deductive logic whilst denying that the laws of that deductive logic are true in all possible worlds.)

Re: Possible worlds

Logicians employ possible worlds semantics ubiquitously in order to explicate, amongst other modal notions, a desired concept of logical necessity. Moreover, Ian initiated possible worlds talk when he mentioned skeptical alternatives to our being certain that 1 + 1 = 2. Anyway, the issue is simple: Ian grants that 1 + 1 = 2 is a mathematical truth, and ipso facto a necessary truth, but yet advises not to grant 1 to P (1 + 1 = 2) because 1 + 1 = 2 *might* not be true in a possible world in which aliens massively deceive us.

In a nutshell, he grants that 1 + 1 = 2 is a necessary truth (i.e. a statement true in all possible worlds) but then admits a possible world in which 1 + 1 = 2 *may not be true*.

As for the rest of your comment, it is completely off the mark, particularly the bit about epistemic agents.

P.S. Ian, I suspect your 'hundredth part stake' amendment would not work. Perhaps you should offer up something on this matter for consideration in future?

1+1=2 is at best a tautology. It's a descriptive truth, but only if we have agreed to accept it as such.

Except that in reality it can only describe itself with complete accuracy and otherwise is always to some degree an approximation of whatever it's being used to describe outside of the descriptive system we've agreed to use.

>I offered an argument as to why one ought to assign 1 to logical and mathematical propositions in the face of external-world skepticism.<

Just briefly, and sorry to reiterate, but I still don't see how so; Ian's comment drew a distinction between mathematical probability and epistemic degrees of belief, which we'd model using probability. In particular, your comment "Ian grants that 1 + 1 = 2 is a mathematical truth, and ipso facto a necessary truth (i.e. a statement true in all possible worlds)" seems to be what's at issue. If we suspend skepticism to explore an axiom system, then we pronounce things like "this proposition has prob=1" under the axioms, but when we use such a system to model actual beliefs, it doesn't seem we can ascribe a probability of 1 to them. Perhaps the issue is that where I'm reading Ian to be discussing epistemic skepticism (as he says epistemology), you're not? I'd infer Ian is willing to assign probability of 1 to many propositions whilst working in formal contexts ("100% correct as mathematics"), but he's saying we shouldn't when discussing our degrees of belief.

> when Eve mentions that she is 100% certain of the defendant’s guilt, a quick conversion shows that she gives odds of 100:0, aka “infinity.” ... The fact that odds explode as mathematical objects when they try to map absolute certainty is a nice feature probabilities don’t have. <

But odds don't explode when you go the other direction. When the probability of an event is zero, the odds are zero. And when the odds of event A are infinite, the odds of event ~A are zero.
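For concreteness, the conversion being discussed can be sketched in a few lines of Python (the function names here are my own, purely for illustration):

```python
# odds = p / (1 - p), and back again: p = odds / (1 + odds).
# Odds behave fine at the p = 0 end but blow up as p -> 1.

def to_odds(p):
    """Convert a probability to odds in favor; explodes as p approaches 1."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds in favor back to a probability."""
    return odds / (1 + odds)

print(to_odds(0.0))   # 0.0 -- no explosion at this end
print(to_odds(0.5))   # 1.0, i.e. even odds
print(to_prob(0.0))   # 0.0 -- odds of 0 map back to probability 0
# to_odds(1.0) would divide by zero: absolute certainty has no finite odds.
```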

I recently examined somewhat similar issues from a slightly different angle.

Part (most) of my preference for lurking is just a self-discipline issue in that I find it hard to resist spending excessive amounts of time reviewing the matter in my head. By way of evidence, and at the risk of rambling or making myself look even more foolish, here's this post.

But I was puzzling over what you (Eamon) might be referring to when you mentioned a contradiction, and speculating, it might be something like:

-If there are no propositions that have a probability of 1, then (by the possible-worlds interpretation) there does not exist any proposition that is true in all possible worlds. Then the proposition, "there does not exist any proposition that is true in all possible worlds" is not true-in-all-possible-worlds, so there is a world where it is false. Then there is a world where there exists a proposition that is true in all possible worlds, which provides us a contradiction.

The problem I see is that such a derivation requires logical inference, and doubt that attaches to logic's axioms would mean that there is some (however infinitesimal) chance that the contradiction does not obtain. So within the formalism, as it were, you'd say there are propositions true in all possible worlds, but when assessing your degrees of belief, you wouldn't, as aliens might be utterly deceiving you about what logic is. (Which might be a convoluted way of saying I don't see why it would be of concern that he might deny that the laws of logic (as we know them) are true in all possible worlds, since the propositions that logic assumes obviously wouldn't escape his no-prob-1 view.)

I know of a few people for whom that would be a significant improvement! ;)

I don't doubt it, but I'd say the odds of them adopting that practice are 500 to 1 (give or take).

Well, other people have already commented, but I will go ahead anyway.

I would say that statistics is about what we don't know. Maybe I am being a little bit skeptical and/or negative here. Or maybe it is just a matter of semantics.

And as for the example of Peter Singer's breakfast, it looks to me like wishful thinking. There are a lot of assumptions going on which are not checked self-consistently. Or at least, so it seems to me. I agree it is a lot of fun, but to me it is just fun and nothing else.

Moreover, if we could construct all the possible cases it would be fine, but in most cases this is not possible and the interpretation becomes difficult, not to mention that we are making the problem bigger.

Again, maybe I am too strict, but still: if all of us start to make such "subjective" computations, will we be able to share the results?

And in the end, we have to decide, and this is not probabilistic at all.

Finally, sorry if this seems too random. I hope that adds something to the other contributions.

Rarely do we have enough quantitative information to state any probabilities (or odds, if you prefer) that have any connection to reality. Rather than try and fail miserably, why not simply use all-natural *qualitative* probabilistic reasoning? Something seems "more likely" than something else now, but would seem "less likely" if we were to learn that something else is the case; we are "very sure" or "indifferent" or "very uncertain," etc.

I've been thinking about it still more, and I think I would attach a degree of belief of 1 to propositions like "existence exists," which would be true no matter how deceived you are.

Here's why we should never use odds in making decisions: if a lottery has ten times higher odds of winning compared to another, how much more money should you spend on the first lottery?

The answer is... it depends. If the first lottery's odds of winning are 9 to 1 (pr = 0.1) and the second's 99 to 1 (pr = 0.01), you should spend ten times more. But if the first lottery's odds are 1 to 99 (pr = 0.99) and the second's 1 to 9 (pr = 0.9), you should only spend 1.1 times more. Comparing the odds of two choices doesn't actually tell you anything.

If I tell you the chances of winning one lottery over another is ten times better, you know right away you should spend ten times more. The human brain can compare probabilities but can't compare odds. If you're going to be making decisions, and what good is a probability if you have no other to compare it to, you're better off sticking with probabilities.

This is a problem in epidemiology where we use a lot of ratios. People's intuition is to interpret ratios of odds as ratios of probabilities because ratios of odds are entirely foreign to us. This is fine when dealing with tiny odds but when dealing with high odds (such as your example) the difference can be huge (10 compared to 1.1) and can lead to radically different interpretations. In fact, some people preferentially report odds because their effect sizes just look bigger.

Also, I think the benefit you're attributing to odds can actually be attributed to expressing either odds or probabilities as fractions. In fact, totally without evidence to back this up, I'd say most people would say 1 chance in 3 is more intuitive than saying an odds of 2 to 1.
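The lottery arithmetic above can be checked with a short sketch (the helper function is mine, for illustration; it reads odds stated as a-to-b against winning):

```python
# Expected-value reasoning (equal prizes) cares about ratios of
# probabilities, not ratios of odds, which diverge at high probabilities.

def prob(against, for_):
    """Probability of winning given odds of `against` to `for_` against."""
    return for_ / (against + for_)

# First pair: 9-to-1 against (p = 0.1) vs 99-to-1 against (p = 0.01).
p1, p2 = prob(9, 1), prob(99, 1)
print(round(p1 / p2, 6))   # 10.0 -> spend ten times more

# Second pair: 1-to-99 against (p = 0.99) vs 1-to-9 against (p = 0.9).
p3, p4 = prob(1, 99), prob(1, 9)
print(round(p3 / p4, 6))   # 1.1 -> only 1.1 times more, though the
                           # ratio of odds is again tenfold
```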

Jeremy wrote:

> If the first lottery's odds of winning are 9 to 1 (pr=0.1) and the second's 99 to 1 (pr=0.01) you should spend ten times more. But if the first lottery's odds are 1 to 99 (pr=0.99) and second's 1 to 9 (pr=0.9) you should only spend 1.1 times more. <

I think you need to rework your example. First, you didn't specify the value of the prize in each of the lotteries; usually lotteries with higher chances of winning have smaller prizes. Second, lotteries are generally losing propositions, so it's wisest not to spend anything! Finally, and related to the last point, it's unusual to find a lottery where the odds of winning are greater than 1 (i.e. greater than a 50% chance of winning). What would the value of the prize be in such a case?

Sorry, I didn't make it explicit that the prizes were all equal. And yes, lotteries are all losing propositions. My example was to demonstrate the flaws of odds, not comment on lotteries. Point is, we aren't wired to compare odds, we're wired to compare probabilities.

@Jeremy: You are definitely right about expected utility calculations, as in a lottery. That counterexample did not occur to me, but should have, and it does make odds less advantageous for such purposes.

However, the very thing that makes odds bad for expected utility calculations seems like a feature when one moves to the realm of propositional beliefs.

I think a lot of this business depends on how one mentally pictures odds vs. probabilities. I imagine a probability as a slice on a pie chart, which conduces to expected utility calcs for sure. But I imagine odds as the quantities on the two sides of a weigh scale, which does a nice job of capturing the diminishing marginal impact of extra information. In terms of expected utility, a 99% probability of winning may be only 1.1 times better than 90%. But in terms of collecting evidence, getting to 99% probability really is something like 11 times harder than getting to 90%, not 1.1 times.

Also, and this will come into play later, Bayes' theorem looks WAY nicer in odds form than the ugly probability form.
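A minimal sketch of that last point, with made-up numbers: in odds form a Bayesian update is a single multiplication by the likelihood ratio, whereas the probability form drags in the full normalization.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.

def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form: just multiply."""
    return prior_odds * likelihood_ratio

# Prior odds of 1:4 for a hypothesis H (i.e. P(H) = 0.2), and evidence E
# that is 8 times likelier under H than under ~H.
posterior_odds = update_odds(1 / 4, 8.0)
print(posterior_odds)   # 2.0, i.e. odds of 2:1 in favor

# The equivalent probability form needs the full normalization:
# P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
p = (8 * 0.2) / (8 * 0.2 + 1 * 0.8)
print(round(p, 3))      # 0.667, which matches odds of 2:1
```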

Love your practicality. I'm new to the world of stats and Bayesians and have struggled with the "conceptual opaqueness" you have referred to in other places. With "odds," I get the real-world nature of it, as opposed to just something theoretical. I'm giving 10 to 1 that I learned something important here.
