Probability Intuition

Last quarter I taught discrete math. One component of the class was to cover some basic probability theory. On one of the homeworks we asked the following two questions about random five card poker hands:

  • Given that the hand contains an ace, what is the probability that the hand contains another ace?
  • Given that the hand contains the ace of diamonds, what is the probability that the hand contains another ace?

Without doing any explicit calculations, which of the above probabilities do you think will be larger?

I find this problem interesting because while I can do the calculation and understand why it comes out the way it does, I was never able to give myself a good intuitive explanation beyond comparing this problem to a much simpler case with a similar behavior. Anyone have a good intuitive explanation for why one of the above probabilities is bigger than the other? (And yes I'm avoiding telling you the answer of which one is bigger: work it out for yourself! Or see the comments where I'm sure someone will leak the correct answer.)


Without calc: The first hand.

By Who Cares (not verified) on 02 Jan 2008 #permalink

Am I missing something?

Case 1 will be larger. The first ace *could* be the Ace of diamonds, in which case you have case 2. Or it *could* be one of the other aces, which means there are more ways of getting "yes" on case 1 than there are on case 2.

Of course, I can't say how much larger as that might need calculation ;)

Suppose we play the following game. I deal you a poker hand, you check whether it contains an ace. If not, we start over.

If the hand does contain an ace, I can decide to guess that there are multiple aces in your hand. If I'm correct, you give me some amount of money, if I'm wrong you win some smaller amount. (So, I don't *have to* make a guess!)

However, suppose you are forced to tell me whenever the ace of diamonds is in your poker hand. That certainly gives me more information: namely, I know you do not have a hand whose only ace is a non-diamond.

Since there is only a 1 in 4 chance that a 1-ace hand contains the ace of diamonds, but an at-least 50% chance that a multiple-ace hand contains the ace of diamonds, I would decide to make my guess only when you give me that extra information.

To be honest, my first guess was that both have the same probability, until I read the part asking whether anyone has a good explanation for why one of these is bigger.

By Who Cares (not verified) on 02 Jan 2008 #permalink

I agree with "Who Cares". The suit of the ace doesn't change the number of aces still available or the pool from which they come. Purely on intuition, I wager Case one and Case two have equal probability.

Apparently, my intuition disagrees with everyone else's (my comment above implies I vote for case 2).

Rupert: I think what you just said is that the probability of having a hand with the ace of diamonds is lower than the probability of having a hand with an ace of any suit. What I'm trying to ask about is the probability that, given one of those two events is true, you have at least one other ace in the hand.

Who Cares and Joe: The fact that the probabilities aren't the same really is surprising, isn't it!

Hmm, except that "my comment above" doesn't appear :-(

Because the second set has a more specific "given", there are a large number of one-ace hands that are a part of the first set, but not a part of the second set. There are also two-ace and three-ace hands that are excluded from the second set, but since these should be rarer than the one ace hands, I don't expect them to tip the balance.

Therefore the second set (with the ace of diamonds) should offer higher probability.

Still, subtleties of phrasing make a big difference in problems like this. For example, if you were to say "the top card in a deck is an ace. What is the chance that the next four cards contain an ace?", then it would offer the same probabilities as "the top card in a deck is an ace of diamonds. What is the chance that the next four cards contain an ace?"

The difference between my examples and yours is that mine analyzes a single card for the first ace, while yours allows any of the five to be the first ace. Tricky stuff!

By Spaulding (not verified) on 02 Jan 2008 #permalink

Steven: sorry somehow it got stuck in the junk comment folder. Not sure why. Must be biased against poker players.

Dave: You can delete my later duplicate comment, it seems fixed now!

My intuition tells me the second hand is more likely. My reasoning is that the ace of diamonds specifically would be more common in hands with multiple aces. Hmm... I guess that reasoning isn't particularly coherent. Now I am curious.

The ace of diamonds is contained in 1/4 of the one-ace hands, 1/2 of the two-ace hands, 3/4 of the three-ace hands, and all of the four-ace hands. So once we know that the hand contains the ace of diamonds, we have excluded 3/4 of the one-ace hands and we have excluded a smaller fraction of the multiple-ace hands. That makes a multiple-ace hand more likely than if we knew only that the hand contains at least one ace.

I tried it with the math, and it turns out that it's not too hard if you're willing to approximate. (spoiler alert)

We can define P(n) to be the probability of having exactly n aces. The probability of having exactly n aces AND having the ace of diamonds is P(n)*n/4. The probability in the first case is [P(2)+P(3)+P(4)]/[P(1)+P(2)+P(3)+P(4)]. If we assume P(n-1) is much greater than P(n), then we can approximate this to P(2)/P(1). Similar approximation of the second case leads to 2*P(2)/P(1). Therefore, the second case is more likely by about a factor of 2. My numerical calculations (not shown) agree with this assessment.
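For the curious, here is a minimal Python sketch of that numerical check (my own reconstruction using math.comb; the original calculations were not shown):

    from math import comb

    # P[n] = probability that a 5-card hand contains exactly n aces
    total = comb(52, 5)
    P = {n: comb(4, n) * comb(48, 5 - n) / total for n in range(5)}

    # Case 1: P(at least 2 aces | at least 1 ace)
    case1 = sum(P[n] for n in (2, 3, 4)) / sum(P[n] for n in (1, 2, 3, 4))

    # Case 2: P(at least 2 aces | hand contains the ace of diamonds).
    # A hand with exactly n aces holds the ace of diamonds with probability n/4.
    case2 = sum(P[n] * n / 4 for n in (2, 3, 4)) / sum(P[n] * n / 4 for n in (1, 2, 3, 4))

    print(case1, case2)  # about 0.122 and 0.221: close to a factor of 2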

Dave: I like that example a lot. Just to be sure that I've got it right, could you use that to talk about quantum peeking or measurement? That is, once you know more information, you then change the probability of what follows; a Texas hold 'em EPR.

Reminds me of the Monty Hall problem, although that one's easier.

Shouldn't the probability of case one just be P(2)+P(3)+P(4)? Why are you dividing there? Also, the assumption of P(n-1) being MUCH greater than P(n) raises some questions. Is the difference between P(n) and P(n-1) great enough to allow the approximation? I think you're in some murky water there.

can i take that last comment back? pls?

Intuition tells me that case 1 has greater probability simply since getting ANY ace is more likely than getting a SPECIFIC ace. But often in probability, intuition is flat out wrong.

I believe conditional probability applies here. P(B|A) = P(A intersect B)/P(A).

P(A) in the first case is 4/52 and 1/52 in the second.
P(A intersect B) is 4/52 * 3/51 in the first case and 1/52 * 3/51 in the second case.

So in case 1 P(B|A) = (4/52*3/51)/(4/52) = 3/51
And in case 2 P(B|A) = (1/52*3/51)/(1/52) = 3/51

If I'm remembering my probability correctly the odds of having the second ace are identical. However the odds of a 2 ace hand and the odds of an ace of diamonds plus another ace hand are not the same.

Reminds me of the Monty Hall problem, too...

OK, so we agree that it's more likely to get a second ace if you're told the hand contains the ace of diamonds specifically, yes?

I'm good on the simpler explanation. Say you're pulling two cards in a four card deck that consists only of the Ace of diamonds, the Ace of clubs, and two jokers. You can pull any of these combinations:
a) Ad Ac
b) Ad J1
c) Ad J2
d) Ac J1
e) Ac J2
f) J1 J2

If you're only told you have one ace, any of cases a through e might apply, and you have a 1/5 chance of having a second ace. If you're told you have the ace of diamonds, only cases a through c might apply, and you have a 1/3 chance of having a second ace.
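A tiny enumeration backs this up; here is a sketch in Python (my own check of the toy deck, using itertools):

    from itertools import combinations

    deck = ["Ad", "Ac", "J1", "J2"]
    hands = list(combinations(deck, 2))  # the six hands a) through f)

    with_any_ace = [h for h in hands if "Ad" in h or "Ac" in h]
    with_ad = [h for h in hands if "Ad" in h]

    # P(second ace | some ace) and P(second ace | ace of diamonds)
    print(sum(("Ad" in h) and ("Ac" in h) for h in with_any_ace) / len(with_any_ace))  # 1/5
    print(sum("Ac" in h for h in with_ad) / len(with_ad))  # 1/3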

This scales. When you say someone has one ace, of any suit, instead of just the ace of diamonds, you're allowing for proportionately more one-ace situations than you are for two-ace situations. As in the case above: AA can only happen one way, but AJ1 can happen two ways. By limiting yourself to Ad cases, you cut out more one-ace cases than two-ace cases.

My God, that's unclear...

By ThePolynomial (not verified) on 02 Jan 2008 #permalink

Let p be the probability of another ace in case 1 and q be the probability of another ace in case 2. q>p.

If I am told the hand contains an ace, the probability there is another ace is p. Now I ask the suit of the ace, knowing the answer will be one of four things. Regardless of the answer I get, I will raise the probability from p to q. The strange thing is that the probability of there being another ace does not get raised until I get a response, even though it doesn't matter which response I get.

By Matt Elliott (not verified) on 02 Jan 2008 #permalink

The first question is equivalent to: I hold an ace from a 52-card American deck, and I will fill a 5-card hand by drawing 4 more cards from the deck. What is the probability that I will draw at least one more ace?

If you accept that as valid, then the second question is equivalent to: I hold an ace of a certain suit from a 52-card American deck, and I will fill a 5-card hand by drawing 4 more cards from the deck. What is the probability that I will draw at least one more ace?

It doesn't matter if I tell you the suit or not. It doesn't matter if I know the suit. Whichever ace I hold, it is of a certain suit. This is implied in the first question and explicit in the second question. The card's rank and suit are given: they are both certainties, of unit probability. Therefore the sole card of that rank and suit no longer plays any part in the probabilities befalling the remaining 51 cards.

The two questions are equivalent and their answers must be identical.

One point that occurs to me: The "given" Ace of diamonds may not be the first ace -- that multiplies the probabilities of the multiple-ace cases.

By David Harmon (not verified) on 02 Jan 2008 #permalink

I've been thinking about this all day and I agree with CRM-114. I think this is a mistake that's being made: the "probability that a hand has two aces given that one is the ace of diamonds" and the "probability that a hand has two aces and one is the ace of diamonds" are two different questions. I think the probability is identical.

There are six 2-ace hands with all suits allowed. There are three 2-ace hands with a diamond ace. My intuition is that 1. is more likely. I don't see how you can increase the probability by specifying a suit. Let's say I'm playing against you and you tell me you have at least one ace. I then have to bet on whether you have another one. If you then tell me you have "the ace of ____" where ____ is whatever you have, it seems to me like all you're doing is reducing the number of possible 2-ace hands you could have and thereby reducing the chance of you having a 2-ace hand.

Ok, here's my go at this:

The total pool of possible hands is defined by the first condition: either it has A) an ace (any ace) or it has B) the ace of diamonds. 3/4th of the hands that have a single ace do not have the ace of diamonds, so the total potential hands is smaller if you know that it has the ace of diamonds.

Multiple-ace hands are rarer, but the ace of diamonds is in a larger fraction of them: 1/2 of the 2-ace hands, 3/4th of the 3-ace hands, and all of the 4-ace hands.

So, if you know it has the ace of diamonds, you disproportionately eliminated from consideration hands that don't have more than one ace (relative to those that have 2 or more), and so are more likely to have a hand that has another one.

BTW, a nifty problem, as my first reaction was the same as the others: what difference does it make?

Let's say I'm playing against you and you tell me you have at least one ace. I then have to bet on whether you have another one. If you then tell me you have "the ace of ____" where ____ is whatever you have, it seems to me like all you're doing is reducing the number of possible 2-ace hands you could have and thereby reducing the chance of you having a 2-ace hand.

You're reducing the number of possible two-ace hands by half...but aren't you also reducing the number of one-ace hands by 3/4? Perhaps I am confused.

By ThePolynomial (not verified) on 02 Jan 2008 #permalink

This seems to be the case with many discrete probability problems. It depends how you generate the sets.

Deal 5 cards and turn over the first one; it "happens" to be an ace. The chance of any of the other 4 being an ace is 1 - 48/51*47/50*46/49*45/48 (that is, 1 - P(no more aces)).
And the same thing: 5 cards, the first one shown "happens" to be the ace of diamonds; then we are in the same case as above.

however if the sets are generated thus

1) generate all poker hands
2) remove all poker hands with no ace of diamonds
3) calc number of 2 ace hands / ace of diamonds hands = p1

1) generate all poker hands
2) remove all poker hands with no ace
3) calc number of 2 ace hands / number of ace hands = p2

So we can now see that p1 (prob of 2 aces given the ace of diamonds) and p2 (prob of 2 aces given one ace) are not necessarily the same (a quick simulation sketch follows at the end of this comment).

Then the logic of ThePolynomial and miller holds

There is a host of similar questions that rely on how you form the sets. I feel in most cases the intuition is generally to pick the first kind of assumption, because that is how you would come across these problems in real life. Then when some math shows the other is correct, you feel you are missing something. In reality both are correct, depending on the set-generation assumption. More trained mathematicians may think more like the 2nd set, as that is either more interesting or just something they have become used to.
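To see the two generation assumptions side by side, here is a small Monte Carlo sketch (my own illustration; the deck encoding and trial count are arbitrary choices):

    import random

    deck = [(rank, suit) for rank in range(13) for suit in "CDHS"]  # rank 0 = ace

    saw_ace, saw_ace_d = [], []
    for _ in range(200_000):
        hand = random.sample(deck, 5)
        shown = random.choice(hand)          # turn over one card at random
        n_aces = sum(c[0] == 0 for c in hand)
        if shown[0] == 0:                    # it "happened" to be an ace
            saw_ace.append(n_aces >= 2)
            if shown == (0, "D"):            # it "happened" to be the ace of diamonds
                saw_ace_d.append(n_aces >= 2)

    # Under this "chance happened to show you the ace" reading, both
    # conditional probabilities come out near the same value (about 0.22),
    # whereas the set-based reading gives about 0.12 vs 0.22.
    print(sum(saw_ace) / len(saw_ace), sum(saw_ace_d) / len(saw_ace_d))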

KJC: I agree that they yield different answers, but how would you ever interpret "Given that the hand contains an ace" as "Given that the first card dealt to you is an ace?" (and similar for the ace of diamonds) I think the English language is strong enough in this case to exclude the latter setup. Or am I wrong?

Fair enough, but I think it is how most people see the problem. The problem still holds even if it isn't the first card. If I get dealt 5 cards, with the chance of 0 aces, and then I turn over any of those cards and it is the ace of diamonds, I don't gain any more info than if it were just an ace (info about the chance of more aces, anyway). So I don't think the sentences as written are free of ambiguity. It is "contains an ace," but is it "generation forces the ace" or "chance happened to give you the ace"? In practice, "the hand I just picked up off the table could have contained no aces, but one of the cards I glimpsed happens to be an ace" is, I think, how people see this played out in their heads. The language would be strong enough if you asked: of all poker hands containing at least 1 ace, how many contain at least 2 aces; and of all poker hands containing the ace of diamonds, how many contain at least 2 aces? Then it is unlikely people's intuition will go wrong. "Given it contains an ace" doesn't tell me how it was generated, IMHO. I think most trained mathematicians would understand the language in the way you describe; for us lesser-trained (or perhaps people who play too much poker) I am not so sure.

However, if your original question was: Given we are creating sets to ensure the situation described, then how can you intuitively see which is the bigger probability (i.e. you were not trying to get a way of understanding / explaining why it wasn't the same prob in either case), then perhaps my insights are pointless and (slightly) off topic.

Sorry for the comment spam. But is my longer comment to your question in moderation or did I mess up posting? (Just don't want to re-type it; it was a bit long (overlong?).)

KJC: Sorry the spam filter seems to be over eager. Probably the word "poker!" It should be posted now.

So is there a better way to phrase it? Should it be "Given that an honest observer tells you that the hand has at least one ace..." Or even more verbose "Given that an honest observer tells you that the hand has at least one ace, but doesn't tell you which of the cards are aces..."?

There are six 2-ace hands with all suits allowed. There are three 2-ace hands with a diamond ace. My intuition is that 1. is more likely.

I believe the question is asking what the probability is of having a second ace, not the probability of a 2 ace hand versus a 2 ace hand with one having a specified suit.

I still don't think this: "There are six 2-ace hands with all suits allowed. There are three 2-ace hands with a diamond ace." is what the problem is asking. If that's what it intends to ask, then it's worded wrong. Why should the probability be any different if you're told the suit of the ace? In other words, I think it only appears to be like the Monty Hall problem, where the seemingly useless information actually does matter.

Anyways, I assume the blogger knows the answer. When can we know?

It seems that the sequence in which the cards are dealt, and the knowledge of the cards that have actually been dealt are important for this question.

For the first situation, the sequence of the discrete probabilities for two aces (dealt one after another) might look like this:

Situation 1: 1/52 --> 1/51

You're told that there's an ace among the five cards dealt, but you aren't told that when the cards were dealt, it was the first one to show up among these cards. So for all intents and purposes, the chances of getting an ace of diamonds for the first card dealt would be 1/52; the chances of getting another ace would be 1/51.

The sequence of discrete probabilities for the second situation would look like this:

Situation 2: 1/1 --> 3/51

Here, you're basically told there's already an ace among the five cards that are dealt, so for all intents and purposes, you could say that the probability of getting an ace as the first card *was* 1/1. This would mean that there's a 3/51 chance that the second card dealt will be an ace. So if we just go by the probabilities of the 2nd sequence probabilities for aces dealt, it looks like there's a higher probability for the second situation.

By Tony Jeremiah (not verified) on 02 Jan 2008 #permalink

The reason this breaks intuition is in a subtlety that a few people are pointing out. The following cases are different.

(1) Before the deal, we decide that we will only accept hands with the Ace of Diamonds (others will be redealt). What is the chance that the hand has two Aces?

(2) Before the deal, we decide that we will only accept hands with one ace, and we will be told what the suit of that Ace is (e.g. Diamonds). What is the chance that the hand has two Aces?

The English phrasing in the original problem setup could be parsed either way.

Personally, Aces are special to me, and I assume that the hand was redealt until an Ace arrived. But the Ace of Diamonds isn't really a special Ace, so I figured that we just took whichever one came along.

Mory I'm confused: "two Aces" should be "two or more aces"? And I don't get your (2) at all... If the hand has "one ace" it cannot have "two Aces?" You mean "at least one ace"? And if there is more than one ace, I'm telling you the suit of which one exactly?

It seems to me that the question at least partially depends upon whether one takes a Bayesian approach or not, i.e. does knowledge of the ace of diamonds really condition our knowledge of the other aces? Quick, someone get Chris Fuchs (who is not only a Bayesian but also a Texan...)

So is there a better way to phrase it? Should it be "Given that an honest observer tells you that the hand has at least one ace..." Or even more verbose "Given that an honest observer tells you that the hand has at least one ace, but doesn't tell you which of the cards are aces..."?

That phrasing can also be misinterpreted. Did the observer pick a random card in the hand, which happened to be an ace, and tell you about it? Or did you ask the observer if there was at least one ace and he answered "yes"? These two cases give two different answers. I assume you want the latter case.

I think it is important to include some clarity about how the hand was generated. Like Mory says, we have to say up front, before we deal, which hands we are dealing with. It all hinges on whether, before the deal, there was a chance that I get 0 aces. I think that is a very common assumption. If you eliminate that possibility clearly then you have the right language (IMO). I don't think it is easy to clarify it, though, without getting too verbose. In normal language "Given..." clearly means to some people "It happened that..." and to others "We assure that...". Obviously there are lots of ways to specify it clearly, but they are too verbose. I cannot come up with a pithy way that I think is 100% clear.

take a look at the "principle of restricted choice" in card play, specifically in bridge.

Stupid English. Let me try and restate

"Two Aces" should be "Two or more Aces".

Case (2) restated:
(2) Before the deal, we decide that we will only accept hands with at least one Ace, and we will be told what the suit of the Ace on the left is (e.g. Diamonds). What is the chance that the hand has two or more Aces?

OK, I thought of an analogy that may help explain intuitively why there is a difference in the two cases. Instead of cards, imagine a pair of dice where one is red and the other is blue.
Case 1: Without looking at the dice what is the probability of rolling snake eyes (two 1s)? As any craps player knows, it is 1/36.
Case 2: We sneak a peek and notice that the red die is a 1. Now what is the chance that the blue die is a 1? Well, now only the blue die is unknown and so the probability is now calculated on a single die! The answer here, of course, is 1/6.
Thus, conditioning really matters and Pr(Case 2) ≥ Pr(Case 1), where the inequality holds if both cases have zero probability (if not, they asymptotically approach one another as the number of configurations increases).

Ian, I read the analogy more like this:

Case 1: Without looking at the dice what is the probability of rolling snake eyes (two 1s), given that one die is a 1? As any craps player knows, it is 1/6.

Case 2: We sneak a peek and notice that the red die is a 1. Now what is the chance that we rolled snake eyes? Still 1/6.

I think the non-intuitive part of this problem lies in whether we treat all of the aces as indistinguishable or not, and whether their order in our hand is relevant.

By Caledonian (not verified) on 02 Jan 2008 #permalink

I should clarify, that should read "we know that it's the red die that's a 1, and we don't know the other die" vs. "we know that one of the dice is a 1, and we don't know the other die".

In other words, it doesn't matter what color the dice are because they're not different. In the problem in this post, it doesn't matter the suit of the aces. If it matters, I think the problem is not clearly stated.

jeffk, you have an interesting point. That is an extremely subtle difference and is dependent upon when the knowledge is revealed. If we're revealing both dice simultaneously, it is 1/36 but if we reveal them one at a time and know the first is a 1, then the probability of the second being a 1 is 1/6. In that sense there is no difference between the cases and I would agree that the problem, as stated, should have been more properly worded.

Dave Bacon (#7) wrote:

Who Cares and Joe: The fact that the probabilities aren't the same really is surprising, isn't it!

Not really. I've worked quite a bit with math and the one thing I've learned is to never trust intuition if I have a chance. That said I've just dug out a discrete math book (university comp sci introductory level) to see if they have an explanation.

By Who Cares (not verified) on 02 Jan 2008 #permalink

First, the easiest way to see that they're NOT the same is to think about a coin toss. The red/blue distinction someone mentioned is useful and can make it more intuitive, so let's say one coin is red and one is blue. Using the following abbreviations (R = red, B = blue, H = heads, T = tails), the only possible outcomes after flipping both coins are, of course, RH&BH, RH&BT, RT&BH, RT&BT.

Now, two cases.

Case 1) I tell you that the red coin landed heads up. What are the odds that the blue coin also landed heads up? Obviously, the outcomes where the red coin landed tails up are eliminated, leaving us with RH&BH and RH&BT. So there's a 50% chance that the blue coin landed heads up.

Case 2) I flip the coins without you looking and tell you that AT LEAST ONE coin landed heads up, without specifying which. What are the odds they both landed heads up? Well, the only outcome eliminated by the information I gave you is the one where neither lands heads up - RT&BT. So we still have RH&BH, RH&BT, and RT&BH. Each is equally likely (because each had a 1/4 chance to begin with). Therefore, there is only a 33% chance that both coins landed heads up.
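Enumerating the four outcomes makes the two cases concrete; a minimal Python sketch (my own, with an arbitrary encoding):

    from itertools import product

    outcomes = list(product("HT", repeat=2))  # (red, blue)

    red_heads = [o for o in outcomes if o[0] == "H"]
    some_heads = [o for o in outcomes if "H" in o]

    print(red_heads.count(("H", "H")) / len(red_heads))    # Case 1: 1/2
    print(some_heads.count(("H", "H")) / len(some_heads))  # Case 2: 1/3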

This is extremely UNintuitive, I think, and it takes a really long time to convince some people. It seems that if I tell you one coin landed heads up, it should be just like if I told you which coin (i.e., you can arbitrarily pick a coin to be the one that landed heads, because you know both had the same chance). But it doesn't work. Let's try it: I tell you one coin landed heads up, and you assume the red coin landed heads up, so there's a 50% chance as in case 1. But wait; what if the red coin landed heads down? You've unfairly excluded that equally probable outcome, so 50% can't be right.

Eliminating certain outcomes will probably have an effect (I assume there are cases where you could introduce additional information with the discarded outcomes having no effect on the probability, although I'm not thinking about that now).

So the simplest way to look at probability problems, at least for me, is to look at the effect of the eliminated outcomes. Unfortunately, this doesn't work as easily for the poker problem because we're not dealing with binary choices - Case 1 both adds a bunch of extra cases where there's more than one ace, and a bunch of extra cases where there's only one ace. But I can still think through this with a little bit more effort.

Add three cases: 3, 4, and 5. These are the same as Case 2, but with different suits.

Case 1 multiplies the number of single-ace outcomes from Case 2 by 4 (add up the single-ace outcomes from Cases 2-5), but it can't possibly multiply the number of multiple-ace outcomes by 4 (using the same technique, adding up the multiple-ace outcomes of Cases 2-5, that would necessitate a ton of overlap - you would have to, say, count "Ace of Diamonds & Ace of Hearts" twice, or "every Ace" four times, which of course doesn't make any sense). This might count as a calculation, because my explanation doesn't make sense if, for example, there are more suits than there are cards in a hand. Still, for the given rules, if Case 1 has four times as many single-ace outcomes as Case 2, and less than four times as many multiple-ace outcomes, the probability of a multiple-ace outcome would have to be greater in Case 2.

Does that make any sense?

Ok, I hope I get this right; I should really go to bed.

First off, every 5-card poker hand is unique and thus has 120 permutations. You can factor this right out of the picture. We can deal purely in combinations.

#1: "Given that the hand contains an ace, what is the probability that the hand contains another ace?"

I.e., given that you have greater than zero aces, what's the probability of you having greater than one ace?

Greater than zero aces is all poker hands, minus those that have no aces: (52,5) - (4,0)*(48,5) = (52,5) - (48,5) = 886656

Greater than one ace is the previous, minus those that have one ace: (52,5) - (48,5) - (4,1)*(48,4) = 108336

Probability = 108336/886656 ~ 0.12

#2: "Given that the hand contains the ace of diamonds, what is the probability that the hand contains another ace?"

I.e., given that you have the AD, what's the probability of you having greater than zero more aces?

A hand with the AD: (1,1)*(51,4) = (51,4) = 249900

A hand with the AD and more aces is the previous minus those that have zero more aces: (51,4) - (1,1)*(48,4) = 55320.

Probability = 55320/249900 ~ 0.22

Intuition tells us nothing about the truth of a/b > c/d when a>c and b>d. So we need to look at the numbers. It's not surprising that the number of hands with an ace is about four times as much as the number of hands with the ace of diamonds, at least not to me. What's surprising, though, is that the number of hands with at least two aces is only about twice as much as the number of hands with the ace of diamonds and at least one other ace.

That is until you realize that these possibilities are dominated by the hands with exactly two aces. As someone already pointed out, the number of ace pairs with the ace of diamonds is half that of the total number of ace pairs:

Hands with ace pairs = (4,2)*(48,3) = 6*(48,3) = 103776
Hands with the ace of diamonds and one other ace = (1,1)*(3,1)*(48,3) = 3*(48,3) = 51888

Thus, the denominator is diminished by a factor of four, but the numerator is only diminished by a factor of 2.
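(The counts are easy to double-check; here is a short Python sketch of the same arithmetic, using math.comb, which is my choice of tool:)

    from math import comb

    ge1 = comb(52, 5) - comb(48, 5)   # at least one ace: 886656
    ge2 = ge1 - 4 * comb(48, 4)       # at least two aces: 108336

    ad = comb(51, 4)                  # contains the ace of diamonds: 249900
    ad_plus = ad - comb(48, 4)        # ...plus at least one more ace: 55320

    print(ge2 / ge1, ad_plus / ad)    # ~0.12 and ~0.22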

By John Moeller (not verified) on 02 Jan 2008 #permalink

jeffk, I think you've nailed a very good analogy in your 9:39pm post, but you have the math wrong. If the dice are different colors, there are 36 possible rolls, and 11 of them have at least one "1" showing. So for case 1, the answer is 1/11, not 1/6. The answer for case 2 is indeed 1/6. This gives the same answer as the original ace question (the more specific case leads to a higher probability).

The analogy isn't perfect (it is a coincidence that the nearly-factor-of-2 difference is the same for both the dice and the cards), but I think quite useful all the same.

By Ken Wharton (not verified) on 03 Jan 2008 #permalink

To make the dice analogy that Ian and jeffk raised as much like the original problem as possible, the two cases must be as follows:

1. We know that at least one of the dice is a 1, but we don't know whether it's the red die or the blue die (or both). There are 11 possibilities: six in which the red die is 1, six in which the blue die is 1, but we've double-counted the case where both are 1. Only one of those cases is snake eyes, so the probability is 1/11.

2. We specifically know that the red die is a 1. Now there are only six cases, and as Ian correctly pointed out, the probability is 1/6.

Having done this calculation, you can see that the inequality should go the same way in the original problem (which was contrary to my initial intuition that the probabilities should be the same). More hands will contain at least one ace of any suit than will contain the ace of diamonds, and this ratio is greater than the ratio between the number of hands with two or more aces of any suit and the number of hands with two or more aces, one of which is the ace of diamonds.
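Listing the 36 rolls shows both cases at a glance; a Python sketch of the enumeration (my own check):

    from itertools import product

    rolls = list(product(range(1, 7), repeat=2))   # (red, blue)

    some_one = [r for r in rolls if 1 in r]        # 11 rolls
    red_one = [r for r in rolls if r[0] == 1]      # 6 rolls

    print(some_one.count((1, 1)) / len(some_one))  # 1/11
    print(red_one.count((1, 1)) / len(red_one))    # 1/6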

By Eric Lund (not verified) on 03 Jan 2008 #permalink

I'm still not sure what's going on here. Has the generation of the sets question been settled? If not, maybe it would be better to say that you have 52 marbles of different colors. That way you eliminate the experiential issues dealing with dealing, which are really not relevant to the original question. I read that original question as being about an existing set of "cards" that had been generated in the past by an unknown but random method. Thus there is an existing set of N cards selected from 52 cards that is under examination. Is it necessary to know the value of N? That differs with different card games. Or are we poker snobs here?

I have an inkling of a rationale for why the probabilities might be different, but I would like to see the explanation, and then the discussion about why the explanation is wrong. Or right.


My attempt. Compare: p(E2|E1)=p(E2)/p(E1), the probability of event E2, at least 2 aces in the hand, given the event E1, at least one ace in the hand; and p(E1H|E0H)=p(E1H)/p(E0H), the probability of event E1H, the ace of hearts (nicer than diamonds) plus at least one other ace in the hand, given the event E0H, the ace of hearts is in the hand, plus at least zero other aces. [The wording is such that E2 could be the probability of exactly 2 aces in the hand, and likewise for E1H, exactly 2 aces, one of them the ace of hearts, but I will discount that.]

We want to see "intuitively" whether p(E2|E1) is greater or less than p(E1H|E0H), which is equivalent to wanting to know whether p(E0H)/p(E1) is greater or less than p(E1H)/p(E2). Intuitively, if we have one or more aces, the chance of one of them being the heart is less than if we have 2 or more aces (I hope that's right! I take it I'm not allowed to check), so p(E2|E1) is less than p(E1H|E0H).

For the dice problem, E2 is snake-eyes (1/36), E1H is also snake-eyes (1/36), E1 is "at least one die is 1" (11/36), E0H is "the red die is 1" (1/6), so p(E0H)/p(E1)=6/11 is less than p(E1H)/p(E2)=1, and p(E2|E1)=1/11 is less than p(E1H|E0H)=1/6, which agrees with the above.

I wouldn't say this is intuitive exactly; the mathematical setup is wearying but essential, as always. But the last stage is perhaps more-or-less intuitive, there's only one algebraic step to it, and I've obeyed the rule of not actually calculating the probabilities, right? Except not for the dice, sorry.
Indeed, I personally find probability never intuitive.

This is the same as the principle of restricted choice in card play. That principle states that if someone does not play a card when they could have, it affords an inference that they don't have that card. As an example, suppose there are two cards, say a queen and a jack, held between the two opposing players. You'd expect that 50% of the time each of the opposing players has one of the two cards, and 25% of the time the left-hand player has both and 25% of the time the right-hand player has both. (In reality, the probabilities are very slightly different because if a player has one of the two cards, he has fewer free places in his hand to hold the other card, but let's not worry about that) Then, at some point in the card play, your right-hand opponent has the chance to win a trick with one of these cards...and suppose either one will work equally well to do this. Let's say he plays the queen. Now, where is the jack? Is it 50/50 that it is in either hand? No, it's not...remember, the chance that the right-hand player has either the queen or the jack or both is 75%. So, 75% of the time we are in the situation that the right-hand player will just have won a trick with one of the two cards. But, only 25% of the time does he hold both cards. So, only 1/3 of the time does he hold the remaining card, in this case the jack, and 2/3 of the time he does not hold that card. This is called the principle of restricted choice because if he held only the queen, he has to play the queen (his choice is restricted), but if he holds both the queen and the jack, he has a choice which one to play and only 50% of the time will he play the queen, which affords an inference that it is less likely that he holds the jack also. This is the same as David's problem.
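That 1/3 is easy to simulate; here is a rough Monte Carlo sketch (my own illustration of the restricted-choice setup, with the simplifying assumption that holding at least one of the two cards is what lets him win the trick):

    import random

    plays_queen = holds_jack_too = 0
    for _ in range(200_000):
        # Deal the queen and the jack independently to left (L) or right (R)
        holder = {card: random.choice("LR") for card in "QJ"}
        held = [card for card in "QJ" if holder[card] == "R"]
        if not held:
            continue  # right-hand opponent can't win this trick
        # With both cards he picks one at random; with one, his choice is restricted
        if random.choice(held) == "Q":
            plays_queen += 1
            holds_jack_too += holder["J"] == "R"

    print(holds_jack_too / plays_queen)  # about 1/3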

Kudos to Ken (jeez, you're up early!) and Eric for finding the correct dicing analogy. Personally, I tend to work with dice a lot since I partially developed an introductory physics lab on probability that uses them and, for whatever reason, I intuitively think that way (perhaps it is because I have a collection of very odd dice including a pair that are perfectly round...).


Hey Matt, I wonder if bridge players develop a better intuition for probabilities like this because of their exposure to the principle of restricted choice?

I think so, especially since they don't get to stop and calculate the probabilities.

Interesting computer science problem: a good program for playing bridge. There exist extremely fast programs to solve for best play given knowledge of all the cards. These lead to Monte Carlo attempts to solve the problem of unknown cards based on dealing out lots of hands which fit what you do know about the opposing cards and then picking the play which works in the majority of situations. These have the problem, though, of dealing with choices in the future: the program has a tough time knowing that it will have to take a guess in the future, and it will tend to assume that it takes any guess which will occur on a future card play correctly, since it just finds best play for each hand it deals. Simple situations like the one above can be dealt with, but to my knowledge there is no algorithm comparable to those in chess. Namely, the chess algorithms have the property that you run them on a faster machine and they get better and better, but I think there are certain situations which are just beyond the ability of existing bridge programs to solve. (that's just in the card play phase, not even in bidding, and not even worrying about deceptive play and so on)

Great problem!!!

By Gil Kalai (not verified) on 03 Jan 2008 #permalink

John Preskill's comment is correct, but I thought I'd add in an explanation of why the problem is confusing. It has to do with how the random hands are generated. Let's consider the given-an-ace-of-diamonds scenario in the following two cases:

Case 1:
I deal a random five card hand, then look at it. If it does not contain the ace of diamonds, I reshuffle and redeal.

Case 2:
I deal a random five card hand, then look at it. If it does not contain at least one ace, I reshuffle and redeal. Otherwise, I randomly pick one of the aces in the hand and announce its suit (which we suppose happens to be diamonds in a particular instance).

In case 1, Preskill's analysis applies. In case 2, knowing the suit provides no additional information about the probability of having a second ace, and thus in case 2, problem 2 has the same probability as problem 1. This should be verifiable with Bayesian statistics, but I'm too lazy to actually check.
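For the lazy-but-curious, the Case 2 claim simulates nicely; a sketch in Python (my own check, not part of the original comment):

    import random

    deck = [(rank, suit) for rank in range(13) for suit in "CDHS"]  # rank 0 = ace

    hits = kept = 0
    while kept < 20_000:
        hand = random.sample(deck, 5)
        aces = [c for c in hand if c[0] == 0]
        if not aces:
            continue                      # reshuffle and redeal
        announced = random.choice(aces)   # announce the suit of a random ace
        if announced[1] != "D":
            continue                      # keep only the "diamonds" announcements
        kept += 1
        hits += len(aces) >= 2

    print(hits / kept)  # about 0.12 -- the same as problem 1, not 0.22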

The dice version (Eric Lund 10:02) also has a simpler setup (which I learned here, right beneath the statement and solution of the card version).

Person A and Person B each have two children. Person A tells you she has a son named John while person B tells you that his older son is also named John. Which of the two is likelier to have two sons?

By Joe Renes (not verified) on 03 Jan 2008 #permalink

For anyone who wants to see the exact answer, I suggest you follow the link Joe posted in the last comment.

If person B's older son is named John, person B has a 100% chance of having two sons.

By Caledonian (not verified) on 03 Jan 2008 #permalink

Caledonian, I knew I would screw that up somehow. Person B tells you his older child is also named John (presumably a boy).

By Joe Renes (not verified) on 03 Jan 2008 #permalink

Again there is ambiguity in the son problem.

In the first case you could say that there are three possibilities: older son and younger daughter, older daughter and younger son, older son and younger son. One out of three then yields two sons. However, another interpretation is that if one child is a son, then there is a one in two chance that the other is a son.

Therefore in the first interpretation the probabilities of the two cases are different and in the second interpretation they are the same.

Conditional probabilities assume the first interpretation and this is what gives our intuition problems. For example, suppose I ask if Person A has a son and she says yes. Then I say there is a 1/3 chance of her having two sons. But then I ask if that son is the older or younger child. Regardless of her answer, I will raise my probability to 1/2.
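Ignoring the name (which is its own subtlety), the bare boy-girl version of these two interpretations enumerates in a few lines of Python (my own sketch):

    from itertools import product

    families = list(product("BG", repeat=2))  # (older, younger)

    has_son = [f for f in families if "B" in f]
    older_is_son = [f for f in families if f[0] == "B"]

    print(has_son.count(("B", "B")) / len(has_son))            # 1/3
    print(older_is_son.count(("B", "B")) / len(older_is_son))  # 1/2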

By Matt Elliott (not verified) on 03 Jan 2008 #permalink

Joe:

Thanks for the link to the solution. I was afraid that I got it horribly wrong. :-)

Matt posted his excellent bridge explanation while my posts were caught up in the queue, though. Oh well.

By John Moeller (not verified) on 03 Jan 2008 #permalink

My intuition said that you can look at this as a set problem.

All the possible dealt options being the whole set,
options with two or more aces is a subset of this,
and options with two or more aces (one being the ace of diamonds) is a further subset of that.

Therefore the smallest subset has a higher probability as there are fewer options.

This suggests that with a fixed number of possibilities, the more information you have about the system, the more accurate your guess about which one it is will be.

(But I am new to probability, so I may be way off here.)

Nancy, in my experience you can look at almost anything as a set problem! But probabilities are particularly amenable to this since, historically, the two are linked in many ways (e.g. Venn diagrams, etc.).

I find the second question a bit ambiguous. If I asked "Is there an ace?" and you said yes, and I asked "Show me," it would be unremarkable whatever suit you showed me; i.e., the card had to have some suit. If I had instead asked "Does it contain an Ace of diamonds?", and the answer was yes, then isn't this a different kettle of fish? The second question is only true in the particular, and I know that cases with more than one ace would be oversampled relative to the first instance.

I think the problem, as stated, is ambiguous. See, for example, Ambiguity in Probability Problems by I.G. Betteley.

Nancy, the trouble with the sets you listed is that we are comparing p(E2|E1)=p(E2)/p(E1), the probability of event E2, at least 2 aces in the hand, given the event E1, at least one ace in the hand; and p(E1H|E0H)=p(E1H)/p(E0H), the probability of event E1H, the ace of hearts (nicer than diamonds) plus at least one other ace in the hand, given the event E0H, the ace of hearts is in the hand, plus at least zero other aces (all this copied from my post earlier).

Thus, there are four different events involved (indeed the definition in mathematics of probability spaces is in terms of sets, and events are defined as subsets of the sample space). All four of these sets have to be straight in your head.

Consider the following simplification:

Let the set of cards in the deck = {Ace hearts, Ace diamonds, Ace clubs, 2 spades}

Now define the following:

Ace hearts = {Ace hearts}
Ace not hearts = {Ace diamonds OR Ace clubs}
Ace = {Ace hearts OR Ace diamonds OR Ace clubs}

Assume that a hand contains 2 randomly-drawn cards (without replacement) from the deck, the 6 possible combinations being:

Ace hearts, Ace diamonds
Ace hearts, Ace clubs
Ace hearts, 2 spades
Ace diamonds, Ace clubs
Ace diamonds, 2 spades
Ace clubs, 2 spades

Adding the instances of various combinations according to a conditionality rule:

Probability (Ace not hearts | Ace hearts) = 2/3 (where | denotes conditionality)

Probability (Ace not hearts | Ace not hearts) = 1/3

THE AMBIGUITY

Note that, according to the above formulation, the following is meaningless

Probability (Ace hearts | Ace not hearts)

Probability (Ace | Ace not hearts)

as we always count combinations containing Ace hearts as belonging to Probability (Ace not hearts | Ace hearts)

I think the ambiguity can be resolved by noting that the notion of conditionality involves a sense of temporal order, with one thing following (conditional upon the occurrence of) another. If one states the problem in terms of temporal order (one card is seen, then the second) the probabilities for the two cases will be equal. Do you agree?

I must be dense. Isn't Probability (Ace hearts | Ace not hearts)=0? And Probability(Ace|Ace not hearts)=1?

If one states the problem in terms of temporal order (one card is seen, then the second) the probabilities for the two cases will be equal. Do you agree? Yes, assuming by "seen" you mean suit and rank are shown.

See the problem for me is that I've been through this before! To me "Given X" has a very specific meaning. A better way to phrase these questions might be "Suppose that I tell you that X has occurred" (i.e. suppose that I told you that my hand contained an ace versus suppose that I told you my hand contained the ace of diamonds.) Of course I suspect frequentists might not like this version of the question?

I still think it is in part ambiguous, but it may be that the following is what is going on. When you say I have an ace, I imagine that I can see that ace; now I have 4 unseen cards and 3 aces left in the deck. My probability in that case is Prob(2 aces | ace dia) = Prob(2 aces | ace spades) = Prob(2 aces | ace clubs) = Prob(2 aces | ace hearts). This is like being dealt an ace as the first card, and in fact has the same probability.

Whereas in fact, when I "have an ace," what is meant is that you are given the information that you have an ace in the hand, but that is all. In that case I could have hit the ace on any of the cards, and it is more than likely that is the only ace I hit.

The other way to think about it: let's say you take the ace of diamonds out of the deck and deal me 4 more cards (again, like hitting the ace on the first card). Then in another hand you shuffle the entire deck and deal a random hand, but due to your bad dealing an ace accidentally shows face up. It is the ace of diamonds. Now I have two hands both showing the ace of diamonds; which would I pick to bet on having 2 aces? I think intuition would lead you to expect (correctly) that the first is more likely.

So for me the intuition breaks down because you expect to be presented with the information of "you have an ace" by looking at a randomly dealt hand.

If you tell me you will spot me the ace then sure that seems like a better deal (or we only have to play if you get the ace of diamonds or whatever). A better deal but not a game people are familiar with, or expect.

If you are looking at (total number of 2+ ace hands with the ace of diamonds) / (total hands with the ace of diamonds) versus (total number of 2+ ace hands) / (total hands with 1+ ace), then again you will see a higher proportion, but that isn't the way the intuition goes if people are imagining playing a game they have some familiarity with.

Perhaps if asked in an abstract way the interpretation of the question would be handled better. Consider all the sets of 5 numbers chosen (without replacement) from between 1 and 52.

"Given 1 of the numbers is divisible by 13, assign P1 the chance you have 2 numbers divisible by 13, is p1 smaller, larger or the same as P2, the probability that you have 2 numbers divisible by 13 given that 1 of the numbers is 13"

Now maybe this is too verbose, but I think people would take a different approach in this case and I think it is unlikely people would say "the same". That may just be a case of the more abstract wording making them think it is a harder problem, though.

As you say above, Dave, maybe it is layman vs. mathematician use of "Given X." I would be interested whether you think that using an everyday situation skews people to think like a layman and base their intuition on how they are used to seeing the game played, rather than thinking in a more mathematical way (hence the above wording suggestion, which I hope I haven't loused up and which is the same problem).

My good, intuitive explanation (and this may have already been stated, but I only skimmed most of the comments):

First, ignore the 'given' part. It should be obvious that the second condition ("what's the chance of a second ace?") doesn't care which of the 'given' parts existed - either way, there are three aces left in the deck to be chosen. So, either way you slice it, the second condition will give you the same number of hands.

Now, consider the given. Given conditions slice down the size of the set you're looking at. The second one ("Ace of Hearts") is more restrictive than the first, so it gives you a smaller set to work with.

So, you have the same number of hands that fulfill the criteria, but they occur among a smaller number of possible hands in the second given. So, the actual percentage of valid hands is higher in the second statement.

Most people correctly guess that the specificity of the second statement lowers a number. They simply guess that it lowers the number of matches, not the number of possibilities. The two have opposite effects on total probability.

By Xanthir, FCD (not verified) on 03 Jan 2008 #permalink

Here's my general rule that I also try to get my students to use (especially the pre-med folks I teach in my intro. class).
Rule #1 (I have others): Don't overthink the problem!! Note this is really just a variation on K.I.S.S. (keep it simple, stupid)
In this case, while the question may seem ambiguous, when presented with ambiguity interpret the question purely at face value. Then, break it down mathematically remembering to take it purely at face value (i.e. don't inject interpretations, etc.).
Given that the hand contains an ace, what is the probability that the hand contains another ace?
Known: A hand contains 5 out of 52 cards. One of the five is an ace.
Before asking the question, I try to get my students to learn Rule #2: Make sure you ask the right question.
Question: What is the probability that one of the other four is also an ace? That is, we have 4 out of 51 cards where 3 of those 51 are aces, and we want to know the probability that at least 1 of the 4 is an ace. This is a straightforward calculation from basic probability theory. In fact it's essentially a binomial.
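Taken at face value like that, it is one line of Python (a sketch; note that this face-value reading lands on the ~0.22 figure, the same number as the "reveal a card" reading discussed above):

    from math import comb

    # P(at least one ace among 4 more cards drawn from the remaining 51)
    print(1 - comb(48, 4) / comb(51, 4))  # about 0.22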
Finally, to quote an example from Jan Gullberg's awesome book,
Draw two cards from a deck of cards; what is the probability that the second card will be the seven of spades?
There are several answers to this question:
- If we have not looked at the first card drawn but simply laid it aside, the probability for the second card will be the same as for the first card, that is 1/52.
- If we looked at the first card, there are two possibilities:
-- If the first card was not the seven of spades, the chance that the second card will be is 1/51.
-- If the first card was the seven of spades, the probability for the second card is nil.
This simple example shows that "probability should always be viewed in the light of what information there is at hand." [Gullberg, p. 966]

Consider the case where two cards are dealt. If one is an ace, what are the chances that the other is too?

Let A stand for an ace, and N stand for not-an-ace. There are four ways two cards could be dealt: NN, AN, NA, and AA. We don't care about NN, and the next two are equivalent.

Chance of only one ace: 4/52 x 48/51 x 2, or 384/2652
Chance of two aces: 4/52 x 3/51, or 12/2652.

Therefore, there are 396 ways we could have at least one ace, and twelve of those ways involve two aces. So the probability that the other card is an ace is 12/396 or 1/33.

Now consider the possibility that we know one card is the ace of diamonds. There are now three symbols needed: D for ace of diamonds, A for non-diamond-ace, and N for not-an-ace.

Now we can have DA, AD, DN, ND, AN, NA, NN, and AA.

The only cases we're interested in are DA and DN (they're each equivalent to their reversed orders again).

Chance of DA: 1/52 x 3/51 x 2
Chance of DN: 1/52 x 48/51 x 2

This time there are six ways to get DA, and 96 ways to get DN. We only care about the chance of DA, which is 6/102, or 1/17.

The chance of getting another ace, given that one is the ace of diamonds, is higher (1/17) than the chance without knowing what the ace is (1/33).

Weird.

Did I get the math right?
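(For what it's worth, a quick enumeration sketch in Python, my own check of the two-card numbers:)

    from itertools import combinations

    deck = [(rank, suit) for rank in range(13) for suit in "CDHS"]  # rank 0 = ace
    hands = list(combinations(deck, 2))

    def n_aces(h):
        return sum(c[0] == 0 for c in h)

    with_ace = [h for h in hands if n_aces(h) >= 1]
    with_ad = [h for h in hands if (0, "D") in h]

    print(sum(n_aces(h) == 2 for h in with_ace) / len(with_ace))  # 1/33
    print(sum(n_aces(h) == 2 for h in with_ad) / len(with_ad))    # 1/17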

By Caledonian (not verified) on 03 Jan 2008 #permalink

Here is my best attempt at a brief, intuitive explanation: The second scenario has a smaller probability because it requires a stricter criterion to be met (one card is now an ace of diamonds versus the first scenario in which it could have been an ace of any suit).

So, when calculating the probability in the second scenario, you now have three groups: (1) the ace of diamonds; (2) all other aces; and (3) all non-ace cards. Having fewer cards in the second group makes it less probable that the event (a hand with multiple aces) will occur.

It's the second one, but I had to do the calculation. (Why are people approximating above? Just use conditional probability to compare the two cases, and the fact that [X=>Y == P(Z|X and Y) = P(Z|X)].)

And after looking at it for a couple minutes, I realized that you won't get an intuition by messing around with probability. It's only a syntax which everyone chooses to give semantics in various spaces. But one of the semantics people use in algebraic combinatorics is to show that an object with certain properties exists because it has a nonzero probability. If we take our probability and instead go the opposite direction, into combinatorics, and ask how the number of hands (the number of objects with some property) scales with the number of aces, then I think it becomes clear.

Consider the space of all possible poker hands. Now generate two sequences, a_i = # of hands containing exactly i aces of any kind, and b_i = # of hands containing exactly i aces, one of which is a specific ace X.

We lift this combinatorial question into probability syntax by noting that P(k aces | at least one ace) = a_k / (a_1 + a_2 + a_3 + a_4), and P(k aces | ace X present) = b_k / (b_1 + b_2 + b_3 + b_4).

It's easy to see that a_k = C(4, k) * C(48, 5-k) and b_k = C(3, k-1) * C(48, 5-k). The second factor is the same in both, and the ratio of the first factors gives a_k/b_k = 4/k. It starts to look like a comparison between algorithms: b_k picks up an extra factor of k relative to a_k, and then there is the question of the constant.
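
A quick way to tabulate these (a minimal Python sketch; math.comb is the standard-library binomial, and "ace X" is just one fixed ace):

    # a_k: hands with exactly k aces; b_k: those that also contain ace X.
    from math import comb

    for k in range(1, 5):
        a_k = comb(4, k) * comb(48, 5 - k)
        b_k = comb(3, k - 1) * comb(48, 5 - k)
        print(k, a_k, b_k, a_k / b_k)   # the last column is 4/k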

This is probably not the end. I suspect that there's some nice algebraic geometry in here as well: if we regard a_k as a subspace of the space of hands, then b_k is its projection onto the line containing ace X. Successively higher k extends a_k mostly perpendicular to that line. I don't know enough algebraic geometry to make this rigorous.

I think one of the morals from this is that we (and here I definitely include myself) need to get more comfortable with unordered tuples and bags/multisets. I think sets and ordered tuples took the limelight because sets were used as an axiomatization of mathematics, and tuples are needed for doing products of sets. Now that we've got set theory axiomatized in category theory, it's time to go back and give other data types the attention they deserve. The multiple particle stuff in quantum mechanics would be trivial if we were comfortable doing analysis on bags, but instead everyone desperately wants to do it in terms of ordered tuples and it turns into a nightmare.

I have to go run experiments and be useful now.

T wrote:
Case 2) I flip the coins without you looking and tell you that AT LEAST ONE coin landed heads up, without specifying which. What are the odds they both landed heads up? Well, the only outcome eliminated by the information I gave you is the one where neither lands heads up - RT&BT. So we still have RH&BH, RH&BT, and RT&BH. Each is equally likely (because each had a 1/4 chance to begin with). Therefore, there is only a 33% chance that both coins landed heads up.

This is extremely UNintuitive, I think, and it takes a really long time to convince some people.

I'm one of those people, since I find math to be utterly unsurprising. This is how I see it, maybe I'm wrong:

After I'm told one coin lands heads up, I have the following possibilities:

1. It was the red coin, which landed heads up: RH&BH, RH&BT
2. It was the blue coin, which landed heads up: RH&BH, RT&BH.

This still gives us 50% for RH&BH.
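
For what it's worth, enumerating the four equally likely outcomes (a minimal Python sketch; the tuples are (red, blue)) gives 1/3 when the condition is only "at least one head", and 1/2 when a named coin is known to be heads - so which answer is right depends on exactly what information was given:

    # The four equally likely outcomes for (red coin, blue coin).
    from itertools import product

    outcomes = list(product("HT", repeat=2))

    at_least_one = [o for o in outcomes if "H" in o]
    print(at_least_one.count(("H", "H")) / len(at_least_one))   # 1/3

    red_heads = [o for o in outcomes if o[0] == "H"]
    print(red_heads.count(("H", "H")) / len(red_heads))         # 1/2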

Here's my attempt at an intuitive explanation:

When we don't specify what the ace is, there are twelve ways we could draw two aces, and 384 ways we could draw one ace.

When we specify, the number of possible two-ace combinations drops to six, and that's a 50% reduction.

However, the number of possible one-ace combinations drops to 96, which is less than half of 384.

Being more specific narrows the field of probability, which we intuitively expect will make things less probable. And generally speaking, that's true. But in this case, we eliminate relatively more of the one-ace conditions than of the two-ace ones.

So the second situation is actually more probable than the first.

By Caledonian (not verified) on 04 Jan 2008 #permalink
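
That "relatively more" claim can be checked with exact fractions (a minimal sketch reusing Caledonian's ordered two-card counts):

    # Conditional probabilities as exact fractions, from the counts above.
    from fractions import Fraction

    p_any_ace = Fraction(12, 12 + 384)   # two-ace deals over deals with an ace
    p_diamond = Fraction(6, 6 + 96)      # same, once the ace of diamonds is required
    print(p_any_ace, p_diamond)          # 1/33 and 1/17
    print(p_diamond / p_any_ace)         # 33/17, about twice as likely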

Another (slightly edited) quote from Gullberg's book:
"I'll see you. What've you got?"
"Four aces - and you?"
"Two revolvers."
"All right, you win."

I can't believe this - my 'intuitive' explanation was entirely wrong! Not that my answer was wrong, but the explanation itself was completely off!

Here's the simple, intuitive way to do it. It's easier if you have a blackboard or something to write on:

Whenever probability gets confusing, just remember the basic rule - the chance of something happening is simply the number of ways it can happen the way you want it divided by the total number of ways it can possibly happen. Most of the time you'll be changing the first number, but occasionally the second number will change as well. This is why "givens" can be so confusing - they can change both numbers at the same time. ((While saying this, draw a simple ratio of "desired/possible" on the board.))

The important thing to note here is that these two examples are asking for *almost* the same thing. This makes them easy to work with, because we can compare them directly.

Now, the first one is saying that, given an ace already in your hand, what's the chance of another ace? The latter half of this only affects the number of 'desired' hands. The former part, the 'given', however, affects both. We *desire* a hand with two aces, but we're not looking at all possible hands, only hands that already have an ace in them!

Now, look at the second line. The only difference is the given - we now want hands with an AoD rather than any random ace. Again, this affects both the 'desired' and 'possible' parts of the equation.

First we'll look at 'possible'. The first statement allowed the hand to have *any* ace. ((Draw four dots on the board, or cards, or whatever, in the form of a square.)) This second statement only allows a single ace - the ace of diamonds. ((Circle the dot/whatever that represents AoD.)) So, we see that changing the given cut the number of 'possible' hands to 1/4 what they were.

Now, look at the 'desired'. Again, the first statement allowed any pair of aces. ((Now, draw lines between each pair of dots, representing the pairs.)) The second statement only allows pairs with an AoD in them. ((Circle the three pairs that contain AoD.)) So we drop from 6 desired pairs to only 3 desired pairs. This means we've cut the number of 'desired' hands to 1/2 of what they were.

Now we can see what happened. The equation for probability is desired/possible. We reduce the 'desired' to 1/2 what it was, but reduce the 'possible' to 1/4! ((Draw 1/2 over 1/4.)) Simplify the fractions... ((Simplify the fractions, forming 4/2=2.)) ...and we see that the second statement should have about twice the probability of happening that the first one does!

Reviewing, this worked because we changed *both* the numerator and the denominator of the probability equation. In this case, the second statement reduces the denominator much more than the numerator, which makes the ratio end up much larger than before. The reason this is counterintuitive is because we are used to things only affecting the numerator. If you didn't realize that 'given' statements affect both sides, you'd have reasonably concluded that the number of 'desired' hands was less, and so the probability is less.

Now you know! And knowing is half the battle. GO JOE!!!

Work? I find visual aids extremely helpful in probability. ^_^

Crap. I just realized that Cal's latest post said the same thing as mine. But mine is probably still a touch easier to follow for a probability newbie (or student)!

Further thoughts. (sorry for the multiple posts!)

It's obvious that the reason this is so confusing is that it affects both desired and possible hands. That's why, in my 'intuitive' explanation, I emphasized the words desired and possible over and over again. I attempt to drill in the fundamental equation of probability, and ensure that it's clear exactly what we're talking about at all times.

As such, if one was teaching this problem, I think it would be *especially* illuminating if one first highlighted the fact that givens change the 'possible' number. Contrast "what's the chance of getting a hand with two aces" against "what's the chance of getting a second ace, given that I already have an ace in my hand". Do this explicitly, with simple calculation, like so:

What's the chance of getting two aces in a five card hand? That's easy. ((Write down appropriate bits from the following on the board.)) Again, the fundamental equation of probability is desired/possible. We desire two aces, and three cards of any type. So the number of desired hands is 4 * 3 * 50 * 49 * 48. Since we don't care about order, we divide this whole thing by 5!. The possible is even easier - just five cards of any type - so that's 52 * 51 * 50 * 49 * 48, again divided by 5! because we don't care about order. Set the desired on top of the possible, and we have our probability. ((For effect, go ahead and cancel all the similar terms, then rewrite as (4*3)/(52*51) to show it in simpler form. You may want to cancel this all the way down to 1/(13*17), but it depends on the audience you're speaking to. If everyone has calculators, call out for someone to figure this quickly, and write it down.))

Now, the chance of two aces given that we already have an ace is pretty simple as well. We're still desiring two aces, so that's the same thing. ((Write down the same as before.)) Now, though, the number of possible hands has shrunk. We don't want any ol' 5 cards. We want an ace, and then four more cards of any kind. So that's 4 * 51 * 50 * 49 * 48, divided by 5! again. Set desired over possible again, and we see the difference. ((Cancel now, leaving you with just 3/51. Again, you may want to cancel down to 1/17. You definitely want to have someone figure this again, because the difference is very striking in decimal form.))

The given affected the number of both the desired and possible hands. In fact, it raised the probability up significantly. Givens can have a powerful effect on the probability of an event.

This then segues directly into the question you asked, and primes the student to understand why it works the way it does.
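
The two blackboard calculations can also be done with exact fractions, which keeps the cancellation honest (a minimal sketch; the common factor of 5! divides out of both numerator and denominator, so it is omitted):

    from fractions import Fraction

    # Two aces among five cards: desired over possible
    no_given = Fraction(4 * 3 * 50 * 49 * 48, 52 * 51 * 50 * 49 * 48)
    print(no_given)      # 1/221, i.e. 1/(13*17)

    # Same desired hands, but every possible hand already holds an ace
    with_given = Fraction(4 * 3 * 50 * 49 * 48, 4 * 51 * 50 * 49 * 48)
    print(with_given)    # 1/17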

I had to do the math, but it is definitely the second one.

My interpretation: in each case, the problem reduces to (P2+P3+P4)/(P1+P2+P3+P4), where P1=probability of exactly one ace, P2=prob of exactly two aces, etc.

Specifying the ace of diamonds changes the probabilities, but not equally; P1 is reduced to 1/4 its original value, P2 to half its original value, P3 to 3/4 original, while P4 is unchanged. You are reducing the universe of possibilities (the denominator) faster than you are reducing the successes (numerator), so your success ratio goes up.

By exact numbers, if I've done it right (counting the order the cards are received) - there are 106,398,720 hands containing an ace, of which 13,000,320 contain at least two aces - 12.2% of the total.

There are only 29,988,000 hands containing the ace of diamonds, of which 6,638,400 contain at least one more ace - 22.1% of the total.

I suspect we screw this up because we correctly recognize that requiring an ace of diamonds makes those specific hands much less probable, and we incorrectly generalize to the situation when we've already established that we DO have an ace of diamonds.

By Caledonian (not verified) on 08 Jan 2008 #permalink
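
Those figures can be reproduced with unordered hands (one 120th of the ordered counts, since 5! = 120); a minimal sketch with the standard-library comb:

    from math import comb

    with_ace  = comb(52, 5) - comb(48, 5)    # 886,656 hands hold an ace
    multi_ace = with_ace - 4 * comb(48, 4)   # 108,336 of them hold two or more
    print(multi_ace / with_ace)              # 0.1221..., about 12.2%

    with_aod  = comb(51, 4)                  # 249,900 hold the ace of diamonds
    aod_more  = with_aod - comb(48, 4)       # 55,320 hold another ace as well
    print(aod_more / with_aod)               # 0.2213..., about 22.1%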

Here's an equivalent question which I think supplies the missing intuition. I'll use spades as the special suit rather than diamonds because I need to order the suits from "best" to "worst" and the standard order (in contract bridge at least) is:

spades > hearts > diamonds > clubs.

A 5-card hand is dealt at random. Given that at least one of the 5 cards is an ace (i.e. after having repeatedly shuffled and dealt until the first occasion that this is true), there is some probability that the hand contains more than one ace. Now suppose you ask about the "best" ace in the hand, and learn it is the ace of spades. Would this increase or diminish your confidence that the hand contains multiple aces? Well, the more aces there are, the better the best one is likely to be, and conversely the better the best one is, the more likely it is that more than one ace contributed to the pool of candidates. But asking whether the best ace in the hand is the ace of spades is exactly the same as asking whether the ace of spades is present at all, so the constraint of having the ace of spades gives a better chance of multiple aces than just the constraint of having at least one ace.

To show more rigorously (but still qualitatively rather than quantitatively) that the ace of spades improves your odds, consider the four possible answers to the best-ace question. The pre-question probability of multiple aces is a weighted average of each post-question probability (i.e. the revised odds given the answer clubs, diamonds, hearts, or spades) where the weighting factor is the probability of each answer in turn. The answer "clubs" does sometimes occur and when it does the odds go to zero: there can be no other ace if the best ace in the hand is the worst one possible. Since some answer pulls the average odds down, some other answer must pull the average odds up, and the best possible odds come from the last answer, "spades", which therefore gives better-than-average odds.

Of course, this suggests other questions, and my intuition fails for at least one of them: are the odds for the answer "hearts" above or below average? To use the original wording:

* Given that the hand contains the ace of hearts but does not contain the ace of spades, what is the probability that the hand contains another ace?

Calculations, someone?

By Tracy Hall (not verified) on 09 Jan 2008 #permalink
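
An attempt at an answer (a minimal Python sketch; the function name and setup are mine, using the suit order from the comment above). Conditioning on "the best ace is the ace of hearts" means the hand holds the ace of hearts but not the ace of spades, so the other four cards must avoid one extra card for each higher-ranked suit. Running it gives roughly 22.1% for spades, 15.5% for hearts, 8.2% for diamonds, and 0 for clubs, against the 12.2% pre-question average - so the answer "hearts" is still above average:

    from math import comb

    def p_more_aces(better_suits):
        """P(two or more aces | the best ace present is a fixed ace
        that is outranked by better_suits other aces)."""
        possible = comb(51 - better_suits, 4)   # other four cards avoid better aces
        lonely = comb(48, 4)                    # other four cards avoid aces entirely
        return (possible - lonely) / possible

    for suit, better in [("spades", 0), ("hearts", 1), ("diamonds", 2), ("clubs", 3)]:
        print(suit, p_more_aces(better))

As a sanity check, the four cases partition the hands with at least one ace (249,900 + 230,300 + 211,876 + 194,580 = 886,656), and the weighted average of the four probabilities is the 12.2% figure, just as the argument above requires.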

Similar problem:

(1) What is the probability that both children of a two-child family are boys, if we know that at least one of them is a boy?
(2) What is the probability that both children are boys, if we know that at least one of them is a boy with the "rare" name Maximilian?

Compare.
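
Working the name variant out exactly (a minimal Python sketch; the rarity p, and the assumption that each boy is independently named Maximilian with probability p, are mine):

    from fractions import Fraction

    p = Fraction(1, 1000)   # chance any given boy gets the rare name

    # Family types BB, BG, GB, GG are equally likely (1/4 each).
    bb_named  = Fraction(1, 4) * (1 - (1 - p) ** 2)   # BB with some Maximilian
    one_named = 2 * Fraction(1, 4) * p                # BG or GB with a Maximilian
    print(float(bb_named / (bb_named + one_named)))   # (2-p)/(4-p) = 0.49987...

    # Without the name, P(both boys | at least one boy) is just 1/3:
    print(Fraction(1, 4) / Fraction(3, 4))

As p shrinks, (2-p)/(4-p) approaches 1/2: the rare name plays the same role as the specific suit in the card problem.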

One could say:
" When I'm holding five cards and one of them is a specific ace, I do not care which ace it is. I will have equal probability for another ace regardless of which ace I have. "
Or:

"When I look and find I have an ace of diamonds, I know I took a 1 in 52 chance. When I know I have an ace in general, I know i took a 4 in 52 chance.
But when I start out with the game, I have four different ways of picking a SPECIFIC, that is 1 in 52 ace! So If I have a specific ace I don't have to worry about there being less chance to get another one. The trick is: don't look at your cards ;) "

The problem is difficult because people tend to think in this way. We know that regardless of which ace we have, there are three more, so equal opportunities regardless of which ace we have, and regardless of whether we know which one we have.

But the actual problem is not asking that specific question. The question in the problem is about how many hands contain two aces, one of which is the ace of diamonds, compared to how many hands contain two aces at all. Those numbers are different, of course.

You requested an answer that explains how intuition may lead to a correct conclusion, yes?
In this case, the choice backed by the more precise information is most likely to be the correct one.

The odds are the same for both cases. It doesn't matter what suit the Ace you are holding is. In both cases there are three other aces left in the deck. The odds of the next card in your hand being an ace are 3/51. The odds for the 3rd card are 3/50, the 4th 3/49, and the 5th 3/48. Add those four fractions together to get the odds of having a second ace.