# Probability & Statistics/Creating Odds

Question
This might be a simple process, but suppose I have a fair coin toss or some other means of producing exactly a 1 in 2 probability of each of two outcomes. Can this be used to produce other probabilities? As I understand it, each "toss" is an independent event, but if all I had was a fair coin toss, is there any way to produce 1 in 4, 1 in 9, or 1 in 137 odds with a certain number of tosses and some treatment of the results?

Yes, this is entirely possible.

What if you wanted to produce 1/4?

Well, then you flip it twice. If it's heads both times (HH), you "win," and otherwise (HT, TH, or TT) you "lose." (The outcomes don't have to mean anything in particular, so I'll just call them "win" and "lose.")

To produce 1/8, you win with HHH and lose otherwise. For 1/16, HHHH, and so forth.
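To make this concrete, here is a minimal Python sketch of the 1/2^n procedure. The function names are my own, and I'm assuming `random.random()` as the source of fair flips:

```python
import random

def flip():
    """One fair coin flip: 'H' or 'T'."""
    return 'H' if random.random() < 0.5 else 'T'

def win_one_in_2n(n):
    """Win only if all n flips come up heads: probability 1/2**n."""
    return all(flip() == 'H' for _ in range(n))

# Empirical check: with n = 2 we should win about 1/4 of the time.
trials = 100_000
wins = sum(win_one_in_2n(2) for _ in range(trials))
print(wins / trials)  # close to 0.25
```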

Now, what if you wanted 3/8? Or 7/16?

You can do fractions like that too by including more of the outcomes. If you wanted 3/8, flip the coin three times, and say you win if you get exactly one H (HTT, THT, or TTH -- three of the eight possible outcomes). For 7/16, flip four times and pick your favorite seven of the sixteen outcomes (say HHHH, HHTT, HTHT, HTTH, TTHH, THTH, THHT) and call those the winners.
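Since there are only finitely many outcomes, we can simply enumerate them. A short Python check of the 3/8 example, counting the three-flip outcomes with exactly one H:

```python
from itertools import product

# Enumerate all 2**3 = 8 outcomes of three flips and keep those with
# exactly one head -- these are the "winners" for probability 3/8.
outcomes = list(product('HT', repeat=3))
winners = [o for o in outcomes if o.count('H') == 1]

print(winners)                            # [('H','T','T'), ('T','H','T'), ('T','T','H')]
print(len(winners), '/', len(outcomes))   # 3 / 8
```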

If you want a fraction that does not have 2, 4, 8, 16, etc. in the denominator, simply flip the coin several times and declare some outcomes "win," some "lose," and some "do over" (meaning you start the whole process again). For example, if you want 7/12, you can flip the coin four times:

7 winners: HHHH, HHTT, HTHT, HTTH, TTHH, THTH, THHT
5 losers: TTTT, TTTH, TTHT, THTT, HTTT
4 do-overs: HHHT, HHTH, HTHH, THHH

Because only 12 of the 16 possible outcomes end this process (the do-overs just restart it), the probability of winning is now 7/12 (unlike the previous 7/16 example).

In general, to produce the probability p/q, you flip the coin n times, where n is chosen so that 2^n is at least q. Then you designate 2^n - q of the outcomes as "do-overs," p of the remaining outcomes as "winners," and the other q - p as "losers."
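This general recipe can be sketched in Python. The function name `bernoulli_pq` is my own, and fair flips are again simulated with `random.random()`; the do-over case is handled by the `while` loop restarting:

```python
import random

def bernoulli_pq(p, q):
    """Win with probability p/q using only fair coin flips.

    Flip n coins, with n chosen so that 2**n >= q, and read them as a
    binary number k in [0, 2**n).  Then k < p is a win, p <= k < q is
    a loss, and k >= q is a do-over (flip all n coins again).
    """
    n = (q - 1).bit_length()  # smallest n with 2**n >= q
    while True:
        k = 0
        for _ in range(n):
            k = 2 * k + (random.random() < 0.5)  # one fair flip per bit
        if k < p:
            return True    # winner
        if k < q:
            return False   # loser
        # k >= q: do-over, restart the loop

# Empirical check for 7/12:
trials = 100_000
rate = sum(bernoulli_pq(7, 12) for _ in range(trials)) / trials
print(rate)  # close to 7/12 = 0.5833...
```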

Note that this finite process will not work for irrational numbers (numbers that cannot be written as a fraction p/q).

You see, a combinatorial probability is (in its most basic sense) defined as:

[ number of "winning" outcomes ] / [ total number of outcomes ]

By flipping the coin more times, we can increase the denominator. Since the total number of outcomes is always a power of 2 (2, 4, 8, 16, ...), if we want some other denominator we throw a few outcomes out (the "do-overs") to reduce the total to the number we need.

By including more scenarios as "wins," we increase the numerator.

That can give us any probability. I hope this answers your question.

First, the idea of throwing out some outcomes (calling them "do-overs") is an application of conditional probability, the identity underlying Bayes's theorem. Conditional probability deals with expressions like:

P(A|B) = the probability that the outcome A occurs, assuming B occurs too

For example, in the 7/16 example above P(win) = 7/16, but P(win|first flip is heads) = 1/2 because there are eight outcomes with heads first, and four are winners.
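We can confirm this count with a quick enumeration in Python (the winner set is the one chosen in the 7/16 example above):

```python
from itertools import product

winners = {'HHHH', 'HHTT', 'HTHT', 'HTTH', 'TTHH', 'THTH', 'THHT'}
outcomes = [''.join(o) for o in product('HT', repeat=4)]

b = [o for o in outcomes if o[0] == 'H']   # first flip is heads
a_and_b = [o for o in b if o in winners]   # ...and a winner

print(len(a_and_b), '/', len(b))  # 4 / 8, so P(win | heads first) = 1/2
```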

For equally likely outcomes, conditional probability is computed as:

P(A|B) = P(A and B) / P(B) = [ # outcomes where A and B both occur ] / [ # outcomes where B occurs ]

This is an intuitively sensible idea -- restrict attention to the outcomes where B occurs, and ask what fraction of those also have A occurring. Here, "A" is the winning event, and "B" is the event that the outcome is not a do-over. In our procedure, the winners are always chosen from outside the do-overs, so that:

P(win | not a do-over) = [ # winning outcomes ] / [ # non-do-over outcomes ]

Since we can choose how many winners there are and how many do-overs there are, we have complete control over this fraction. (Note that this logic tells us we can make up a similar procedure if instead of a fair coin we have an unfair coin, some dice, or any other object.)
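To double-check the 7/12 construction against this formula, here is a short enumeration in Python:

```python
from itertools import product

# The 7/12 scheme from above: 16 four-flip outcomes, 4 of them do-overs.
winners  = {'HHHH', 'HHTT', 'HTHT', 'HTTH', 'TTHH', 'THTH', 'THHT'}
do_overs = {'HHHT', 'HHTH', 'HTHH', 'THHH'}

outcomes = {''.join(o) for o in product('HT', repeat=4)}
non_do_overs = outcomes - do_overs

# P(win | not a do-over) = #winners / #non-do-overs
p = len(winners) / len(non_do_overs)
print(p)  # 0.5833333333333334, i.e. 7/12
```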

My second follow-up comment is that this can be used in a different way, maybe in a computer-programming context. This also might help us apply this method if we need to use it many times to approximate many different probabilities, since the above method would be tedious if we wanted to do it millions of times (or worse, to approximate a fraction with huge numbers in it like 1172641/12803573249803).

Most modern computing languages have a function called "rand" which will produce a random number between 0 and 1. If you want to use a computer to produce a winner with probability 0.248175, you just use the following code (or something like it):

if ( rand <= 0.248175 ) { winner } else { loser }

But how does a computer pick a random number? What if we want to use the ideas above?

Well, you could simply apply the idea above and create your own function that returns a winner with probability 0.248175. But it could be tedious to work out exactly which combinations of heads and tails should count as wins.

The better idea might be to use our ideas to come up with a random number generating function "rand" that works in all cases and produces a random number between 0 and 1. We can just declare winner if rand <= 0.248175, and loser otherwise.

The way to conceptualize this idea is something called "bisection." Take the interval 0 to 1, denoted [0,1], and split it in half as [0,0.5] and [0.5,1]. You have to be in one of those two intervals. Flip a coin (tails go to the first, heads to the second).

Say you got tails. Flip again to decide if you are in [0, 0.25] or [0.25, 0.5].

Say it's tails again. Flip again for [0, 0.125] or [0.125, 0.25].

If you flip tails again, you are a winner: every number in [0, 0.125] is less than 0.248175, so no further flips can change the outcome.

This corresponds to assigning T = 0 and H = 1 and constructing a string of binary data from the coin flips:

000101010010101010....

And then putting that after the decimal point:

0.000101010010101010....

And then converting this to a "normal" decimal number:

0.08267974853515625

which is the value of "rand" for this outcome.
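A small Python helper (my own, using the same T = 0 / H = 1 convention) performs this conversion:

```python
def flips_to_unit_interval(flips):
    """Map a string of flips (T = 0, H = 1) to the binary fraction 0.b1b2..."""
    x = 0.0
    for i, f in enumerate(flips, start=1):
        if f == 'H':
            x += 2.0 ** -i   # the i-th flip contributes 2**-i if heads
    return x

# The flip sequence corresponding to the bit string 000101010010101010:
print(flips_to_unit_interval('TTTHTHTHTTHTHTHTHT'))  # 0.08267974853515625
```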

We can apply that methodology to the running example of 7/12 (as a decimal, 7/12 = 0.583333...). Now, instead of splitting the coin-flips into special categories, we simply flip one at a time to determine a numerical outcome. For example, if you flip:

HHTHT → 0.11010 = 0.8125 (lose)

HTTTH → 0.10001 = 0.53125 (win)

Anything beginning with T is a winner, since the resulting number is automatically less than 0.5, which is less than 7/12.
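Putting it all together, here is a minimal Python sketch of this bisection sampler (the function name `bernoulli` is my own). Unlike the fixed-length method, it works for any target probability p in [0,1], even an irrational one, and it stops as soon as the interval pinned down by the flips lies entirely on one side of p:

```python
import random

def bernoulli(p):
    """Win with probability p by bisecting [0, 1] one fair flip at a time."""
    lo, hi = 0.0, 1.0
    while True:
        mid = (lo + hi) / 2
        if random.random() < 0.5:   # tails: move to the lower half
            hi = mid
        else:                       # heads: move to the upper half
            lo = mid
        if hi <= p:
            return True    # whole interval is below p: win
        if lo >= p:
            return False   # whole interval is above p: lose

# Empirical check against the running 7/12 example:
trials = 100_000
rate = sum(bernoulli(7 / 12) for _ in range(trials)) / trials
print(rate)  # close to 0.5833
```

On average only about two flips are needed per decision, since each flip has a good chance of pushing the interval entirely past p.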


#### Clyde Oliver
