Management Consulting/Quantitative Methods


The bulbs manufactured by a company have a mean life of 3000 hours with a standard
deviation of 400 hours. If a bulb is selected at random, what is the probability that it
will have a life of less than 2000 hours?
1) Calculate the probability.
2) In what situation does one need probability theory?
3) Define the concepts of sample space, sample points and events in the context of
probability theory.
4) What is the difference between objective and subjective probability?
The price P per unit at which a company can sell all that it produces is given by the
function P(x) = 300 − 4x. The cost function is c(x) = 500 + 28x, where x is the number
of units produced. Find x so that the profit is maximum.
1) Find the value of x.
2) In using regression analysis for making predictions, what are the assumptions?
3) What is a simple linear regression model?
4) What is a scatter diagram method?
Mr Sehwag invests Rs 2000 every year with a company, which pays interest at 10% p.a.
He allows his deposit to accumulate at C.I. Find the amount to the credit of the person
at the end of 5th year.
Questions:
1) What is the Time Value of Money concept?
2) What do you mean by present value of money?
3) What is the Future Value of money?
4) What is the amount to the credit at the end of the 5th year?
The cost of fuel in running an engine is proportional to the square of the speed and is
Rs 48 per hour for a speed of 16 kilometers per hour. Other expenses amount to Rs 300
per hour. What is the most economical speed?
1) What is the most economical speed?
2) What is a chi-square test?
3) What is sampling and what are its uses?
4) Is there any alternative formula to find the value of chi-square?


The bulbs manufactured by a company have a mean life of 3000 hours with a standard
deviation of 400 hours. If a bulb is selected at random, what is the probability that it
will have a life of less than 2000 hours?
1)   Calculate the probability.
z(2000) = (2000 − 3000)/400 = −1000/400 = −5/2

P(X < 2000) = P(Z < −5/2) ≈ 0.0062
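The z-score arithmetic above can be checked numerically. This is a minimal sketch using only Python's standard library; the standard normal CDF is expressed through math.erfc, so no statistics package is needed.

```python
from math import erfc, sqrt

# Standard normal CDF: Phi(z) = 0.5 * erfc(-z / sqrt(2))
def phi(z):
    return 0.5 * erfc(-z / sqrt(2))

mean, sd = 3000, 400
z = (2000 - mean) / sd            # z = -2.5
p = phi(z)                        # P(X < 2000) = P(Z < -2.5)
print(round(p, 4))                # 0.0062
```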
2) In what situation does one need probability theory?

Probability theory is applied to situations where uncertainty exists. These situations include
1.  The characterization of traffic at the intersection of US 460 and Peppers Ferry Road (i.e., the number of cars that cross the intersection as a function of time)
2.  The prediction of the weather in Blacksburg
3.  The number of students traversing the Drill Field between 8:50 and 9:00 on Mondays
4.  The thermally induced (Brownian) motion of molecules in a
(a)  copper wire
(b)  JFET amplifier
Note that the latter two situations result in a phenomenon known as noise, whose power is a function of the temperature of the molecules.
In all of these situations, we could develop an excellent approximation or prediction for each of these experiments. However, we would never be able to characterize them with absolute certainty (i.e., deterministically). We could, however, characterize them in a probabilistic fashion (or probabilistically) via the elements of probability theory. This is what weather forecasters are doing when they say
"The chance of rain tomorrow is 40%." In such cases they are saying that the probability of a rain event is 0.4. Other probabilistic characterizations include
   On the average, 300 cars per minute cross US 460 and Peppers Ferry Road at noon on Saturdays. The chances of so many cars crossing that intersection at 8:00 AM on Sunday are very small
   The average noise power produced by this amplifier is 1 μW
Scientists and engineers apply the theories of probability and random processes to those repeating situations in nature where
1.  We can roughly predict what may happen.
2.  We cannot exactly determine what may happen.
Whenever we cannot exactly predict an occurrence, we say that such an occurrence is random. Random occurrences arise for the following reasons:
   All the causal forces at work are unknown.
   Sufficient data about the conditions of the problem do not exist.
   The physical mechanisms driving the problem are so complicated that direct calculation of the problem is not feasible.
   There exists some basic indeterminacy in the physical world.
The Notions of Probability
One can approach probability through an abstract mathematical concept called measure theory, which results in the axiomatic theory of probability, or through a heuristic approach called relative frequency, which is a less complete (and slightly flawed) definition of probability; however, it suits our needs for this course. Students who continue on to graduate studies will be introduced to the more abstract but powerful axiomatic theory. Before continuing, it is necessary to define the following important terms.
Definition 1 An experiment is a set of rules that governs a specific operation that is being performed
Definition 2 A trial is the performance or exercise of that experiment
Definition 3 An outcome is the result of a given trial
Definition 4 An event is an outcome or any combination of outcomes
Example 1 Consider the experiment of selecting at random one card from a deck of 52 playing cards and writing down the number and suit of that card. Notice that the rules are well defined:
1.  We have a deck of cards
2.  Somebody selects the card
3.  They record the result
Suppose somebody decided to perform the experiment. We would then say that he/she conducted a trial. The result of that trial could have been the 3 of spades (or 3♠). Hence the outcome would be 3♠. Another outcome could have been 10♥ or J♦. Indeed there are as many as 52 possible outcomes. An event is a collection of possible outcomes. So {3♠} is an event. However, {10♥} is also an event, and {J♦} is an event as well. Further, {3♠, J♦}, {3♠, 10♥}, and {2♥, 5♣, K♥, A♦} are also events. Any combination of possible outcomes is an event. Note that for this experiment, there are 2^52 ≈ 4.5 × 10^15 different events.
The Axiomatic Theory of Probability
This is actually an application of a mathematical theory called Measure Theory. Both theories apply
basic concepts from set theory.
The axiomatic theory of probability is based on a triplet
(Ω,Ι,P) where
   Ω is the sample space, which is the set of all possible outcomes
   Ι is the sigma algebra or sigma field, which is the set of all possible events (or combination of outcomes)
   P is the probability function, which can be any set function whose domain is Ι (the set of events) and whose range is the closed unit interval [0,1], i.e., any number between 0 and 1 including 0 and 1. The only requirement for this function is that it must obey the following three rules
(a)   P [Ω] = 1
(b)  Let A be any event in Ι, then P [A] ≥ 0
(c)   Let A and B be two events in Ι such that A ∩ B = φ, then P [A U B] = P [A] + P [B]
Relative Frequency Definition.
The relative frequency approach is based on the following definition: suppose we conduct a large number of trials of a given experiment. The probability of a given event, say A, is the following limit

P[A] = lim (n → ∞) nA/n     (1)

where nA is the number of occurrences of A and n is the number of trials.
For example, suppose we conduct the above experiment 10,000 times (n = 10000). Further suppose the event A = {3♠} occurred 188 times (nA = 188). Then

nA/n = 188/10000 = 0.0188

As n increases to infinity (and assuming that the cards are fair), the ratio would approach the probability of A

P[A] = lim (n → ∞) nA/n = 1/52 ≈ 0.0192
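The limiting behavior of the relative frequency can be illustrated by simulation. A sketch, assuming a standard 52-card deck encoded as rank-suit pairs (the encoding is my own):

```python
import random

random.seed(1)                     # fixed seed for reproducibility
deck = [(rank, suit) for suit in "SHDC" for rank in range(1, 14)]

# Estimate P[A] for A = {3 of spades} by relative frequency nA / n
n = 100_000
n_A = sum(1 for _ in range(n) if random.choice(deck) == (3, "S"))
estimate = n_A / n                 # should be close to 1/52 ≈ 0.0192
```

With a large n the estimate settles near 1/52, though any finite run only approximates the limit.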
Suppose the event B was the set of all spades. Then
B = {A♠, 2♠, 3♠, 4♠, 5♠, 6♠, 7♠, 8♠, 9♠, 10♠, J♠, Q♠, K♠}
Now we say that the event B occurs whenever the outcome of a trial is contained in B. In this case, whenever the outcome of a trial is a spade, the outcome belongs to B and we say that B occurred. So, suppose we conducted the above experiment 10,000 times, and the event B occurred 3000 times. Then

nB/n = 3000/10000 = 0.3

As n increases to infinity (and assuming that the cards are fair), the ratio would approach the probability of B

P[B] = lim (n → ∞) nB/n = 13/52 = 1/4

Note that this definition makes sense. Since P[B] = 1/4, it follows that the chance of B occurring in any given trial is 25%. Similarly, the chance of a 4♦ occurring in a particular trial is (1/52) × 100 ≈ 1.9%. However, note that the probability of any event will always be a number between 0 and 1. Now, suppose the event C consists of the set of all diamonds. Then
C = {A♦, 2♦,3♦,4♦,5♦, 6♦,7♦, 8♦,9♦,10♦, J♦, Q♦, K♦}
Let's look at both B and C. Note that none of the members in B belong to C and none of the members in C belong to B. We say that B and C are disjoint and that their intersection is the empty set
B ∩ C = φ
In this case the probability of either B or C occurring is equal to

P[B U C] = P[B] + P[C] = 1/4 + 1/4 = 1/2     (2)

where the U symbol stands for the union of two sets. Note that (2) describes the additive concept of probability: if two events are disjoint, then the probability of their union is the sum of their probabilities.
Finally, let the event Ω equal all possible outcomes. We call Ω the certain event or the sample space. Since every possible outcome is contained in this event, nΩ = n. Therefore

P[Ω] = nΩ/n = 1     (3)
Example 2 Consider the experiment of flipping a fair coin twice and observing the output. What are the possible outcomes? List all possible events. Assuming that all outcomes are equally likely, assign probabilities to each event. The set of all possible outcomes is
Ω = {HH, HT, TH, TT}. There are 2^4 = 16 events. They are
φ, {HH}, {HT}, {TH}, {TT}, {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT}, {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT}, {HH, HT, TH, TT}
Note that {HH, HT, TH, TT} is the certain event. Therefore
P [{HH, HT, TH, TT}] = 1   (4)
However, from the additive property, it follows that
P [{HH, HT, TH, TT}] = P [{HH}] + P [{HT}] + P [{TH}] + P [{TT}]    (5)
but since each outcome is equally likely we have that
P [{HH}] = P [{HT}] = P [{TH}] = P [{TT}]      (6)
Therefore from (5) and (6), it follows that
P [{HH}] = P [{HT}] = P [{TH}] = P [{TT}] = 1/4    (7)
Using the additive concept of probability we can see that
P [{HH, HT}] = P [{HH, TH}] = P [{HH, TT}] =
= P [{HT, TH}] = P [{HT, TT}] = P [{TH, TT}] = 1/2    (8)
P [{HH, HT, TH}] = P [{HH, HT, TT}] = P [{HH, TH, TT}] = P [{HT, TH, TT}] = 3/4        (9)
P[φ] = 0   (10)
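The event probabilities in Example 2 can be verified by brute-force enumeration; a sketch in Python (the representation of outcomes as strings is my own choice):

```python
from itertools import combinations

outcomes = ["HH", "HT", "TH", "TT"]

# An event is any subset of the sample space; enumerate all of them
events = []
for k in range(len(outcomes) + 1):
    events.extend(combinations(outcomes, k))

# Equally likely outcomes: P[event] = |event| / |sample space|
def prob(event):
    return len(event) / len(outcomes)

assert len(events) == 16                      # 2^4 events, including the empty set
assert prob(()) == 0.0                        # P[φ] = 0
assert prob(("HH", "HT")) == 0.5              # matches (8)
assert prob(("HH", "HT", "TH", "TT")) == 1.0  # certain event
```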
Joint Probability
Let the events D and E be defined as follows
D = {A♠, 2♣, 8♦, 4♠}
E = {4♥, K♠, 4♠, 2♣}
Note that
P[D] = 4/52 = 1/13
P[E] = 4/52 = 1/13
Then the intersection of D and E is another event that contains the elements common to D and E
D ∩ E = {4♠, 2♣}
P[D ∩ E] = P[D, E] = P[D and E] = 2/52 = 1/26     (11)
We call P[D ∩ E] = P[D, E] = P[D and E] the joint probability of D and E. It describes the probability of both events occurring.
Conditional Probability
In a number of cases, knowledge about one event provides us additional information about the occurrence of another event. Suppose that we conduct the experiment and find out that D has occurred. Thus we know that the outcome was either A♠, 2♣, 8♦, or 4♠. Does this tell us anything about the occurrence of E? The answer is yes.
Given that D has occurred, we know that the outcome was either A♠, 2♣, 8♦, or 4♠. Since each outcome is equally likely, and we know that these are now the only four possibilities, the probability of each of these outcomes given that D has occurred is 1/4. In other words
P[A♠|D] = P[2♣|D] = P[8♦|D] = P[4♠|D] = 1/4     (12)
Note that P[A|B] means the probability of A given that B has occurred.
Now look at the event E = {4♥, K♠, 4♠, 2♣}. Given that D has occurred and that the possible outcomes were only A♠, 2♣, 8♦, or 4♠, we can conclude that the outcomes 4♥ and K♠ could not have happened, because 4♥ and K♠ are not in D. Therefore
P[4♥|D] = P[K♠|D] = 0     (13)
Therefore, given that we know D has occurred, we can write the probability of E as follows
P[E|D] = P[4♥|D] + P[K♠|D] + P[4♠|D] + P[2♣|D]
= 0 + 0 + 1/4 + 1/4 = 1/2     (14)
We can also compute the probability of E given that D has occurred using the following definition.
Definition 5 Let A and B be two events from the same experiment. Then the conditional probability of A given B, P[A|B], is defined as follows
P[A|B] = P[A, B] / P[B]     (15)
If P[B] = 0, then P[A|B] is undefined. So, for the case of E given D, we have that

P[E|D] = P[E, D]/P[D] = (2/52)/(4/52) = 1/2

Similarly, for the case of A = {3♠} given C (the set of all diamonds) above, we have that

P[A|C] = P[A, C]/P[C] = 0/(1/4) = 0
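These conditional-probability computations can be cross-checked by counting card sets directly. A sketch (cards encoded as rank-suit pairs with A = 1, J = 11, Q = 12, K = 13; the encoding is my own):

```python
deck = [(r, s) for s in "SHDC" for r in range(1, 14)]

D = {(1, "S"), (2, "C"), (8, "D"), (4, "S")}   # {A♠, 2♣, 8♦, 4♠}
E = {(4, "H"), (13, "S"), (4, "S"), (2, "C")}  # {4♥, K♠, 4♠, 2♣}

def P(event):                  # classical probability over a fair deck
    return len(event) / len(deck)

p_cond = P(E & D) / P(D)       # P[E|D] = P[E, D] / P[D]
assert len(E & D) == 2         # the common outcomes are 4♠ and 2♣
assert p_cond == 0.5           # agrees with the direct computation
```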
Another important concept in probability is independence.
Definition 6 Let A and B be two events from the same experiment. Then A and B are said to be independent if the joint probability of A and B is equal to the product of their two individual probabilities.
P [A, B] = P [A] P [B]      (16)
From this it follows that
P[A|B] = P[A, B]/P[B] = P[A]P[B]/P[B] = P[A]     (17)
This implies that if two events are independent, then knowledge about one event provides absolutely no information about the other event.
Random Variables
You may have noticed thus far that we have been working with sets and events that contain odd symbols such as 3♠, 4♠, 5♦, 6♥, .... Other examples, both in this note set and the book, have worked with events with other symbols such as {HH} for the coin problem and {...} for the dice problem. We would like to perform some mathematical analysis and manipulation on these events. However, such a task would be difficult and would not provide much insight.
It would be better if we could assign (or map) each outcome to a number. Then we could work with these numbers using the standard mathematical techniques that we have learned over the years.
Example 3 Consider our card experiment. Suppose we assign each card an integer as follows:
A♠ → 1, 2♠ → 2, ..., K♠ → 13, A♣ → 14, ..., K♣ → 26, A♦ → 27, ..., K♦ → 39, A♥ → 40, ..., K♥ → 52
Then each card is assigned an integer on the real line. Suppose we call this mapping the function X(ζ), where ζ is any outcome.
Such a mapping is called a random variable.
Definition 7 A random variable, X(ζ), is a deterministic function that assigns to each outcome, ζ ∈ Ω, in an experiment, a number.
X : Ω -> R     (18)
So that P [X (ζ) = ∞] = P [X (ζ) = -∞] = 0
The first thing you can notice about a random variable is that
1.  There is nothing random about it
2.  It is not a variable. Rather it is a set function.
However, in performing analysis, it is convenient to treat X(ζ) as a variable. Since the random variable is a function of an outcome, which has an associated probability, the random variable also has an associated probability:
P[X(ζ) = x] = P[ζ = X⁻¹(x)]

Example 4 Consider the same card experiment. Suppose we define a random variable S(ζ) such that
if ζ contains a ♠    S(ζ)    = 1
if ζ contains a ♣    S(ζ)    = 2
if ζ contains a ♦    S(ζ)    = 3
if ζ contains a ♥    S(ζ)    = 4
Note that more than one outcome can be assigned the same number. The function defined by the random variable can be any set function, provided that the probability assigned at ±∞ is zero.
What about the probabilities?
A random variable is a function of the outcomes of an experiment. Therefore, since each outcome has a probability, the number assigned to that function by a random variable, also has a probability. The probability of a given random variable is determined as follows.
P[X(ζ) = x] = P[ζ = X-1(x)]
where X -1(x) is a mapping from the real line, R, to the set of all outcomes, Ω.
Example 5 Consider the random variable X (ζ) defined in Example 3
P[X(ζ) = 1] = P[ζ = X⁻¹(1)] = P[ζ = A♠] = 1/52
P[X(ζ) = 26] = P[ζ = X⁻¹(26)] = P[ζ = K♣] = 1/52
P[X(ζ) = 28.9] = P[ζ = X⁻¹(28.9)] = P[φ] = 0
P[X(ζ) = 0] = P[ζ = X⁻¹(0)] = P[φ] = 0
P[X(ζ) ≤ 2] = P{ζ = X⁻¹((−∞, 2])} = P[ζ = {A♠, 2♠}] = 2/52
Example 6 Consider the random variable S(ζ) defined in Example 4
P[S(ζ) = 1] = P[ζ = S⁻¹(1)]
=    P[ζ = {A♠, 2♠, 3♠, 4♠, 5♠, 6♠, 7♠, 8♠, 9♠, 10♠, J♠, Q♠, K♠}] = 13/52 = 1/4
P[S(ζ) = 2] =    P[ζ = S⁻¹(2)] = 13/52 = 1/4
P[S(ζ) = 2.5] =    P[ζ = S⁻¹(2.5)] = P[φ] = 0
P[2 ≤ S(ζ) ≤ 3] =    P{ζ = S⁻¹([2, 3])} = P[ζ = {all ♣ and all ♦}] = 26/52 = 1/2
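The probabilities in Example 6 follow from simple counting, which can be sketched in code (suit encoding per Example 4; the card representation is my own):

```python
deck = [(r, s) for s in "SCDH" for r in range(1, 14)]
suit_number = {"S": 1, "C": 2, "D": 3, "H": 4}   # S(ζ) from Example 4

def S(card):
    return suit_number[card[1]]

n = len(deck)
assert sum(1 for c in deck if S(c) == 1) / n == 13/52     # P[S = 1] = 1/4
assert sum(1 for c in deck if S(c) == 2.5) / n == 0       # impossible value
assert sum(1 for c in deck if 2 <= S(c) <= 3) / n == 0.5  # all ♣ and all ♦
```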
Cumulative Distribution Functions and Probability Density Functions
Two ways to analyze experiments via random variables are through the Cumulative Distribution Function (CDF) and the probability density function (pdf).
Definition 8 Let X (ζ) be a random variable defined on a sample space Ω. Then the Cumulative Distribution Function (CDF) of X is denoted by the function Fx (x0) and is defined as follows
Fx (x0) = P[X≤ x0]
where the set [X ≤ x0] defines an event or a collection of outcomes.
Example 7 Consider the random variable in Example 3. The CDF of X is

Fx(x0) = (1/52) Σ (k = 1 to 52) u(x0 − k)

where u(·) is the unit step function.
Example 8 Consider the random variable in Example 4. The CDF of S is
Fs(s0) = (1/4)[u(s0 − 1) + u(s0 − 2) + u(s0 − 3) + u(s0 − 4)]

Definition 9 Let X(ζ) be a random variable defined on a sample space Ω. Then the Probability Density Function (pdf) of X is denoted by the symbol fx(x) and is defined as the derivative of its CDF

fx(x) = dFx(x)/dx     (24)

Note that we can recover the CDF from the pdf through integration

Fx(x0) = ∫ (−∞ to x0) fx(λ) dλ     (25)

Therefore fx(x) will always integrate to 1

∫ (−∞ to ∞) fx(x) dx = 1

Furthermore, the probability that X(ζ) lies between x1 and x2 (x1 < x2), endpoints inclusive, can be computed as follows

P[x1 ≤ X ≤ x2] = ∫ (x1 to x2) fx(x) dx
Note that the argument, ζ, has been dropped from the random variable, X. For the remainder of this discussion, we will assume that the dependence of a random variable X on ζ is implicit and omit the argument for convenience
Moments of Random Variables
Moments of random variables include a number of quantities such as averages, variances, standard deviations, etc. They are particularly useful in communications because they provide valuable information about a random variable without having to know its full statistics (i.e., the CDF or pdf).
Definition 10 Let X be a random variable with the pdf fx(x). Then the expected value or average of X is

mx = E[X] = ∫ (−∞ to ∞) x fx(x) dx
Definition 11 Let X be a random variable with pdf fx(x). Then, the variance of X is

σx² = E[(X − mx)²] = ∫ (−∞ to ∞) (x − mx)² fx(x) dx

and the standard deviation of X is

σx = √(σx²)
Definition 12 Let X be a random variable with pdf fx(x). Then, the nth moment of X is

E[Xⁿ] = ∫ (−∞ to ∞) xⁿ fx(x) dx

and the nth central moment of X is

E[(X − mx)ⁿ] = ∫ (−∞ to ∞) (x − mx)ⁿ fx(x) dx
Finally, let X be a random variable and let g (X) be some function of that random variable. In most cases this g (X) will transform X into a new random variable, which we will call Y
Y = g(X)     (33)
which has its own CDF and pdf, which must be found through the use of random variable transformation techniques. However, if we are interested in the moments of Y, such as its average, then knowledge about Y's CDF or pdf is not required. Rather, we can just use the already known CDF and pdf of X.
Theorem 1 Let X be a random variable with pdf fx(x). Let Y = g(X) be some transformation of X. Then the expected value or average of Y is

E[Y] = E[g(X)] = ∫ (−∞ to ∞) g(x) fx(x) dx
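Theorem 1 can be checked numerically for a simple case. The sketch below assumes X uniform on [0, 1] and g(X) = 2X + 1 (my own illustrative choices), so E[Y] = ∫ g(x) fx(x) dx = 2:

```python
# Midpoint Riemann sum of ∫_0^1 (2x + 1) * 1 dx, since fx(x) = 1 on [0, 1]
N = 10_000
dx = 1 / N
e_y = sum((2 * ((i + 0.5) * dx) + 1) * dx for i in range(N))
assert abs(e_y - 2.0) < 1e-6      # E[Y] = 2, without ever finding fy(y)
```

The point of the theorem is visible here: the average of Y is obtained from the pdf of X alone.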
Example 9 Let X be a random variable with the following pdf

Functions of a Random Variable
Definition 1 Consider a function of a random variable, g(X). The result is another random variable, which we will call Y: Y = g(X). However, the CDF and pdf of Y will be different from those of X. In fact, we will use information about Fx(x), fx(x), and g(X) to determine Fy(y) and fy(y).
The distribution of Y = g(X)
The CDF of the random variable Y is nothing more than the probability of the event {Y ≤ y}. This consists of all outcomes ζ ∈ Ω such that Y(ζ) ≤ y. This is equivalent to the set of all outcomes ζ ∈ Ω such that g[X(ζ)] ≤ y, which, for a monotonically increasing g, is equivalent to the set of all outcomes ζ ∈ Ω such that X(ζ) ≤ g⁻¹(y). So
Fy(y) = P({Y ≤ y}) = P({g(X) ≤ y}) = P({X ≤ g⁻¹(y)})     (36)
The Probability Density Function (pdf) of Y = g (X)
Once the CDF of Y is computed, the computation of the pdf of Y is straightforward
fY(y) = dFY(y) / dy
Example 10 Let X be a random variable. Let
Y = g(X) = 2X + 1
Find Fy(y) and fy(y). To find Fy(y), we must evaluate the event {Y ≤ y}, where Y is the random variable and y is simply a number.
The event {ζ : 2X(ζ) + 1 ≤ y} and the event {ζ : X(ζ) ≤ (y − 1)/2} contain exactly the same outcomes. Therefore
{Y ≤ y} = {2X + 1 ≤ y} = {X ≤ (y − 1)/2} = {X ≤ g⁻¹(y)}
Note that the inverse function g⁻¹(y) is
g⁻¹(y) = (y − 1)/2
Fy(y) = P[Y ≤ y] = P[2X + 1 ≤ y] = P[X ≤ (y − 1)/2] = Fx((y − 1)/2)
Now suppose X is uniformly distributed between 0 and 1. Then

Fx(x) = 0 for x < 0,  Fx(x) = x for 0 ≤ x ≤ 1,  Fx(x) = 1 for x > 1

We can now plot Fy(y). Let's plug in some values
y = 0    Fy(0) = Fx((0 − 1)/2) = Fx(−1/2) = 0
y = 1    Fy(1) = Fx((1 − 1)/2) = Fx(0) = 0
y = 2    Fy(2) = Fx((2 − 1)/2) = Fx(1/2) = 1/2
y = 3    Fy(3) = Fx((3 − 1)/2) = Fx(1) = 1
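The table of values above can be reproduced directly from Fx. A sketch, with the Uniform(0, 1) CDF written as a clamp:

```python
def F_X(x):
    # CDF of a Uniform(0, 1) random variable
    return min(max(x, 0.0), 1.0)

def F_Y(y):
    # F_Y(y) = F_X((y - 1) / 2) for Y = 2X + 1
    return F_X((y - 1) / 2)

assert F_Y(0) == 0.0
assert F_Y(1) == 0.0
assert F_Y(2) == 0.5
assert F_Y(3) == 1.0
```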

The density of Y, fy(y), is simply

fy(y) = dFy(y)/dy = (1/2) fx((y − 1)/2)

which, for X uniform on [0, 1], gives fy(y) = 1/2 for 1 ≤ y ≤ 3 and 0 otherwise.
We can also compute fy(y) directly from fx as follows

fy(y) = fx(g⁻¹(y)) |dg⁻¹(y)/dy|

Substituting g⁻¹(y) = (y − 1)/2 into the above equation, we get the same result.
Example 11 Let X be a random variable. Let
Y = g(X) = X²
Then, for y ≥ 0, we have that Y ≤ y when X² ≤ y, i.e., when −√y ≤ X ≤ √y.
Note that in this case there are two inverse functions for g(X)
g₁⁻¹(y) = √y
g₂⁻¹(y) = −√y
Fy(y) = P(Y ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = Fx(√y) − Fx(−√y)

Now, for y < 0, there are no values of x such that x² ≤ y. Therefore
Fy(y) = 0,  y < 0
Again, suppose X is uniform on the interval [0, 1]. Then we have that
Fy(y) = Fx(√y) = √y for 0 ≤ y ≤ 1, and hence fy(y) = 1/(2√y) for 0 < y ≤ 1

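The two-sided inverse in Example 11 can be checked the same way; a sketch assuming X uniform on [0, 1]:

```python
from math import sqrt

def F_X(x):
    # CDF of a Uniform(0, 1) random variable
    return min(max(x, 0.0), 1.0)

def F_Y(y):
    # Y = X^2: F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y)) for y >= 0, else 0
    if y < 0:
        return 0.0
    return F_X(sqrt(y)) - F_X(-sqrt(y))

assert F_Y(-1) == 0.0
assert F_Y(0.25) == 0.5      # P[X^2 <= 1/4] = P[X <= 1/2]
assert F_Y(1.0) == 1.0
```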


Figure 1: pdf of a Gaussian Random Variable
Common Random Variables
The Gaussian Random Variable
The Gaussian random variable is the most common of all random variables. It is used to characterize a number of random phenomena, from the noise in most communication systems to the random fluctuations of the desired received voltage in non-line-of-sight wireless communication systems such as cellular systems and personal communication systems. The Gaussian random variable has a pdf which is completely defined by its mean m and variance σ². Let X be a Gaussian random variable with a mean of mx and a variance of σx². Then fx(x) is

fx(x) = (1/(σx√(2π))) exp(−(x − mx)²/(2σx²))

and is plotted in Figure 1. The CDF of X is found in the usual way

Fx(x0) = ∫ (−∞ to x0) (1/(σx√(2π))) exp(−(x − mx)²/(2σx²)) dx
This integral cannot be evaluated in closed form. However, the integral is well tabulated. A tabulation common to communication engineers is the Q-function, where

Q(x) = ∫ (x to ∞) (1/√(2π)) exp(−λ²/2) dλ

Other well-tabulated integrals include the error function and the complementary error function. The error function is defined as follows

erf(x) = (2/√π) ∫ (0 to x) exp(−t²) dt
The complementary error function is

erfc(x) = 1 − erf(x) = (2/√π) ∫ (x to ∞) exp(−t²) dt
Note that the complementary error function and the Q-function are related as follows

Q(x) = (1/2) erfc(x/√2)
Note that P[x1 < X < x2] is computed as follows

P[x1 < X < x2] = Q((x1 − mx)/σx) − Q((x2 − mx)/σx)
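Since Python's math module provides erfc, the Q-function relation above gives a quick way to evaluate Gaussian probabilities; the bulb example from the start of these notes is reused as a check:

```python
from math import erfc, sqrt

def Q(x):
    # Q(x) = (1/2) erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

def prob_between(x1, x2, m, sigma):
    # P[x1 < X < x2] = Q((x1 - m)/sigma) - Q((x2 - m)/sigma)
    return Q((x1 - m) / sigma) - Q((x2 - m) / sigma)

assert abs(Q(0) - 0.5) < 1e-12                         # symmetry of the Gaussian
assert abs(Q(2.5) - 0.0062) < 1e-4                     # bulb problem: P[X < 2000]
assert abs(prob_between(-1, 1, 0, 1) - 0.6827) < 1e-3  # the one-sigma rule
```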
The Rayleigh Random Variable
Let X and Y be two independent Gaussian random variables with mx = my = 0 and σx² = σy² = σ². Suppose we define a new random variable Z from the transformation

Z = √(X² + Y²)

Now, if X and Y represent the received voltages from two signal components in a wireless channel, then Z represents the received envelope. Z is called a Rayleigh random variable. The pdf of Z is

fz(z) = (z/σ²) exp(−z²/(2σ²)),  z ≥ 0
and is plotted below for various values of σ².

The CDF of Z is found by integrating the pdf. Therefore

Fz(z) = 1 − exp(−z²/(2σ²)),  z ≥ 0
The received signal envelope in a cellular or PCS system, where line-of-sight is not established, is typically modeled as a Rayleigh random variable.
The Exponential Random Variable
Let Z be a Rayleigh random variable. Suppose we define a new random variable, W, such that
W = Z²
If Z represents the voltage envelope of a signal, then W represents the power in the signal, to within a constant. W has the following pdf

fw(w) = (1/(2σ²)) exp(−w/(2σ²)),  w ≥ 0
W is called an exponential random variable, and its pdf is plotted below.

The CDF of W is

Fw(w) = 1 − exp(−w/(2σ²)),  w ≥ 0
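The chain Gaussian → Rayleigh → exponential can be demonstrated by simulation; a sketch with σ = 1 (my own parameter choice):

```python
import random
from math import exp

random.seed(0)
sigma = 1.0
n = 200_000

# W = Z^2 = X^2 + Y^2 with X, Y zero-mean Gaussians of variance sigma^2
w_samples = [random.gauss(0, sigma)**2 + random.gauss(0, sigma)**2
             for _ in range(n)]

w0 = 2.0
empirical = sum(1 for w in w_samples if w <= w0) / n
theoretical = 1 - exp(-w0 / (2 * sigma**2))    # Fw(2), about 0.632
assert abs(empirical - theoretical) < 0.01
```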
Exponential random variables are used to model the power of cellular and PCS systems where line-of-sight propagation is not established
Uniform Random Variable
In systems or signals where the phase is not known, it is generally modeled as a random variable θ, which is uniformly distributed between 0 and 2π. The pdf of such a random variable is

fθ(θ) = 1/(2π) for 0 ≤ θ ≤ 2π, and 0 otherwise

The CDF is

Fθ(θ) = 0 for θ < 0,  Fθ(θ) = θ/(2π) for 0 ≤ θ ≤ 2π,  Fθ(θ) = 1 for θ > 2π
The Central Limit Theorem
Suppose we take a large number of random variables and sum them together. The central limit theorem states that the resultant random variable will have a Gaussian distribution. This is one of the reasons why Gaussian random variables are so common. A more formal statement of the theorem follows
Theorem 2 Let Xi, i = 1, 2, ..., N, be a collection of independent random variables, each with a mean of 0 and a variance of 1. Define a new random variable, Y, as follows

Y = (1/√N) Σ (i = 1 to N) Xi

Then, as N → ∞, Y is a Gaussian random variable with a mean of 0 and a variance of 1, i.e., my = 0, σy² = 1.
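A quick simulation makes Theorem 2 concrete. The sketch below uses uniform variables scaled to unit variance (an arbitrary choice; any zero-mean, unit-variance distribution works):

```python
import random

random.seed(0)

# Uniform on [-sqrt(3), sqrt(3)] has mean 0 and variance (b - a)^2 / 12 = 1
def unit_var_sample():
    return random.uniform(-3**0.5, 3**0.5)

# Y = (1/sqrt(N)) * sum of N i.i.d. samples should be near-Gaussian(0, 1)
N, trials = 50, 20_000
ys = [sum(unit_var_sample() for _ in range(N)) / N**0.5 for _ in range(trials)]

mean = sum(ys) / trials
var = sum((y - mean)**2 for y in ys) / trials
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.1
```

A histogram of ys would show the familiar bell shape even though each summand is uniform.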
Rician Random Variables
We just learned that when line-of-sight does not exist in a cellular system, the received envelope follows a Rayleigh distribution. However, if line-of-sight is established, then the signal follows a Rician distribution, which has the following pdf

fz(z) = (z/σ²) exp(−(z² + v²)/(2σ²)) I₀(zv/σ²),  z ≥ 0

where I₀(·) is the modified Bessel function of order 0. Note that when v = 0, I₀(0) = 1 and Z degenerates to a Rayleigh random variable.

3) Define the concept of sample space, sample points and events in context of
probability theory.

Define the concept of sample space in context of probability theory
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set").
For example, if the experiment is tossing a coin, the sample space is typically the set {head, tail}. For tossing two coins, the corresponding sample space would be {(head,head), (head,tail), (tail,head), (tail,tail)}. For tossing a single six-sided die, the typical sample space is {1, 2, 3, 4, 5, 6} (in which the result of interest is the number of pips facing up).
A well-defined sample space is one of three basic elements in a probabilistic model (a probability space); the other two are a well-defined set of possible events (a sigma-algebra) and a probability assigned to each event (a probability measure).


Sample space, sample points and events
Let Ω be a set of things that can happen. We say that Ω is a sample space, or space of all possible outcomes, if it satisfies the following two properties:
1.   Mutually exclusive outcomes. Only one of the things in Ω will happen. That is, when we learn that ω ∈ Ω has happened, then we also know that none of the other things in Ω has happened.
2.   Exhaustive outcomes. At least one of the things in Ω will happen.
An element ω ∈ Ω is called a sample point, or a possible outcome.
When, and if, we learn that ω has happened, ω is called the realized outcome.
A subset E ⊆ Ω is called an event. In the section below, you will see that not every subset of the sample space is, strictly speaking, an event; however, on a first reading you can be happy with this definition.
Note that Ω itself is an event, because every set is a subset of itself, and the empty set ∅ is also an event, because it can be considered a subset of Ω.
Example Suppose that we toss a die. Six numbers, from 1 to 6, can appear face up, but we do not yet know which one of them will appear. The sample space is Ω = {1, 2, 3, 4, 5, 6}. Each of the six numbers is a sample point. The outcomes are mutually exclusive, because only one number at a time can appear face up. The outcomes are also exhaustive, because at least one of the six numbers will appear face up after we toss the die. Define E = {1, 3, 5}. E is an event (a subset of Ω). It can be described as "an odd number appears face up". Now define F = {6}. Also F is an event, and it can be described as "the number 6 appears face up".
Probability and its properties
The probability of an event is a real number, attached to the event, that tells us how likely that event is. We denote the probability of an event E by P(E).
Probability needs to satisfy the following properties:
1.   Range. For any event E, 0 ≤ P(E) ≤ 1.
2.   Sure thing. P(Ω) = 1.
3.   Sigma-additivity (or countable additivity). Let E₁, E₂, E₃, ... be a sequence of events. Let all the events in the sequence be mutually exclusive, i.e., Eᵢ ∩ Eⱼ = ∅ if i ≠ j. Then P(E₁ ∪ E₂ ∪ E₃ ∪ ...) = P(E₁) + P(E₂) + P(E₃) + ...
Property (1) is self-explanatory. It just means that the probability of an event is a real number between 0 and 1.
Property (2) says that at least one of all the things that can possibly happen will happen with probability 1.
Property (3) is a bit more cumbersome. It can be proved (see below) that if sigma-additivity holds, then the following also holds: if E and F are two disjoint events, then P(E ∪ F) = P(E) + P(F).
This property, called finite additivity, while very similar to sigma-additivity, is easier to interpret. It says that if two events are disjoint, then the probability that either one or the other happens is equal to the sum of their individual probabilities.
Example Suppose that we flip a coin. The possible outcomes are either tail (T) or head (H), i.e., Ω = {T, H}. There are a total of four subsets of Ω (events): Ω itself, the empty set ∅, the event {T}, and the event {H}. The following assignment of probabilities satisfies the properties enumerated above: P(∅) = 0, P({T}) = 1/2, P({H}) = 1/2, P(Ω) = 1. All these probabilities are between 0 and 1, so the range property is satisfied. P(Ω) = 1, so the sure thing property is satisfied. Sigma-additivity is also satisfied, because P({T} ∪ {H}) = P({T}) + P({H}) = 1, and similarly for the other couples of disjoint sets: ∅ and {T}, ∅ and {H}, ∅ and Ω.
Before ending this section, two remarks are in order. First, we have not discussed the interpretations of probability, but below you can find a brief discussion of the interpretations of probability. Second, we have been somewhat sloppy in defining events and probability, but you can find a more rigorous definition of probability below.
Other properties of probability
The following subsections discuss other properties enjoyed by probability.
The probability of the empty set is 0
Here we prove that P(∅) = 0.
Define a sequence of events as follows: E₁ = ∅, E₂ = ∅, E₃ = ∅, ... This is a sequence of disjoint events, because the empty set is disjoint from any other set. Then sigma-additivity implies P(∅) = P(∅) + P(∅) + P(∅) + ..., which can hold only if P(∅) = 0.
A sigma-additive function is additive
A sigma-additive function is also additive: if E and F are disjoint events, then P(E ∪ F) = P(E) + P(F).
To see this, define a sequence of events as follows: E₁ = E, E₂ = F, E₃ = ∅, E₄ = ∅, ... This is a sequence of disjoint events. Then sigma-additivity gives P(E ∪ F) = P(E) + P(F) + P(∅) + P(∅) + ... = P(E) + P(F), since P(∅) = 0.
Probability of the complement
Let E be an event and Eᶜ its complement (i.e., the set of all elements of Ω that do not belong to E). Then P(Eᶜ) = 1 − P(E).
Note that Ω = E ∪ Eᶜ and that E and Eᶜ are disjoint sets. Then, using the sure thing property and finite additivity, we obtain 1 = P(Ω) = P(E) + P(Eᶜ), which implies P(Eᶜ) = 1 − P(E).
In other words, the probability that an event does not occur is equal to one minus the probability that it occurs.
Probability of a union
We have already seen how to compute P(E ∪ F) in the special case in which E and F are two disjoint events. In the more general case, in which they are not necessarily disjoint, the formula is:
P(E ∪ F) = P(E) + P(F) − P(E ∩ F)
This is proved as follows. The event E ∪ F can be written as (E \ F) ∪ (E ∩ F) ∪ (F \ E), and the three events on the right hand side are disjoint. Thus P(E ∪ F) = P(E \ F) + P(E ∩ F) + P(F \ E). Since P(E) = P(E \ F) + P(E ∩ F) and P(F) = P(F \ E) + P(E ∩ F), adding the two and subtracting P(E ∩ F) once gives exactly the right hand side, which proves the formula.
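The union formula is easy to verify on the die-toss sample space used earlier in this section (the specific events are my own illustrative picks):

```python
omega = {1, 2, 3, 4, 5, 6}

def P(event):                  # equally likely outcomes
    return len(event) / len(omega)

E = {1, 3, 5}                  # "an odd number appears face up"
F = {5, 6}
lhs = P(E | F)
rhs = P(E) + P(F) - P(E & F)
assert abs(lhs - rhs) < 1e-12  # inclusion-exclusion holds
```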
Monotonicity of probability
If two events E and F are such that E ⊆ F, then P(E) ≤ P(F).
This is easily proved using additivity: F = E ∪ (F \ E), where the two sets on the right hand side are disjoint, so P(F) = P(E) + P(F \ E) ≥ P(E), where the latter inequality is a consequence of the fact that P(F \ E) ≥ 0 (by the range property of probability).
In other words, if E occurs less often than F, because the latter contemplates more occurrences, then the probability of E must be less than the probability of F.
Interpretations of probability
This subsection briefly discusses some common interpretations of probability. Although none of these interpretations is sufficient per se to clarify the meaning of probability, they all touch upon important aspects of probability.
Classical interpretation of probability
According to the classical definition of probability, when all the possible outcomes of an experiment are equally likely, the probability of an event is the ratio between the number of outcomes that are favorable to the event and the total number of possible outcomes. While intuitive, this definition has two main drawbacks:
1.   it is circular, because it uses the concept of probability to define probability: it is based on the assumption of 'equally likely' outcomes, where equally likely means 'having the same probability';
2.   it is limited in scope, because it does not allow us to define probability when the possible outcomes are not all equally likely.
Frequentist interpretation of probability
According to the frequentist definition of probability, the probability of an event is the relative frequency of the event itself, observed over a large number of repetitions of the same experiment. In other words, it is the limit to which the ratio
(number of occurrences of the event) / (number of repetitions of the experiment)
converges when the number of repetitions of the experiment tends to infinity. Despite its intuitive appeal, this definition of probability also has some important drawbacks:
1.   it assumes that all probabilistic experiments can be repeated many times, which is false;
2.   it is also somewhat circular, because it implicitly relies on a Law of Large Numbers, which can be derived only after having defined probability.
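The limiting relative frequency that the frequentist definition appeals to can be illustrated by simulation. This sketch assumes a fair coin and uses Python's random module; the fixed seed is only there to make the illustration reproducible.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Relative frequency of heads over an increasing number of fair-coin flips.
heads = 0
for n in range(1, 100_001):
    heads += random.random() < 0.5   # one flip: heads with probability 1/2
    if n in (100, 10_000, 100_000):
        print(f"after {n:>7} flips: relative frequency = {heads / n:.4f}")

# As n grows, the printed ratio settles near 0.5, the probability that
# the frequentist definition assigns to "heads".
```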
Subjectivist interpretation of probability
According to the subjectivist definition of probability, the probability of an event is related to the willingness of an individual to accept bets on that event. Suppose a lottery ticket pays off 1 dollar in case the event occurs and 0 in case the event does not occur. An individual is asked to set a price for this lottery ticket, at which she must be indifferent between being a buyer or a seller of the ticket. The subjective probability of the event is defined to be equal to the price thus set by the individual. This definition of probability also has some drawbacks:
1.   different individuals can set different prices, therefore preventing an objective assessment of probabilities;
2.   the price an individual is willing to pay to participate in a lottery can be influenced by other factors that have nothing to do with probability; for example, an individual's betting behavior can be influenced by her preferences.
Rigorous definitions
A more rigorous definition of event
The definition of event given above is not entirely rigorous. Often, statisticians work with probability models where some subsets of the sample space are not considered events. This happens mainly for the following two reasons:
1.   sometimes, the sample space is a really complicated set; to make things simpler, attention is restricted to only some subsets of the sample space;
2.   sometimes, it is possible to assign probabilities only to some subsets of the sample space; in these cases, only the subsets to which probabilities can be assigned are considered events.
Denote by F the space of events, i.e. the set of subsets of Ω that are considered events. In rigorous probability theory, F is required to be a sigma-algebra.

Definition F is a sigma-algebra on Ω if it is a set of subsets of Ω satisfying the following three properties:
1.   Whole set. Ω ∈ F.
2.   Closure under complementation. If E ∈ F then also Eᶜ ∈ F (the complement Eᶜ is the set of all elements of Ω that do not belong to E).
3.   Closure under countable unions. If E₁, E₂, E₃, ... is a sequence of subsets of Ω belonging to F, then:

E₁ ∪ E₂ ∪ E₃ ∪ ... ∈ F
Why is a space of events required to satisfy these properties? Besides a number of mathematical reasons, it seems pretty intuitive that they must be satisfied. Property 1) means that the space of events must include the event Ω, "something will happen", quite a trivial requirement! Property 2) means that if "one of the things in the set A will happen" is considered an event, then also "none of the things in the set A will happen" is considered an event. This is quite natural: if you are considering the possibility that an event will happen, then, by necessity, you must also be simultaneously considering the possibility that the same event will not happen. Property 3) is a bit more complex. However, the following property, implied by 3), is probably easier to interpret:

if A ∈ F and B ∈ F, then A ∪ B ∈ F

It means that if "one of the things in A will happen" and "one of the things in B will happen" are considered two events, then also "either one of the things in A or one of the things in B will happen" must be considered an event. This simply means that if you are able to separately assess the possibility of two events happening, then, of course, you must be able to assess the possibility of either one or the other happening. Property 3) simply extends this intuitive property to countable collections of events: the extension is needed for mathematical reasons, to derive certain continuity properties of probability measures.
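On a finite sample space, the power set (the set of all subsets) is always a sigma-algebra. The following sketch, introduced here for illustration, checks the three defining properties directly for a sample space of three outcomes; on a finite space, closure under countable unions reduces to closure under finite unions.

```python
from itertools import chain, combinations

omega = frozenset({1, 2, 3})

def power_set(s):
    """All subsets of s, each as a frozenset."""
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

F = power_set(omega)   # the largest possible space of events on omega

# 1. Whole set: omega itself is an event.
assert omega in F
# 2. Closure under complementation.
assert all(omega - E in F for E in F)
# 3. Closure under unions (countable unions reduce to finite ones here).
assert all(E1 | E2 in F for E1 in F for E2 in F)
```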
A more rigorous definition of probability
The definition of probability given above was not entirely rigorous. Now that we have defined sigma-algebras and spaces of events, we can make it completely rigorous.
Definition Let F be a sigma-algebra on the sample space Ω. A function P from F to [0, 1] is a probability measure if and only if it satisfies the following two properties:
1.   Sure thing. P(Ω) = 1.
2.   Sigma-additivity. Let E₁, E₂, E₃, ... be any sequence of elements of F such that i ≠ j implies Eᵢ ∩ Eⱼ = ∅. Then:

P(E₁ ∪ E₂ ∪ E₃ ∪ ...) = P(E₁) + P(E₂) + P(E₃) + ...
Nothing new has been added to the definition given above. This definition just clarifies that a probability measure is a function defined on a sigma-algebra of events. Hence, it is not possible to properly speak of probability for subsets of the sample space that do not belong to the sigma-algebra.
A triple (Ω, F, P) is called a probability space.
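On a finite sample space, a probability measure is fully determined by the probabilities of the individual outcomes. A minimal sketch (the loaded-die weights below are hypothetical, chosen only so that they sum to 1):

```python
from fractions import Fraction

# A hypothetical loaded die: a weight for each of the six outcomes.
weights = {1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8),
           4: Fraction(1, 8), 5: Fraction(1, 8), 6: Fraction(1, 4)}
omega = set(weights)

def P(event):
    """Probability measure induced by the outcome weights."""
    return sum(weights[w] for w in event)

assert P(omega) == 1                 # sure thing: P(omega) = 1
A, B = {1, 2}, {5, 6}                # two disjoint events
assert A & B == set()
assert P(A | B) == P(A) + P(B)       # additivity on disjoint events
```

Here the space of events is the power set of Ω, and sigma-additivity reduces to finite additivity because Ω is finite.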

4) What is the difference between objective and subjective probability?
A probability provides a quantitative description of the likely occurrence of a particular event. Probability is conventionally expressed on a scale from 0 to 1; a rare event has a probability close to 0, a very common event has a probability close to 1.
The probability of an event has been defined as its long-run relative frequency. It has also been thought of as a personal degree of belief that a particular event will occur (subjective probability).
In some experiments, all outcomes are equally likely. For example if you were to choose one winner in a raffle from a hat, all raffle ticket holders are equally likely to win, that is, they have the same probability of their ticket being chosen. This is the equally-likely outcomes model and is defined to be:
P(E) = (number of outcomes corresponding to event E) / (total number of outcomes)
1.   The probability of drawing a spade from a pack of 52 well-shuffled playing cards is 13/52 = 1/4 = 0.25 since
event E = 'a spade is drawn';
the number of outcomes corresponding to E = 13 (spades);
the total number of outcomes = 52 (cards).
2.   When tossing a coin, we assume that the results 'heads' or 'tails' each have equal probabilities of 0.5.
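The spade calculation above can be written out directly under the equally-likely-outcomes model. In this sketch the deck is built from rank/suit pairs (the encoding is just for illustration):

```python
from fractions import Fraction

suits = ["clubs", "diamonds", "hearts", "spades"]
ranks = range(1, 14)                             # ace (1) through king (13)
deck = [(r, s) for r in ranks for s in suits]    # 52 equally likely cards

favourable = [card for card in deck if card[1] == "spades"]
p_spade = Fraction(len(favourable), len(deck))   # 13/52
print(p_spade)  # 1/4
```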

Subjective Probability
A subjective probability describes an individual's personal judgement about how likely a particular event is to occur. It is not based on any precise computation but is often a reasonable assessment by a knowledgeable person.
Like all probabilities, a subjective probability is conventionally expressed on a scale from 0 to 1; a rare event has a subjective probability close to 0, a very common event has a subjective probability close to 1.
A person's subjective probability of an event describes his/her degree of belief in the event.
A Rangers supporter might say, "I believe that Rangers have probability of 0.9 of winning the Scottish Premier Division this year since they have been playing really well."

Independent Events
Two events are independent if the occurrence of one of the events gives us no information about whether or not the other event will occur; that is, the events have no influence on each other.
In probability theory we say that two events, A and B, are independent if the probability that they both occur is equal to the product of the probabilities of the two individual events, i.e.

P(A ∩ B) = P(A) × P(B)
The idea of independence can be extended to more than two events. For example, A, B and C are independent if:
a.   A and B are independent; A and C are independent and B and C are independent (pairwise independence);
b.   P(A ∩ B ∩ C) = P(A) × P(B) × P(C).
If two events with positive probability are independent then they cannot be mutually exclusive (disjoint), and vice versa.
Suppose that a man and a woman each have a pack of 52 playing cards. Each draws a card from his/her pack. Find the probability that they each draw the ace of clubs.
We define the events:
A = the event that the man draws the ace of clubs, so P(A) = 1/52
B = the event that the woman draws the ace of clubs, so P(B) = 1/52
Clearly events A and B are independent so:
P(A ∩ B) = P(A) × P(B) = 1/52 × 1/52 = 1/2704 ≈ 0.00037
That is, there is a very small chance that the man and the woman will both draw the ace of clubs.
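The two-pack calculation can be reproduced by enumerating the joint sample space, which also confirms that multiplying the two marginal probabilities is legitimate here (a sketch in which the cards are simply numbered 0–51):

```python
from fractions import Fraction

cards = range(52)   # one pack; say card 0 is the ace of clubs
# Joint sample space: every (man's card, woman's card) pair, all equally likely.
pairs = [(m, w) for m in cards for w in cards]

both = [(m, w) for (m, w) in pairs if m == 0 and w == 0]
p = Fraction(len(both), len(pairs))
assert p == Fraction(1, 52) * Fraction(1, 52)   # 1/2704, about 0.00037
```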

'Subjective Probability'

A probability derived from an individual's personal judgment about whether a specific outcome is likely to occur. Subjective probabilities contain no formal calculations and only reflect the subject's opinions and past experience.

Subjective probabilities differ from person to person. Because the probability is subjective, it contains a high degree of personal bias. An example of subjective probability could be asking New York Yankees fans, before the baseball season starts, the chances of New York winning the World Series. While there is no absolute mathematical proof behind the answer to the example, fans might still reply in actual percentage terms, such as the Yankees having a 25% chance of winning the World Series.

Objective Probability

The probability that an event will occur based on an analysis in which each measure is based on a recorded observation, rather than a subjective estimate. Objective probabilities are a more accurate way to determine probabilities than observations based on subjective measures, such as personal estimates.

For example, one could determine the objective probability that a coin will land "heads" up by flipping it 100 times and recording each observation. When performing any statistical analysis, it is important for each observation to be an independent event that has not been subject to manipulation. The less biased each observation is, the less biased the end probability will be.

Objective probability is where I know that if I toss a fair coin enough times, it'll turn up heads 50% of the time. Subjective probability is where I think there's a 10% chance it'll rain tomorrow, and I don't care to repeat the event. The former is an informed statement about a system. The latter is our best guess about an event. The former number is a constant, if we've done the calculations right. The latter number can change as our knowledge of the event increases.


Leo Lingham


©2017 All rights reserved.