Probability and Probability Distributions (2nd Semester)
Probability: The term probability refers to the chance of an event happening or not happening. The word "chance" indicates that there is an element of uncertainty about the statement, e.g. "the chances of winning the match are 50:50". The theory of probability provides a numerical measure of this element of uncertainty.

Random Experiment: A random experiment is any attempt or operation which can be performed a large number of times under essentially the same conditions, but whose results or outcomes cannot be predicted with certainty.

Events: The outcomes of an experiment, or sets of outcomes of an experiment, are known as events. e.g. In tossing two coins the outcomes are HH, HT, TH, TT. Define the event E: at least one head occurs; then E = {HH, HT, TH}.

Types of Events.

Exhaustive Events: All possible outcomes of the experiment together are known as exhaustive events. e.g. In throwing a die the exhaustive outcomes are 1, 2, 3, 4, 5, 6.
Mutually Exclusive Events: Events are said to be mutually exclusive if the happening of any one of them precludes the happening of all the others. e.g. In throwing a die, only one face can occur in a single trial.

Equally Likely Events: If all possible outcomes of an experiment have an equal chance of occurring, then the events are equally likely. e.g. In tossing an unbiased coin, head and tail are equally likely events.
Impossible Event: An event that contains no sample point is called an impossible event and is denoted by ∅.

Sample Space: The sample space, denoted by Ω (omega), is the collection of all possible outcomes of an experiment. e.g. Throwing a single die: Ω = {1, 2, 3, 4, 5, 6}. Tossing three coins at a time: Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. An event is a subset of the sample space.

Mathematical or Classical Definition of Probability: Suppose a random experiment results in N exhaustive, mutually exclusive and equally likely outcomes.
Out of these N outcomes, let m be favourable to the happening of an event E. Then

P(E) = m/N.

If p denotes the probability of the happening of E and q the probability of its non-happening, then

p = P(E) = m/N,  q = P(E′) = (N − m)/N = 1 − p,

where E′ denotes the non-happening of E; p and q are non-negative numbers with p + q = 1.

Statistical or Empirical Definition of Probability: This definition was given by von Mises. If a trial is repeated a number of times under essentially the same conditions, then the limiting value of the ratio of the number of happenings of an event to the total number of trials, as the number of trials tends to infinity, is called the probability of that event, provided the limit is finite and unique. If n is the number of trials and m is the number of happenings of the event E, then

P(E) = lim (n→∞) m/n.
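The classical definition above can be illustrated by direct enumeration; a small Python sketch for the event "at least one head" when three coins are tossed:

```python
from itertools import product
from fractions import Fraction

# All 8 equally likely outcomes of tossing three coins (the sample space).
omega = list(product("HT", repeat=3))

# Favourable cases: at least one head appears.
favourable = [w for w in omega if "H" in w]

# Classical probability: m favourable cases out of N equally likely ones.
p = Fraction(len(favourable), len(omega))
q = 1 - p                     # probability of no head
print(p, q)                   # 7/8 1/8
```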
However, this definition cannot be applied exactly in practice, since the number of trials can never actually be made infinite, and the experimental conditions may not remain homogeneous over a large number of repetitions.
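The limiting behaviour can still be illustrated by simulation: as the number of trials grows, the relative frequency of an event settles near its classical probability. A sketch (the seed and trial counts are arbitrary choices):

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility

def relative_frequency(n_trials):
    """Relative frequency m/n of rolling a six in n_trials die rolls."""
    m = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 6)
    return m / n_trials

# The ratio m/n drifts toward 1/6 ≈ 0.1667 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```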
Axiomatic Approach to Probability: This definition of probability is based on the following axioms. Let S be the sample space and A any event in S. Then P(A) is called a probability function defined on the sample space provided the following basic axioms are satisfied:

(i) P(A) is a real number with P(A) ≥ 0;
(ii) P(S) = 1, i.e. the total probability is 1;
(iii) If A1, A2, …, An are mutually exclusive events, then P(A1 ∪ A2 ∪ … ∪ An) = P(A1) + P(A2) + … + P(An).
Some important Theorems:
1. The probability of an impossible event is zero, i.e. P(∅) = 0.
Proof. S ∪ ∅ = S, so P(S ∪ ∅) = P(S). Since S and ∅ are mutually exclusive, by the conditions of the probability function P(S) + P(∅) = P(S) = 1, hence P(∅) = 0.

2. P(A′) = 1 − P(A), where A′ denotes the complement of A.
Proof. A ∪ A′ = S and A, A′ are mutually exclusive, so P(A) + P(A′) = P(A ∪ A′) = P(S) = 1, giving P(A′) = 1 − P(A).

3. Addition theorem: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
Proof. The events (A ∩ B) and (A′ ∩ B) are mutually exclusive and their union is B, so by the conditions of the probability function

P(A ∩ B) + P(A′ ∩ B) = P(B)  ⇒  P(A′ ∩ B) = P(B) − P(A ∩ B). ……(1)

Similarly (A′ ∩ B) and A are mutually exclusive and their union is A ∪ B, so

P(A′ ∩ B) + P(A) = P(A ∪ B)  ⇒  P(A′ ∩ B) = P(A ∪ B) − P(A).

Substituting in (1): P(A ∪ B) − P(A) = P(B) − P(A ∩ B), therefore

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
For three events A, B and C: P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(A ∩ C) + P(A ∩ B ∩ C).
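The addition theorem can be verified by enumeration on a single die throw (the two events chosen here are just examples):

```python
from fractions import Fraction

omega = set(range(1, 7))   # sample space for one throw of a die

def P(event):
    return Fraction(len(event), len(omega))

A = {2, 4, 6}              # example event: an even number shows
B = {4, 5, 6}              # example event: a number greater than 3

# Addition theorem: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B) == Fraction(2, 3)
```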
Conditional Probability: The conditional probability of event A, given that event B has already occurred, is defined as

P(A|B) = P(A ∩ B) / P(B),  P(B) > 0.

Similarly, the conditional probability of event B, given that event A has already occurred, is defined as

P(B|A) = P(A ∩ B) / P(A),  P(A) > 0.
Independent Events: An event A is said to be independent of event B if the conditional probability of A given B is equal to the unconditional probability of A, i.e. P(A|B) = P(A); similarly P(B|A) = P(B). From the definition of conditional probability, P(A|B) = P(A ∩ B)/P(B), so P(A|B) = P(A) gives P(A ∩ B) = P(A)·P(B). Hence the events A and B are independent if and only if P(A ∩ B) = P(A)·P(B).
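Independence can be checked directly on the two-coin sample space; a short sketch:

```python
from itertools import product
from fractions import Fraction

omega = list(product("HT", repeat=2))   # two coins tossed

def P(event):
    return Fraction(len(event), len(omega))

A = [w for w in omega if w[0] == "H"]   # first coin shows a head
B = [w for w in omega if w[1] == "H"]   # second coin shows a head
A_and_B = [w for w in A if w in B]

# P(A|B) = P(A ∩ B)/P(B) equals the unconditional P(A) ...
assert P(A_and_B) / P(B) == P(A)
# ... which is equivalent to the product rule P(A ∩ B) = P(A)·P(B).
assert P(A_and_B) == P(A) * P(B)
```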
Bayes' Theorem: If E1, E2, …, En are n mutually exclusive events with P(Ei) ≠ 0 (i = 1, 2, …, n), then for any arbitrary event A which is a subset of E1 ∪ E2 ∪ … ∪ En such that P(A) > 0, we have

P(Ei|A) = P(A|Ei) P(Ei) / ∑(i=1..n) P(A|Ei) P(Ei).

Proof. Since A ⊂ ⋃(i=1..n) Ei, we have A = A ∩ (⋃(i=1..n) Ei) = ⋃(i=1..n) (A ∩ Ei).

Since the events A ∩ Ei (i = 1, 2, …, n) are mutually disjoint (exclusive), by axiom (iii)

P(A) = P[⋃(i=1..n) (A ∩ Ei)] = ∑(i=1..n) P(A ∩ Ei) = ∑(i=1..n) P(A|Ei) P(Ei). ……(1)

Also P(A ∩ Ei) = P(A|Ei) P(Ei), so

P(Ei|A) = P(A ∩ Ei) / P(A) = P(A|Ei) P(Ei) / ∑(i=1..n) P(A|Ei) P(Ei),  using (1).
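As a worked illustration of Bayes' theorem (the urns and their contents are invented for this example):

```python
from fractions import Fraction

# Hypothetical setup: one of two urns is chosen at random (P(Ei) = 1/2).
# Urn 1 holds 3 white and 1 black ball; urn 2 holds 1 white and 3 black.
priors = [Fraction(1, 2), Fraction(1, 2)]        # P(E1), P(E2)
likelihoods = [Fraction(3, 4), Fraction(1, 4)]   # P(A|E1), P(A|E2), A = "white drawn"

# Denominator of Bayes' theorem: total probability of A, as in equation (1).
p_A = sum(pr * lk for pr, lk in zip(priors, likelihoods))

# Posterior probability that urn 1 was the one chosen, given a white draw.
posterior = priors[0] * likelihoods[0] / p_A
print(posterior)   # 3/4
```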
Random Variables and their Probability Functions: With the help of an example, we will try to understand the idea of a random variable and its probability function. Suppose two coins are tossed simultaneously; the sample space is S = {HH, HT, TH, TT}.
The following table shows each sample point, the number of heads, and the corresponding probability:

Sample point    No. of heads (X)    Probability P(X)
HH              2                   1/4
HT              1                   1/4
TH              1                   1/4
TT              0                   1/4

Let X denote the number of heads and P(X) the probability corresponding to each value of X. Then we have:

X = x         0      1      2
P(X) = p(x)   1/4    2/4    1/4
Since the value of X is a number determined by the outcome of an experiment, X is called a random variable, and the set of pairs {x, p(x)} is called the probability function of X.

Discrete and Continuous Random Variables: A random variable which can assume only a finite or countably infinite number of values within its range, the value taken depending on chance, is called a discrete random variable. A continuous random variable is one which can take any value in an interval of its range, e.g. X = rainfall during a rainy day.
Probability Mass Function: Suppose X is a discrete random variable taking at most countably infinitely many values x1, x2, …. If p(x) = P(X = x), we call p(x) the probability of X = x. This p(x) is known as a probability mass function if it satisfies:

(i) p(x) ≥ 0 for all x;
(ii) ∑(i=1..∞) p(xi) = 1.

Sometimes, in the case of a continuous variable X, we write f(x) and read it as a probability density function. Then f(x) is a probability density function if it satisfies:

(i) f(x) ≥ 0 for all x;
(ii) ∫(−∞..∞) f(x) dx = 1.
Mathematical Expectation: If a random variable X takes values x1, x2, …, xn with probabilities p(x1), p(x2), …, p(xn), then the mathematical expectation of X is given by

E(X) = ∑(i=1..n) xi p(xi).

When X is a continuous random variable,

E(X) = ∫(−∞..∞) x f(x) dx,

provided the integral exists. E(X) is also called the mean of X.
Example: Find the expectation of the number shown on a die when thrown.
Let X be the random variable taking values 1, 2, 3, 4, 5, 6, each with equal probability 1/6:

x:      1     2     3     4     5     6
p(x):   1/6   1/6   1/6   1/6   1/6   1/6

E(X) = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 21/6 = 7/2 = 3.5.

Thus on average one gets 3.5 per throw.
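The same expectation computed in Python:

```python
from fractions import Fraction

# E(X) = Σ x p(x) for a fair die, each face with probability 1/6.
expectation = sum(x * Fraction(1, 6) for x in range(1, 7))
print(expectation)   # 7/2
```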
Some Important Results on Expectation: If X and Y are two random variables, then E(X + Y) = E(X) + E(Y); if X and Y are independent, E(XY) = E(X)·E(Y). The variance is V(X) = E(X²) − [E(X)]².
Moment Generating Function: As the name suggests, the moment generating function (mgf) is used for calculating moments. The mgf of the distribution of a random variable X (if it exists) is the expected value of e^(tX):

M_X(t) = E[e^(tX)] = ∑ e^(tx) p(x) when X is a discrete variable, and ∫ e^(tx) f(x) dx when X is a continuous variable.

Differentiating the moment generating function with respect to t and evaluating at t = 0 gives the moments of the distribution, e.g.

(∂²/∂t²) M_X(t) |t=0 = (∂²/∂t²) E(e^(tX)) |t=0 = E(X²).

In particular, the first derivative of the moment generating function at t = 0 is the mean of X.
Some Important Properties of Moment Generating Functions:

(i) M_(X+a)(t) = e^(at) M_X(t).
Proof. M_(X+a)(t) = E[e^(t(X+a))] = E[e^(tX) e^(ta)] = e^(ta) E[e^(tX)] = e^(at) M_X(t), since a and t are constants, by the definition of the mgf.

(ii) M_(bX)(t) = M_X(bt).
Proof. M_(bX)(t) = E[e^(t(bX))] = E[e^((bt)X)] = M_X(bt).

(iii) M_((X+a)/b)(t) = e^(at/b) M_X(t/b).
Proof. M_((X+a)/b)(t) = E[e^(t(X+a)/b)] = E[e^(ta/b) e^(tX/b)] = e^(ta/b) E[e^((t/b)X)] = e^(at/b) M_X(t/b), since a, b and t are constants.
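The fact that derivatives of the mgf at t = 0 give the moments can be checked numerically for a fair die; finite differences approximate the derivatives (the step size h is an arbitrary small choice):

```python
import math

# Mgf of a fair die: M(t) = (1/6) Σ e^(tx), x = 1..6.
def M(t):
    return sum(math.exp(t * x) for x in range(1, 7)) / 6

h = 1e-5   # small step for central differences
first_moment = (M(h) - M(-h)) / (2 * h)              # ≈ E(X) = 3.5
second_moment = (M(h) - 2 * M(0) + M(-h)) / h**2     # ≈ E(X²)
print(first_moment, second_moment - first_moment**2) # mean and variance
```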
Theoretical Distributions: The empirical or experimental frequency distributions, which are based on sample studies, help us in computing certain statistical measures like averages, measures of dispersion, skewness etc. However, the values of a variable in the population may be distributed according to some definite probability law which can be expressed mathematically, and the corresponding probability distribution is known as a theoretical probability distribution. These distributions may be based on a priori considerations or posterior inferences. Some of the theoretical distributions are:

1. Uniform Distribution
2. Binomial Distribution
3. Poisson Distribution
4. Geometric Distribution
5. Hypergeometric Distribution
6. Rectangular Distribution
7. Exponential Distribution
8. Gamma Distribution
9. Normal Distribution
Uniform Distribution: A discrete random variable X is said to follow the discrete uniform distribution with parameter n if its probability mass function is given as:

P(x) = 1/n ;  x = 1, 2, 3, …, n.
Moment Generating Function of the Uniform Distribution: The mgf of the uniform distribution is:

M_X(t) = E(e^(tX)) = ∑(x=1..n) e^(tx) (1/n) = (1/n) ∑(x=1..n) e^(tx) = (1/n) · e^t [1 − (e^t)^n] / (1 − e^t),

summing the geometric series e^t + e^(2t) + … + e^(nt).
Mean and Variance of the Uniform Distribution:

Mean = E(X) = ∑(x=1..n) x (1/n) = (1/n)[1 + 2 + 3 + … + n] = (1/n) · n(n+1)/2 = (n+1)/2.

Variance = E(X²) − [E(X)]² = ∑(x=1..n) x² (1/n) − [(n+1)/2]²
= (1/n) · n(n+1)(2n+1)/6 − [(n+1)/2]²
= (n+1)(2n+1)/6 − (n+1)²/4
= (n+1)(n−1)/12 = (n² − 1)/12.
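A direct check of these two formulas for one value of n (n = 10 is an arbitrary choice), using exact rational arithmetic:

```python
from fractions import Fraction

n = 10
# Discrete uniform pmf: p(x) = 1/n for x = 1..n.
mean = sum(x * Fraction(1, n) for x in range(1, n + 1))
second = sum(x * x * Fraction(1, n) for x in range(1, n + 1))
variance = second - mean**2

assert mean == Fraction(n + 1, 2)                     # (n+1)/2
assert variance == Fraction((n + 1) * (n - 1), 12)    # (n² − 1)/12
```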
Binomial Distribution: Consider a set of n independent trials, in which the probability of success in each trial is p and the probability of failure is q = 1 − p. The probability of getting x successes, and consequently n − x failures, in a specified order is p^x (1−p)^(n−x). A random variable X has a binomial distribution if and only if its probability distribution is given by:

b(x; n, p) = C(n, x) p^x (1−p)^(n−x) ;  x = 0, 1, 2, …, n,

where C(n, x) = n!/(x!(n−x)!) is the number of such orders.

Moment Generating Function of the Binomial Distribution: The mgf of the binomial distribution is:

M_X(t) = E[e^(tX)] = ∑(x=0..n) e^(tx) C(n, x) p^x q^(n−x) = ∑(x=0..n) C(n, x) (pe^t)^x q^(n−x) = (q + pe^t)^n.
Mean and Variance of the Binomial Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) (q + pe^t)^n |t=0 = n(q + pe^t)^(n−1) pe^t |t=0 = n(q + p)^(n−1) p = np.

Also, E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂/∂t) {n(q + pe^t)^(n−1) pe^t}|t=0
= np{(q + pe^t)^(n−1) e^t + e^t (n−1)(q + pe^t)^(n−2) pe^t}|t=0
= np{1 + (n−1)p} = np + n²p² − np².

Variance = V(X) = E(X²) − [E(X)]² = np + n²p² − np² − n²p² = np(1 − p) = npq.
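Both results can be confirmed exactly from the pmf with rational arithmetic (n and p below are arbitrary example values):

```python
import math
from fractions import Fraction

n, p = 10, Fraction(1, 4)   # example parameters
q = 1 - p

# Binomial pmf: C(n, x) p^x q^(n−x).
pmf = [math.comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
assert sum(pmf) == 1

mean = sum(x * pmf[x] for x in range(n + 1))
variance = sum(x * x * pmf[x] for x in range(n + 1)) - mean**2
assert mean == n * p          # np
assert variance == n * p * q  # npq
```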
Poisson Distribution: If in the binomial distribution n is very large (n → ∞) and p is very small (p → 0), with np remaining finite, then the binomial distribution tends to the Poisson distribution. A discrete random variable is said to have a Poisson distribution with parameter λ (> 0) if its probability mass function is given by:

P(x; λ) = e^(−λ) λ^x / x! ;  x = 0, 1, 2, …;  λ > 0.
Moment Generating Function of the Poisson Distribution: The mgf of the Poisson distribution is:

M_X(t) = E[e^(tX)] = ∑(x=0..∞) e^(tx) e^(−λ) λ^x / x! = e^(−λ) [1 + λe^t + (λe^t)²/2! + …] = e^(−λ) e^(λe^t) = e^(λ(e^t − 1)).
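The closed form can be checked numerically against a truncated version of the defining sum (λ and t below are arbitrary example values):

```python
import math

lam, t = 3.0, 0.5   # example parameter and argument

# E[e^(tX)] summed over the Poisson pmf, truncated far into the tail.
lhs = sum(math.exp(t * x) * math.exp(-lam) * lam**x / math.factorial(x)
          for x in range(100))

# Closed form derived above: e^(λ(e^t − 1)).
rhs = math.exp(lam * (math.exp(t) - 1))
print(lhs, rhs)   # the two agree closely
```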
Mean and Variance of the Poisson Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) [e^(λ(e^t − 1))]|t=0 = [e^(λ(e^t − 1)) λe^t]|t=0 = λ.

Also, E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂/∂t) [λ e^(λ(e^t − 1)) e^t]|t=0
= λ{e^(λ(e^t − 1)) λe^t · e^t + e^(λ(e^t − 1)) e^t}|t=0
= λ(λ + 1) = λ² + λ.

Variance = V(X) = E(X²) − [E(X)]² = λ² + λ − λ² = λ.

Geometric Distribution: A discrete random variable X is said to have a geometric distribution if its probability mass function is defined as:

P(x) = q^x p ;  x = 0, 1, 2, …;  0 < p < 1, q = 1 − p.
Moment Generating Function of the Geometric Distribution: The mgf of the geometric distribution is (for qe^t < 1):

M_X(t) = E[e^(tX)] = ∑(x=0..∞) e^(tx) p(x) = ∑(x=0..∞) e^(tx) q^x p = p[1 + qe^t + (qe^t)² + …] = p / (1 − qe^t).
Mean and Variance of the Geometric Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) [p/(1 − qe^t)]|t=0 = p(1 − qe^t)^(−2) qe^t |t=0 = pq/(1 − q)² = q/p.

E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂/∂t) [pq e^t (1 − qe^t)^(−2)]|t=0
= pq[(1 − qe^t)^(−2) e^t + 2(1 − qe^t)^(−3) qe^t · e^t]|t=0
= pq[1/p² + 2q/p³] = q/p + 2q²/p².

Variance = V(X) = E(X²) − [E(X)]² = q/p + 2q²/p² − q²/p² = q/p + q²/p² = (q/p)(1 + q/p) = q/p².
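A numerical check of these formulas, truncating the infinite series far into the tail (p = 1/3 is an arbitrary example):

```python
p = 1 / 3          # example success probability
q = 1 - p

# Truncate Σ x q^x p and Σ x² q^x p; the tail beyond x = 400 is negligible.
mean = sum(x * q**x * p for x in range(400))
second = sum(x * x * q**x * p for x in range(400))
variance = second - mean**2
print(mean, variance)   # close to q/p = 2 and q/p² = 6
```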
Hypergeometric Distribution: A discrete random variable X is said to have a hypergeometric distribution with parameters N, M and n if its probability mass function is defined as:

P(x; N, M, n) = C(M, x) C(N−M, n−x) / C(N, n) ;  x = 0, 1, 2, 3, …, n.
Mean and Variance of the Hypergeometric Distribution:

Mean = E(X) = ∑(x=0..n) x p(x) = ∑(x=0..n) x C(M, x) C(N−M, n−x) / C(N, n).

Writing x C(M, x) = M C(M−1, x−1) and C(N, n) = (N/n) C(N−1, n−1), this becomes

E(X) = (nM/N) ∑(x=1..n) C(M−1, x−1) C(N−M, n−x) / C(N−1, n−1).

Put x − 1 = y, so x = y + 1; when x = 1, y = 0 and when x = n, y = n − 1:

E(X) = (nM/N) ∑(y=0..n−1) C(M−1, y) C(N−M, n−1−y) / C(N−1, n−1).

Using the property of combinations ∑(i=0..m) C(a, i) C(b, m−i) = C(a+b, m), the sum equals C(M−1+N−M, n−1) = C(N−1, n−1), so

E(X) = (nM/N) · C(N−1, n−1) / C(N−1, n−1) = nM/N.
Also, E(X²) = ∑(x=0..n) x² p(x) = ∑(x=0..n) [x(x−1) + x] p(x) = ∑(x=0..n) x(x−1) p(x) + ∑(x=0..n) x p(x) = ∑(x=0..n) x(x−1) p(x) + nM/N.

Consider ∑(x=0..n) x(x−1) p(x) = ∑(x=2..n) x(x−1) C(M, x) C(N−M, n−x) / C(N, n).

Writing x(x−1) C(M, x) = M(M−1) C(M−2, x−2) and C(N, n) = [N(N−1)/(n(n−1))] C(N−2, n−2), this becomes

∑(x=0..n) x(x−1) p(x) = [M(M−1) n(n−1) / (N(N−1))] ∑(x=2..n) C(M−2, x−2) C(N−M, n−x) / C(N−2, n−2).

Put x − 2 = y, so x = y + 2; when x = 2, y = 0 and when x = n, y = n − 2:

= [M(M−1) n(n−1) / (N(N−1))] ∑(y=0..n−2) C(M−2, y) C(N−M, n−2−y) / C(N−2, n−2).

Using the property of combinations ∑(i=0..m) C(a, i) C(b, m−i) = C(a+b, m), the sum equals C(M−2+N−M, n−2) / C(N−2, n−2) = 1, so

∑(x=0..n) x(x−1) p(x) = M(M−1) n(n−1) / (N(N−1)).

Therefore E(X²) = M(M−1) n(n−1) / (N(N−1)) + nM/N.

Variance = V(X) = E(X²) − [E(X)]²
= M(M−1) n(n−1) / (N(N−1)) + nM/N − n²M²/N²
= (nM/N) [ (M−1)(n−1)/(N−1) + 1 − nM/N ]
= nM(N−M)(N−n) / (N²(N−1)).
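Because everything is finite here, both formulas can be verified exactly (N, M, n below are arbitrary example values):

```python
import math
from fractions import Fraction

N, M, n = 20, 8, 5   # example parameters

def pmf(x):
    # Hypergeometric pmf: C(M, x) C(N−M, n−x) / C(N, n).
    return Fraction(math.comb(M, x) * math.comb(N - M, n - x),
                    math.comb(N, n))

xs = range(n + 1)
assert sum(pmf(x) for x in xs) == 1

mean = sum(x * pmf(x) for x in xs)
variance = sum(x * x * pmf(x) for x in xs) - mean**2
assert mean == Fraction(n * M, N)                                   # nM/N
assert variance == Fraction(n * M * (N - M) * (N - n), N**2 * (N - 1))
```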
Rectangular Distribution: A continuous random variable X is said to have a rectangular (continuous uniform) distribution over an interval (a, b), −∞ < a < b < ∞, if its probability density function is

f(x; a, b) = 1/(b − a) ;  a < x < b.
Moment Generating Function of the Rectangular Distribution: The moment generating function of the rectangular distribution is:

M_X(t) = E(e^(tX)) = ∫(a..b) e^(tx) f(x) dx = ∫(a..b) e^(tx) · 1/(b−a) dx = (e^(bt) − e^(at)) / (t(b − a)),  t ≠ 0.
Mean and Variance of the Rectangular Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) [(e^(bt) − e^(at)) / (t(b−a))]|t=0.

Expanding the exponentials in series,

(e^(bt) − e^(at)) / (t(b−a)) = [ (1 + tb + (tb)²/2! + (tb)³/3! + …) − (1 + ta + (ta)²/2! + (ta)³/3! + …) ] / (t(b−a))
= 1 + (b+a) t/2! + (b² + ab + a²) t²/3! + …,

so

E(X) = (∂/∂t) [1 + (b+a) t/2! + (b² + ab + a²) t²/3! + …]|t=0 = (b + a)/2.

E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂²/∂t²) [1 + (b+a) t/2! + (b² + ab + a²) t²/3! + …]|t=0 = 2(b² + ab + a²)/3! = (b² + ab + a²)/3.

Variance = V(X) = E(X²) − [E(X)]² = (b² + ab + a²)/3 − [(b+a)/2]² = (b − a)²/12.
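A quick numerical check using midpoint-rule integration of x f(x) and x² f(x) (the interval (2, 5) is an arbitrary example):

```python
a, b = 2.0, 5.0          # example interval
f = 1.0 / (b - a)        # constant density on (a, b)

# Midpoint-rule approximations to E(X) and E(X²).
steps = 100_000
dx = (b - a) / steps
mean = second = 0.0
for i in range(steps):
    x = a + (i + 0.5) * dx
    mean += x * f * dx
    second += x * x * f * dx
variance = second - mean**2
print(mean, variance)   # close to (a+b)/2 = 3.5 and (b−a)²/12 = 0.75
```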
Exponential Distribution: A continuous random variable X is said to have an exponential distribution with parameter θ (> 0) if its probability density function is given as:

f(x; θ) = θe^(−θx) ;  x ≥ 0.
Moment Generating Function of the Exponential Distribution: The mgf of the exponential distribution is (for t < θ):

M_X(t) = E(e^(tX)) = ∫(0..∞) e^(tx) f(x) dx = ∫(0..∞) e^(tx) θe^(−θx) dx = ∫(0..∞) θe^(−x(θ−t)) dx = θ/(θ − t) = (1 − t/θ)^(−1).
Mean and Variance of the Exponential Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) (1 − t/θ)^(−1)|t=0 = (1 − t/θ)^(−2) (1/θ)|t=0 = 1/θ.

E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂²/∂t²) (1 − t/θ)^(−1)|t=0 = (2/θ²)(1 − t/θ)^(−3)|t=0 = 2/θ².

Variance = V(X) = E(X²) − [E(X)]² = 2/θ² − [1/θ]² = 1/θ².
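These can be recovered numerically from the mgf by finite differences (θ = 2 and the step h are arbitrary choices):

```python
# Mgf of the exponential distribution derived above: M(t) = (1 − t/θ)^(−1).
theta = 2.0

def M(t):
    return 1.0 / (1.0 - t / theta)

h = 1e-6
mean = (M(h) - M(-h)) / (2 * h)               # ≈ 1/θ
second = (M(h) - 2 * M(0) + M(-h)) / h**2     # ≈ 2/θ²
variance = second - mean**2                   # ≈ 1/θ²
print(mean, variance)
```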
Normal Distribution: A random variable X is said to follow the normal distribution with parameters μ and σ² if its probability density function is defined as:

f(x; μ, σ) = [1/(σ√(2π))] e^(−(x−μ)²/(2σ²)) ;  −∞ < x < ∞, −∞ < μ < ∞, σ > 0.
Moment Generating Function of the Normal Distribution: The mgf of the normal distribution is:

M_X(t) = E(e^(tX)) = ∫(−∞..∞) e^(tx) f(x) dx = [1/(σ√(2π))] ∫(−∞..∞) e^(tx − (x−μ)²/(2σ²)) dx.

Put z = (x − μ)/σ, so x = μ + σz and dx = σ dz; as x runs from −∞ to ∞, so does z:

M_X(t) = [1/√(2π)] ∫(−∞..∞) e^(t(μ + σz) − z²/2) dz
= [e^(μt)/√(2π)] ∫(−∞..∞) e^(−(z² − 2tσz)/2) dz
= [e^(μt)/√(2π)] ∫(−∞..∞) e^(−(z² − 2tσz + t²σ² − t²σ²)/2) dz
= e^(μt + σ²t²/2) · [1/√(2π)] ∫(−∞..∞) e^(−(z − σt)²/2) dz.

Put z − σt = y, so dz = dy; as z runs from −∞ to ∞, so does y. The remaining integral [1/√(2π)] ∫(−∞..∞) e^(−y²/2) dy = 1, being the total area under the standard normal curve. Hence

M_X(t) = e^(μt + σ²t²/2).
Mean and Variance of the Normal Distribution through the mgf:

Mean = E(X) = (∂/∂t) M_X(t)|t=0 = (∂/∂t) e^(μt + σ²t²/2)|t=0 = [e^(μt + σ²t²/2) (μ + σ²t)]|t=0 = μ.

E(X²) = (∂²/∂t²) M_X(t)|t=0 = (∂/∂t) [e^(μt + σ²t²/2) (μ + σ²t)]|t=0
= [e^(μt + σ²t²/2) (μ + σ²t)² + e^(μt + σ²t²/2) σ²]|t=0
= μ² + σ².

Variance = V(X) = E(X²) − [E(X)]² = μ² + σ² − μ² = σ².
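The closed form for the normal mgf can be checked by Monte Carlo simulation (μ, σ, t and the seed are arbitrary example choices):

```python
import math
import random

mu, sigma, t = 1.0, 2.0, 0.3   # example parameters
random.seed(0)                  # arbitrary seed, for reproducibility

# Monte Carlo estimate of E[e^(tX)] for X ~ N(μ, σ²).
n = 200_000
estimate = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n

# Closed form derived above: e^(μt + σ²t²/2).
closed_form = math.exp(mu * t + sigma**2 * t**2 / 2)
print(estimate, closed_form)   # the two agree closely
```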