Now Probability Is Easier Than Before

Published in: Mathematics

Let's learn how to judge probability.

Rajni M / Dubai

Qualification: Masters in Mathematics, Bachelors in Education (Exp. >30 years)

  1. NOW PROBABILITY IS EASIER THAN BEFORE

In our daily life we often use words like "probably", "I doubt", "most probably" and "chances are" when we are not sure whether some event is going to occur or not. This uncertainty is measured numerically by probability. When we perform experiments, e.g. tossing a coin or throwing a die, the results are called "outcomes", and probability estimated by repeating an experiment is called experimental (or empirical) probability. These are only estimates.

TOSSING OF COINS

Each toss of a coin is called a trial. So a trial is an action which results in one or more outcomes. A coin is assumed to be unbiased, which means it is symmetric, so there are equal chances of getting a head or a tail. A random toss means the coin is allowed to fall freely, and we dismiss the possibility of the coin landing on its edge. Note: getting a head and getting a tail are therefore equally likely events.

When all the outcomes of an experiment are equally likely, we can calculate the exact (theoretical, also called classical) probability. The theoretical probability of an event E is

P(E) = (number of outcomes favourable to E) / (total number of equally likely outcomes)

An event having only one outcome is called an elementary event. The sum of the probabilities of all the elementary events of an experiment is always 1. It follows that P(E) + P(not E) = 1. We denote "not E" by E', so P(E) + P(E') = 1. The event E' is called the complement of event E; E and E' are complements of each other. The probability of an event which is impossible to occur is 0, and such an event is called an impossible event. Similarly, the probability of an event which is sure to occur is 1, and it is called a sure event. Now we define some terms often used in probability.
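The claim above, that experimental probability is only an estimate of the theoretical value, can be checked with a short simulation. This is an illustrative sketch; the function name and the fixed seed are my own choices, not part of the original notes:

```python
import random

# Estimate the experimental probability of a head by simulating
# repeated random tosses of an unbiased coin.
def experimental_probability(trials, seed=0):
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    heads = sum(rng.choice("HT") == "H" for _ in range(trials))
    return heads / trials

# Theoretical probability of a head for an unbiased coin is 1/2.
for n in (10, 100, 10_000):
    print(n, experimental_probability(n))
```

As the number of trials grows, the printed estimates settle near 0.5, matching the theoretical probability.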
  2. RANDOM EXPERIMENT

While performing experiments, the results may not be the same even though they are repeated under the same conditions. There is more than one possible outcome, and it is not possible to predict the outcome in advance. Such experiments are called random experiments.

SAMPLE SPACE

The collection of all possible outcomes of a random experiment is called the sample space, and it is denoted by S. Each element of the sample space is called a sample point. The sample space serves as a universal set for all problems concerned with that experiment.

Tossing one coin: S = {H, T}, and n(S) = 2, where n(S) means the number of elements in the sample space.

Tossing two coins simultaneously: S = {(H, H), (H, T), (T, H), (T, T)} and n(S) = 4.

Throwing two dice simultaneously:

(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
(2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
(3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
(4,1) (4,2) (4,3) (4,4) (4,5) (4,6)
(5,1) (5,2) (5,3) (5,4) (5,5) (5,6)
(6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

n(S) = 36.

We have read that a simple event has only one sample point. If an event has more than one sample point, it is called a compound event. E.g. on tossing a coin thrice, the event E of getting exactly one head = {HTT, THT, TTH} is a compound event.

UNION OF EVENTS

Just as in sets, where the union of sets A and B is denoted by A ∪ B = {x : x ∈ A or x ∈ B or x ∈ both}, the union of events A and B is A ∪ B = {x : x ∈ A or x ∈ B or x ∈ both}.
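The sample spaces above can be enumerated directly. A small sketch using Python's standard itertools module:

```python
from itertools import product

# Tossing a coin three times: n(S) = 2**3 = 8.
S = list(product("HT", repeat=3))
print(len(S))  # 8

# The compound event E = "exactly one head" has three sample points:
# the outcomes HTT, THT and TTH.
E = [outcome for outcome in S if outcome.count("H") == 1]
print(len(E))  # 3

# Throwing two dice simultaneously: n(S) = 36.
dice = list(product(range(1, 7), repeat=2))
print(len(dice))  # 36
```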
  3. INTERSECTION OF EVENTS

Like sets, here also the event "A and B" means A ∩ B = {x : x ∈ A and x ∈ B}. Again, as in sets, A − B = A ∩ B′.

MUTUALLY EXCLUSIVE EVENTS

Two events A and B are called mutually exclusive if the occurrence of event A excludes the occurrence of B and vice versa, i.e. A ∩ B = Ø. E.g. on rolling a die, S (the sample space) = {1, 2, 3, 4, 5, 6}; A = event of getting an odd number = {1, 3, 5}; B = event of getting an even number = {2, 4, 6}; so here A and B are mutually exclusive.

EXHAUSTIVE EVENTS

If S is the sample space of some experiment and A, B, C are events in this sample space with A ∪ B ∪ C = S, then A, B, C are called exhaustive events, i.e. at least one of them necessarily occurs. Further, if A ∩ B = Ø, i.e. they are mutually exclusive, and A ∪ B = S, i.e. they are also exhaustive, then A and B are mutually exclusive and exhaustive.

AXIOMATIC APPROACH TO PROBABILITY

The axiomatic approach is another way of describing the probability of an event. Here also some rules are laid down:

1. For any event E, P(E) ≥ 0.
2. P(S) = 1, i.e. probability has the range [0, 1].
3. If E and F are mutually exclusive events, then P(E ∪ F) = P(E) + P(F).

The probability of the event "not A" is P(not A) = 1 − P(A).

CONDITIONAL PROBABILITY

If E and F are two events and event F has already occurred, then this is taken as a condition, and the probability of E given F, written P(E|F), is given by

P(E|F) = (number of elementary events favourable to E ∩ F) / (number of elementary events favourable to F) = P(E ∩ F) / P(F), provided P(F) ≠ 0.
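The counting definition of conditional probability can be illustrated on the two-dice sample space. The particular events chosen here are my own example, not from the notes:

```python
from itertools import product

# Two-dice sample space, n(S) = 36.
S = list(product(range(1, 7), repeat=2))

# F = "the sum of the two dice is 8", E = "the first die is even".
F = [o for o in S if o[0] + o[1] == 8]      # (2,6) (3,5) (4,4) (5,3) (6,2)
E_and_F = [o for o in F if o[0] % 2 == 0]   # (2,6) (4,4) (6,2)

# P(E|F) = n(E ∩ F) / n(F) = 3/5
p_e_given_f = len(E_and_F) / len(F)
print(p_e_given_f)  # 0.6
```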
  4. PROPERTIES OF CONDITIONAL PROBABILITY

Let E and F be events of the sample space of an experiment, with P(F) ≠ 0. Then:

1. P(S|F) = P(F|F) = 1.
2. If A and B are two events of a sample space S and F is an event of S such that P(F) ≠ Ø, then P((A ∪ B)|F) = P(A|F) + P(B|F) − P((A ∩ B)|F). So if A and B are disjoint, then A ∩ B = Ø and P((A ∪ B)|F) = P(A|F) + P(B|F).
3. P(E′|F) = 1 − P(E|F).

MULTIPLICATION RULE OF PROBABILITY

Let E and F be two events in a sample space S. Then E ∩ F means that both E and F have occurred simultaneously; E ∩ F is also written as EF. In problems dealing with drawing two cards one after the other, like "a king or a queen, then a jack or an ace", conditional probability should be used. We know that P(E|F) = P(E ∩ F) / P(F), P(F) ≠ 0; here the condition is that F has occurred. So P(E ∩ F) = P(F) · P(E|F) ... (1). Again, P(F|E) = P(F ∩ E) / P(E); here the condition is on E, so P(E ∩ F) = P(E) · P(F|E) ... (2). Since intersection is commutative, combining (1) and (2) gives P(E ∩ F) = P(E) P(F|E) = P(F) P(E|F), provided P(E) ≠ 0 and P(F) ≠ 0. This is the multiplication rule of probability. For more than two events we extend it as P(E ∩ F ∩ G) = P(E) P(F|E) P(G|E ∩ F).

INDEPENDENT EVENTS

If E and F are independent events, then P(E ∩ F) = P(E) · P(F). Here we have to take care that "independent" and "mutually exclusive" events do not have the same meaning; the reason is that independence is defined in terms of the probability of events, while "mutually exclusive" is defined in terms of subsets of the sample space. Three events E, F and G are mutually independent when P(E ∩ F) = P(E)P(F), P(F ∩ G) = P(F)P(G), P(E ∩ G) = P(E)P(G) and P(E ∩ F ∩ G) = P(E)P(F)P(G). If at least one of these is not true, then the three events are not independent.

Often we solve problems like this: there are two bags having balls of different colours, and we have to find the probability of drawing a ball of some given colour, and besides this we also have to find the bag from which it was drawn, i.e. we have to find both a conditional probability and a reverse probability. Then we use Bayes' theorem. First we discuss the partition of a sample space. A set of events E1, E2, E3, ..., En in a sample space represents a partition of the sample space if they are:

1. pairwise disjoint, i.e. Ei ∩ Ej = Ø for i ≠ j;
2. exhaustive, i.e. E1 ∪ E2 ∪ E3 ∪ ... ∪ En = S;
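The multiplication rule for the card-drawing situation mentioned above can be worked through with exact fractions. The "two kings" event here is my own illustrative choice:

```python
from fractions import Fraction

# Drawing two cards without replacement from a standard 52-card deck.
# E = "first card is a king", F = "second card is a king".
# Multiplication rule: P(E ∩ F) = P(E) * P(F | E).
p_first = Fraction(4, 52)               # 4 kings among 52 cards
p_second_given_first = Fraction(3, 51)  # 3 kings left among 51 cards
p_both = p_first * p_second_given_first
print(p_both)  # 1/221
```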
  5. 3. of non-zero probability, i.e. P(Ei) > 0 for i = 1, 2, ..., n.

THEOREM OF TOTAL PROBABILITY

Let {E1, E2, ..., En} be a partition of the sample space S, and let A be any event associated with S. Then P(A) = P(E1)P(A|E1) + P(E2)P(A|E2) + ... + P(En)P(A|En).

BAYES' THEOREM

If E1, E2, ..., En are events in the sample space satisfying the above three conditions and A is an associated event with P(A) > 0, then

P(Ei|A) = P(Ei)P(A|Ei) / [P(E1)P(A|E1) + P(E2)P(A|E2) + ... + P(En)P(A|En)], for i = 1, 2, ..., n.

While using Bayes' theorem, the events E1, E2, ..., En are called "hypotheses", P(Ei) is called the prior probability of the hypothesis, and the conditional probability P(Ei|A) is called the posterior probability of the hypothesis Ei. Bayes' theorem is called the theorem of causes, as it gives the probability of a particular cause Ei when event A has occurred.

RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

We have already defined random variables. The probability distribution of a random variable X is the system of numbers

X:    x1   x2   ...   xn
P(X): p1   p2   ...   pn

where pi > 0 for i = 1, 2, ..., n and p1 + p2 + ... + pn = 1. Let X be a random variable whose possible values x1, x2, ..., xn occur with probabilities p1, p2, ..., pn respectively. The mean of X, denoted by μ, is μ = Σ xi pi. The mean of a random variable X is also called the "expectation" of X, denoted by E(X) = x1 p1 + x2 p2 + ... + xn pn. Random variables with different probability distributions can have equal means.

As discussed above, μ = E(X) is the mean of X. The variance of X, denoted by var(X) or σ², is defined as σ² = var(X) = Σ (xi − μ)² p(xi), i = 1, 2, ..., n. σ is called the standard deviation. One more simplified form is var(X) = E(X²) − [E(X)]².

BERNOULLI TRIALS

In many experiments, such as tossing a coin or rolling a die, the outcomes are not certain, so we call them trials. The outcome of any trial is independent of any other trial, so we simply name the outcome favourable to us a "success"; the other one is obviously a "failure". Such independent trials which have only two outcomes, "success" or "failure", are called Bernoulli trials. These trials have the following properties:

1. The trials are finite in number.
2. The trials are independent.
3. Each trial has exactly two outcomes: success or failure.
4. The probability of success (and hence of failure) remains constant from trial to trial.
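The two-bag problem sketched above fits the theorem of total probability and Bayes' theorem directly. The bag contents below are assumed numbers for illustration only:

```python
from fractions import Fraction

# Assumed setup: bag 1 holds 3 red and 2 black balls, bag 2 holds 1 red
# and 4 black. A bag is chosen at random (prior 1/2 each), then one ball
# is drawn from it.
prior = {"bag1": Fraction(1, 2), "bag2": Fraction(1, 2)}
p_red_given = {"bag1": Fraction(3, 5), "bag2": Fraction(1, 5)}

# Theorem of total probability: P(red) = sum over i of P(Ei) P(red | Ei)
p_red = sum(prior[b] * p_red_given[b] for b in prior)
print(p_red)  # 2/5

# Bayes' theorem (the "reverse" probability): P(bag1 | red)
posterior_bag1 = prior["bag1"] * p_red_given["bag1"] / p_red
print(posterior_bag1)  # 3/4
```

Drawing a red ball raises the probability that bag 1 was chosen from the prior 1/2 to the posterior 3/4, because bag 1 is the likelier cause of a red ball.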
Let us think about an experiment made up of n Bernoulli trials, with probability p of success and q = 1 − p of failure in each trial. Then the binomial expansion of (q + p)^n gives us the probability distribution. The distribution of the number of successes X can be given as:

X:    0        1              2                ...  x                ...  n
P(X): nC0 q^n  nC1 q^(n-1) p  nC2 q^(n-2) p^2  ...  nCx q^(n-x) p^x  ...  nCn p^n

  6. This probability distribution is called the binomial distribution with parameters n and p. In general, P(X = x) = nCx q^(n-x) p^x, where q = 1 − p. This P(X) is the "probability function" of the binomial distribution. The binomial distribution is also denoted by B(n, p).
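The binomial formula can be tabulated with Python's math.comb; n = 4 and p = 1/2 are illustrative values of my own choosing:

```python
from math import comb

# P(X = x) = nCx * q**(n - x) * p**x for a binomial distribution B(n, p).
def binomial_pmf(n, p, x):
    q = 1 - p
    return comb(n, x) * q ** (n - x) * p ** x

n, p = 4, 0.5
dist = [binomial_pmf(n, p, x) for x in range(n + 1)]
print(dist)  # [0.0625, 0.25, 0.375, 0.25, 0.0625]

# The probabilities sum to 1, and the mean E(X) = sum of x*P(X=x) is n*p.
mean = sum(x * px for x, px in enumerate(dist))
print(mean)  # 2.0
```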