Relative Frequency Approach
In the 1800s, British statisticians interested in theoretical foundations for calculating the risk of losses in life insurance and commercial insurance began defining probabilities from statistical data collected on births and deaths. Today, this approach is called the relative frequency of occurrence.
The classical approach is difficult or impossible to apply as soon as we move beyond coins, dice, cards and other simple games of chance. Secondly, the classical approach may not explain actual results in certain cases.
For example, if a coin is tossed 10 times, we may get 6 heads and 4 tails. The probability of a head, estimated from these results, is thus 0.6 and that of a tail is 0.4.
However, if the experiment is carried out a larger number of times, we should expect approximately equal numbers of heads and tails. As n increases, that is, as n approaches infinity, we find that the relative frequency of getting a head or a tail approaches 0.5. The probability of an event can thus be defined as the relative frequency with which it occurs in n trials, where n is an indefinitely large number of trials.
Consider a firm that is preparing to market a new product. In order to estimate the probability that a customer will purchase the product, a test market evaluation has been set up wherein salespeople call on potential customers.
Each sales call has two possible outcomes: the customer purchases the product or the customer does not purchase the product. With no reason to assume that the two experimental outcomes are equally likely, the classical method of assigning probabilities is inappropriate.
Suppose that in the test market evaluation of the product, 400 potential customers were contacted; 100 purchased the product and 300 did not. In effect, we have repeated the experiment of contacting a customer 400 times and found that the product was purchased 100 times.
Thus we might decide to use the relative frequency of the number of customers who purchased the product as an estimate of the probability of a customer making a purchase. We could assign a probability of 100/400 = 0.25 to the experimental outcome of purchasing the product. This approach to assigning probabilities is referred to as the relative frequency method.
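As an illustrative sketch (the variable names are ours, and only the counts of 400 contacts and 100 purchases come from the example above), the relative frequency method for this test market can be written in a few lines of Python:

```python
# Relative frequency estimate of the probability of a purchase,
# using the hypothetical test-market counts quoted above.

contacts = 400                     # n: potential customers contacted
purchases = 100                    # m: customers who purchased

p_purchase = purchases / contacts                  # m / n = 0.25
p_no_purchase = (contacts - purchases) / contacts  # 300 / 400 = 0.75

print(f"P(purchase)    = {p_purchase:.2f}")
print(f"P(no purchase) = {p_no_purchase:.2f}")
```

Note that the two relative frequencies add up to 1, as the probabilities assigned to the two possible outcomes must.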
If an event A occurs m times out of n trials, its relative frequency is m/n; the value approached by m/n as n becomes infinite is called the limit of the relative frequency.
Symbolically
P(A) = lim (n → ∞) m/n
Theoretically, we can never obtain the probability of an event exactly as given by the above limit. However, in practice we can obtain a close estimate of P(A) based on a large number of observations n.
In the relative frequency definition, the fact that the probability is the value approached by m/n as n becomes infinite emphasizes a very important point: probability is a long-run concept. This means that if we toss a coin only 10 times, we may not get exactly 5 heads and 5 tails. However, if the experiment is carried out a larger and larger number of times, say the coin is tossed 10,000 times, we can expect the proportions of heads and tails to be very close to fifty per cent each.
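This long-run behaviour can be checked with a small simulation. The following is a minimal sketch, assuming a fair coin and Python's standard random module; the function name and the choice of seed are ours.

```python
import random

def relative_frequency_of_heads(n_tosses, seed=1):
    """Toss a simulated fair coin n_tosses times and return m/n for heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))  # m
    return heads / n_tosses                                   # m / n

for n in (10, 100, 1_000, 10_000):
    freq = relative_frequency_of_heads(n)
    print(f"n = {n:>6}: relative frequency of heads = {freq:.3f}")
```

The exact figures depend on the seed, but the pattern is the one described above: for n = 10 the relative frequency may be well away from 0.5, while for n = 10,000 it settles very close to 0.5, the limiting value that the definition of P(A) appeals to.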
Limitations of the Relative Frequency Approach:
The relative frequency approach, though useful in practice, has difficulties from the mathematical point of view:
1) An actual limiting number may not really exist. Quite often people use this approach without evaluating a sufficient number of outcomes. For example, Mr Kohli pointed out that his father and mother (aged 75 and 70) both had a serious heart problem in January-February 1999, and concluded that in winter people above 70 have a high probability of heart attack. His friends took this seriously and started to give special attention to their parents.
2) On deeper reflection, we may find that there is not enough evidence to establish a relative frequency of occurrence.
3) It may be observed that the empirical probability can never be obtained exactly; one can only attempt a close estimate of P(A) by taking n sufficiently large.
For this reason, modern probability theory has been developed axiomatically, in which probability is an undefined concept, much the same as point and line are undefined in geometry.