The law of large numbers is a well-known concept in the fields of statistics and probability. It states that, as the number of trials of a random experiment increases, the percentage difference between the average of the results and the expected value gets closer to zero.
To take an example, say that we flipped a (fair) coin several times. Let's say that, after 50 tosses, we got 30 "heads" and 20 "tails". We would expect 25 of each, so the percentage difference is 5 ÷ 50 × 100% = 10%. If we conducted several more trials, the percentage difference would almost certainly become less than 10%. As we conducted more and more trials, the percentage difference would approach 0%.
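A quick simulation illustrates this shrinking percentage difference. The sketch below (function name and seed are our own choices, not part of any standard) tosses a simulated fair coin and reports how far the head count strays from the expected half, as a percentage of the total tosses:

```python
import random

def percentage_difference(n_tosses, seed=None):
    """Toss a fair coin n_tosses times; return |heads - n/2| / n as a percentage."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    expected = n_tosses / 2
    return abs(heads - expected) / n_tosses * 100

for n in (50, 1_000, 100_000):
    print(f"{n:>7} tosses: {percentage_difference(n, seed=1):.3f}% off expected")
```

Running this, the percentage difference for 100,000 tosses will almost always be far smaller than for 50 tosses, though individual runs vary.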
The law of large numbers is often misunderstood. It does not mean that the absolute difference between the expected and actual results will tend toward zero. Take our coin-flipping example above. After 50 tosses, the absolute difference between "heads" and "tails" is 10. The law of large numbers doesn't say that the absolute difference will shrink to 0 (in fact, after, say, 1,000 tosses the absolute difference is roughly as likely to be 20 as it is to be 0). It refers to percentage differences. Let's say that we tossed the coin 1,000 times and there are now 20 more "heads" than "tails." Even though the absolute difference has gone up from 10 to 20, the percentage difference has gone down from 10% to 1%. So, in practical terms, if you flip a coin and heads comes up a lot, it does not mean that tails is "due"; the probability of a tail coming up on any given toss is still ½.
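This distinction can also be seen numerically. The sketch below (our own illustrative code, with hypothetical function names) averages both kinds of difference over many simulated experiments: on average the absolute gap between heads and tails grows with more tosses, while the percentage difference shrinks.

```python
import random
import statistics

def average_differences(n_tosses, n_trials=500, seed=0):
    """Run n_trials experiments of n_tosses fair-coin flips each.

    Returns (mean absolute difference between heads and tails,
             mean percentage difference from the expected 50/50 split).
    """
    rng = random.Random(seed)
    abs_diffs, pct_diffs = [], []
    for _ in range(n_trials):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        tails = n_tosses - heads
        abs_diffs.append(abs(heads - tails))
        # |heads - tails| / 2 is how far heads strays from n/2.
        pct_diffs.append(abs(heads - tails) / (2 * n_tosses) * 100)
    return statistics.mean(abs_diffs), statistics.mean(pct_diffs)

for n in (100, 10_000):
    abs_d, pct_d = average_differences(n)
    print(f"{n:>6} tosses: mean abs diff {abs_d:.1f}, mean pct diff {pct_d:.2f}%")
```

The absolute gap typically grows on the order of the square root of the number of tosses, which is exactly why it can increase from 10 to 20 even as the percentage difference falls from 10% to 1%.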