Classical probability is one of three main interpretations of the concept of probability (with statistical probability and subjective probability being the other two). In classical probability, there is a sample space consisting of all possible events or outcomes. These events are *mutually exclusive* and *exhaustive*; in other words, if we run an experiment once, exactly one outcome will occur. For example, if we roll a die, perhaps a 1 will appear, or perhaps a 6 will appear, or perhaps some other number will appear, but we will never see two numbers at once or no numbers on the die.
The classical interpretation of probability deals with events for which there is no reason to believe that any one is more or less likely than any other. By the principle of indifference, the probability of each event occurring is ^{1}⁄_{n}, where `n` is the number of events.

Classical probability works well for analyzing games of chance, the original subject material for the field of probability. We can model experiments such as flipping coins, tossing dice or dealing cards using classical probability. So, for example, the probability of rolling a certain number on a single die is ^{1}⁄_{6}, the probability of tossing a head is ½, and so on.
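The principle of indifference can be sketched in a few lines of Python. This is an illustrative helper, not standard library functionality; the function name `uniform_probability` is my own, and `fractions.Fraction` keeps the probabilities exact rather than approximate.

```python
from fractions import Fraction

def uniform_probability(outcomes):
    """Probability of any single outcome under the principle of indifference."""
    return Fraction(1, len(outcomes))

die = [1, 2, 3, 4, 5, 6]
coin = ["heads", "tails"]

print(uniform_probability(die))   # 1/6
print(uniform_probability(coin))  # 1/2
```

Each outcome of the die gets probability ^{1}⁄_{6} and each side of the coin gets ½, exactly as in the text.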

For some experiments, not all outcomes are equally likely; for example, the probability of rolling a total of 12 with two dice is less than the probability of rolling a 7 with two dice. In this case, we can start to analyze the events by breaking them down into more elementary events that are all equally likely to occur.
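The two-dice example can be checked by enumerating the 36 equally likely elementary outcomes, which is the breakdown the paragraph describes. A minimal sketch (the helper `prob_of_total` is hypothetical, introduced here for illustration):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely elementary outcomes for a roll of two dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob_of_total(total):
    """Probability that the two dice sum to `total`."""
    favourable = [o for o in outcomes if sum(o) == total]
    return Fraction(len(favourable), len(outcomes))

print(prob_of_total(7))   # 1/6  (six ways: 1+6, 2+5, 3+4, 4+3, 5+2, 6+1)
print(prob_of_total(12))  # 1/36 (only 6+6)
```

Counting elementary outcomes confirms that a total of 7 is six times as likely as a total of 12.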

Two events are *mutually exclusive* if there is no way that both could occur in a single experiment. For example, it is impossible for a coin to land both heads and tails at the same time. To find the probability that one of several mutually exclusive events occurs, simply add their probabilities together. So, the probability of a 1 or 2 or 3 or 4 occurring on a roll of a die is ^{1}⁄_{6} + ^{1}⁄_{6} + ^{1}⁄_{6} + ^{1}⁄_{6} = ⅔.
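The addition rule above can be verified with exact fractions; this is just a sketch of the arithmetic in the example:

```python
from fractions import Fraction

# P(1 or 2 or 3 or 4) on one die: add the four mutually exclusive events.
p_one_to_four = sum(Fraction(1, 6) for _ in range(4))
print(p_one_to_four)  # 2/3
```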

Two events are called *independent* if the occurrence of one event does not affect the probability of the other. For example, if you roll two dice, the number on the first die does not affect the number on the second. To find the probability of two independent events occurring, multiply the probabilities together. So, for example, the probability of rolling two dice and having 5 come up on both is ^{1}⁄_{6} × ^{1}⁄_{6} = ^{1}⁄_{36}.
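The multiplication rule for independent events, applied to the double-five example:

```python
from fractions import Fraction

# Independent events: multiply the probabilities.
p_five = Fraction(1, 6)            # a 5 on one die
p_double_five = p_five * p_five    # a 5 on both dice
print(p_double_five)               # 1/36
```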

The opposite or *complement* of an event is the probability of that event not happening. The probability of the complement of an event happening is simply 1 − the probability of the event happening.
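The complement rule in the same style, using the die as an example (the event "not rolling a six" is my choice of illustration):

```python
from fractions import Fraction

p_six = Fraction(1, 6)
p_not_six = 1 - p_six   # complement: the event does not happen
print(p_not_six)        # 5/6
```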

We can use these concepts to answer many questions in probability. Let's consider two problems posed by the Chevalier de Méré in the 1650s that inspired Pascal and Fermat to develop probability theory. The Chevalier's first problem was to determine the odds of throwing a 12 in 24 tosses of a pair of dice. Now, the only way to throw a 12 on a toss of a pair of dice is if both dice show a 6. There are six numbers on a die, and there's no reason to believe that any one is more likely to appear than any other. So, we'll assign a probability of 1⁄6 to the event of each appearing on a single roll. If we roll two dice, those two events are independent, so the probability of rolling two sixes is 1⁄6 × 1⁄6 = 1⁄36. Now, to calculate the probability of getting a 12 in 24 rolls of two dice, it's easier to find the probability of the complement of that event; in other words, the probability that a double six will *not* be thrown, and then subtract that probability from 1. The probability that a 12 will not be thrown on a single roll is 35⁄36, so, since each roll is independent of the next, the probability that no 12 will be thrown in 24 rolls is (35⁄36)^{24}, or about 50.86%. We can then subtract that value from 1 to get the probability of throwing a 12 in 24 rolls of two dice, which is about 49.14%.
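The steps of the first de Méré calculation translate directly into code; this sketch follows the reasoning in the paragraph exactly:

```python
from fractions import Fraction

p_no_12_single = Fraction(35, 36)      # no double six on one roll of two dice
p_no_12_in_24 = p_no_12_single ** 24   # 24 independent rolls, none a double six
p_12_in_24 = 1 - p_no_12_in_24         # complement: at least one double six

print(round(float(p_no_12_in_24), 4))  # ~0.5086
print(round(float(p_12_in_24), 4))     # ~0.4914
```

The result, just under 50%, explains why the Chevalier lost money betting on a 12 appearing in 24 rolls.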

The Chevalier's second problem could be summarized as follows: Two players are playing a game of chance in which each player is equally likely to score a point. The first player to score three points takes the pot. If the game is abandoned with the first player leading 2-1, how should the pot be split? This question is easier to answer than the previous one. The only way that the second player can win the game is if he gets the next two points. The probability of that is ½ × ½ = ¼. Therefore, the second player should get ¼ of the pot and the first player should get ¾ of the pot.
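The division of the pot in the second problem follows from the multiplication rule applied to the two remaining points; a minimal sketch of that arithmetic:

```python
from fractions import Fraction

p_point = Fraction(1, 2)               # each player equally likely to score
p_player2_wins = p_point * p_point     # player 2 must take both remaining points
p_player1_wins = 1 - p_player2_wins    # complement: player 1 wins otherwise

print(p_player2_wins)  # 1/4 of the pot
print(p_player1_wins)  # 3/4 of the pot
```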

Classical probability works best when dealing with experiments that have only a finite number of outcomes that are known to be equally likely. If this isn't the case, it may be difficult to correctly analyze an experiment with classical probability. The St. Petersburg paradox and the Bertrand paradox illustrate some limitations of classical probability.