Any time a sample statistic is not equal to a population parameter, there are two potential explanations for the difference. (Well, technically there are three, since a mismatch can result from mistakes in the sampling process. For our purposes, though, as mentioned earlier, we are assuming correct research methods and simple, random samples.) First, the inequality could be the product of inherent random fluctuations in sample statistics (i.e., sampling error). In other words, the disparity might simply be a meaningless fluke. If you flipped a fair coin six times, you would expect the coin to land tails side up three times. If, instead, you got four tails, you would not think that there was anything weird happening; the next set of six trials might result in two tails. This is sampling error—variation that is like white noise in the background.
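To see why four tails in six flips is unremarkable, we can compute the exact binomial probabilities. The sketch below is illustrative (not from the text); it assumes a fair coin and uses the standard binomial formula with Python's `math.comb`:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n independent flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Six flips of a fair coin: the "expected" three tails versus four tails.
print(binom_pmf(3, 6))  # 0.3125   -- three tails
print(binom_pmf(4, 6))  # 0.234375 -- four tails
```

Four tails occurs nearly a quarter of the time, barely less often than the expected three, so a count of four gives no reason to doubt the coin. That routine variation is sampling error.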
The second possible explanation for the difference is that there is a genuine discrepancy between the sample statistic and the population parameter. In other words, the disparity could represent a bona fide statistical effect. If you flipped a coin 20 times and got 19 tails, you would suspect there was something wrong with the coin because this is an extremely unlikely outcome. Perhaps the coin is weighted on one side, which would mean that it is different from the ordinary quarter or dime you might have in your pocket. Large discrepancies between observed and expected outcomes are sufficiently improbable to lead us to conclude that there is something genuinely unique about the empirical outcome we have in front of us.
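The same binomial calculation shows just how extreme 19 tails in 20 flips would be for a fair coin. This sketch is illustrative (not from the text) and again assumes a fair coin:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n independent flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of an outcome at least as extreme as 19 tails in 20 flips.
p_extreme = sum(binom_pmf(k, 20) for k in (19, 20))
print(p_extreme)  # roughly 0.00002, i.e. about 2 in 100,000
```

An outcome this rare under the fair-coin assumption is exactly the kind of large discrepancy that points to a genuine effect rather than sampling error.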