Many philosophers have become worried about the use of standard real numbers for the probability function that represents an agent's credences. They point out that real numbers cannot capture the distinction between certain extremely unlikely events and genuinely impossible ones: both are assigned credence 0, in violation of a principle known as “regularity.” Following Skyrms (1980) and Lewis (1980), they recommend that we instead use a much richer number system, the “hyperreals.”
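A standard illustration of this worry, common in the literature though not spelled out here, involves a fair coin tossed infinitely many times. The all-heads outcome is genuinely possible, yet its real-valued probability is the same 0 assigned to an impossible event, which is exactly what regularity forbids:

```latex
% Regularity: every possible event receives positive credence.
% If $E$ is possible, then $\mathrm{Cr}(E) > 0$.
%
% But for a fair coin tossed infinitely often, the probability of
% $n$ heads in a row is $(1/2)^n$, so
\[
  \Pr(\text{all heads}) \;=\; \lim_{n \to \infty} \left(\tfrac{1}{2}\right)^{n} \;=\; 0
  \;=\; \Pr(\text{impossible event}),
\]
% even though the all-heads sequence is possible. No positive real
% number is small enough to serve as its credence, which motivates
% the appeal to hyperreal infinitesimals.
```

No single positive real can be assigned here, since any such value would exceed \((1/2)^n\) for large enough \(n\); this is the gap the hyperreal infinitesimals are meant to fill.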
This essay argues that this popular view is the result of two mistakes. The first, which the essay calls the “numerical fallacy,” is to assume that a distinction not represented by different numbers is not represented at all in a mathematical representation. In this case, the essay claims, although the real numbers themselves do not make all the relevant distinctions, the full mathematical structure of a probability function does. The second mistake is to overlook that the hyperreals make too many distinctions: they have a far more complex structure than credences in ordinary propositions can have, and so they draw distinctions that do not exist among credences. While the hyperreals may be useful for generating certain mathematical models, they will not appear in a faithful mathematical representation of credences in ordinary propositions.