The value of $0^0$ is controversial.
Zero to the power of zero, denoted by $0^0$, is a mathematical expression with no agreed-upon value. The most common possibilities are 1 or leaving the expression undefined, with justifications existing for each, depending on context.
Mathematica and WolframAlpha refuse to compute the value.
Some textbooks on mathematical analysis, when defining exponentiation, explicitly leave $0^0$ undefined as an exception.
On the other hand, the IEEE 754 floating-point standard specifies that $0.0^{0.0} = 1.0$, and as a result, most programming languages implement it that way.
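For example, in Python, whose float power follows the underlying C library and hence IEEE 754:

```python
import math

# IEEE 754 (and most languages following it) define pow(0.0, 0.0) = 1.0.
print(0.0 ** 0.0)          # the built-in float power: 1.0
print(math.pow(0.0, 0.0))  # math.pow wraps C's pow(): 1.0
print(0 ** 0)              # integer power agrees: 1
```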
Spoiler: I will argue that the value of 1 is clearly “correct”. Of course it’s a matter of definition, one can in theory define the operation to do anything, but it is “correct” in the sense that it is the only sensible value, consistent with all applications, and moreover, it is a very important value. It is also implicitly assumed to be 1 in various formulas even by those people who insist it should not be 1.
I will also show that the argument against defining $0^0$ essentially relies on a mistake: an incorrect algorithm for computing limits that is unfortunately often taught in schools. Refusal to define $0^0$ is a futile attempt to salvage the correctness of the algorithm, which, however, does not actually solve the problem in general.
The simplest way to resolve the issue seems to be to start with a definition, plug in zeroes, and see what we get.
A semigroup is a set of objects with an associative multiplication operation. This is a very general concept: it can be natural numbers, real numbers, square matrices, linear operators, all kinds of things.
In any semigroup we can define exponentiation to any positive integral power:
$$x^1 = x, \qquad x^{n+1} = x^n \cdot x$$
Often a semigroup has an identity element $e$ such that $e \cdot x = x \cdot e = x$ for every $x$. For numbers, it’s just the number 1. For matrices, it’s the identity matrix. Such a semigroup is called a monoid.
In a monoid we expand and simplify the definition of exponentiation to include the 0 exponent:
$$x^0 = e, \qquad x^{n+1} = x^n \cdot x$$
Well, now just plug in x=0 into that definition and what do you get: $0^0 = 1$.
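The monoid definition translates directly into code. Here is a minimal sketch (names are mine): exponentiation starts from the identity element and multiplies by x a total of n times, so with n = 0 the loop never runs and the result is the identity, regardless of x:

```python
def monoid_pow(x, n, identity=1, mul=lambda a, b: a * b):
    """x**n in any monoid: start from the identity, multiply by x n times."""
    result = identity
    for _ in range(n):
        result = mul(result, x)
    return result

print(monoid_pow(0, 0))   # 1: the loop body never runs
print(monoid_pow(0, 5))   # 0
print(monoid_pow(2, 10))  # 1024
```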
This definition can then be extended to negative exponents, rational exponents, even irrational exponents. But since we’re only concerned with 00 here, we’re not going to go further.
The $0^n$ function
Since $0^n = 0$ for all n > 0, one might think that the most natural thing to expect is that it would also be true for n = 0. However, we see from the definition that it is not so:
$$0^n = [n = 0]$$
We have here used the Iverson bracket notation: $[P]$ equals 1 if the condition P is true, and 0 if it is false.
This seems like a strangely complicated formula for $0^n$, but we will see that it is in fact a very nice and useful function.
What is the number of sequences of n letters, selected from an alphabet of size A? It’s $A^n$.
What if the alphabet is empty, A=0? Then the number of sequences is:
$$0^n = [n = 0]$$
Does this make sense? Yes! If n > 0, we can’t form a sequence, because we will get stuck when trying to write the first letter. But when n=0, there is no problem! We don’t have to write any letters, so it’s fine if the alphabet is empty. There is exactly one way to do it: write an empty sequence of letters.
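We can check this count by brute force with `itertools.product`, which enumerates all length-n sequences over a given alphabet; over the empty alphabet it yields exactly one sequence of length 0 (the empty one) and none of any positive length:

```python
from itertools import product

def count_sequences(alphabet, n):
    """Number of length-n sequences over the given alphabet."""
    return sum(1 for _ in product(alphabet, repeat=n))

print(count_sequences('ab', 3))  # 2^3 = 8
print(count_sequences('', 3))    # 0^3 = 0: can't pick a first letter
print(count_sequences('', 0))    # 0^0 = 1: just the empty sequence
```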
The exp function
The exponential function has the following basic property, often taken to define exp in the first place:
$$\exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
Let’s plug in x=0:
$$\exp(0) = \sum_{n=0}^{\infty} \frac{0^n}{n!} = \sum_{n=0}^{\infty} \frac{[n = 0]}{n!} = \frac{1}{0!} = 1$$
The $0^n$ function played an essential role in this calculation.
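Numerically, summing the series term by term gives exp(0) = 1 only because the n = 0 term evaluates 0**0 as 1, which Python’s power operator does:

```python
from math import factorial

def exp_series(x, terms=20):
    """Partial sum of the power series for exp(x)."""
    return sum(x ** n / factorial(n) for n in range(terms))

print(exp_series(0.0))  # 1.0 -- the n=0 term is 0.0**0 / 0! = 1.0
print(exp_series(1.0))  # approximately e = 2.71828...
```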
The binomial distribution
The binomial distribution is a probability distribution of the number of successes in n independent trials, each successful with probability p. The formula is:
$$P(k) = \binom{n}{k}\, p^k (1 - p)^{n-k}$$
What if p=0?
$$P(k) = \binom{n}{k}\, 0^k\, (1 - 0)^{n-k} = \binom{n}{k}\, [k = 0] = [k = 0]$$
This makes sense! k=0 successes is certain, any other outcome is impossible.
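Evaluating the formula directly in Python (which takes 0**0 = 1) reproduces this: with p = 0, the probability mass is 1 at k = 0 and 0 everywhere else:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(k successes in n trials), straight from the formula."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

for k in range(4):
    print(k, binom_pmf(k, 3, 0.0))  # 1.0 for k=0, then 0.0
```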
Even and odd subsets
Given a set of n elements, how many more even-cardinality subsets are there than odd-cardinality subsets?
We can calculate it like this:
$$\sum_{k=0}^{n} \binom{n}{k} (-1)^k = (1 - 1)^n = 0^n = [n = 0]$$
And indeed, for n=0 we have 1 even-cardinality subset (the empty set) and no odd-cardinality subsets, while for n>0 there are exactly as many even-cardinality as odd-cardinality subsets.
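A brute-force check, enumerating all subsets of an n-element set as bitmasks, confirms the count:

```python
def even_minus_odd(n):
    """(# even-cardinality subsets) - (# odd-cardinality subsets) of an n-set."""
    diff = 0
    for mask in range(1 << n):          # each mask encodes one subset
        size = bin(mask).count('1')     # cardinality of the subset
        diff += 1 if size % 2 == 0 else -1
    return diff

print([even_minus_odd(n) for n in range(5)])  # [1, 0, 0, 0, 0], i.e. [n == 0]
```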
The Möbius function
The Möbius function μ(n) is 1 if n is a product of an even number of distinct primes, −1 if it is a product of an odd number of distinct primes, and 0 if n is divisible by the square of a prime. One important property of it concerns sums over divisors of a positive integer n:
$$S(n) = \sum_{d \mid n} \mu(d) = [n = 1]$$
It can be shown that since μ is multiplicative (a function f is multiplicative when f(mn) = f(m)f(n) for coprime m and n), S is also multiplicative.
Also for prime p and α > 0:
$$S(p^\alpha) = \mu(1) + \mu(p) + \mu(p^2) + \cdots + \mu(p^\alpha) = 1 - 1 + 0 + \cdots + 0 = 0$$
Let’s factor n into prime numbers:
$$n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}$$
and then we have:
$$S(n) = \prod_{i=1}^{k} S\left(p_i^{\alpha_i}\right) = 0^k = [k = 0] = [n = 1]$$
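A direct computation (helper names are mine) verifies that the divisor sum of μ is the indicator [n = 1]:

```python
def mobius(n):
    """Mobius function, via trial-division factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0        # divisible by a squared prime
            result = -result    # one more distinct prime factor
        d += 1
    if n > 1:
        result = -result        # one remaining large prime factor
    return result

def mobius_divisor_sum(n):
    return sum(mobius(d) for d in range(1, n + 1) if n % d == 0)

print([mobius_divisor_sum(n) for n in range(1, 13)])
# [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```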
What about the $0^x$ function for real (rather than natural) exponents x?
Some people argue that while the case for $0^n = [n = 0]$ is convincing, the case for $0^x = [x = 0]$ is less convincing, and $0^0$ should only be defined for the integral exponent 0, and left undefined for the real exponent 0.0.
I have three ways to answer that.
Natural numbers are real numbers
A ubiquitous convention in mathematics is that the natural numbers are a subset of the integers, which in turn are a subset of the rational numbers, which are a subset of the real numbers.
This lets us mix and match integers with rational numbers and irrational numbers in expressions without having to worry about converting between these types.
If so, it makes no sense to say that $0^0 = 1$ but $0.0^{0.0}$ is undefined, because the natural number 0 is the same number as the real number 0.0.
One reason to doubt this is how numbers are constructed from sets in set theory. Natural numbers are constructed first. Then integers are constructed as equivalence classes of pairs of natural numbers. Similarly rational numbers are then constructed as equivalence classes of pairs of integers. Finally real numbers are constructed from rational numbers using Dedekind cuts or Cauchy sequences.
If we literally follow such a construction, then indeed the natural number 0, the integer 0, the rational number 0, and the real number 0 will be four different objects. However, there is an easy fix. When constructing integers as certain equivalence classes of pairs of natural numbers, we can simply replace the non-negative integers with the actual natural numbers. Similarly, we can replace the “integral rationals” with actual integers, and “rational reals” with the actual rationals. After we do that, the 0 number is the same object belonging to all four sets.
Consistency is good
Even if one were to treat integers as disjoint from reals, it would be nice to know that if the notation $a^b$ means something for integers a and b, then it also means the equivalent thing for the real equivalents of a and b. Technically, what it means is that it would be nice if the integer-to-real mapping were a homomorphism for the $a^b$ operation.
Otherwise, if the notation changed meaning between the “integer context” and “real context”, we would have to be extremely careful about which context we are in! And it wouldn’t be clear from notation such as $x^0$. It would be a mess. We don’t want notation to be ambiguous.
$0^x$ is sometimes useful for fractional exponents
What is the (right-sided) derivative of $x^p$ at x = 0 for p ≥ 1? Let’s calculate:
$$\left.\frac{d}{dx}\, x^p \right|_{x=0} = \left.\, p\, x^{p-1} \right|_{x=0} = p \cdot 0^{p-1} = p \cdot [p - 1 = 0] = [p = 1]$$
And indeed this is correct! The derivative at x = 0 is 1 for p = 1, and 0 for p > 1. The derivative at 0 discontinuously “jumps” from 1 to 0 as soon as we increase the exponent p even slightly above 1.
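The closed form p·0^(p−1) can be compared against a one-sided difference quotient (function names are mine); note that it is exactly the convention 0.0**0.0 = 1.0 that makes the formula give the right answer at p = 1:

```python
def deriv_at_zero(p):
    """d/dx of x**p at x = 0, using the power rule p * 0**(p-1)."""
    return p * 0.0 ** (p - 1)

def numeric_right_deriv(p, h=1e-9):
    """One-sided difference quotient (h**p - 0**p) / h = h**(p-1)."""
    return h ** p / h

print(deriv_at_zero(1.0), numeric_right_deriv(1.0))  # 1.0  1.0
print(deriv_at_zero(2.0), numeric_right_deriv(2.0))  # 0.0, ~1e-9
```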
The naive limit algorithm
Given all these nice uses of $0^0 = 1$, why do some people resist defining it like this?
The only reason I have seen has to do with what I call the “naive limit algorithm”.
Suppose we want to calculate this limit:
$$\lim_{x \to 0^+} \left(e^{-1/x}\right)^{x \ln 3}$$
The argument goes that somebody could calculate it like this:
$$\lim_{x \to 0^+} \left(e^{-1/x}\right)^{x \ln 3} = \left(\lim_{x \to 0^+} e^{-1/x}\right)^{\displaystyle\lim_{x \to 0^+} x \ln 3} = 0^0 = 1$$
Which would give an incorrect answer. The correct answer is 1/3, since $\left(e^{-1/x}\right)^{x \ln 3} = e^{-\ln 3} = 1/3$ for every x > 0.
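To see concretely how a limit can have base and exponent both tending to 0 while the value stays away from 1, here is a function of exactly that shape (my own example, consistent with the 1/3 answer): f(x) = (e^(−1/x))^(x·ln 3), which is identically 1/3 for x > 0:

```python
import math

def f(x):
    base = math.exp(-1.0 / x)     # tends to 0 as x -> 0+
    exponent = x * math.log(3.0)  # tends to 0 as x -> 0+
    return base ** exponent

for x in [0.1, 0.01, 0.002]:
    print(x, f(x))  # always 1/3 = 0.333..., not 0**0 = 1
```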
However, the mistake is not in the step $0^0 = 1$. The mistake already happened in the previous step, where we simplified the limit to $0^0$.
A common (incorrect) way of thinking about this is: we allow calculating limits separately for sub-expressions only if the resulting expression makes sense. If it does not make sense, then doing that is not allowed. If only we declare that $0^0$ is not a valid expression, the reduction to $0^0$ will not be allowed, so it solves the problem. If, however, we do define $0^0$ to mean something, the reduction would be allowed.
That’s what I call the “naive limit algorithm”. It doesn’t work.
Let’s apply the same algorithm to a different limit:
$$\lim_{x \to 1^+} \lceil 10x \rceil = \left\lceil \lim_{x \to 1^+} 10x \right\rceil = \lceil 10 \rceil = 10$$
There is an error here. The correct value of the last limit is not 10, it is 11: for x slightly greater than 1, 10x is slightly greater than 10, so $\lceil 10x \rceil = 11$. But this time we can’t fix it the same way: we can’t say “let’s just leave $\lceil 10 \rceil$ undefined”. Everybody agrees that $\lceil 10 \rceil$ is a valid expression and has to be defined!
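The ceiling function is a concrete instance of this failure (my choice of illustration): ceil is perfectly well defined at 10, yet swapping the limit and the function there gives the wrong answer, because ceil is not continuous at integers:

```python
import math

# ceil(10 * x) as x approaches 1 from the right: the values are all 11...
for x in [1.01, 1.001, 1.000001]:
    print(x, math.ceil(10 * x))  # 11 each time

# ...but ceil applied to the limit point of 10*x gives a different number:
print(math.ceil(10 * 1.0))  # 10
```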
The naive limit algorithm simply doesn’t always work.
In general, the algorithm can be described as follows. If:
$$\lim_{x \to p} a(x) = a, \qquad \lim_{x \to p} b(x) = b, \qquad \lim_{x \to p} c(x) = c, \qquad \ldots$$
and f(a, b, c, …) is a valid expression, then:
$$\lim_{x \to p} f\bigl(a(x), b(x), c(x), \ldots\bigr) = f(a, b, c, \ldots)$$
Is this true? It’s not always true! What we wrote here is precisely the definition of continuity of f at the point (a, b, c, …). Some functions are not continuous!
Therefore the appropriate condition shouldn’t have been “f(a, b, c, …) is a valid expression”, it should have been “f is continuous at (a, b, c, …)”.
Well, $x^y$ is simply not continuous at (0, 0). As we saw, even $0^x$ is not continuous at 0. It’s inherently so; it reflects deep mathematical reality.
Refusing to define the operation there doesn’t really help the situation at all. If we don’t define it at (0, 0), it’s still not going to be continuous there; it won’t even be defined there, which is worse! We can’t use the naive limit algorithm at that point either way.
I think we should just all agree that:
$$0^0 = 1$$
It follows directly from definitions, and it’s a nice and consistent and useful property of exponentiation. There is no convincing reason to make an exception.
Let me know what you think!