I’m away on vacation this week, taking my kids to Disney World. Since I’m not likely to have time to write while I’m away, I’m taking the opportunity to re-run an old classic series of posts on numbers, which were first posted in the summer of 2006. These posts are mildly revised.

This post originally came about as a result of the first time I participated in a DonorsChoose fundraiser. I offered to write articles on requested topics for anyone who donated above a certain amount. I only had one taker, who asked for an article about zero. I was initially a bit taken aback by the request – what could I write about *zero*? The article that resulted turned out to be one of the all-time reader favorites on this blog!

### History

We’ll start with a bit of history. Yes, there’s an actual history to zero!

Early number systems had no concept of zero. Numbers really started out as very practical tools, primarily for measuring quantity. They were used to ask questions like “How much grain do we have stored away? If we eat this much now, will we have enough to plant crops next season?” In that context, a “measurement” of zero doesn’t really mean much; even when math is applied to measurements today, leading zeros in a number – even if they’re *measured* – don’t count as significant digits in the measurement. (So if I’m measuring some rocks, and one weighs 99 grams, then that measurement has only two significant digits. If I use the same scale to weigh a very slightly larger rock, and it weighs 101 grams, then my measurement of the second rock has *three* significant digits. A zero in the middle counts, but a leading zero wouldn’t!)

Aristotle’s reasoning is pretty typical of why zero wasn’t part of most early number systems: he connected the ideas of zero and infinity, and considered them nothing more than pure *ideas* related to numbers, not actual numbers themselves. After all, you can’t really *have* 0 of anything; zero of something isn’t *anything*: you *don’t have* any quantity of stuff. And by Aristotle’s reasoning, zero had another property in common with infinity – you can’t ever really *get to* zero as he understood it. If numbers are quantities, then you can start with one of something. Then you can cut it in half, and you’ll have half. Cut that in half, and you’ll have a quarter. Keep going – you can cut stuff in half forever. You’ll get closer and closer to the concept represented by zero, but you’ll never actually get there.

The first number system that we know of to have any notion of zero is the Babylonians’; but they still didn’t quite treat it as a genuine number. They had a base-60 number system, and for digit-places that didn’t have a value, they left a space: the space was the zero. (They later adopted a placeholder that looked something like “//”.) It was never used *by itself*; it just kept the space open to show that there was nothing there. And if the last digit was zero, there was no indication at all, so, for example, 2 and 120 looked exactly the same – you needed to look at the context to see which one was meant.

The first real zero came from an Indian mathematician named Brahmagupta in the 7th century. He was quite a fascinating guy: he didn’t just invent zero, but arguably he also invented the idea of negative numbers and algebra! He was the first to use zero as a real number, and to work out a set of algebraic rules about how zero, positive, and negative numbers worked. The formulation he worked out is very interesting; he allowed zero as a numerator or a denominator in a fraction.

From Brahmagupta, zero spread both west (to the Arabs) and east (to the Chinese and Vietnamese). Europeans were just about the last to get it; they were so attached to their wonderful Roman numerals that it took quite a while to penetrate: zero didn’t make the grade in Europe until about the 13th century, when Fibonacci (he of the series) translated the works of a Persian mathematician named al-Khwarizmi (from whose name sprang the word “algorithm” for a mathematical procedure). As a result, Europeans called the new number system “Arabic” and credited it to the Arabs; but as I said above, the Arabs didn’t create it; it originally came from India. (The Arabic scholars, including the famous poet Omar Khayyam, are the ones who adopted Brahmagupta’s notions *and extended them* to include complex numbers.)

### Why is zero strange?

Even now, when we recognize zero as a number, it’s an annoyingly difficult one. It’s neither positive nor negative; it’s neither prime nor composite. If you include it, the real numbers aren’t a group under multiplication – zero is the one number with no multiplicative inverse. It’s not a unit; and it breaks the closure of the reals under division. It’s a real obnoxious bugger in a lot of ways. One thing Aristotle was right about: zero is a kind of counterpart to infinity: a concept, not a quantity. But infinity we can generally ignore in our daily lives. Zero, we’re stuck with.

Still, it’s there, and it’s a real, inescapable part of our entire concept of numbers. It’s just an oddball – the dividing line that breaks a lot of rules. But without it, a lot of rules fall apart. Addition isn’t a group without 0. Addition and subtraction aren’t closed without zero.
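To spell that claim out a little: the group axioms for addition need an identity element, and closure under subtraction needs somewhere for x - x to land. Here’s a minimal sketch of both facts, written out for the integers:

```latex
% Group axioms for (\mathbb{Z}, +): zero is the additive identity,
% and inverses are defined relative to it.
\[
  x + 0 = x, \qquad x + (-x) = 0 .
\]
% Closure under subtraction also needs zero: for any x,
\[
  x - x = 0 ,
\]
% so a number system without zero can't be closed under subtraction.
```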

Our notation for numbers is also totally dependent on zero; it’s hugely important to making a positional number system work. To get an idea of how valuable it is, just wait until later this week, when I’ll be re-posting an article about multiplication in Roman numerals – multiplication is vastly easier in the decimal system with zero!
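The point is easy to see in decimal notation: each digit is scaled by a power of ten, and zero is what holds an “empty” place open. A quick sketch:

```latex
% Positional notation: each digit is multiplied by a power of the base.
\[
  305 = 3 \cdot 10^{2} + 0 \cdot 10^{1} + 5 \cdot 10^{0},
  \qquad
  35 = 3 \cdot 10^{1} + 5 \cdot 10^{0} .
\]
% Without a symbol for the empty tens place, 305 and 35 would look
% identical -- exactly the Babylonian ambiguity between 2 and 120.
```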

Because of the strangeness of zero, people make a lot of mistakes involving it. For example, there’s one of my big pet peeves: based on the idea that zero and infinity are related, a lot of people believe that 1/0 = infinity. It doesn’t. 1/0 doesn’t equal *anything*; it’s meaningless. You *can’t* divide by 0. The intuition behind this fact comes from the Aristotelean idea about zero: concept, not quantity. Division is a concept based on quantity: asking “What is x divided by y?” is asking “What quantity of stuff is the right size so that if I take y of it, I’ll get x?”

So: what quantity of apples can I take 0 of to get 1 apple? The question makes no sense; and that’s exactly right: it *shouldn’t* make sense, because dividing by zero makes no sense: *it’s meaningless*.
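For the more formally inclined, here’s a quick sketch of the standard argument for why no value – infinite or otherwise – can be assigned to 1/0:

```latex
% If 1/0 had some value z, multiplying back by 0 would have to recover 1:
\[
  \frac{1}{0} = z \;\Longrightarrow\; 1 = z \cdot 0 = 0 ,
\]
% which is false, so no such z exists.
%
% The "it's just infinity" escape hatch fails too: the one-sided limits
% disagree, so there isn't even a single value being approached.
\[
  \lim_{x \to 0^{+}} \frac{1}{x} = +\infty,
  \qquad
  \lim_{x \to 0^{-}} \frac{1}{x} = -\infty .
\]
```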

Zero is also at the root of a lot of silly mathematical puzzles and tricks. For example, there’s a cute little algebraic pun that can show that 1 = 2, which is based on hiding a division by zero.

1. Start with “x = y”.
2. Multiply both sides by x: “x² = xy”.
3. Subtract “y²” from both sides: “x² - y² = xy - y²”.
4. Factor: “(x + y)(x - y) = y(x - y)”.
5. Divide both sides by the common factor “x - y”: “x + y = y”.
6. Since x = y, we can substitute y for x: “y + y = y”.
7. Simplify: “2y = y”.
8. Divide both sides by y: “2 = 1”.

The problem, of course, is step 5: since x = y, x - y = 0, so step 5 is dividing by zero. Since that’s a meaningless thing to do, everything based on getting a meaningful result from that step is wrong – and so we get to “prove” false facts.
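To see concretely what the division by zero is hiding, try plugging in actual numbers – say x = y = 1 (my own choice of values, just for illustration):

```latex
% With x = y = 1, step 4 is true but completely uninformative:
\[
  (x + y)(x - y) = y(x - y)
  \;\longrightarrow\;
  (1 + 1)\cdot 0 = 1 \cdot 0
  \;\longrightarrow\;
  0 = 0 .
\]
% Step 5 then divides both sides of "0 = 0" by zero, and once you allow
% that meaningless move, you can "derive" anything you like - including 2 = 1.
```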

Anyway, if you’re interested in reading more, the best source of information that I’ve found is an online article called “The Zero Saga”. It covers not just a bit of history and random chit-chat like this article, but a detailed presentation of everything you could ever want to know, from the linguistics of words meaning zero or nothing, to the cultural impact of the concept, to a detailed mathematical explanation of how zero fits into algebras and topologies.