The nothing in mathematics

Although today we already encounter the empty set in school, its definition and its place in mathematics and philosophy have been disputed for centuries

It was in 1939, on the eve of the Second World War, that a mathematical concept came of age, so to speak, and received its definitive name: the empty set, represented by Ø, a letter of the Danish and Norwegian alphabets. The new symbol became at a stroke the standard notation of set theory.

The proposal came from André Weil (1906-1998), a scientist from Strasbourg and one of the most important members of the Bourbaki group, an association of like-minded French mathematicians who took it upon themselves to reformulate all of mathematics in a new and inexorably rigorous way. In the very first book on calculus, in the introduction to set theory,¹ Ø is defined as the empty part of a set, in order to settle the notation for sets "once and for all".² Generations of students at universities have since fought a losing battle against the dryness and rigor of the all too many Bourbaki volumes.

In the 21st century all this seems like the distant past. And still: although we already encounter the empty set in school, its definition and its place in mathematics and philosophy have been disputed for centuries. Nothing is more complicated than nothing, at least from the point of view of philosophy.

When we talk about numbers, we need a starting number such as 1. By successively increasing by one unit we move on to 2, 3, 4, and so on. The starting number can be arbitrary; one could, for example, also start with zero. But zero as a symbol was not available to all cultures. In the Roman notation for numbers there is no sign for it, and so we number the years since the birth of Christ from the year one. Only with the spread of positional number systems did it become necessary to operate symbolically with zero as well.

But back to set theory. When dealing with sets of objects, there are two basic ways to form them. There is the predicative way, where we simply specify linguistically which elements a set contains (for example, when we say, "let T be the set of all Telepolis readers"), and the constructive way, where new sets are constructed from given sets. Historically, the predicative path was taken first and brought to maturity in the nineteenth century – but then something terrible happened: Russell's paradox entered the mathematical stage.
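As a small illustration of the two approaches (my own sketch, not from the article), both show up naturally in Python: a set comprehension specifies a set by a condition, much like the linguistic description, while set operations build new sets out of sets already given.

```python
# Predicative style: describe the elements by a condition.
# (The predicate "even number below 10" stands in for
# "is a Telepolis reader" -- the example is my own.)
evens = {n for n in range(10) if n % 2 == 0}

# Constructive style: build new sets out of sets we already have.
small = {0, 1, 2, 3}
both = evens & small   # the intersection of two given sets

print(evens)   # {0, 2, 4, 6, 8}
print(both)    # {0, 2}
```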

In the paradox, the set M of all sets that do not contain themselves is considered. The definition of M is, linguistically speaking, a completely legitimate thing. But then one asks whether this set M is also an element of itself. If it is, we have a contradiction, since the elements of M must not contain themselves. But if M is not an element of itself, it fulfills the condition for being an element of M – again a contradiction, and thus an inconsistency in the science of abstract structures.
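A minimal sketch of the vicious circle (my own illustration, not from the article): if we model "sets" as membership predicates in Python, Russell's set M becomes a predicate that asks whether a predicate applies to itself, and the question "is M an element of M?" never settles.

```python
# Russell's paradox with "sets" modelled as membership predicates:
# a "set" is a function deciding whether x belongs to it.

def russell(x):
    """The 'set' M of all sets that do not contain themselves."""
    return not x(x)

# "Is M an element of M?" means evaluating russell(russell), which unfolds
# to not russell(russell), to not not russell(russell), and so on forever.
# Python reports the vicious circle as a RecursionError:
try:
    russell(russell)
except RecursionError:
    print("M is in M exactly when it is not -- a contradiction.")
```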

In 1901 the British mathematician and philosopher Bertrand Russell dealt what is known today as "naive set theory" its death blow with this paradox. The German theorist Ernst Zermelo had discovered the inconsistency a year earlier, and Georg Cantor in Halle much earlier, but both were prudent enough to keep quiet until a solution for the system could be found.

The problem here is really that language covers far too much ground: we can, for example, talk about sets that contain themselves, like Ouroboros, the Egyptian snake that bites its own tail. It is also like the paradox of the barber who cuts the hair of everyone in the village who does not cut their own hair. The poor barber then does not know whether he is allowed to cut his own hair or not.

But no one was harder hit by Russell's discovery than the father of German logic, Gottlob Frege, who had just prepared the second volume (1903) of his book "Basic Laws of Arithmetic" when Russell told him about the contradiction in his system. Frege wrote to Russell:

Your discovery of the contradiction has surprised and, I would almost say, dismayed me, because it has caused the basis on which I intended to build arithmetic to falter. (…) I must think further about the matter. It is all the more serious because, with the loss of my Law V, not only the basis of my arithmetic but the only possible basis of arithmetic at all seems to vanish.

Frege to Russell, 22 June 1902.

George Boole and the first empty set

Thus, although mathematicians had operated implicitly with sets of objects for centuries, there was no real theoretical formalization of them until well into the nineteenth century. Frege himself was in a certain sense only a pioneer, since he created the logical language with which one could work mathematically cleanly from then on (the so-called predicate logic).

Frege's counterpart was the British mathematician George Boole, whom we regard today as a kind of forerunner of the computer because of his Boolean algebra. He was apparently the first to assign an explicit symbol to the empty set. This happened in his book "The Mathematical Analysis of Logic" from 1847.

The step to an explicit symbol for the empty set is by no means trivial. We can always address nothingness purely linguistically, but once we begin to operate with sets, we may also obtain sets as results. If we unite two sets, say {1,2} and {3,4}, we get the set of the numbers from 1 to 4. But if we now intersect the two sets, i.e. look at their common elements, there are none, and the result is "empty". We could then continue linguistically, but not symbolically. Much more convenient than saying "A and B have no common elements" is therefore to write "A∩B=∅" (i.e. the intersection of A and B is the empty set), and we can then continue to work algebraically.
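The same calculation can be spelled out in a few lines of Python (my own sketch): the built-in set type makes the union, the intersection and the empty result explicit.

```python
# Union and intersection of the two example sets; the empty set
# appears as the result of the intersection.
A = {1, 2}
B = {3, 4}

print(A | B)           # union: {1, 2, 3, 4}
print(A & B)           # intersection: set(), Python's empty set
print(A & B == set())  # True: "A and B have no common elements"
```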

For the empty set, Boole used the symbol 0. This double use of the symbol for the number zero was convenient for him, because Boole developed his logical operations with the help of the truth values 0 and 1. Many other mathematicians then adopted Boole's notation, roughly up until almost the present day, but without the slash through the letter O.
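A brief sketch (my own illustration) of why this double use is so natural: the truth value 0 behaves with respect to "and" and "or" exactly as the empty set behaves with respect to intersection and union.

```python
# 0 (False) plays the same role for "and"/"or" that ∅ plays for ∩/∪.
A = {1, 2}
empty = set()

for x in (False, True):
    assert (x and False) == False   # x ∧ 0 = 0, just like A ∩ ∅ = ∅
    assert (x or False) == x        # x ∨ 0 = x, just like A ∪ ∅ = A

assert A & empty == empty           # A ∩ ∅ = ∅
assert A | empty == A               # A ∪ ∅ = A
print("0 and ∅ obey the same laws")
```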

All this foundational research was joined by an Italian mathematician, Giuseppe Peano, who wanted to develop arithmetic without words. Instead of long sentences, he wanted to use only chains of symbols, in order to keep the whole formalism free of intuitive, unverifiable linguistic biases. Peano's notation for the empty set was highly original, and it is actually a pity that we no longer use it today: following Georg Cantor, who used the capital letter O (instead of zero, like Boole) as a symbol for the empty set, Peano decided in 1888 to use a black circle to represent the universal set (everything) and an empty circle to represent the empty set, i.e. nothingness (see fig.).

Peano’s symbols for the universal and empty set (Calcolo Geometrico, 1888, page 2).
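Peano's program of building arithmetic out of pure chains of symbols can be hinted at with a small sketch (my own toy encoding, not Peano's notation): a zero, a successor operation and a recursive definition of addition already suffice to derive that 1 + 1 = 2.

```python
# Toy symbolic arithmetic in the spirit of Peano's program:
# a number is a chain of successor applications on top of zero.
zero = ()

def succ(n):
    """The successor of n: one more link in the chain."""
    return (n,)

def add(m, n):
    """m + 0 = m, and m + succ(n) = succ(m + n)."""
    return m if n == zero else succ(add(m, n[0]))

one = succ(zero)
two = succ(one)

print(add(one, one) == two)   # True: 1 + 1 = 2, obtained purely symbolically
```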

It was probably difficult to get these symbols accepted by the stubborn typesetters, and so only one year later Peano switched to an inverted capital lambda as the symbol for the empty set. Peano, as you can see, thought a lot about the right symbolic language for mathematics – but also about language for human beings. He invented, in passing, "Interlingua", a simplified Latin without declensions, which as a universal language was meant to enable boundless communication.

In Great Britain, Whitehead and Russell, who continued Peano's program, adopted Peano's symbol for the empty set in their grandiose work "Principia Mathematica", which perhaps no mathematician has ever really read from beginning to end, so impenetrable and difficult is the whole treatise. The reader moves at a snail's pace through a notational desert until, after hundreds of pages, 1+1=2 is proved.
