Is the Mathematical World Real?

Philosophers cannot agree on whether mathematical objects exist or are pure fictions

Kelsey Houston-Edwards

When I tell someone I am a mathematician, one of the most curious common reactions is: “I really liked math class because everything was either right or wrong. There is no ambiguity or doubt.” I always stutter in response. Math does not have a reputation for being everyone's favorite subject, and I hesitate to temper anyone's enthusiasm. But math is full of uncertainties—it just hides them well.

Of course, I understand the point. If your teacher asks whether 7 is a prime number, the answer is definitively “yes.” By definition, a prime number is a whole number greater than 1 that is only divisible by itself and 1, such as 2, 3, 5, 7, 11, 13, and so on. Any math teacher, anywhere in the world, anytime in the past several thousand years, will mark you correct for stating that 7 is prime and incorrect for stating that 7 is not prime. Few other disciplines can achieve such incredible consensus. But if you ask 100 mathematicians what explains the truth of a mathematical statement, you will get 100 different answers. The number 7 might really exist as an abstract object, with primality being a feature of that object. Or it could be part of an elaborate game that mathematicians devised. In other words, mathematicians agree to a remarkable degree on whether a statement is true or false, but they cannot agree on what exactly the statement is about.
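That definition of primality translates directly into a short computation. The sketch below uses simple trial division (the function name and structure are illustrative, not part of the article):

```python
def is_prime(n: int) -> bool:
    """A whole number greater than 1 is prime if it is
    divisible only by 1 and itself (checked by trial division)."""
    if n <= 1:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(7))                               # True
print([n for n in range(2, 14) if is_prime(n)])  # [2, 3, 5, 7, 11, 13]
```

Any correct implementation, in any language, agrees on the answer, which is exactly the kind of consensus the paragraph describes.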

One aspect of the controversy is the simple philosophical question: Was mathematics discovered by humans, or did we invent it? Perhaps 7 is an actual object, existing independently of us, and mathematicians are discovering facts about it. Or it might be a figment of our imaginations whose definition and properties are flexible. The act of doing mathematics actually encourages a kind of dual philosophical perspective, where math is treated as both invented and discovered.


This all seems to me a bit like improv theater. Mathematicians invent a setting with a handful of characters, or objects, as well as a few rules of interaction, and watch how the plot unfolds. The actors rapidly develop surprising personalities and relationships, entirely independent of the ones mathematicians intended. Regardless of who directs the play, however, the denouement is always the same. Even in a chaotic system, where the endings can vary wildly, the same initial conditions will always lead to the same end point. It is this inevitability that gives the discipline of math such notable cohesion. Hidden in the wings are difficult questions about the fundamental nature of mathematical objects and the acquisition of mathematical knowledge.

Invention

How do we know whether a mathematical statement is correct or not? In contrast to scientists, who usually try to infer the basic principles of nature from observations, mathematicians start with a collection of objects and rules and then rigorously demonstrate their consequences. The result of this deductive process is called a proof, which often builds from simpler facts to a more complex fact. At first glance, proofs seem to be key to the incredible consensus among mathematicians.

But proofs confer only conditional truth, with the truth of the conclusion depending on the truth of the assumptions. This is the problem with the common idea that consensus among mathematicians results from the proof-based structure of arguments. Proofs have core assumptions on which everything else hinges—and many of the philosophically fraught questions about mathematical truth and reality are actually about this starting point. Which raises the question: Where do these foundational objects and ideas come from?

Often the imperative is usefulness. We need numbers, for example, so that we can count (heads of cattle, say) and geometric objects such as rectangles to measure, for example, the areas of fields. Sometimes the reason is aesthetic—how interesting or appealing is the story that results? Altering the initial assumptions will sometimes unlock expansive structures and theories, while precluding others. For example, we could invent a new system of arithmetic where, by fiat, a negative number times a negative number is negative (easing the frustrated explanations of math teachers), but then many of the other, intuitive and desirable properties of the number line would disappear. Mathematicians judge foundational objects (such as negative numbers) and their properties (such as the result of multiplying them together) within the context of a larger, consistent mathematical landscape. Before proving a new theorem, therefore, a mathematician needs to watch the play unfold. Only then can the theorist know what to prove: the inevitable, unvarying conclusion. This gives the process of doing mathematics three stages: invention, discovery and proof.

The characters in the play are almost always constructed out of simpler objects. For example, a circle is defined as all points equidistant from a central point. So its definition relies on the definition of a point, which is a simpler type of object, and the distance between two points, which is a property of those simpler objects. Similarly, multiplication is repeated addition, and exponentiation is repeated multiplication of a number by itself. In consequence, the properties of exponentiation are inherited from the properties of multiplication. Conversely, we can learn about complicated mathematical objects by studying the simpler objects they are defined in terms of. This has led some mathematicians and philosophers to envision math as an inverted pyramid, with many complicated objects and ideas deduced from a narrow base of simple concepts.
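The "built out of simpler objects" idea can be made concrete: here is a minimal sketch (with illustrative function names) in which multiplication is defined as repeated addition and exponentiation as repeated multiplication, so the higher operation inherits its properties from the lower one:

```python
def mul(a: int, b: int) -> int:
    """Multiplication defined as repeated addition (b >= 0)."""
    total = 0
    for _ in range(b):
        total += a
    return total

def power(a: int, b: int) -> int:
    """Exponentiation defined as repeated multiplication (b >= 0)."""
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

print(mul(4, 3))    # 12
print(power(2, 5))  # 32
```

Studying `mul` tells you about `power` in the same way that studying points and distances tells you about circles.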

In the late 19th and early 20th centuries a group of mathematicians and philosophers began to wonder what holds up this heavy pyramid of mathematics. They worried feverishly that math has no foundations—that nothing was grounding the truth of facts like 1 + 1 = 2. (An obsessive set of characters, several of them struggled with mental illness.) After 50 years of turmoil, the expansive project failed to produce a single, unifying answer that satisfied all the original goals, but it spawned various new branches of mathematics and philosophy.

Some mathematicians hoped to solve the foundational crisis by producing a relatively simple collection of axioms from which all mathematical truths can be derived. The 1930s work of mathematician Kurt Gödel, however, is often interpreted as demonstrating that such a reduction to axioms is impossible. First, Gödel showed that any reasonable candidate system of axioms will be incomplete: mathematical statements exist that the system can neither prove nor disprove. But the most devastating blow came in Gödel's second theorem about the incompleteness of mathematics. Any foundational system of axioms should be consistent—meaning, free of statements that can be both proved and disproved. (Math would be much less satisfying if we could prove that 7 is prime and 7 is not prime.) Moreover, the system should be able to prove—to mathematically guarantee—its own consistency. Gödel's second theorem states that this is impossible.

The quest to find the foundations of mathematics did lead to the incredible discovery of a system of basic axioms, known as Zermelo-Fraenkel set theory, from which one can derive most of the interesting and relevant mathematics. Based on sets, or collections of objects, these axioms are not the idealized foundation that some historical mathematicians and philosophers had hoped for, but they are remarkably simple and do undergird the bulk of mathematics.

Throughout the 20th century mathematicians debated whether Zermelo-Fraenkel set theory should be augmented with an additional rule, known as the axiom of choice: If you have infinitely many sets of objects, then you can form a new set by choosing one object from each set. Think of a row of buckets, each containing a collection of balls, and one empty bucket. From each bucket in the row, you can choose one ball and place it in the empty bucket. The axiom of choice would allow you to do this with an infinite row of buckets. Not only does it have intuitive appeal, it is necessary to prove several useful and desirable mathematical statements. But it also implies some strange things, such as the Banach-Tarski paradox, which states that you can break a solid ball into five pieces and reassemble those pieces into two new solid balls, each equal in size to the first. In other words, you can double the ball. Foundational assumptions are judged by the structures they produce, and the axiom of choice implies many important statements but also brings extra baggage. Without the axiom of choice, math seems to be missing crucial facts, though with it, math includes some strange and potentially undesirable statements.

The bulk of modern mathematics uses a standard set of definitions and conventions that have taken shape over time. For example, mathematicians used to regard 1 as a prime number but no longer do. They still argue, however, over whether 0 should be considered a natural number (sometimes called the counting numbers, natural numbers are defined as 0, 1, 2, 3, ... or 1, 2, 3, ..., depending on whom you ask). Which characters, or inventions, become part of the mathematical canon usually depends on how intriguing the resulting play is—observing which can take years. In this sense, mathematical knowledge is cumulative. Old theories can be neglected, but they are rarely invalidated, as they often are in the natural sciences. Instead mathematicians simply choose to turn their attention to a new set of starting assumptions and explore the theory that unfolds.

Discovery

As noted earlier, mathematicians often define objects and axioms with a particular application in mind. Over and over again, however, these objects surprise them during the second stage of the mathematical process: discovery. Prime numbers, for example, are the building blocks of multiplication, the smallest multiplicative units. A number is prime if it cannot be written as the product of two smaller numbers, and all the nonprime (composite) numbers can be constructed by multiplying a unique set of primes together.

In 1742 mathematician Christian Goldbach hypothesized that every even number greater than 2 is the sum of two primes. If you pick any even number, the so-called Goldbach conjecture predicts that you can find two prime numbers that add up to that even number. If you pick 8, those two primes are 3 and 5; pick 42, and that is 13 + 29. The Goldbach conjecture is surprising because although primes were designed to be multiplied together, it suggests amazing, accidental relations between even numbers and the sums of primes.
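Finding such a pair for any particular even number is a routine search. This hypothetical sketch returns the first pair it finds (scanning upward from the smallest prime), which is why it reports (5, 37) for 42 rather than the equally valid 13 + 29:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(even: int):
    """Return one pair of primes summing to an even number > 2,
    or None if no pair exists (no counterexample has ever been found)."""
    for p in range(2, even // 2 + 1):
        if is_prime(p) and is_prime(even - p):
            return p, even - p
    return None

print(goldbach_pair(8))   # (3, 5)
print(goldbach_pair(42))  # (5, 37)
```

Checking any one even number is easy; the conjecture's difficulty lies in covering all infinitely many of them at once.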

An abundance of evidence supports Goldbach's conjecture. In the nearly 300 years since his original observation, computers have confirmed that it holds for all even numbers smaller than 4 × 10¹⁸. But this evidence is not enough for mathematicians to declare Goldbach's conjecture correct. No matter how many even numbers a computer checks, there could be a counterexample—an even number that is not the sum of two primes—lurking around the corner.

Imagine that the computer is printing its results. Each time it finds two primes that add up to a specific even number, the computer prints that even number. By now it is a very long list of numbers, which you can present to a friend as a compelling reason to believe the Goldbach conjecture. But your clever friend is always able to think of an even number that is not on the list and asks how you know that the Goldbach conjecture is true for that number. It is impossible for all (infinitely many) even numbers to show up on the list. Only a mathematical proof—a logical argument from basic principles demonstrating that Goldbach's conjecture is true for every even number—is enough to elevate the conjecture to a theorem or fact. To this day, no one has been able to provide such a proof.

The Goldbach conjecture illustrates a crucial distinction between the discovery stage of mathematics and the proof stage. During the discovery phase, one seeks overwhelming evidence of a mathematical fact—and in empirical science, that is often the end goal. But mathematical facts require a proof.

Patterns and evidence help mathematicians sort through mathematical findings and decide what to prove, but they can also be deceptive. For example, let us build a sequence of numbers: 121, 1211, 12111, 121111, 1211111, and so on. And let us make a conjecture: no number in the sequence is prime. It is easy to gather evidence for this conjecture. You can see that 121 is not prime, because 121 = 11 × 11. Similarly, 1211, 12111 and 121111 are all not prime. The pattern holds for a while—long enough that you would likely get bored checking—but then it suddenly fails. The 136th element in this sequence (that is, the number 12111...111, where 136 “1”s follow the “2”) is prime.
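You can gather the misleading evidence yourself. This sketch builds the sequence and checks the early elements by trial division; note that confirming the primality of the 136th element (a 138-digit number) would require a much faster primality test than the one used here:

```python
def term(n: int) -> int:
    """The n-th element of the sequence: '12' followed by n '1's."""
    return int("12" + "1" * n)

def is_prime(n: int) -> bool:
    """Trial-division primality check (fine for small numbers only)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The early elements are all composite, so the pattern looks safe...
print([is_prime(term(n)) for n in range(1, 6)])  # [False, False, False, False, False]
print(term(1) == 11 * 11)                        # True
```

Every element you check strengthens the conjecture, and yet the conjecture is false.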

It is tempting to think that modern computers can help with this problem by allowing you to test the conjecture on more numbers in the sequence. But there are examples of mathematical patterns that hold true for the first 10⁴² elements of a sequence and then fail. Even with all the computational power in the world, you would never be able to test that many numbers.

Even so, the discovery stage of the mathematical process is extremely important. It reveals hidden connections such as the Goldbach conjecture. Often two entirely distinct branches of math are intensively studied in isolation before a profound relation between them is uncovered. A relatively simple example is Euler's identity, e^(iπ) + 1 = 0, which connects the geometric constant π with the number i, defined algebraically as the square root of –1, via the number e, the base of natural logarithms. These surprising discoveries are part of the beauty and curiosity of math. They seem to point at a deep underlying structure that mathematicians are only beginning to understand.
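The identity can even be verified numerically. A minimal sketch using Python's standard complex-arithmetic library: the result is not exactly zero only because floating-point arithmetic rounds π.

```python
import cmath
import math

# Euler's identity: e^(iπ) + 1 = 0.
# Floating-point evaluation lands within rounding error of zero.
value = cmath.exp(1j * math.pi) + 1
print(abs(value) < 1e-12)  # True
```

Of course, a numerical check is evidence, not proof—the identity itself follows from the series definitions of the exponential and trigonometric functions.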

In this sense, math feels both invented and discovered. The objects of study are precisely defined, but they take on a life of their own, revealing unexpected complexity. The process of mathematics therefore seems to require that mathematical objects be simultaneously viewed as real and invented—as objects with concrete, discoverable properties and as easily manipulable inventions of mind. As philosopher Penelope Maddy writes, however, the duality makes no difference to how mathematicians work, “as long as double-think is acceptable.”

Real or unreal?

Mathematical realism is the philosophical position that seems to hold during the discovery stage: the objects of mathematical study—from circles and prime numbers to matrices and manifolds—are real and exist independently of human minds. Like an astronomer exploring a far-off planet or a paleontologist studying dinosaurs, mathematicians are gathering insights into real entities. To prove that Goldbach's conjecture is true, for example, is to show that the even numbers and the prime numbers are related in a particular way through addition, just like a paleontologist might show that one type of dinosaur descended from another by showing that their anatomical structures are related.

Realism in its various manifestations, such as Platonism (inspired by the Greek philosopher's theory of Platonic forms), makes easy sense of mathematics' universalism and usefulness. A mathematical object has a property, such as 7 being a prime number, in the same way that a dinosaur might have had the property of being able to fly. And a mathematical theorem, such as the fact that the sum of two even numbers is even, is true because even numbers really exist and stand in a particular relation to each other. This explains why people across temporal, geographical and cultural differences generally agree about mathematical facts—they are all referencing the same fixed objects.

But there are some important objections to realism. If mathematical objects really exist, their properties are certainly very peculiar. For one, they are causally inert, meaning they cannot be the cause of anything, so you cannot literally interact with them. This is a problem because we seem to gain knowledge of an object through its impact. Dinosaurs decomposed into bones that paleontologists can see and touch, and a planet can pass in front of a star, blocking its light from our view. But a circle is an abstract object, independent of space and time. The fact that π is the ratio of the circumference to the diameter of a circle is not about a soda can or a doughnut; it refers to an abstract mathematical circle, where distances are exact and the points on the circle are infinitesimally small. Such a perfect circle is causally inert and seemingly inaccessible. So how can we learn facts about it without some type of special sixth sense?

That is the difficulty with realism—it fails to explain how we know facts about abstract mathematical objects. All of which might cause a mathematician to recoil from his or her typically realist stance and latch onto the first step of the mathematical process: invention. By framing mathematics as a purely formal mental exercise or a complete fiction, antirealism easily skirts problems of epistemology.

Formalism, a type of antirealism, is a philosophical position that asserts that mathematics is like a game, and mathematicians are just playing out the rules of the game. Stating that 7 is a prime number is like stating that a knight is the only chess piece that can move in an L shape. Another philosophical position, fictionalism, claims that mathematical objects are fictions. Stating that 7 is a prime number is then like stating that unicorns are white. Mathematics makes sense within its fictional universe but has no real meaning outside of it.

There is an inevitable trade-off. If math is simply made up, how can it be such a necessary part of science? From quantum mechanics to models of ecology, mathematics is an expansive and precise scientific tool. Scientists do not expect particles to move according to chess rules or the crack in a dinner plate to mimic Hansel and Gretel's path—the burden of scientific description is placed exclusively on mathematics, which distinguishes it from other games or fictions.

In the end, these questions do not affect the practice of mathematics. Mathematicians are free to choose their own interpretations of their profession. In The Mathematical Experience, Philip Davis and Reuben Hersh famously wrote that “the typical working mathematician is a Platonist on weekdays and a formalist on Sundays.” By funneling all disagreements through a precise process—which embraces both invention and discovery—mathematicians are incredibly effective at producing disciplinary consensus.

MORE TO EXPLORE

Logicomix: An Epic Search for Truth. Apostolos Doxiadis and Christos H. Papadimitriou. Art by Alecos Papadatos and Annie Di Donna. Bloomsbury USA, 2009.

Where Proof, Evidence and Imagination Intersect. Patrick Honner in Quanta Magazine. Published online March 14, 2019.

FROM OUR ARCHIVES

Why Isn't 1 a Prime Number? Evelyn Lamb in ScientificAmerican.com. Published online April 2, 2019.

Kelsey Houston-Edwards is a mathematician and journalist. She formerly wrote and hosted the online show PBS Infinite Series.

This article was originally published with the title “Numbers Game” in Scientific American Magazine Vol. 321 No. 3 (September 2019), p. 35.
doi:10.1038/scientificamerican0919-35