We've been wrong about math for 2300 years
A radical conceptualist take on the foundations of mathematics
It’s a catchy title, but there’s something to it: since the time of Euclid, we’ve been operating under flawed assumptions about the nature of math. Everyone knew something was wrong, and everyone was confused. The good news is that we may finally be in a position to fix this.
If you’re not familiar with the issue, please take a look at this quote from Wikipedia:
There is no general consensus among mathematicians about a common definition for their academic discipline.
Just think about it: billions of kids are forced to study math, yet we can’t even agree on what it’s supposed to be. And guess what? It doesn’t go well.
The stakes are crystal clear: we’re facing a weird animal, a metaphysical issue with actual societal repercussions. If mathematicians hadn’t skipped Marketing 101, they would have taken the problem more to heart: you can’t teach a subject you can’t define, just like you can’t market a product you can’t explain.
Historically, there have been two competing approaches to defining mathematics:
Through what it studies: numbers, shapes, structures…
Through how it functions: axioms, theorems, proofs…
These two approaches align with the two prevailing “philosophical schools”:
Platonism: mathematical objects “exist” in the ethereal realm of ideas.
Formalism: mathematics is a mechanical game of syntactic deduction with zero transcendent semantics.
I put quotation marks around “philosophical schools” because this framing is a bit artificial. Indeed, both positions are known to be equally untenable. They’re more like “polarities” between which mathematicians have to navigate, as perfectly captured by this brilliant quip from Reuben Hersh (from his great 1979 article [1], a recommended read):
the typical “working mathematician” is a Platonist on weekdays and a formalist on Sundays.
There’s something really weird going on here: you can’t do math without imagining that the cryptic symbols on the page “mean” something, that they “represent” actual “objects” that “exist” and “have properties”; yet you can’t ground this activity in anything other than meaningless formalism.
Like many of my fellow mathematicians, I’ve experienced this “philosophical plight” firsthand and found it deeply unsettling. My personal inability to make sense of math was a key driver of my interest in it and, in a way, I became a research mathematician because I couldn’t figure out what math was really about.
At some point, I even studied the arcane field called “philosophy of mathematics”, which left me even more frustrated. What Hersh writes about it resonates with my own experience:
Mathematicians themselves seldom discuss the philosophical issues surrounding mathematics; they assume that someone else has taken care of this job. We leave it to the professionals.
But the professional philosopher, with hardly any exception, has little to say to the professional mathematician. Indeed, he has only a remote and inadequate notion of what the professional mathematician is doing.
In the end, Hersh continues, mathematicians tend to give up and just “do math”:
Thus, if we teach our students anything at all about the philosophical problems of mathematics, it is that there is only one problem of interest… and that problem seems totally intractable.
Nevertheless, of course, we do not give up mathematics. We simply stop thinking about it. Just do it.
Active “philosophers of mathematics” may very well object, and rightly point to the meaningful developments that have taken place in the 45 years since Hersh’s article. Yet the prevailing attitude of career mathematicians hasn’t changed much, and many of them continue to live by these practical guidelines:
Math is hard to define and it’s a fact of life.
Philosophy of math is for cranks and losers.
We don’t really need to define math, because we know what we do when we do math.
Many of them don’t even realize that the last item contains a strong, latent philosophical stance as well as an implicit definition of math: “mathematics is what mathematicians do.”
Hersh’s conclusion is that this third approach is the way out of the fake Platonism v. formalism debate:
The alternative of Platonism and formalism comes from the attempt to root mathematics in some nonhuman reality. If we give up the obligation to establish mathematics as a source of indubitable truths, we can accept its nature as a certain kind of human mental activity.
Thurston takes a similar path in his celebrated essay On proof and progress in mathematics [2] (another great read).
While I agree with both Hersh and Thurston, I do think we need to go one step further and actually characterize this “certain kind of mental activity” (which, in my view, neither of them has convincingly done).
An essential starting point is the latent consensus within the mathematical community as to what it means to do math and what it feels like.
I have attempted to document this consensus in a general-audience book [3], based on the writings of Descartes, Grothendieck, and Thurston, as well as my own experience as a mathematician. While the book is certainly not perfect (and not everyone agrees with all of it), the overwhelmingly positive feedback (from Abel Prize winners and Fields medallists to high school teachers and students) is a strong indicator that this consensus exists.
Before I repackage the book’s core message into a tentative definition of math, let me take a detour through one of the hottest philosophical debates of medieval Europe: the Quarrel of Universals.
In a nutshell, “universals” are concepts, abstractions that are detached from any particular individual: beauty, roundness, youth… On which level of reality do these things “exist”? The debate is as old as philosophy itself and, in a way, it mirrors the Platonism v. formalism debate:
Realism is akin to Platonism: it asserts that universals really exist, in the ethereal realm of idealities.
Nominalism is akin to formalism: it asserts that universals are mere conventions of language.
These positions have been documented since at least Plato and Aristotle, but the debate was revived in the 12th century with the emergence of a third position:
Conceptualism asserts that universals are products of human cognition.
Conceptualism was introduced by polymath & firebrand Peter Abelard, “the Descartes of the 12th century”, and later championed by William of Ockham.
While conceptualism is often portrayed as a flavor of nominalism (because it rejects the realist/Platonist belief in transcendent entities), it marks a major departure from vanilla nominalism: words are more than mere utterances, more than ink on a page; they actually map to things that exist in our brains.
I was unable to make sense of math until a personal encounter with deep learning, latent features, and sparsity turned me into a radical conceptualist.
Let me explain what this viewpoint brings to the debate on the foundations of math.
If you want to characterize math as a “certain kind of mental activity”, you have to account for all the specifics of this activity, which everyone tends to agree upon:
the reliance on formal deduction systems,
the notion that statements are either true or false, and that proofs must be 100% bulletproof, in a curiously rigid manner that doesn’t match any “real-world” property of language,
a constructive approach to crafting new concepts from existing ones, as if playing Lego with definitions (see the toy sketch below).
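To make this combo concrete, here’s a minimal sketch of what such a deduction game can look like in code. Everything in it (the axioms, the rules, the names) is illustrative, invented for this post rather than drawn from any particular formal system: starting from statements accepted as true, a single mechanical rule churns out new true statements.

```python
# A toy "Game of Truth": forward-chaining deduction.
# All facts and rules below are illustrative placeholders.

axioms = {"even(2)", "prime(2)"}

rules = [
    # (premises, conclusion): once every premise is derived,
    # the conclusion becomes a new "true" statement.
    ({"even(2)", "prime(2)"}, "exists_even_prime"),
    ({"exists_even_prime"}, "not_all_primes_are_odd"),
]

def deduce(facts, rules):
    """Apply the rules until no new statement can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)  # Lego-style: new truths from old ones
                changed = True
    return derived

print(sorted(deduce(axioms, rules)))
# ['even(2)', 'exists_even_prime', 'not_all_primes_are_odd', 'prime(2)']
```

The point isn’t the (trivial) content, it’s the shape of the game: truth values are binary, derivations are mechanical, and the stock of true statements only ever grows.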
For millennia, the smartest people around have struggled to make sense of the entire combo. In addition, there was this weird “Platonic feel” that mathematical concepts (numbers, shapes, structures, etc.) are actual “objects” that “exist”.
This messy situation has fostered a variety of interpretations, all coming with their own specific blind spots: from Descartes’ dualism to Galileo’s take that “math is the language of the universe”, from Frege’s logicism to Brouwer’s intuitionism, from Hilbert’s formalism to Ramanujan’s mysticism.
My central claim is that we’re at a turning point in this story, not because we’re smarter, but because recent progress in neuroscience and machine learning helps us remove centuries-old stumbling blocks. Once you view cognition as a dynamic learning process through which concepts “emerge” in the brain, everything becomes clearer.
Without further ado, here’s my proposed “conceptualist consensus” on what math is really about:
Math is a human mental activity based on playing the “Game of Truth”: when we do math, we act as if notions had precise definitions that were perfectly stable over time, as if statements had binary “truth” values, as if one could play Lego and deduce new true statements from existing ones.
It is an imaginary activity supported by symbolic writing systems encoding the rules of the Game of Truth.
As an imaginary activity, math is a driver of neuroplasticity. This explains the indisputable “Platonic feel” of math: when we spend enough time imagining mathematical abstractions, we end up “perceiving” them as if they really existed.
Interestingly, this take is both 100% consensual (no one challenges the specifics of what it means to do math) and 100% provocative (most people have a knee-jerk reaction to the notion that math is an “imaginary” activity).
Many will object that math can’t be just that, and that it must have some sort of “transcendent” dimension. My intention isn’t to troll anyone, but rather to bring clarity to a number of aspects which, in my view, can only be resolved through a radical conceptualist stance. Let me add these important clarifications:
It’s not derogatory to say that math is based on imagination. People like Einstein, Descartes, Grothendieck, and Thurston have all insisted on the importance of imagination in math, and they meant it in a positive way.
Likewise, it’s not derogatory to say that “meaning” and “truth” are cognitive phenomena. It’s not woke either. It’s just a pragmatic, science-driven conceptualist view of human cognition. Language has two sides: a symbolic side (utterances, or ink on a page) and a semantic side. At the core, a conceptualist is someone who interprets semantics as a cognitive process rather than as shamanic access to the world of ideas (thus breaking from the traditional interpretation of semantics by Platonists, realists, and spiritualists).
Imaginary doesn’t mean arbitrary: the rules of the Game of Truth aren’t random, and they might actually be the greatest invention in history. When we follow these rules, our mental images consolidate and crystallize into a coherent worldview that is insanely powerful. [4]
While the philosophical (and controversial) part of the conversation is fascinating, it shouldn’t become a distraction. The pragmatic (and consensual) part is where the real substance lies, as anticipated by Hersh in this visionary remark:
The problems of truth and meaning are not technical issues in some recondite branch of logic or set theory. They confront anyone who uses or teaches mathematics. If we wish, we can ignore them… It would be surprising if this had no practical consequences.
If you agree with me on the specifics of what “doing math” actually entails, you may start to notice the elephant in the room: while we were stuck in the pointless Platonism v. formalism debate, we failed to communicate that active imagination was an essential step of mathematical comprehension.
What if the people who “suck at math”, or “have no intuition”, were simply those who never had a chance to practice the right imagination techniques, the ones that stimulate neuroplasticity in the right way? Or who gave up too early, not knowing that it’s a slow process that requires a certain degree of persistence, even if you feel totally lost at sea (an open secret among career mathematicians)?
Along the way, we also failed to articulate the most compelling value proposition of mathematics: math is an imagination technique that makes you smarter.
This is what really struck me as I was writing my book: once you read Descartes, Grothendieck, and Thurston in parallel, you notice that they speak of almost nothing but imagination. Each describes using imagination in new modalities, discovered by accident and breaking with what they’d been taught. They all view this as the secret to their success. If we don’t communicate this dimension of math, we’re missing an essential piece.
Now that we’re teaching machines the secrets of intelligence, it’s about time we start teaching humans.
[1] Reuben Hersh, Some proposals for reviving the philosophy of mathematics, Advances in Mathematics, Volume 31, Issue 1, January 1979, Pages 31-50, https://doi.org/10.1016/0001-8708(79)90018-5
[2] William Thurston, On proof and progress in mathematics, Bulletin of the American Mathematical Society, Volume 30, Number 2, April 1994, Pages 161-177, https://arxiv.org/pdf/math/9404236
[3] D.B., Mathematica: A Secret World of Intuition and Curiosity, Yale University Press, 2024.
[4] I actually suspect that the “Unreasonable Effectiveness of Mathematics” follows from a profound theorem in machine learning, which I’m unable to fully articulate but which would say something along these lines: logic is an effective way to generate vast amounts of synthetic data, in order to accelerate and deepen the training of deep learning systems while improving their predictive fitness with respect to real-world data.
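As a purely illustrative sketch of what this could mean in practice (the task and names are mine, not part of any established result): logic can label examples exactly and for free, giving a learning system an unlimited supply of noise-free training data.

```python
import random

def synthetic_statements(n, max_term=100):
    """Generate n labeled statements of the form 'a + b = c'.

    Logic plays the role of a free, exact oracle: every label is
    computed rather than collected, so the supply of clean training
    data is unlimited. (Illustrative toy, not an established method.)
    """
    data = []
    for _ in range(n):
        a = random.randint(0, max_term)
        b = random.randint(0, max_term)
        # Half the time, replace the right-hand side at random,
        # so the dataset contains false statements too.
        c = a + b if random.random() < 0.5 else random.randint(0, 2 * max_term)
        data.append((f"{a} + {b} = {c}", a + b == c))
    return data

for statement, label in synthetic_statements(5):
    print(label, statement)
```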