Prime numbers — 2, 3, 5, 7, 11, 13, … — are the building blocks of all integers. Every number is a unique product of primes. Yet their distribution among the integers appears erratic, almost random. Is there a hidden order?
In 1859, Bernhard Riemann found the key. He showed that the distribution of primes is intimately connected to the zeros of a particular function of a complex variable, now called the Riemann zeta function:
The sum on the left converges for $\operatorname{Re}(s) > 1$. The product on the right — Euler's product, running over all primes — encodes the fundamental theorem of arithmetic into analysis. Riemann showed how to extend $\zeta(s)$ to the entire complex plane (except for a simple pole at $s = 1$).
The extended function has zeros at the negative even integers $-2, -4, -6, \ldots$ (the "trivial zeros"). All other zeros lie in the critical strip $0 \leq \operatorname{Re}(s) \leq 1$. Riemann's Hypothesis is:
All non-trivial zeros of the Riemann zeta function have real part equal to $\frac{1}{2}$.
Equivalently: every non-trivial zero lies on the critical line $\operatorname{Re}(s) = \frac{1}{2}$. This has been verified computationally for the first $10^{13}$ zeros. No counterexample has ever been found. Yet a proof has eluded mathematicians for over 165 years.
Riemann's 1859 paper gave an exact formula for the prime counting function $\pi(x)$ (the number of primes up to $x$) in terms of the zeros of $\zeta$. Each zero contributes a wave-like correction. When all zeros lie on the critical line, these corrections are as small as possible, and the primes are distributed as regularly as they can be:
Without RH, we can only prove $\pi(x) = \operatorname{Li}(x) + O(x\, e^{-c\sqrt{\log x}})$ — a far weaker error bound. The Riemann Hypothesis is the statement that primes are distributed as uniformly as the structure of $\zeta$ allows.
The equality $\sum n^{-s} = \prod_p (1-p^{-s})^{-1}$ is not a coincidence — it's a direct encoding of unique factorization. Each prime $p$ contributes one factor. The analytic properties of $\zeta$ (growth, zeros, value distribution) are determined by the multiplicative structure of the integers. This is the deep reason why a statement about a complex-analytic function controls the distribution of primes.
Any approach to RH must, at some level, exploit this multiplicative structure. It's not enough to treat $\zeta$ as a generic analytic function. The proof, if it exists, must see the primes inside the analysis.
The Riemann Hypothesis is not merely a curiosity in pure mathematics. Its truth or falsehood would have profound consequences across mathematics, computer science, and physics.
Over a thousand published results begin with "Assume the Riemann Hypothesis." Many of the deepest theorems in number theory — on the gaps between consecutive primes, on the distribution of primes in arithmetic progressions, on the behavior of arithmetic functions — are conditional on RH. A proof would instantly upgrade all of them to unconditional results.
Modern public-key cryptography (RSA, Diffie-Hellman) relies on the difficulty of factoring large numbers and computing discrete logarithms. The Generalized Riemann Hypothesis (GRH) implies efficient deterministic primality testing (Miller's algorithm). More broadly, GRH gives the tightest known bounds on the distribution of primes in arithmetic progressions, which underpins the security analysis of many cryptographic systems.
In the 1970s, Hugh Montgomery and Freeman Dyson discovered a remarkable connection: the statistical distribution of spacings between zeros of $\zeta$ matches the spacing of eigenvalues of large random matrices from the GUE (Gaussian Unitary Ensemble). This is the same distribution that governs energy levels of heavy atomic nuclei. The connection between prime numbers and quantum chaos remains one of the deepest unexplained phenomena in mathematical physics.
Hilbert placed the Riemann Hypothesis on his famous 1900 list of problems. The Clay Institute selected it again in 2000 as a Millennium Prize Problem. It has guided the development of complex analysis, analytic number theory, random matrix theory, spectral geometry, and mathematical physics for over a century. As Bombieri wrote: "The failure of the Riemann hypothesis would create havoc in the distribution of prime numbers."
From Euler's product formula through the modern era of random matrix theory and zero-density estimates, the Riemann Hypothesis has shaped the development of analytic number theory.
Euler product formula. Euler proves $\sum n^{-s} = \prod_p (1-p^{-s})^{-1}$, connecting analysis to prime numbers for the first time.
Riemann's paper. In a single 8-page paper, Riemann extends $\zeta(s)$ to the complex plane, derives the exact formula for $\pi(x)$ in terms of the zeros, and conjectures that all non-trivial zeros have real part $\frac{1}{2}$.
Prime Number Theorem. Independently, Hadamard and de la Vallée Poussin prove that $\pi(x) \sim x/\log x$ by showing $\zeta(s) \neq 0$ on the line $\operatorname{Re}(s)=1$.
Infinitely many zeros on the line. Hardy proves that infinitely many zeros of $\zeta$ lie on the critical line $\operatorname{Re}(s) = \frac{1}{2}$.
Positive proportion on the line. Selberg proves that a positive proportion of all zeros lie on the critical line.
Montgomery pair correlation. Montgomery conjectures — and partially proves — that the spacings between zeros follow GUE statistics. Dyson immediately recognizes the connection to random matrix theory. This opens an entirely new front.
Levinson's method. More than one-third of all zeros lie on the critical line.
40% on the line. Conrey improves Levinson's method to show that at least 40% of zeros lie on the critical line.
Millennium Prize. The Clay Mathematics Institute designates the Riemann Hypothesis as one of seven Millennium Prize Problems, with a $1,000,000 prize.
10 trillion zeros verified. Gourdon and Demichel verify RH computationally for the first $10^{13}$ zeros using the Odlyzko-Schönhage algorithm.
Moment conjecture via random matrix theory. Keating and Snaith give precise conjectural asymptotics for all moments of $\zeta$ on the critical line, including the leading constants.
Sharp conditional moment bounds. Harper proves that under RH, $\int_T^{2T} |\zeta(1/2+it)|^{2k}\,dt \ll T(\log T)^{k^2}$ — matching the predicted $k^2$ growth rate.
Guth-Maynard breakthrough. New large-value estimates for Dirichlet polynomials yield the first improvement to zero-density estimates in over 50 years. Published in the Annals of Mathematics.
Moment lower bounds. Arguin and Creighton prove new unconditional lower bounds for fractional moments and large deviations of $\zeta$ on the critical line.
The Riemann Hypothesis remains open, but the last decade has seen significant progress on zero-density estimates, moment bounds, and random matrix universality. The problem is attacked from multiple angles by groups around the world.
The Riemann Hypothesis is approached from several distinct angles:
Zero-density estimates. Rather than proving all zeros lie on the line, one can bound how many zeros can lie off the line. Guth-Maynard (2024) made the first breakthrough here in half a century, but the estimates are still far from what RH predicts.
Moment bounds. Harper, Soundararajan, Radziwiłł, and others study how large $|\zeta(1/2+it)|$ can get when averaged. The predicted $k^2$ growth exponent (from the Keating-Snaith conjecture) has been verified conditionally and partially unconditionally. Understanding moments at all orders would have deep implications.
Random matrix universality. The Montgomery-Dyson connection suggests that zeros of $\zeta$ behave like eigenvalues of random unitary matrices. This has been verified to extraordinary precision numerically (Odlyzko) but remains unproved. A proof of full GUE universality would imply RH.
Function field analogues. Over finite fields, the analogue of RH was proved by Deligne (1974). The challenge is to transfer insights from the geometric setting to the number field case.
We are pursuing a value distribution route to the Riemann Hypothesis, formalized in Lean 4. Instead of analyzing zeros directly, we study how the values of $\zeta$ on the critical line distribute — and show that the multiplicative structure of $\zeta$ (its Euler product) forces a rigidity that pins the zeros.
Algebraic core (SGT): Machine-verified in Lean 4 with zero axioms and
zero sorry. The rearrangement gap $\geq 1$ for all $\sigma \neq \text{id}$
in $S_{n+1}$ is unconditional.
Full chain: Compiles in Lean 4 with 18 named axioms encoding standard
analytical infrastructure (polygamma values, zeta positivity, moment asymptotics).
Zero sorry.
Sole remaining gap: QPD at $\sigma = 1/2$ for $k \geq 3$, equivalent
to the shifted divisor problem for higher-order divisor functions — a concrete,
incrementally attackable question in analytic number theory.
Not claimed: A complete proof of the Riemann Hypothesis. What we have
is a verified conditional reduction and a new algebraic mechanism (SGT) that is
unconditional. The gap is explicit and mathematically precise.
Most approaches to RH study the zeros of $\zeta$ directly. We study the values instead, exploiting the fact that $\zeta$ is not an arbitrary function but one with a rigid multiplicative structure. The Euler product forces the moments to grow at exactly rate $k^2$ — this is a theorem, not a conjecture — and we prove that $k^2$ growth algebraically forces the positivity condition that determines the zero locations.
The key advantage: the hardest step (SGT) is algebraic, not analytic, and is machine-verified. The remaining analytical steps are classical (Hamburger theorem, moment asymptotics) and incrementally attackable. Each moment order that is closed extends the zero-free region by a quantifiable amount.
The history of claimed proofs of the Riemann Hypothesis is long and cautionary. Machine verification eliminates an entire category of errors: if the Lean code compiles, the logical chain from axioms to conclusion is correct. The remaining question is only whether the axioms themselves hold. Our axioms are named, counted, and each constitutes a precise, falsifiable mathematical claim.
This is not a black box. Anyone can read the axioms, verify the proofs, and identify exactly where the gaps are. We believe this is what mathematical transparency should look like in 2026.