A team at Los Alamos National Laboratory has completed a mathematical framework for human color perception that Nobel Prize-winning physicist Erwin Schrodinger first sketched more than a century ago. The new work corrects a foundational error in the three-dimensional geometric model Schrodinger published in 1920, showing that the space humans use to perceive hue, saturation, and lightness does not follow the curved-surface geometry scientists had assumed for decades. The fix centers on a feature of color space called the neutral axis and relies on a non-Riemannian approach to shortest paths between colors, a shift that could reshape how screens, cameras, and artificial intelligence systems render color.
What Schrodinger Built in 1920
Between 1918 and 1920, Schrodinger produced a series of papers that attempted to map how people experience color onto a rigorous mathematical surface. His work, published in Annalen der Physik in 1920, drew on three intellectual traditions: Hermann von Helmholtz’s experimental measurements of color matching, Hermann Grassmann’s algebraic rules for mixing lights, and Bernhard Riemann’s geometry of curved spaces. Schrodinger’s insight was that the distances people perceive between colors, such as how “far apart” red and orange feel compared to red and blue, could be described by a metric, a mathematical ruler draped over a curved manifold.
A modern English translation of Schrodinger’s color-theory writings shows that he framed the problem in two competing ways: projective geometry, which treats color mixtures as ratios, and a Euclidean-style line element, which measures tiny differences in perceived color. That tension between projective and metric descriptions sat unresolved for decades, and later researchers largely defaulted to Riemannian geometry, the mathematics of smoothly curved surfaces, as the standard tool. The resulting picture was elegant and internally consistent, but it rested on assumptions about how perceptual distances should behave, assumptions that were rarely tested against the full complexity of human vision.
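The metric side of that tension can be made concrete. Helmholtz's line element, which Schrodinger set out to refine, scored a small step away from a light with channel intensities (R, G, B) by equal relative changes; the formulas below are a historical sketch, not today's standards:

```latex
% Helmholtz's Weber--Fechner-style line element: equal relative
% changes in each channel count as equally large perceptual steps.
ds^2 = \left(\frac{dR}{R}\right)^2
     + \left(\frac{dG}{G}\right)^2
     + \left(\frac{dB}{B}\right)^2

% Schrodinger's 1920 weighted refinement, with constants
% \alpha, \beta, \gamma tuned to the observer's sensitivities:
ds^2 = \frac{\alpha\,\dfrac{dR^2}{R}
           + \beta\,\dfrac{dG^2}{G}
           + \gamma\,\dfrac{dB^2}{B}}
            {\alpha R + \beta G + \gamma B}
```

Both are Riemannian: each drapes a smoothly varying quadratic ruler over the space of lights, which is exactly the assumption the new work revisits.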
Why Riemannian Geometry Fell Short
The assumption that perceived color space is Riemannian became so widespread that it shaped international color standards still in use. In 1974, H.L. Resnikoff proposed a formal model that added two extra conditions, homogeneity and self-duality, to Schrodinger’s original axioms. Resnikoff’s framework, analyzed in detail in a mathematical neuroscience study, predicted that the geometry should look the same everywhere in color space and that brightness and darkness should mirror each other symmetrically. Those conditions sound appealing, but they do not match how people actually see.
Human color vision depends on three types of cone cells in the eye, each tuned to a different band of wavelengths. When researchers tested Riemannian predictions against real discrimination data, including the well-known MacAdam ellipses that chart the smallest color differences a trained observer can detect, the fit was poor. A diploma thesis at the University of Vienna, which re-examined Schrodinger’s 1920 color metric in the light of Helmholtz’s experiments and MacAdam’s measurements, confirmed that Riemannian metrics could not fully account for the shape, size, and orientation of discrimination regions across the color plane. The theory forced symmetries and smoothness that the data simply did not support.
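The Riemannian prediction that the MacAdam measurements strain can be sketched in a few lines. Under a Riemannian model, the just-noticeable differences around a color form an ellipse determined by the local metric tensor; the snippet below traces that contour using an illustrative 2×2 tensor, not values fitted to MacAdam's data:

```python
import numpy as np

# Illustrative 2x2 metric tensor at one point of a chromaticity plane.
# The numbers are made up for demonstration; MacAdam's measured ellipses
# vary strongly in size and orientation across the diagram.
G = np.array([[8.0, 2.0],
              [2.0, 4.0]])

# Under a Riemannian model, the just-noticeable-difference contour at a
# point is the set of offsets dx with dx^T G dx = 1 -- an ellipse.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])  # unit circle

# Map the unit circle through G^(-1/2) to get the ellipse dx^T G dx = 1.
evals, evecs = np.linalg.eigh(G)
ellipse = evecs @ np.diag(1.0 / np.sqrt(evals)) @ circle

# Every point on the contour has quadratic form 1 (up to float error).
q = np.einsum('ij,ij->j', ellipse, G @ ellipse)
print(np.allclose(q, 1.0))  # True
```

A Riemannian model lets the tensor vary from point to point, but it still forces every discrimination region into this elliptical family; how well the measured regions fit that family is what the studies above tested.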
These shortcomings were not just technicalities. If color space were truly Riemannian, small perceived differences would add up in a straightforward way: cut the shortest path between two colors into segments, sum the perceived length of each segment, and you would recover the total perceived distance. Psychophysical tests, however, hinted that the visual system behaves differently depending on whether a change involves mostly hue, mostly lightness, or a mixture of both, and that large differences register as less than the sum of the small steps that compose them. That failure of additivity is precisely what a purely Riemannian model cannot capture.
The Non-Riemannian Correction
Roxana Bujack, the lead researcher at Los Alamos, and her colleagues attacked the problem directly. In a paper published in the Proceedings of the National Academy of Sciences titled “The non-Riemannian nature of perceptual color space,” the team demonstrated that the shortest perceptual path between two colors does not behave the way Riemannian geometry requires. In a Riemannian space, the shortest path between two points is a smooth curve called a geodesic, and the length of a geodesic equals the sum of the lengths of its segments. Bujack’s team showed that this additivity breaks down for perceived color differences: observers judge a large color difference to be smaller than the sum of the small differences that make it up, an effect known as diminishing returns. The mathematical scaffolding scientists had trusted since Schrodinger’s era was therefore structurally wrong.
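The additivity failure can be illustrated with a toy perceived-difference function. The concave response below is an assumption chosen to mimic diminishing returns, not the function estimated in the PNAS study:

```python
import numpy as np

def perceived_difference(a, b):
    """Toy perceived-difference function with 'diminishing returns':
    large stimulus differences register as less than the sum of the
    small steps that compose them. The concave exponential form is
    illustrative only."""
    t = np.linalg.norm(np.asarray(a) - np.asarray(b))
    return 2.0 * (1.0 - np.exp(-t / 2.0))

a, b, c = [0.0, 0.0], [1.0, 0.0], [2.0, 0.0]  # b lies midway from a to c

two_small_steps = perceived_difference(a, b) + perceived_difference(b, c)
one_large_step = perceived_difference(a, c)

# A Riemannian length is additive along a geodesic, so these two values
# would have to match; here the single large step registers as smaller.
print(one_large_step < two_small_steps)  # True
```

Under a Riemannian metric no such concave rescaling of distances is possible, which is why the observed diminishing returns rule out that whole class of models rather than just one formula.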
The researchers embedded results from previous color-science experiments in standard CIE color spaces and found that equal-hue surfaces do not behave the way a Riemannian model predicts. That empirical failure pointed to a deeper geometric reality: the space of perceived colors requires a framework in which distances can behave asymmetrically, or in which path-addition rules differ from those of ordinary curved surfaces. Instead of a single smooth manifold with one metric, the visual system appears to operate more like a patchwork of locally adapted rules, especially near regions where saturation or brightness changes dominate.
To formalize this, the Los Alamos team built a non-Riemannian model in which the cost of moving through color space depends on direction and context. In such a space, the shortest route between two colors might bend away from intermediate hues that the eye finds harder to discriminate, or it might favor changes along dimensions where the cones provide finer resolution. The resulting geometry is still coherent and mathematically well-defined, but it no longer obeys the strict constraints that Riemannian geometry imposes.
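A minimal sketch of such a direction- and context-dependent cost, with axis weights and an asymmetry term that are purely illustrative assumptions rather than the team's fitted model:

```python
import numpy as np

def step_cost(dx):
    """Toy direction-dependent cost of a small move in a 2D slice of
    color space (axis 0: lightness, axis 1: saturation). The weights
    and the asymmetry term are illustrative assumptions, not data."""
    dl, ds = dx
    base = np.hypot(1.0 * dl, 2.5 * ds)  # saturation steps cost more
    asym = 0.3 * max(dl, 0.0)            # brightening costs extra
    return base + asym

up = step_cost((+0.1, 0.0))    # brighten
down = step_cost((-0.1, 0.0))  # darken
sat = step_cost((0.0, 0.1))    # desaturate/saturate

print(up > down)   # True: asymmetric, so not a Riemannian metric
print(sat > down)  # True: chroma moves cost more than lightness moves
```

Because the cost of a step depends on its direction and not just its size, shortest routes computed from such a rule can bend away from hard-to-discriminate regions, exactly the behavior described above.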
Defining the Neutral Axis
One of the most concrete outcomes of the new work is a completed mathematical definition of the neutral axis, the line running through color space from black through gray to white that represents zero saturation. Schrodinger’s original framework acknowledged this axis but never pinned down its geometry with full precision. The Los Alamos team’s non-Riemannian approach handles the neutral axis naturally, because phenomena along that line, where hue vanishes and only lightness varies, do not obey the same distance rules as movements that change hue or saturation.
This distinction matters practically. Color-difference formulas used in manufacturing, printing, and display calibration all rely on accurate distance calculations near the neutral axis. A paint company deciding whether two batches of “eggshell white” look identical to customers, or a display engineer tuning gray-scale uniformity on a medical imaging monitor, depends on a color metric that behaves correctly in that low-saturation zone. If the underlying geometry is wrong, the tolerance thresholds are wrong too, leading to products that technically pass inspection but look off to the human eye.
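As an illustration of the stakes, the toy tolerance check below compares a plain Euclidean difference in a Lab-like space against a hypothetical variant that is stricter about chromatic drift near the neutral axis. The formula, weights, and threshold are invented for demonstration and are not drawn from any published standard:

```python
import numpy as np

def delta_e(a, b):
    """Plain Euclidean difference in a Lab-like space (L, a, b)."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def delta_neutral_aware(a, b, boost=2.0):
    """Toy variant that weights chromatic differences more heavily when
    both samples sit near the neutral axis, where viewers are sensitive
    to faint color casts. The boost factor is an illustrative
    assumption, not a published formula."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    chroma = max(np.hypot(*a[1:]), np.hypot(*b[1:]))
    w = boost if chroma < 2.0 else 1.0
    diff = b - a
    return np.hypot(diff[0], w * np.hypot(*diff[1:]))

batch1 = [92.0, 0.2, 0.5]   # two "eggshell white" batches (toy values)
batch2 = [92.0, 0.9, -0.4]

print(delta_e(batch1, batch2) < 1.2)              # True: naive check passes
print(delta_neutral_aware(batch1, batch2) < 1.2)  # False: stricter check fails
```

The same pair of batches passes one formula and fails the other, which is why getting the geometry right near gray has direct commercial consequences.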
Psychophysicist Keith Niall, whose work on visual perception has examined how observers judge brightness and contrast, has emphasized that the visual system treats achromatic variations differently from chromatic ones. The explicit modeling of the neutral axis in the new framework aligns with that view, giving color scientists a tool that can separate gray-scale judgments from hue-based judgments without forcing them into the same metric mold.
What Changes for Screens and Algorithms
Most current color standards, including the CIE color spaces that govern everything from television broadcast to web design, trace their mathematical lineage back to Schrodinger’s and Resnikoff’s Riemannian assumptions. The Los Alamos results do not immediately invalidate those standards, but they do show that incremental tweaks to existing formulas (such as the succession of color-difference updates from CIELAB through CIE94 to CIEDE2000) may never fully capture human perception if they remain confined to Riemannian geometry.
For display and camera makers, the new framework suggests that perceptual uniformity targets should be recalibrated, especially in near-neutral regions and along complex hue gradients like skin tones and foliage. Instead of enforcing equal numerical steps in a Riemannian color space, engineers could design lookup tables or rendering intents that follow non-Riemannian geodesics, ensuring that each step corresponds more closely to a just-noticeable difference for viewers. That could reduce banding in dark scenes, improve the realism of high-dynamic-range content, and make color grading tools more predictable.
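The idea of spacing steps by perceived difference rather than by numeric difference can be sketched with a toy gray ramp. The cube-root response below is loosely in the spirit of CIELAB's lightness nonlinearity but is an illustrative stand-in, not the standard and not a non-Riemannian geodesic solver:

```python
import numpy as np

def to_perceptual(v):
    """Toy lightness response: cube-root compression, loosely in the
    spirit of CIELAB's L* nonlinearity (illustrative, not the standard)."""
    return np.cbrt(v)

def from_perceptual(p):
    """Inverse of the toy response."""
    return p ** 3

# A ramp with equal *numerical* steps in the linear signal...
linear_ramp = np.linspace(0.0, 1.0, 9)

# ...versus a ramp with equal *perceptual* steps: space the stops
# uniformly in the perceptual domain, then map back to signal values.
perceptual_ramp = from_perceptual(np.linspace(0.0, 1.0, 9))

lin_steps = np.diff(to_perceptual(linear_ramp))
per_steps = np.diff(to_perceptual(perceptual_ramp))

print(np.allclose(per_steps, per_steps[0]))  # True: equal perceived steps
print(np.allclose(lin_steps, lin_steps[0]))  # False: uneven, prone to banding
```

A lookup table built this way crowds code values into the dark end, where the toy response says viewers discriminate most finely; a table built on non-Riemannian geodesics would generalize the same move to hue and saturation.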
Artificial intelligence systems that analyze or generate images could also benefit. Many computer-vision algorithms currently measure similarity between colors using Euclidean distances in spaces such as sRGB or CIELAB. If those distances misrepresent human perception, then clustering, segmentation, and generative models may group colors in ways that look unnatural. By adopting a non-Riemannian metric tuned to psychophysical data, AI models could better preserve subtle distinctions that matter to people, from the warmth of indoor lighting to the health cues encoded in skin coloration.
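A small example of how the choice of metric changes algorithmic decisions: with toy RGB values and made-up channel weights (standing in for the cones' unequal resolution), the nearest "match" to a gray flips depending on whether distances are plain Euclidean or perceptually weighted:

```python
import numpy as np

query = np.array([0.50, 0.50, 0.50])  # a mid gray (toy RGB values)
candidates = {
    'warm gray': np.array([0.56, 0.50, 0.47]),
    'cool gray': np.array([0.50, 0.50, 0.58]),
}

def euclidean(a, b):
    """Plain Euclidean distance, as many vision pipelines use today."""
    return np.linalg.norm(a - b)

# Toy perceptual weighting: assume red and green differences are
# resolved more finely than blue ones. The weights are illustrative
# assumptions, not psychophysical measurements.
W = np.array([2.0, 4.0, 1.0])

def weighted(a, b):
    return np.linalg.norm(W * (a - b))

nearest_euclid = min(candidates, key=lambda k: euclidean(query, candidates[k]))
nearest_weight = min(candidates, key=lambda k: weighted(query, candidates[k]))

print(nearest_euclid)  # warm gray
print(nearest_weight)  # cool gray -- the two metrics disagree
```

When the two metrics rank neighbors differently, clustering and segmentation results differ too; a metric tuned to psychophysical data would pull those decisions toward what viewers actually see.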
Over the longer term, the completed framework for perceptual color space may prompt standards bodies to revisit how they define uniform color spaces and tolerance formulas. That process will require extensive validation across different populations, lighting conditions, and viewing tasks. But the conceptual shift is already clear: color perception cannot be fully captured by draping a single smooth metric over a curved surface. Instead, it demands a geometry flexible enough to honor the idiosyncrasies of the human visual system, from the special role of the neutral axis to the asymmetric ways we perceive changes in hue, saturation, and lightness.
*This article was researched with the help of AI, with human editors creating the final content.*