Image Credit: 昼落ち - CC BY-SA 4.0/Wiki Commons

For more than a century, photographers have had to choose which part of a scene should be tack sharp and which parts would blur away. A new generation of optics and algorithms is quietly rewriting that rule, letting a single camera hold multiple depths in crisp focus at the same time without the usual refocus dance. The result is not just a clever trick, but a shift in how I can think about storytelling, scientific imaging, and even everyday snapshots.

From split diopter glass that physically bends light into two focal planes to experimental “computational lenses” that treat focus as software, the tools now exist to keep foreground and background equally clear in a single exposure. The technology is still fragmented across cinema rigs, lab benches, light‑field cameras, and AI pipelines, but the direction of travel is unmistakable: focus is becoming something we decide after the fact, not a constraint we fight on set.

Why traditional lenses struggle to keep two depths sharp

Conventional lenses are built around a simple tradeoff: the wider you open the aperture, the shallower the depth of field, and the more the scene collapses into a single razor‑thin plane of focus. Techniques like stopping down and using wide‑angle glass can stretch that zone, but they cannot make two widely separated distances equally sharp without compromise. Even when I lean on hyperfocal distance tricks, I am really just accepting “acceptably sharp” blur rather than true dual‑plane clarity.

Classic depth-of-field methods such as the hyperfocal "double distance" approach deliberately place focus so that both the subject and background are reasonably crisp, but as guides like The Double Distance Method make clear, this relies on small apertures and wide lenses, not magic. At the other end of the spectrum, compact metasurface optics show how far traditional glass can be miniaturized, with one study noting that "optical systems such as cameras are usually built from discrete lenses, gratings and filters." Even there, the physics of a single focus plane still dominates, which is why researchers are now looking beyond conventional glass to solve the two-depth problem.
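To make that tradeoff concrete, here is a minimal sketch of the standard thin-lens depth-of-field arithmetic, assuming a full-frame circle of confusion of roughly 0.03 mm; the function names are illustrative and not drawn from any of the guides or products mentioned above.

```python
# Minimal sketch of standard thin-lens depth-of-field arithmetic.
# Assumes a full-frame circle of confusion of ~0.03 mm; helper names are hypothetical.

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm: float, f_number: float, subject_mm: float,
                  coc_mm: float = 0.03) -> tuple[float, float]:
    """Near and far limits of acceptable sharpness for a given focus distance."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

if __name__ == "__main__":
    # A 35 mm lens at f/8: focusing at the hyperfocal distance keeps roughly
    # H/2 to infinity "acceptably" sharp, but a subject at 0.6 m and a distant
    # background still cannot both be truly crisp in the same exposure.
    h = hyperfocal_mm(35, 8)
    print(f"hyperfocal ~ {h / 1000:.2f} m")
    print("focused at H:", [round(x / 1000, 2) for x in dof_limits_mm(35, 8, h)])
    print("focused at 0.6 m:", [round(x / 1000, 2) for x in dof_limits_mm(35, 8, 600)])
```

Running the sketch shows why "acceptably sharp" is doing the heavy lifting: at f/8 the hyperfocal trick covers roughly 2.6 m to infinity, while focusing at 0.6 m leaves only about a 13 cm band of sharpness.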

The split diopter trick that made two planes famous in cinema

Long before computational photography, filmmakers hacked around depth-of-field limits with a deceptively simple accessory: the split diopter. By placing a half-lens in front of the main optic, they could bring a close subject and a distant background into simultaneous sharpness, separated by a soft seam. Guides that unpack the split diopter shot explain how one half of the frame effectively sees through a magnifying filter while the other half looks through plain glass, creating that distinctive dual-focus look that directors like Brian De Palma turned into a signature.

Modern accessories refine the same idea for today's rigs. Products like 150mm handheld split diopter glass let creators hand-hold a curved element in front of the lens to bend light from a second distance into focus, while dedicated cinema diopter filters slot into matte boxes for repeatable setups. Tutorials such as Michael the Maven's walkthrough show how a split element can "cheat" depth of field in camera, but they also highlight the limitations: a visible blur line, restricted framing, and the need for careful blocking so actors do not cross the optical boundary.

From Schneider glass to handheld prisms: dual‑focus as a product

What used to be a niche rental item is now a small ecosystem of off-the-shelf tools built specifically to keep two depths sharp. At the higher end, the Schneider 138mm Diopter Split +2 Gen2 is sold explicitly as a way to "capture a sharp image of both the foreground and background," using a semi-circular close-up element that covers part of the frame. Retail listings for the same Schneider glass emphasize that it is a lens attachment rather than a full lens, which keeps it compatible with a wide range of cinema primes and zooms.
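The "+2" in that product name is simply optical power in diopters, and the basic arithmetic is short. The snippet below is a hedged thin-lens approximation with distances measured from the attachment itself and a hypothetical helper name; it illustrates the principle rather than modelling the Schneider element.

```python
# Hedged thin-lens approximation of a close-up diopter's effect on focus
# distance. Distances are in metres, measured from the attachment.

def focus_with_diopter(power_diopters: float,
                       main_focus_m: float = float("inf")) -> float:
    """New focus distance when a +P close-up element sits in front of the lens."""
    base = 0.0 if main_focus_m == float("inf") else 1.0 / main_focus_m
    return 1.0 / (power_diopters + base)

# A +2 split element: the covered half of the frame focuses near 1/2 = 0.5 m,
# while the uncovered half keeps the main lens's distant focus -- two sharp
# planes in one exposure, separated by the seam.
print(focus_with_diopter(2.0))       # ~0.50 m with the main lens at infinity
print(focus_with_diopter(2.0, 3.0))  # ~0.43 m with the main lens focused at 3 m
```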

On the more experimental side, creators gravitate toward handheld options that trade precision for spontaneity. The Prism Lens FX Handheld Split Diopter 100mm is marketed as "available in 150mm & 100mm sizes," with buyers urged to "choose the perfect size" for their setup. The same handheld split diopter is also listed through broader shopping portals as a Prism Lens FX tool for "cinematic in-camera FX," underscoring how dual-focus has become part of the creative filter kit alongside streaks and flares rather than a purely technical fix.

Light‑field cameras: refocus after the shot, not during it

While split diopters bend light into two zones on the sensor, light-field cameras attack the problem by capturing far more information than a conventional frame. Instead of recording a flat image, they sample the direction of incoming rays so that focus can be changed later in software. Academic work on depth from defocus and correspondence notes that "light-field cameras have made it to the consumer market," with arrays of micro-lenses capturing enough angular data that multiple depth cues "are available simultaneously in a single capture." In practice, that means I can shoot once and then slide a virtual focus plane through the scene afterward, pulling different subjects into clarity without touching a focus ring on set.
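The refocusing step itself is conceptually simple. The sketch below shows the classic shift-and-add approach described in the light-field literature, assuming the sub-aperture views have already been extracted; the array shapes and names are illustrative and not the API of any particular camera's software.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(subviews: np.ndarray, us: np.ndarray, vs: np.ndarray,
            alpha: float) -> np.ndarray:
    """Shift-and-add synthetic refocus.

    subviews: (N, H, W) sub-aperture images, one per micro-lens viewpoint.
    us, vs:   (N,) angular coordinates of each viewpoint on the aperture.
    alpha:    refocus parameter; 1.0 keeps the original focal plane, other
              values slide the virtual plane nearer or farther.
    """
    scale = 1.0 - 1.0 / alpha
    out = np.zeros(subviews.shape[1:], dtype=np.float64)
    for view, u, v in zip(subviews, us, vs):
        # shift each view in proportion to its position on the aperture,
        # then accumulate; in-focus structures align, everything else blurs
        out += subpixel_shift(view.astype(np.float64), (v * scale, u * scale))
    return out / len(subviews)
```

Averaging only the views near the centre of the aperture, instead of all of them, mimics stopping down, which is the same trick that lets desktop software present an adjustable f-number after capture.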

Early commercial attempts like the Lytro Illum leaned into this promise. Reviews of the updated model point out that in the Lytro Desktop software you can adjust the aperture from f/1 to f/16 after the fact, change the point of focus, shift perspective and even generate 3D images from a single photo. Product listings for the Lytro Illum V2 pair its sensor with the tagline "Capture a deeper picture," stressing that the same capture lets you change focus points and depth of field later. A separate listing for the camera repeats the "capture a deeper picture" pitch, which is marketing shorthand for the same computational refocus that makes dual-depth sharpness possible long after the shutter clicks.

Multi‑lens arrays and the Light L16 experiment

Another path to multi-depth sharpness is to use many small cameras at once and merge their views. The Light L16 was the most ambitious example, packing sixteen separate modules behind a smartphone-sized front plate. A detailed engineering account notes that the first version of the Light camera used five 35mm-equivalent lenses, five 70mm-equivalent lenses and six 150mm-equivalent lenses, all firing in carefully chosen combinations. A separate review boiled the behavior down to basics: when you press the shutter, the camera captures data from up to ten of the sixteen sensors, and software processes those captures into a single super-high-resolution image with adjustable depth of field.
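Light's actual pipeline was proprietary and far more sophisticated, but a toy multi-frame merge conveys the principle: scatter several registered captures onto a finer grid and average wherever samples land. Everything below, from the function name to the assumption of known sub-pixel offsets, is illustrative rather than a description of the L16's software.

```python
import numpy as np

def naive_superres(frames: np.ndarray, offsets: list[tuple[float, float]],
                   scale: int = 2) -> np.ndarray:
    """Toy multi-frame merge onto a finer grid.

    frames:  (N, H, W) captures of the same scene from slightly shifted modules.
    offsets: per-frame (dy, dx) sub-pixel offsets from registration, in pixels.
    """
    n, h, w = frames.shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # map each source pixel to its nearest cell on the upscaled grid
        ty = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        tx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (ty, tx), frame)
        np.add.at(hits, (ty, tx), 1)
    # average overlapping samples; empty cells keep a zero placeholder
    return acc / np.maximum(hits, 1)
```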

On the resale market, the same idea shows up in listings that describe the Light L16 multi-lens camera as a black, like-new unit with fewer than 150 shots on the counter. Those listings stress that the used body still offers 52-megapixel resolution, 10x zoom and a light-field style "select focus" mode. In other words, the L16 tried to bring the lab idea of capturing many viewpoints at once into a pocketable device, using overlapping focal lengths and heavy computation to keep multiple depths usable in a single frame.

Carnegie Mellon’s computational lens that keeps everything in focus

Where split diopters and light-field arrays work around the limits of traditional optics, a team at Carnegie Mellon University is trying to erase those limits altogether. Their experimental setup combines a phase-only spatial light modulator with a microscope so that, as the university describes it, the device controls how light bends at each pixel and keeps multiple parts of a sample in focus at once. A detailed write-up notes that by combining this setup with advanced computation, the researchers can reconstruct images where structures at different depths are simultaneously sharp, something that would be impossible with a conventional objective.

Coverage of the same work for a broader audience frames it as a potential end to the one-plane rule. One report opens with the observation that for as long as cameras have existed, they have only been able to focus on one depth plane at a time, then explains how the Carnegie Mellon lens breakthrough eliminates that constraint optically, not just in software. A separate piece on the same project describes how the team built a "computational lens" by pairing a Lohmann lens, essentially two cubic-profile elements that shift against each other to change focal power, with a sensor and reconstruction algorithms so that the resulting camera can focus on everything at once. That report notes that the Lohmann-based system is still a lab prototype and may take years to reach the market, if ever, but it shows that multi-depth sharpness can be baked into the optics themselves rather than faked after capture.

AI, NeRFs and all‑in‑focus scenes from blurry inputs

Even without exotic lenses, advances in neural rendering are starting to treat focus as a parameter that can be dialed in later. One recent framework describes itself as the first system capable of synthesizing an all-in-focus neural radiance field from inputs that are not all sharp. Its authors report that their dual-camera all-in-focus NeRF outperforms strong baselines both quantitatively and qualitatively, effectively reconstructing a 3D scene in which every depth slice is crisp even if the original views suffered from shallow depth of field.

Consumer devices are already nibbling at the edges of this idea under the banner of computational photography. Guides to premium smartphones explain that the core concept is to process multiple data points from an image, combining frames, depth maps and learned priors to produce results that no single exposure could deliver. In practice, that means AI can rescue slightly missed focus, extend depth of field in macro shots, or simulate dual-plane sharpness by blending multiple captures, all without the user ever touching a focus ring.
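A focus bracket blended with a per-pixel sharpness map is the simplest version of that idea. The sketch below is a generic focus-stacking routine, not any vendor's pipeline; the sharpness measure and smoothing size are arbitrary choices, and the focus map it returns is the same kind of composite-building map described for the focal-sweep systems in the next section.

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter

def focus_stack(stack: np.ndarray, smooth: int = 9) -> tuple[np.ndarray, np.ndarray]:
    """Blend a focus bracket into one all-in-focus frame.

    stack: (N, H, W) grayscale exposures focused at different depths.
    Returns the composite and a per-pixel focus map (index of the sharpest
    exposure at each pixel).
    """
    # local sharpness: Laplacian magnitude, lightly dilated so strong edges
    # also vote for their immediate neighbourhoods
    sharpness = np.stack([maximum_filter(np.abs(laplace(f.astype(float))), smooth)
                          for f in stack])
    focus_map = np.argmax(sharpness, axis=0)
    composite = np.take_along_axis(stack, focus_map[None], axis=0)[0]
    return composite, focus_map
```

The design choice worth noting is that the blend is a hard per-pixel selection; real pipelines typically feather the transitions and lean on depth maps or learned priors to avoid halos, but the principle of choosing the sharpest capture at each location is the same.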

When “never refocus” is already here in niche cameras

Outside of research labs and cinema sets, there are already specialized systems that quietly promise to end manual refocusing in everyday workflows. In digital pathology, for example, the FS-Live Telepathology System is marketed with the bold claim that users "never need to manually refocus," thanks to a dedicated camera and unique algorithms. Under the hood, that means the system sweeps through focal planes, builds a focus map and then presents a composite view where tissue structures at different depths appear simultaneously sharp, a potentially life-saving convenience when a pathologist is scanning for subtle anomalies.

Even mainstream camera bodies are inching toward similar behavior, albeit in less dramatic form. Canon’s APS‑C mirrorless EOS R7 is not an all‑in‑focus camera, but its deep learning autofocus and subject tracking reduce the need to refocus manually as subjects move through the frame. In the accessories world, third‑party optics such as specialty lenses and close‑up attachments, along with diopters like the +2 filters, give photographers more control over how much of the scene stays sharp without constant focus adjustments.

What dual‑depth sharpness means for the next wave of cameras

Put together, these strands point toward a future where keeping two depths sharp is not a special effect but a default option. In cinema, I expect split diopters like the Schneider Diopter Split and handheld prisms to remain creative tools, while computational lenses and NeRF-style reconstructions quietly handle the heavy lifting in the background. In scientific imaging and telepathology, systems that promise users they will "never need to refocus" are likely to spread as labs demand both speed and accuracy.

For everyday photographers, the more immediate impact will come from phones and compact cameras that quietly borrow ideas from light-field sensors, multi-lens arrays and AI pipelines. Listings for niche gear like the Lytro Illum and the Light L16 multi-lens camera show that the hardware has already been built, even if those specific products did not go mainstream. As computational photography matures and research like the Carnegie Mellon computational lens and light-field reconstruction work filters down, the idea that a camera can keep two depths sharp at once without refocusing will feel less like a party trick and more like a basic expectation.
