Researchers at the University of Missouri announced on March 1, 2026, that they are developing what they describe as a rewritable DNA hard drive, a system designed to store digital information in DNA-based molecules that can be edited and reused rather than locked into a single archival state. The project, led by Li-Qun “Andrew” Gu, uses a technique called frameshift encoding paired with nanopore sensor readout, sidestepping the expensive DNA synthesis step that has bottlenecked earlier approaches. The work aims to close a gap that has kept most DNA storage efforts confined to write-once, archival-style applications, though its claims have yet to face broader scrutiny.
How Frameshift Encoding Replaces Synthesis
Most DNA data storage systems work by chemically synthesizing new strands of DNA for every piece of information they record. That process is slow, costly, and effectively makes the medium write-once. Gu’s team at the University of Missouri took a different route. Their PNAS Nexus study describes a synthesis-free and enzyme-free rewritable DNA memory that encodes binary data using microstaples of different lengths, a method the authors call frameshift encoding. Instead of building new DNA from scratch each time data changes, the system adjusts the physical arrangement of these staples along an existing scaffold, allowing bits to be rewritten without resynthesizing the underlying molecule.
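The article does not spell out the published encoding scheme, but the basic idea of mapping bits to staple lengths can be illustrated with a toy model. Everything below is an assumption for illustration: the staple lengths, the scaffold model, and the function names are hypothetical, not taken from the PNAS Nexus study.

```python
# Toy illustration (not the published scheme): encode bits as staples of
# two different lengths placed along a fixed scaffold. A short staple
# stands for 0, a long staple for 1; rewriting swaps a staple in place
# rather than synthesizing a new strand.

SHORT, LONG = 8, 12  # hypothetical staple lengths in nucleotides

def write_bits(bits):
    """Map each bit to a staple length along the scaffold."""
    return [LONG if b else SHORT for b in bits]

def read_bits(staples):
    """Recover bits from staple lengths (the readout step)."""
    return [1 if length == LONG else 0 for length in staples]

def rewrite_bit(staples, index, bit):
    """Rewrite one position by swapping a single staple, no resynthesis."""
    staples[index] = LONG if bit else SHORT

staples = write_bits([1, 0, 1, 1])
rewrite_bit(staples, 1, 1)
print(read_bits(staples))  # [1, 1, 1, 1]
```

The point of the sketch is the rewrite step: only one staple changes, while the scaffold and every other position are untouched, which is what distinguishes this style of encoding from synthesis-based writing.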
Reading the stored data back relies on a nanopore sensor device rather than traditional DNA sequencing. The approach draws on established MspA nanopore technology, which can detect DNA at single-nucleotide resolution by measuring ionic current as strands pass through a protein pore. By combining frameshift encoding on the write side with nanopore detection on the read side, the Mizzou team eliminates two of the most expensive steps in conventional DNA storage: chemical synthesis and enzymatic processing. That combination is what distinguishes this work from prior rewritable concepts that still depended on one or both of those bottlenecks.
Earlier Rewritable Concepts and What Changed
The idea of rewriting data stored in DNA is not new. A 2015 preprint proposed an architecture for random-access, rewritable DNA storage that used coding theory and DNA editing methods to modify information in discrete blocks, with a proof-of-concept demonstration involving text fragments. That work established the theoretical framework but relied on editing enzymes and synthesis steps that kept it largely a proof of concept. A separate 2022 study published in Nature Communications demonstrated rewritability through a different mechanism altogether, using topological modifications called nicks, along with ligation and enzymatic nicking, to rewrite and erase metadata on two-dimensional DNA nanostructures. That team also showed end-to-end reconstruction of stored images using machine learning post-processing.
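The block-wise, random-access architecture from the 2015 proposal can be sketched in a few lines. This is an illustrative abstraction only: the block size, address scheme, and function names are assumptions, and the real proposal layered error-correcting codes and biochemical addressing on top of this idea.

```python
# Toy model of block-addressed, rewritable storage in the spirit of the
# 2015 proposal: data is split into fixed-size blocks, each tagged with
# an address, so one block can be selectively rewritten without
# touching the rest of the archive.

BLOCK_SIZE = 4  # hypothetical block length

def store(message):
    """Split a message into addressed blocks."""
    return {addr: message[i:i + BLOCK_SIZE]
            for addr, i in enumerate(range(0, len(message), BLOCK_SIZE))}

def rewrite(blocks, addr, new_block):
    """Random-access rewrite of a single addressed block."""
    blocks[addr] = new_block

def retrieve(blocks):
    """Reassemble the full message from its addressed blocks."""
    return "".join(blocks[a] for a in sorted(blocks))

blocks = store("GATTACAT")
rewrite(blocks, 1, "CCCC")
print(retrieve(blocks))  # GATTCCCC
```

In the molecular version, the "address" is itself a DNA sequence used to select a block for editing, which is where the enzyme and synthesis dependencies of the 2015 scheme came in.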
Both of those earlier efforts proved that rewriting DNA-based data was physically possible, but neither eliminated the dependence on enzymatic reactions or complex biochemical workflows. Gu’s system, by contrast, claims to be both synthesis-free and enzyme-free for the encoding step, which would represent a meaningful reduction in operational complexity. The distinction matters because enzyme-dependent steps add reagent costs, handling time, and failure modes that can make scaling difficult. Whether the Mizzou approach can maintain data fidelity across many rewrite cycles, however, is a question the available reporting does not yet answer with published error-rate data.
Why Rewritability Matters for the Data Explosion
DNA’s appeal as a storage medium rests on its extraordinary information density. When Atlas Data Storage announced a commercial system ahead of an official launch in May 2025 in Baltimore, Maryland, the company described it as 1,000 times denser than magnetic media and encoded in a universal, time-tested format. But that system, the Eon 100, like virtually every other commercial or near-commercial DNA storage offering, is designed for archival use. Data goes in and stays there, potentially for thousands of years, but it cannot be cheaply updated or overwritten.
That limitation confines DNA storage to cold archives, the digital equivalent of a vault. For the technology to compete with hard drives or solid-state storage in active data environments, it needs the ability to rewrite. The University of Missouri announcement frames this gap explicitly: while many research groups are advancing DNA storage, the Mizzou team’s stated goal is to move the field closer to a practical, rewritable system. If DNA can be written, read, erased, and rewritten at reasonable speed and cost, it could eventually handle workloads that archival-only systems cannot touch, from database updates to operating system swap files. That is a large “if,” but the engineering direction is clear.
Nanopore Readout and the Path to Practicality
The choice of nanopore sensing as the readout mechanism is not incidental. Traditional DNA sequencing methods, while accurate, are slow and expensive for repeated read operations. Nanopore devices pass a DNA strand through a tiny protein channel and measure changes in electrical current to identify each base. Some nanopore signal-processing approaches described in the literature (including methods such as Duplex Interrupted sequencing) add distinct signal features that can help track progression through a strand and resolve repeated sequences, a common source of errors in nanopore-based systems. Gu’s group leverages this type of single-molecule resolution not to read biological genomes, but to discriminate between different staple configurations along a scaffold, effectively turning structural differences into digital ones and zeros.
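The readout step can be pictured as a classification problem on current traces. The sketch below uses entirely assumed numbers: the threshold, the picoamp values, and the two-level mapping are illustrative stand-ins, not measurements from the Mizzou device, which may discriminate more than two configurations per site.

```python
# Sketch (assumed numbers, not measured data): a nanopore read yields an
# ionic-current trace; different staple configurations block the pore by
# different amounts, so classifying each segment's mean current against
# a threshold recovers the stored bits.

from statistics import mean

THRESHOLD_PA = 50.0  # hypothetical decision threshold in picoamps

def decode_trace(segments):
    """Classify each current segment as a 0 or 1 by its mean level."""
    return [0 if mean(seg) < THRESHOLD_PA else 1 for seg in segments]

# Simulated segments: a deeper blockade (lower current) encodes a 0.
trace = [[42.1, 41.8, 43.0],   # low current  -> 0
         [61.5, 60.2, 62.0],   # high current -> 1
         [40.9, 44.2, 42.5]]   # low current  -> 0
print(decode_trace(trace))  # [0, 1, 0]
```

Real nanopore decoding contends with noise, dwell-time variation, and repeated sequences, which is why signal features that mark progression through the strand, like those mentioned above, matter for accuracy.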
In principle, nanopore readout also offers a route to miniaturization and lower operating costs. Compact nanopore devices are already used in field genomics, and the same hardware class could eventually be adapted for DNA data storage kiosks or rack-mounted units. The current Mizzou prototype remains a laboratory setup, and the announcement does not specify read latency, throughput, or bit error rates. Still, by aligning their encoding strategy with a sensor technology that has a clear industrial roadmap, the researchers are positioning their rewritable DNA hard drive concept within a broader ecosystem of nanopore innovation rather than betting on a bespoke reader that might never scale.
Open Science Infrastructure Behind DNA Storage Research
The trajectory of rewritable DNA storage has also depended heavily on open dissemination of early-stage ideas. The 2015 coding-theoretic proposal for block-wise rewriting appeared first on arXiv’s open-access platform, where preprints can be read freely by researchers and industry engineers worldwide. That early visibility allowed other groups to iterate on concepts like random access and error-correcting codes without waiting for lengthy journal review cycles. In the case of DNA storage, where engineering and molecular biology intersect, rapid sharing of designs and failure modes can be as important as polished final results.
Maintaining that infrastructure requires ongoing institutional support. arXiv is operated by Cornell University with a network of member organizations that contribute to its operating budget, and the service also relies on individual donations from readers who value immediate access to technical work. Its help pages outline submission and moderation policies that shape what appears on the site; for DNA storage researchers, those guidelines and tools define how quickly new encoding schemes, error models, or nanopore signal-processing techniques can be shared with the broader community. Gu’s rewritable DNA hard drive builds on a decade of such openly circulated ideas, even as it pushes the field toward more practical, hardware-aligned implementations.
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.