Scientists reveal stunning breakthroughs that could remake future tech devices

A team at Fudan University has produced a working flash memory chip that integrates two-dimensional materials on a commercial 0.13 micrometer CMOS fabrication line, achieving a reported 94.34% yield. Separately, MIT engineers have demonstrated that heat itself can serve as a computing signal, performing matrix math through sculpted silicon rather than conventional transistors. Together, these results suggest that the physical foundations of future electronics could look very different from today’s mainstream devices, though both advances are still at the research-demonstration stage.

A 2D Flash Chip That Actually Works at Scale

For years, two-dimensional materials like molybdenum disulfide have tantalized chip designers with the promise of thinner, more energy-efficient transistors. The gap between lab curiosity and factory-ready product, however, has been wide. Fudan University researchers narrowed that gap by integrating a tapeout-verified 2D NOR flash memory chip on a commercial 0.13 micrometer CMOS platform they call ATOM2CHIP. The chip is not a proof-of-concept wafer fragment; it passed full-chip testing with a reported yield of 94.34%, a notably high figure for a novel materials integration demonstration.

What makes the result especially significant is its feature set. The chip supports instruction-driven operations with 32-bit parallelism, meaning it can read and write data the way a standard microcontroller expects. Fudan has positioned it as the world’s first full-featured 2D flash chip enabled by system integration. For consumers, this matters because NOR flash sits inside everything from car dashboards to wireless earbuds. If 2D materials can slot into existing fabs without requiring exotic new equipment, the path from research paper to product shelf shortens dramatically.
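
To make "instruction-driven operations with 32-bit parallelism" concrete, the sketch below is a minimal behavioral model, in Python, of how firmware typically interacts with NOR-style flash. The command set, sizes, and class names here are hypothetical illustrations, not the Fudan chip's actual interface; the sketch only captures the word-oriented semantics the article describes, including the classic NOR constraint that programming can only clear bits and only a block erase can set them back.

```python
WORD_MASK = 0xFFFFFFFF  # 32-bit word, matching the parallelism claim

class NorFlashModel:
    """Behavioral model of a NOR-style flash array with word access."""

    def __init__(self, n_words=1024, block_words=256):
        self.block_words = block_words
        self.mem = [WORD_MASK] * n_words  # erased flash reads all 1s

    def read(self, addr):
        """Random-access read of one 32-bit word (NOR's strength)."""
        return self.mem[addr]

    def program(self, addr, value):
        """Programming can only clear bits, so the word is ANDed in."""
        self.mem[addr] &= value & WORD_MASK

    def erase_block(self, block):
        """Only a whole block can be restored to the all-1s state."""
        start = block * self.block_words
        for a in range(start, start + self.block_words):
            self.mem[a] = WORD_MASK

flash = NorFlashModel()
flash.program(0, 0xDEADBEEF)
assert flash.read(0) == 0xDEADBEEF               # full-word read-back
flash.program(0, 0x0000FFFF)                     # re-program only clears bits
assert flash.read(0) == 0xDEADBEEF & 0x0000FFFF
flash.erase_block(0)
assert flash.read(0) == WORD_MASK                # erased state is all 1s
```

The random-access, word-wide reads in this model are exactly why NOR flash is favored for code storage in microcontrollers: a CPU can fetch and execute instructions directly from it.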

Computing With Heat Instead of Electrons

While Fudan’s work refines a familiar device category, MIT engineers are questioning whether electrons need to carry the computational load at all. In a proof of concept described in an arXiv preprint, researchers designed metastructures in which input data is encoded as temperatures applied to one side of a silicon block. Heat conducts through an inverse-designed internal geometry, and the output is read out as electrical power collected at terminals. The topology of the structure itself performs the math, specifically matrix-vector multiplication, a core operation in machine learning.
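
The physics that makes this possible is worth spelling out: steady-state heat conduction is linear in the applied boundary temperatures, so any fixed conductive structure maps a vector of input temperatures to a vector of output readings through one fixed matrix, and shaping the structure amounts to choosing that matrix. The toy NumPy sketch below, an illustrative thermal-network model rather than the MIT team's actual solver, demonstrates that linearity; a random conductance network stands in for an inverse-designed geometry.

```python
import numpy as np

# Toy steady-state thermal network: nodes joined by conductances, with
# input nodes held at given temperatures and output nodes grounded at
# zero. Because steady-state conduction is linear, the heat collected
# at the outputs is a fixed matrix-vector product of the input
# temperatures: the geometry itself encodes the matrix.

rng = np.random.default_rng(0)

n_in, n_mid, n_out = 3, 8, 2
n = n_in + n_mid + n_out
idx_in = np.arange(n_in)
idx_mid = np.arange(n_in, n_in + n_mid)
idx_out = np.arange(n_in + n_mid, n)

# Random symmetric conductances standing in for an inverse-designed
# geometry (hypothetical values, illustration only).
G = rng.uniform(0.1, 1.0, (n, n))
G = (G + G.T) / 2
np.fill_diagonal(G, 0.0)
L = np.diag(G.sum(axis=1)) - G  # graph Laplacian of the thermal network

def output_flux(t_in):
    """Hold inputs at t_in and outputs at 0, solve the steady state,
    and return the heat exchanged at each grounded output terminal."""
    t = np.zeros(n)
    t[idx_in] = t_in
    fixed = np.concatenate([idx_in, idx_out])
    # Interior nodes carry no external heat, so (L t)_mid = 0:
    A = L[np.ix_(idx_mid, idx_mid)]
    b = -L[np.ix_(idx_mid, fixed)] @ t[fixed]
    t[idx_mid] = np.linalg.solve(A, b)
    return L[idx_out] @ t  # heat drawn at each output (up to sign)

# Recover the effective matrix column by column with unit inputs...
M = np.column_stack([output_flux(e) for e in np.eye(n_in)])

# ...then check that an arbitrary input obeys the same linear map.
x = rng.uniform(0.0, 1.0, n_in)
print(np.allclose(output_flux(x), M @ x))  # True: the block computes M @ x
```

The final check confirms that the block's input-to-output behavior is indistinguishable from multiplication by a fixed matrix, which is the whole trick: once a geometry realizes the matrix you want, the multiplication happens for free as heat flows through it.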

Senior author Giuseppe Romano and researcher Caio Silva explained the approach in an MIT announcement. The structures are optimized using a differentiable thermal transport solver, which means the design process is automated through gradient-based algorithms rather than manual trial and error. For anyone who has felt a laptop overheat during a heavy workload, the irony is sharp: instead of fighting thermal buildup, this method treats heat as the signal itself. If scaled, thermal analog processors could handle specific AI inference tasks while sidestepping the energy costs of moving billions of electrons through conventional logic gates.
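
To illustrate what gradient-based inverse design means in practice, here is a deliberately simplified Python sketch: a toy parameterized thermal model is tuned by gradient descent until its input-to-output matrix matches a target. Everything here is a hypothetical stand-in; the model is not a real transport solver, and the gradient is approximated with finite differences for brevity, whereas a differentiable solver like the one the MIT group describes would supply exact gradients automatically.

```python
import numpy as np

# Toy gradient-based inverse design: tune the parameters of a tiny
# thermal model until its input-to-output matrix matches a target.
# All values are hypothetical stand-ins; a differentiable transport
# solver would provide exact gradients via automatic differentiation,
# which we mimic here with finite differences.

rng = np.random.default_rng(1)
n_in, n_out = 2, 2

def transfer_matrix(g):
    """Map design parameters (input-to-output conductances) to the
    effective matrix. Each output collects flux from every input,
    attenuated by a crude loading term standing in for geometry."""
    G = g.reshape(n_in, n_out)
    leak = 1.0 + G.sum(axis=0)  # loading effect per output terminal
    return (G / leak).T         # shape (n_out, n_in)

# Hypothetical target matrix the "geometry" should implement.
target = np.array([[0.30, 0.10],
                   [0.05, 0.25]])

def loss(g):
    return np.sum((transfer_matrix(g) - target) ** 2)

g = rng.uniform(0.2, 0.8, n_in * n_out)  # initial design parameters
eps, lr = 1e-6, 0.2
for _ in range(2000):
    # Central finite-difference gradient, one parameter at a time.
    grad = np.array([
        (loss(g + eps * e) - loss(g - eps * e)) / (2 * eps)
        for e in np.eye(g.size)
    ])
    g = np.clip(g - lr * grad, 0.0, None)  # conductances stay non-negative

print(f"final loss: {loss(g):.2e}")
print(transfer_matrix(g).round(3))  # should sit close to the target
```

The value of differentiability lies in that inner loop: with exact gradients available, the same descent scales from four parameters to the thousands that describe a real sculpted geometry, with no manual trial and error.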

Why These Two Advances Belong in the Same Conversation

At first glance, a flash memory chip and a heat-based calculator seem unrelated. But both share a thesis: the materials and physics we use to process and store information are not fixed. Fudan’s 2D chip demonstrates that atomically thin semiconductors can meet industrial yield standards on legacy fabrication nodes. MIT’s thermal computing shows that even the waste product of electronics, heat, can be repurposed as a computational medium. Each project attacks a different bottleneck. Memory density and energy consumption constrain edge devices like smartwatches and IoT sensors. Matrix multiplication throughput constrains AI accelerators. Solving both simultaneously could reshape how engineers design systems from the ground up.

A reasonable critique is that neither result is ready for mass deployment. The Fudan chip uses a 0.13 micrometer node, which is decades behind the cutting edge in logic chips. MIT’s thermal structures have been demonstrated in simulation and small-scale prototypes, not in data centers. Still, the 94.34% yield figure from Fudan is not trivial. Many novel material platforms struggle to break 50% yield in academic settings. And MIT’s use of standard silicon as the thermal medium, rather than exotic alloys, lowers the barrier to eventual fabrication. The question is less whether these ideas can work in principle than how quickly they can climb from proof-of-concept demonstrations to practical products.

Parallel Efforts in Quantum Materials and Liquid Electronics

These two projects do not exist in isolation. Researchers at Northeastern University reported that a discovery in quantum materials could make electronics dramatically faster by toggling materials between a conducting state and an insulating state using controlled heating and cooling. Meanwhile, at UCLA, Jun Chen assembled magnetic particles into a three-dimensional structure using a strong magnetic field, exploring liquid bioelectronics that could one day conform to soft tissue rather than rigid circuit boards. Both lines of research reinforce the same broader trend: the next generation of devices may rely on physical phenomena that current chip architectures largely ignore.

The convergence is striking. Two-dimensional flash, thermal computing, quantum switching, and ferrofluid-based circuits each emerged from different labs solving different problems. Yet they all point toward a future where device performance is no longer gated solely by how small we can etch silicon transistors. The dominant strategy for decades has been to shrink features on a flat wafer. These projects suggest that adding new dimensions, whether through atomically thin layers, sculpted three-dimensional heat paths, or reconfigurable liquid structures, can unlock behaviors that conventional CMOS scaling cannot reach.

Rethinking Heat, Materials, and the Design Pipeline

One reason these advances are emerging now is that tools for handling heat and materials at fine scales have matured. Work at MIT on nanomaterials that steer heat inside chips has shown that thermal transport can be engineered almost as deliberately as electrical current. The same mindset underpins the thermal computing metastructures: instead of treating heat as an uncontrollable side effect, designers can channel it through optimized geometries. Institutions such as the MIT Institute for Soldier Nanotechnologies have helped normalize this kind of cross-disciplinary work, where materials science, device physics, and application needs are considered together rather than in isolation.

The design and dissemination pipelines are also shifting. Romano’s group relied on open-access dissemination through preprints and institutional news, and resources such as MIT’s guidance on finding open-access research articles make it easier for engineers in industry to track developments in real time. As more device concepts are shared early, peers can stress-test them, adapt them to new use cases, or integrate them into hybrid systems that mix conventional logic, 2D memories, and thermal analog blocks. Over the next decade, the most interesting products may come not from a single breakthrough but from carefully engineered combinations of these unconventional building blocks, each exploiting a different corner of the physics that underlies information processing.

This article was researched with the help of AI, with human editors creating the final content.