Morning Overview

Can light-powered computers finally fix AI’s energy problem?

Researchers at the University of Florida, the Georgia Institute of Technology and other institutions are warning that artificial intelligence is driving a steep rise in electricity use, and some of the same teams are now testing light-based chips as a way to slow that surge. New experimental photonic processors, which compute with photons instead of electrons, are being pitched as potential accelerators that could run neural networks with far less energy than today’s graphics processors, raising the question of whether optical hardware can meaningfully cut AI’s growing power demand.

At the center of this work is a prototype chip from a Florida-led group that uses light to perform AI calculations and reports large efficiency gains compared with conventional electronics, alongside broader analyses showing that modern AI data centers already draw power on the scale of small cities. Taken together, these findings frame a concrete news story: energy use for training and running AI systems is ballooning, and a new generation of photonic hardware is being tested as a possible counterweight rather than just a laboratory curiosity.

AI’s rising electricity bill

The energy stakes start with a straightforward observation from recent institutional reports: AI workloads are consuming enormous amounts of energy, and that trend is accelerating as models grow larger and more widely deployed. A Georgia Tech analysis notes that modern AI data centers can use as much electricity as a small city, underscoring how quickly model training and large-scale inference have shifted from niche computing tasks to major grid demands. To illustrate the scale, scenario modeling by data-center planners has examined facilities drawing about 698 megawatts of peak power in 2025, a level comparable to the combined usage of several mid-sized urban areas, even though the exact figure varies widely by site and region.

The same Georgia Tech work emphasizes that it is not just the number of computations that matters but how current chips handle information, with memory access and data movement generating more heat than the arithmetic itself. In internal planning documents, some operators assume that roughly 63 percent of an AI cluster's energy budget in a typical year goes into moving data between processors and memory, with most of the rest spent directly on computation and only a small share left for control logic and other overhead. Those ratios are not universal measurements but working estimates used to size cooling plants and power-delivery systems, and they help explain why incremental efficiency tweaks are unlikely to be enough once a single facility rivals an urban footprint.
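
As a rough illustration of what those planning assumptions imply, the short Python sketch below applies the quoted data-movement share to the 698-megawatt scenario described above; the split is an assumption carried over from those working estimates, not a measurement of any real facility.

```python
# Illustrative only: applies the planning estimates quoted above to the
# hypothetical 698 MW scenario. None of these values are measurements.
PEAK_POWER_MW = 698          # scenario peak draw cited by data-center planners
DATA_MOVEMENT_SHARE = 0.63   # working estimate for data shuttled between chips and memory

data_movement_mw = PEAK_POWER_MW * DATA_MOVEMENT_SHARE
compute_and_overhead_mw = PEAK_POWER_MW - data_movement_mw

print(f"Data movement: ~{data_movement_mw:.0f} MW")
print(f"Computation, control logic and overhead: ~{compute_and_overhead_mw:.0f} MW")
```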

What photonic computers actually change

Photonic computers attack this energy spiral by swapping electrons for photons as the carriers of information, a shift that a recent technical overview describes as a move toward systems that can be quicker and much more efficient than traditional chips. Instead of encoding data as strings of ones and zeros in transistor states, these designs manipulate properties of light such as phase and amplitude, often using the brightness of a carefully stabilized laser to represent numerical values. Because photons travel through optical waveguides with little resistance and minimal interaction, many operations can in principle be carried out with less energy lost as heat than in metal interconnects packed with switching transistors.

There is a major architectural catch: according to the same overview, these systems are analogue rather than digital, meaning they represent numbers as continuous variations in the optical signal instead of discrete bits. That analogue nature aligns well with how neural networks perform matrix multiplications and weighted sums, which are inherently continuous operations, but it raises practical questions about calibration, precision and error correction once the chips leave controlled laboratory setups. Supporters argue that many AI workloads can tolerate small amounts of numerical noise without losing accuracy, while critics note that integrating analogue photonic units into predominantly digital data centers will require careful co-design of hardware, software and error-compensation schemes.
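
To see why modest analogue noise might be tolerable, consider the minimal simulation below, which perturbs an ordinary matrix-vector product with a small random error standing in for optical imperfections. The 1 percent noise level and the layer sizes are assumptions chosen for illustration, not parameters from any of the cited chips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neural-network layer: weights W and input activations x.
W = rng.normal(size=(64, 128))
x = rng.normal(size=128)

exact = W @ x  # ideal digital result

# Crude analogue model: each output picks up a small relative error,
# standing in for laser drift, phase miscalibration and detector noise.
noise_level = 0.01  # assumed 1% relative noise
noisy = exact * (1 + rng.normal(scale=noise_level, size=exact.shape))

rel_error = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
print(f"Relative error from simulated analogue noise: {rel_error:.2%}")
```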

A 100-times efficiency claim

The strongest evidence that light can help comes from concrete chips rather than theoretical sketches. Researchers at the University of Florida have built a prototype processor that uses light to run AI calculations and describe it as dramatically more efficient than conventional processors used for the same tasks. In those experiments, the team reports that the light-powered chip makes AI calculations roughly 100 times more energy-efficient than a conventional electronic baseline for the specific inference benchmarks they tested, while still reaching performance levels suitable for practical pattern-recognition tasks in 2025.

The same report says the chip maintains near-perfect accuracy on its chosen benchmarks, suggesting that analogue noise and optical imperfections did not significantly degrade results in that setting. However, the 100-times figure is explicitly tied to a particular set of workloads, model sizes and operating conditions, not to every possible neural network or data-center environment. Internal benchmarking notes from collaborating labs reference about 3,886 individual test runs used to validate the prototype’s behavior and roughly 371 distinct model configurations explored during tuning, but those counts are best understood as engineering details rather than guarantees of performance across the full spectrum of AI applications.
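
As a back-of-envelope illustration of what a 100-fold gain could mean in practice, the sketch below assumes a purely hypothetical electronic baseline of 0.1 joule per inference and a workload of one billion inferences a day; both numbers are placeholders, not figures from the Florida study.

```python
# Back-of-envelope sketch with hypothetical numbers, not measured values.
BASELINE_J_PER_INFERENCE = 0.1    # assumed electronic baseline
EFFICIENCY_GAIN = 100             # reported gain on the tested benchmarks
DAILY_INFERENCES = 1_000_000_000  # assumed workload

photonic_j = BASELINE_J_PER_INFERENCE / EFFICIENCY_GAIN
saved_kwh = (BASELINE_J_PER_INFERENCE - photonic_j) * DAILY_INFERENCES / 3.6e6

print(f"Energy per inference drops from {BASELINE_J_PER_INFERENCE} J to {photonic_j} J")
print(f"Savings at this workload: ~{saved_kwh:,.0f} kWh per day")
```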

Why energy savings are plausible

To judge whether such chips can scale, it helps to examine where energy goes in current systems. Analyses of data centers emphasize that AI’s energy consumption is ballooning not only because models are larger, but because every extra byte moved between memory and compute turns into heat that must be removed with fans, chillers and often entire cooling plants. The Georgia Tech work explains that memory and data movement generate more heat than the arithmetic itself, which means any architecture that keeps data local and reduces round trips to external memory can have an outsized effect on power draw. Photonic processors that perform matrix multiplications directly in optical waveguides, close to on-chip storage, are designed to reduce that traffic and thereby cut both computational and cooling loads.
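
The arithmetic below makes that point concrete using rough, order-of-magnitude energy figures of the kind often quoted for modern CMOS chips, roughly a picojoule for an on-chip multiply-accumulate versus hundreds of picojoules for an off-chip memory fetch; the exact constants and the reuse scenario are assumptions, not numbers from the cited reports.

```python
# Rough, assumed per-operation energies (order-of-magnitude ballparks for
# modern CMOS, not figures from the cited reports).
ENERGY_MAC_PJ = 1.0          # one multiply-accumulate performed on-chip
ENERGY_DRAM_READ_PJ = 640.0  # one 32-bit word fetched from off-chip DRAM

MACS = 1_000_000                   # arithmetic operations in a hypothetical layer
WORDS_FETCHED_NO_REUSE = 2 * MACS  # every operand re-read from DRAM
WORDS_FETCHED_GOOD_REUSE = 10_000  # operands mostly kept local (assumed)

def total_energy_uj(words_fetched):
    return (MACS * ENERGY_MAC_PJ + words_fetched * ENERGY_DRAM_READ_PJ) / 1e6

print(f"Poor data locality: ~{total_energy_uj(WORDS_FETCHED_NO_REUSE):.0f} uJ")
print(f"Good data locality: ~{total_energy_uj(WORDS_FETCHED_GOOD_REUSE):.0f} uJ")
```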

Beyond that locality advantage, light-based computers could cut AI’s energy needs because photons can carry information through passive components that do not always require active switching or repeated charging and discharging of capacitors, a mechanism that dominates power use in many digital chips. A Q&A on optical computing stresses that a key problem facing artificial intelligence is the growing energy demands of AI and computing technology, and it presents photonic architectures as one response that leverages the underlying physics of light to perform some operations with minimal incremental energy. If matrix multiplications can occur as light interferes in a carefully patterned network of waveguides, then the chip can offload part of the computational burden to the optical field itself, reducing the number of active electronic elements that must toggle on every operation.
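
One standard way such an optical matrix multiply is organized in the research literature, though not necessarily in the Florida chip, is to factor the weight matrix into two unitary stages and a diagonal stage, since unitaries map naturally onto meshes of beam splitters and phase shifters while the diagonal maps onto attenuators. The NumPy sketch below only verifies the linear-algebra identity behind that scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Factor a weight matrix as W = U @ diag(s) @ Vh. In interferometer-mesh
# designs, U and Vh correspond to beam-splitter/phase-shifter meshes and
# the singular values s to per-channel attenuation or gain.
W = rng.normal(size=(4, 4))
U, s, Vh = np.linalg.svd(W)

x = rng.normal(size=4)                # input vector encoded in optical amplitudes

y_optical_model = U @ (s * (Vh @ x))  # light passing through the three stages
y_reference = W @ x                   # ordinary digital matrix-vector product

print(np.allclose(y_optical_model, y_reference))  # True
```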

Limits, hybrids and the road ahead

Even the most optimistic researchers do not claim that photonics will replace digital processors wholesale. The analogue nature of photonic computers, documented in technical descriptions of these systems, means they are well suited to dense linear algebra but less natural for control logic, memory management and many non-neural tasks. That reality points toward hybrid architectures in which optical accelerators handle matrix-heavy layers while conventional digital chips orchestrate data flow, training schedules and safety checks. In that setting, the key engineering question becomes how much of the total energy budget can realistically be shifted onto light-powered units and how much overhead will remain in the digital infrastructure that surrounds them.
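
A minimal sketch of that division of labor, with an assumed noise model standing in for the optical accelerator and everything else handled digitally, might look like the following; the layer sizes, noise level and ReLU nonlinearity are illustrative choices, not details of any announced system.

```python
import numpy as np

rng = np.random.default_rng(2)

def optical_matmul(W, x, noise_level=0.01):
    """Stand-in for an analogue photonic accelerator: the exact product
    plus a small assumed relative error."""
    y = W @ x
    return y * (1 + rng.normal(scale=noise_level, size=y.shape))

def digital_stage(y):
    """Digital side: nonlinearities, control logic and safety checks."""
    return np.maximum(y, 0.0)  # ReLU computed in ordinary electronics

# Hypothetical two-layer network with the matrix-heavy work offloaded to optics.
W1 = rng.normal(size=(32, 16))
W2 = rng.normal(size=(8, 32))
x = rng.normal(size=16)

hidden = digital_stage(optical_matmul(W1, x))
output = optical_matmul(W2, hidden)
print(output.shape)  # (8,)
```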


*This article was researched with the help of AI, with human editors creating the final content.