Yichen Shen thinks light can do what transistors increasingly cannot. The CEO of Lightelligence, a startup born from his doctoral research at MIT, has projected that photonic processors could account for roughly 30 percent of data center chips as artificial intelligence workloads overwhelm conventional electronic hardware. Shen laid out the case in public remarks and company communications in early 2025, arguing that the physics of light gives optical accelerators structural advantages in speed and energy efficiency that electronics cannot match at scale.
It is a bold number. No independent analyst has publicly endorsed it, and the incumbents building today’s AI chips have not signaled they see photonics capturing that kind of share anytime soon. But the prediction lands at a moment when the data center industry is spending north of $100 billion a year on AI infrastructure and running into hard physical limits on how much more performance it can wring from silicon transistors alone.
The science behind the claim
Photonic chips use light instead of electrical current to perform computations. In practice, that means encoding data onto laser beams and exploiting the interference patterns of photons to execute the matrix multiplications at the heart of neural networks. Because photons travel without generating the resistive heat that plagues dense electronic circuits, and because multiple data streams can ride the same waveguide simultaneously through wavelength multiplexing, optical processors can theoretically deliver results faster and with less energy per operation.
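The core idea, that interference in a mesh of optical couplers can implement matrix multiplication, can be sketched in a few lines. The toy below models a single lossless 2x2 coupler (the building block of a Mach-Zehnder interferometer mesh) applying a unitary transfer matrix to two light amplitudes; photodetectors then read out intensities. All names and values are illustrative assumptions for this sketch, not details of Lightelligence's architecture.

```python
# Toy model: a lossless 2x2 optical coupler applies a unitary transfer
# matrix to two input field amplitudes. Meshes of such couplers can
# realize arbitrary matrix multiplications in the optical domain;
# detection measures intensity, the squared magnitude of the field.
import math

def coupler(theta):
    """Transfer matrix of a tunable 2x2 coupler with coupling angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 1j * s],
            [1j * s, c]]

def propagate(matrix, amplitudes):
    """Apply a transfer matrix to input field amplitudes (a matrix-vector
    product, which the hardware performs 'for free' via interference)."""
    return [sum(m * a for m, a in zip(row, amplitudes)) for row in matrix]

def detect(amplitudes):
    """Photodetectors measure intensity |amplitude|^2 at each output."""
    return [abs(a) ** 2 for a in amplitudes]

# Encode a data vector as light on two waveguides, then pass it through
# a 50/50 coupler (theta = pi/4), which splits the light evenly.
x = [1.0, 0.0]
out = detect(propagate(coupler(math.pi / 4), x))
# Because the matrix is unitary, energy is conserved: the intensities
# at the two outputs sum to the input intensity.
```

Larger meshes of these couplers, with tunable phase shifters between stages, are how published photonic accelerators compose the linear layers of a neural network.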
Lightelligence has moved beyond theory. A team led by Shen published peer-reviewed work demonstrating a large-scale photonic accelerator that achieves ultralow latency on AI inference tasks. The study, which appeared in the Journal of Optical Communications and Networking, details how integrated photonic circuits execute linear operations directly in the optical domain, reducing the costly back-and-forth conversions between photons and electrons that have historically limited optical computing.
Independent researchers have validated the broader approach. A 2025 nanophotonics review published by Nature examined photonic strategies for scaling AI data centers and identified co-packaged optics and electro-absorption modulators as two of the most promising paths forward. The survey names Lightelligence among the companies working to break through the bandwidth and thermal bottlenecks that electronic interconnects face as chip densities climb. Crucially, the review treats photonic accelerators not as speculative concepts but as functioning prototypes with measurable performance characteristics.
Why the timing matters
The pressure on conventional chip architectures is intensifying. Training a single frontier AI model now requires thousands of GPUs running for weeks, consuming megawatts of electricity. Inference demand is growing even faster as companies deploy large language models in consumer and enterprise products. Electrical interconnects inside data centers are approaching physical bandwidth ceilings, and the energy cost of moving data between chips is becoming a larger fraction of total power consumption than the computations themselves.
Shen’s argument, rooted in his academic work at MIT’s Department of Physics, is that these trends create an opening for photonics that did not exist five years ago. When AI models were smaller and power budgets more forgiving, the complexity of integrating optical components into data center racks was hard to justify. Now, with hyperscale operators scouring the supply chain for any efficiency gain, the calculus is shifting.
What stands between prediction and reality
Working prototypes and peer-reviewed papers are necessary but not sufficient. Several significant gaps separate Lightelligence’s lab results from the future Shen describes.
Manufacturing at scale. Silicon photonics must integrate cleanly with existing semiconductor fabrication lines. Yield rates for optical components produced in volume have not been publicly documented by Lightelligence or its direct competitors, which include Lightmatter, Celestial AI, and Ayar Labs. The Nature nanophotonics review acknowledges that co-packaged optics face integration hurdles when moving from laboratory prototypes to volume production, including alignment tolerances, thermal management, and packaging costs.
System-level integration. A photonic accelerator does not operate in isolation. It must interface with electronic memory, control logic, and networking hardware that remain transistor-based. Orchestrating data movement between optical cores and electronic subsystems is a nontrivial engineering challenge, especially for large language models that depend on sophisticated memory hierarchies. The published literature demonstrates building blocks but does not yet provide a full blueprint for end-to-end photonic systems deployed at hyperscale.
Software ecosystem compatibility. Established cloud providers tend to prefer incremental upgrades over disruptive architectural shifts that require rewriting software stacks. Photonic accelerators will likely need to prove compatibility with existing machine learning frameworks such as PyTorch and JAX, demonstrate predictable performance across diverse workloads, and fit into standard server form factors before they can win large procurement contracts. None of the available sources provides evidence of such customer commitments or large-scale pilot deployments.
Incumbent resistance. NVIDIA, Intel, and AMD have invested tens of billions of dollars in electronic architectures optimized for AI. None has publicly endorsed a timeline in which photonics displaces a large fraction of their products. NVIDIA’s roadmap through its Blackwell and Rubin GPU generations doubles down on electronic scaling paired with high-bandwidth optical links, not optical compute. Until photonic chip makers can demonstrate that their products outperform or undercut GPUs on a cost-per-inference basis, the switching incentive for hyperscalers remains unclear.
The energy question
Energy efficiency is central to the photonics pitch. The physical principles suggest lower power consumption per operation, and data center operators facing utility constraints and sustainability commitments are eager for any technology that bends the power curve. But no peer-reviewed study reviewed here quantifies exact energy reductions at data center scale. Translating component-level efficiency into concrete megawatt savings requires system-level modeling that accounts for cooling, conversion losses, and the electronic subsystems that still surround every photonic core. That modeling has not yet appeared in the primary literature.
Where photonics stands in spring 2026
The verified record supports a measured optimism. Photonic accelerators work. They have been demonstrated in peer-reviewed settings with real latency advantages on AI-relevant tasks. Independent technical reviews confirm that the problems photonics addresses (bandwidth limits, thermal constraints, and interconnect bottlenecks) are genuine and worsening. Shen's credentials are serious: his trajectory from MIT optical neural network research to a venture-backed startup is well documented and grounded in published science.
What the record does not yet support is a specific market-share figure on a specific timeline. The 30 percent projection is best understood as an aspirational scenario from a founder whose company’s valuation depends on that future arriving. It is not backed by independent financial analysis, manufacturing data, or procurement commitments from major cloud providers. Industry analyst firms such as LightCounting and Yole Group track the photonics market closely, and their public estimates for optical compute adoption remain more conservative, though the gap between bullish startup forecasts and cautious analyst projections has been narrowing as AI power demands escalate.
For readers weighing the claim, the distinction that matters is between what the peer-reviewed literature confirms (working photonic accelerators and credible pathways to higher bandwidth and lower latency) and what remains unproven: a future in which light-based processors sit alongside today's dominant electronic AI chips in the racks of the world's largest data centers, capturing nearly a third of the market. The physics is real. The engineering path is plausible. The commercial outcome is still an open question.
*This article was researched with the help of AI, with human editors creating the final content.