
Bill Gates is backing a bold bet that the next leap in computing power will not come from squeezing more transistors onto silicon, but from replacing electrons with light. The startup Neurophos is promising optical chips that could push performance and efficiency far beyond what traditional processors can deliver, reviving the spirit of Moore’s law just as conventional scaling runs out of road. If it works, the technology could reshape how artificial intelligence models are trained and deployed, from cloud data centers to edge devices.
Instead of chasing ever smaller features on standard CMOS, Neurophos is building a new kind of processor that uses photons to perform the heavy linear algebra at the heart of modern AI. The company’s leaders argue that this approach can deliver orders of magnitude better energy efficiency, potentially changing the economics of everything from large language models to real-time inference in cars and industrial systems. I see this as one of the clearest attempts yet to turn decades of photonics research into a commercial platform aimed squarely at the AI boom.
Bill Gates, big money and a Texas photonics bet
The scale of the funding behind Neurophos signals how seriously investors are taking optical computing as a way to keep performance climbing. A fund backed by Bill Gates has led a $110 million investment into the company, giving the young business both capital and validation at a moment when AI hardware is crowded with contenders. That level of backing for a still-emerging architecture suggests that major technology investors now see the limits of conventional chips as an opportunity rather than a constraint, and are willing to underwrite riskier bets that promise step changes in efficiency.
Neurophos is based in the Austin area, positioning itself inside one of the United States' fastest-growing semiconductor and AI corridors. The company describes its core product as an optical processing unit, or OPU, a chip that relies on photons instead of electrons to carry out the matrix multiplications that dominate neural network workloads, and it is pitching this device as a future alternative to power-hungry accelerators in large data centers. The recent funding round, led by the Bill Gates-backed vehicle, values the effort to build this OPU architecture at a multibillion-dollar level, according to funding disclosures.
From invisibility cloaks to AI accelerators
What makes Neurophos unusual is the path it took to reach AI hardware. The company is a photonics startup spun out of Duke University and Metacept, an incubator founded by Duke metamaterials researcher David Smith, with roots in metamaterials research that included work on so-called invisibility cloaks capable of bending light in exotic ways. That background in manipulating electromagnetic waves at fine scales is now being redirected toward building dense arrays of optical elements that implement the linear transformations used in neural networks. I see this as a classic example of a deep tech pivot, where tools developed for one frontier application are repurposed for a much larger commercial market.
Today, Austin-based Neurophos is focused on tiny optical processors for AI inference, arguing that its photonic cores can slot into existing systems while offloading the most power-hungry parts of the computation. The company's leaders acknowledge that photonic chips are nothing new in research labs, but they claim their approach to integrating these devices into practical, programmable accelerators is what sets them apart. The link between the company's Duke University and Metacept heritage and its current AI push is explicit in company descriptions, which trace the journey from cloaking devices to optical processors.
Inside the optical processing unit
At the heart of the Neurophos pitch is the idea that an optical processing unit can execute matrix operations with far less energy than a conventional GPU. Instead of switching billions of transistors on and off, the OPU routes light through a mesh of waveguides and phase shifters that naturally perform the required multiplications and additions as photons interfere. This is not a general-purpose CPU replacement, but a specialized engine for the linear algebra that dominates AI workloads, designed to sit alongside more traditional logic that handles control and nonlinear functions. In principle, this architecture lets the chip process vast numbers of operations in parallel without the resistive losses that plague electronic interconnects.
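As a rough illustration of how interference can carry out this math, the sketch below models a single Mach-Zehnder interferometer, the standard 2x2 building block of programmable photonic meshes, as a unitary matrix acting on complex optical field amplitudes. Neurophos has not published its circuit details, so the structure, phase settings, and values here are illustrative assumptions, not the company's design.

```python
import numpy as np

# Illustrative sketch only: a Mach-Zehnder interferometer (MZI) acts as a
# programmable 2x2 unitary, and meshes of such blocks compose into larger
# matrix-vector products. All phase values below are arbitrary examples.

def beam_splitter():
    # Transfer matrix of an ideal 50:50 beam splitter
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase_shift(phi):
    # Phase shifter acting on the upper waveguide arm
    return np.diag([np.exp(1j * phi), 1.0])

def mzi(theta, phi):
    # MZI = splitter, internal phase, splitter, input phase
    return beam_splitter() @ phase_shift(theta) @ beam_splitter() @ phase_shift(phi)

# "Program" a 2x2 unitary with phase settings, then apply it to an input
# optical field (complex amplitudes): the multiply-accumulate happens as
# the two paths interfere.
U = mzi(theta=0.7, phi=1.3)
x = np.array([1.0 + 0j, 0.5 + 0j])
y = U @ x

# Interference is lossless: output power equals input power.
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))  # True
```

Because the transfer matrix is unitary, output power matches input power, which is the physical intuition behind the claim that photonic meshes can perform these operations without resistive losses; larger matrices are realized by cascading many such 2x2 blocks.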
Neurophos describes its device as a photon-based Optical Processing Unit that could eventually replace GPUs in AI training, not just inference, if the technology scales as planned. The company's materials stress that the OPU is intended to plug into existing accelerator slots and software stacks, so that data center operators can treat it as another class of AI card rather than a completely alien system. That ambition is reflected in coverage highlighting how the Austin-based Neurophos team is positioning its Optical Processing Unit as a drop-in replacement for some GPU roles in large-scale AI clusters.
Chasing Moore’s law with photons
Neurophos is not shy about its ambition to revive the trajectory that Gordon Moore described, even as transistor scaling slows. The company’s leaders argue that by moving the core math of AI into the optical domain, they can keep effective performance per watt doubling at a rapid clip, even if traditional CMOS improvements stall. Neurophos CEO Patrick Bowen has framed this as a way to keep the spirit of Moore’s law alive by changing the underlying device physics, rather than by forcing ever smaller features onto silicon. In his view, the industry has reached a point where new materials and modalities are needed to sustain the exponential growth that AI demand now expects.
Bowen has described how Neurophos is developing a massive optical transistor array that can be fabricated using processes compatible with CMOS today, which is crucial if the technology is to scale beyond bespoke prototypes. He has also emphasized that the company’s optical transistors are intended to integrate with existing manufacturing ecosystems, rather than requiring entirely new fabs, a point that matters for both cost and adoption. The ambition to bend Moore’s law back upward using such optical transistors is laid out in detail in technical briefings that quote Patrick Bowen directly.
Performance numbers, customer interest and the road ahead
For any new chip architecture, bold claims only matter if they translate into measurable performance, and Neurophos has begun to share early figures. A photonic AI test chip from the company has reportedly hit 300 tera-operations per second per watt, or 300 TOPS per watt, a level of power efficiency that far exceeds what mainstream electronic accelerators deliver today. That metric is particularly important for inference at the edge, where power budgets are tight, but it also matters in data centers where electricity and cooling are major operating costs. If those numbers hold up in production devices, they would give Neurophos a compelling story for cloud providers and large AI users who are struggling with the energy footprint of current models.
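For a sense of scale, the reported figure can be translated into energy per operation with a one-line unit conversion. The 300 TOPS per watt number comes from the reporting above; everything else here is straightforward arithmetic.

```python
# Back-of-envelope: what 300 TOPS per watt means in energy per operation.
# Since 1 watt = 1 joule per second, TOPS/W is tera-operations per joule.
tops_per_watt = 300
ops_per_joule = tops_per_watt * 1e12
joules_per_op = 1.0 / ops_per_joule
print(f"{joules_per_op * 1e15:.2f} fJ per operation")  # 3.33 fJ per operation
```

In other words, the claimed chip would spend on the order of a few femtojoules per operation, which is why the figure draws attention for both battery-constrained edge devices and power-capped data centers.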
Those efficiency figures are already drawing attention from potential customers, who have engaged with Neurophos through early access and evaluation programs at the company’s Texas headquarters. The interest is not limited to a single sector, since any organization running large neural networks stands to benefit from higher performance per watt, whether in recommendation systems, language models or computer vision. Reports on the 300 TOPS per watt test chip highlight this growing customer traction, noting that the power efficiency numbers are a key driver of those conversations.