A team at the University of Sydney has built a photonic artificial intelligence chip compact enough to sit on a standard silicon wafer yet fast enough to process data at the speed of light. The prototype performs calculations on picosecond timescales and was tested on a biomedical image classification task, a result that signals a potential shift in how AI hardware handles the growing computational demands of healthcare diagnostics. The work, fabricated entirely in-house at the Sydney Nano Hub, represents one of the smallest functioning photonic neural network accelerators reported to date, according to the university’s own announcement.
Light Instead of Electrons
Traditional AI chips rely on electronic transistors to shuttle data through layers of a neural network. Each electrical signal generates heat and encounters resistance, which limits both speed and energy efficiency. Photonic computing sidesteps those constraints by encoding information in light pulses that travel through optical waveguides rather than copper traces. The result is processing that happens at optical speeds with far less energy dissipated per operation.
What distinguishes the Sydney device from other photonic AI efforts is its emphasis on extreme miniaturization. According to the published research, the chip operates at picosecond timescales, meaning individual inference steps complete in trillionths of a second. That speed advantage matters most in time-sensitive clinical settings, where a diagnostic delay of even milliseconds can compound across thousands of scans processed in a single hospital shift. In principle, a photonic accelerator embedded directly in imaging equipment could analyze data as it is acquired, reducing the lag between scan and result.
Inverse Design on Silicon-on-Insulator
The chip’s compact footprint stems from a technique called inverse design. Rather than hand-drawing optical components and hoping they perform well, inverse design starts with a desired output, such as a specific light-scattering pattern, and uses optimization algorithms to work backward to the physical structure that produces it. The approach can yield irregular, non-intuitive geometries that outperform conventional photonic layouts in a fraction of the space.
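The optimization loop at the heart of inverse design can be illustrated with a deliberately simplified sketch. The response matrix, target pattern, and gradient-descent update below are illustrative stand-ins, not the Sydney team's method: a real inverse-design pipeline would replace the linear model with a full electromagnetic simulation of the device.

```python
import numpy as np

# Toy sketch of inverse design: start from a desired output pattern and
# iteratively adjust design parameters until the device produces it.
rng = np.random.default_rng(0)

# Hypothetical device model: output intensity is a fixed linear response
# matrix A applied to the design vector x. (Real devices require a full
# electromagnetic solver here, not a matrix.)
A = rng.normal(size=(8, 16))
target = rng.uniform(size=8)       # desired light-scattering pattern

x = np.zeros(16)                   # design parameters (e.g. etch geometry)
lr = 0.01
for _ in range(2000):
    err = A @ x - target           # mismatch with the target output
    x -= lr * (A.T @ err)          # nudge the design to shrink the mismatch

print(np.linalg.norm(A @ x - target))  # residual error after optimization
```

The point of the sketch is the direction of the workflow: the target comes first, and the geometry is whatever the optimizer converges to, which is why inverse-designed structures often look irregular and non-intuitive.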
According to the peer-reviewed article in Nature Communications, Joel Sved and colleagues, working with Xiaoke Yi at the University of Sydney, experimentally demonstrated inverse-designed nanophotonic neural network accelerators on a silicon-on-insulator platform. Silicon-on-insulator is a well-established fabrication base already used across the semiconductor industry, which lowers the barrier to eventual manufacturing scale-up. The choice of platform is deliberate: it allows the photonic chip to piggyback on decades of existing foundry infrastructure without requiring exotic materials.
An earlier preprint submitted in June 2025 provides a window into the project’s timeline and early technical framing. The gap between that initial disclosure and the peer-reviewed publication in March 2026 suggests the team refined experimental results and responded to reviewer feedback, a standard but telling indicator that the core claims survived external scrutiny. While the preprint laid out the theoretical design and preliminary measurements, the final paper focuses on reproducible performance and robustness across different operating conditions.
Testing Against Medical Imaging Benchmarks
Speed means little without accuracy, so the researchers chose a well-known benchmark to validate their chip. MedMNIST v2 is described in a data descriptor as a large-scale lightweight benchmark for 2D and 3D biomedical image classification that spans multiple medical imaging modalities, from retinal scans to chest X-rays. It is widely used in the machine learning community precisely because its standardized structure makes results comparable across different hardware and software approaches.
By running classification tasks from MedMNIST on the photonic chip, the Sydney team could measure how well an optical neural network handles real medical data rather than synthetic test patterns. This is a meaningful distinction. Many photonic computing demonstrations rely on simple digit-recognition tasks that do not reflect the complexity of clinical images. Choosing a biomedical benchmark signals that the researchers are aiming at practical healthcare applications, not just laboratory curiosities.
The reported experiments involved mapping learned neural network weights onto the optical elements of the chip, so that interference and phase shifts between light paths implemented the core matrix multiplications. Classification accuracy was then compared to an equivalent electronic implementation. While the optical system is still a prototype, achieving competitive performance on MedMNIST indicates that photonic accelerators can do more than toy problems; they can tackle noisy, heterogeneous medical data where errors carry real-world consequences.
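How interference and phase shifts can stand in for multiplication is easiest to see in a single Mach-Zehnder interferometer, a standard building block in photonic circuits. The sketch below is an assumed, generic architecture rather than the paper's exact circuit: two phase shifters and two 50:50 beamsplitters apply a 2x2 unitary matrix to a pair of optical amplitudes, and meshes of such units compose larger matrix multiplications.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 unitary of a Mach-Zehnder interferometer: a phase shift phi at
    the input and a phase shift theta between two 50:50 beamsplitters."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])      # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])        # input phase shifter
    return bs @ inner @ bs @ outer

x = np.array([0.6, 0.8])     # input light amplitudes (total power = 1)
U = mzi(theta=0.7, phi=1.3)  # tuning the phases selects the matrix
y = U @ x                    # the "multiplication" happens via interference

# A lossless optical circuit is unitary, so total power is conserved.
print(np.abs(y) ** 2, np.sum(np.abs(y) ** 2))
```

In a physical chip the phases theta and phi are set by heaters or other tuning elements, which is what "mapping learned weights onto the optical elements" amounts to in practice.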
Where the Chip Was Made
The device was fabricated at the Research and Prototype Foundry, located inside the Sydney Nanoscience Hub. That facility is a member of the Australian National Fabrication Facility network, which provides micro- and nano-fabrication capabilities to researchers across the country. The ANFF network includes nodes with specialized photonics expertise, giving Australian research teams access to tooling that would otherwise require partnerships with overseas foundries.
Building the chip in-house carries strategic significance beyond convenience. It means the design-to-fabrication feedback loop is tight: the same institution that conceived the inverse-designed structures also etched them into silicon, tested them optically, and validated them against the benchmark dataset. That vertical integration accelerates iteration and reduces the risk of design intent being lost in translation between a university lab and a contract manufacturer. It also helps protect intellectual property and positions the university as a potential partner for local startups looking to commercialize photonic AI hardware.
How This Fits the Broader Photonic AI Race
The Sydney chip is not the only photonic AI accelerator making headlines. A separate study published in Nature describes a large-scale photonic accelerator with ultralow latency that takes a different architectural approach, prioritizing scale and integration density over the ultra-compact form factor that defines the Sydney work. The contrast is instructive. One camp in photonic computing bets that bigger optical circuits with more components will deliver the raw throughput needed for large language models and other parameter-heavy workloads. The Sydney group, by contrast, bets that shrinking individual photonic neurons through inverse design can deliver useful AI inference in a package small enough for edge deployment, such as a portable diagnostic device in a rural clinic.
Neither approach has yet proven dominant, and the two strategies may ultimately serve different markets. Large-scale photonic accelerators could compete with GPU clusters in data centers, where power consumption and cooling are major cost drivers. Ultra-compact chips like the Sydney prototype could target point-of-care medical devices, autonomous sensors, or satellite payloads where size, weight, and power consumption are hard constraints. In those environments, even modest neural networks that run instantly and sip power can be transformative.
There are still hurdles to clear before photonic AI moves beyond the lab. Integrating light-based accelerators with conventional electronic control logic remains a challenge, as does building reliable optical input and output interfaces for real-world systems. Training neural networks directly in the optical domain is another open problem; most current demonstrations, including the Sydney chip, rely on training in software and then porting weights to hardware. Nonetheless, the combination of inverse design, silicon-on-insulator fabrication, and rigorous benchmarking against medical datasets marks this work as a significant step toward practical, light-powered AI.
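The train-in-software, port-to-hardware workflow mentioned above can be sketched in a few lines. Everything here is an illustrative assumption (the weight matrix, the 6-bit phase-shifter resolution, the uniform quantization scheme): the sketch only shows why porting introduces a hardware-versus-software mismatch that must be checked.

```python
import numpy as np

# Weights learned offline in software (a random stand-in here), then
# quantized to the assumed finite precision of on-chip phase settings.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))       # offline-trained layer weights
x = rng.normal(size=8)            # one input sample

bits = 6                          # assumed phase-shifter resolution
scale = np.abs(W).max()
levels = 2 ** bits - 1
W_chip = np.round(W / scale * levels) / levels * scale  # "on-chip" weights

y_sw = W @ x                      # software reference output
y_hw = W_chip @ x                 # output after porting to hardware
print(np.max(np.abs(y_sw - y_hw)))  # quantization-induced mismatch
```

Closing that mismatch, or better yet training directly against the hardware's own physics, is exactly the open problem the paragraph above describes.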
*This article was researched with the help of AI, with human editors creating the final content.*