A hybrid power converter design credited with 96.2% peak efficiency has entered the patent record as multiple semiconductor companies race to cut the enormous electricity losses that plague AI data centers. Texas Instruments and Amber Semiconductor are each pushing distinct but complementary approaches to shrink the gap between the power that enters a server rack and the power that actually reaches an AI processor. The stakes are straightforward: every percentage point of efficiency recovered at scale translates into millions of dollars in saved electricity and reduced strain on already stretched power grids.
What is verified so far
The 96.2% efficiency figure traces to a specific patent filing. U.S. patent application 17/123,417, titled “Power converter,” cites publications describing a hybrid dual-path converter that achieved that peak efficiency mark. The design combines two conversion paths in a single topology to handle the steep voltage drop from high-voltage bus lines down to the sub-1 V levels that modern AI chips demand. The public record for this application, available through the USPTO’s Global Dossier, confirms that the research is cited and that the efficiency figure appears within it, but it does not contain independent lab validation of the number in a full data center environment.
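To see why a two-path design can lift efficiency at all, a bit of arithmetic helps. The sketch below is a generic toy model, not a description of the patented topology, whose internals the public record reviewed here does not detail, and every number in it is invented: if a converter routes most of its power through a highly efficient path and the remainder through a less efficient regulating path, the blended efficiency is simply the power-weighted average of the two.

```python
# Generic arithmetic only: NOT a description of the patented topology.
# Every number below is invented for illustration.

def blended_efficiency(weight_path1, eta_path1, eta_path2):
    """Power-weighted efficiency of a toy two-path converter."""
    return weight_path1 * eta_path1 + (1 - weight_path1) * eta_path2

# Hypothetical split: 70% of the power through a very efficient
# fixed-ratio path, 30% through a less efficient regulating path.
print(f"{blended_efficiency(0.70, 0.98, 0.93):.1%}")  # prints 96.5%
```

Landing in the mid-90s in this toy model says nothing about the patented design; it only shows that a figure in that range is arithmetically plausible for a two-path approach.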
Separately, Texas Instruments announced a complete 800 VDC power architecture developed alongside NVIDIA’s reference design. In a release distributed via PR Newswire, TI describes an 800 V-to-6 V bus converter with a stated peak efficiency figure, targeting future-generation AI data centers. The 800 VDC approach is framed as a way to reduce the number of conversion stages between utility power and the GPU rack, cutting losses at each step and simplifying cabling and distribution inside the facility.
Amber Semiconductor, Inc. has also entered this space with a direct 50 VDC-to-0.8 VDC solution designed to operate at very high current. According to a Business Wire statement, the company aims to improve rack-to-chip efficiency by reducing conversion steps and moving high-performance conversion closer to the AI processor itself. AmberSemi plans to demonstrate the solution at APEC 2025, positioning it as a potential building block for future high-density racks.
These three data points form a clear pattern. The patent-cited hybrid converter addresses the theoretical ceiling for a single conversion stage. TI’s 800 VDC architecture tackles the system-level problem of getting high-voltage DC power from the building’s electrical infrastructure to the rack. AmberSemi’s 50 V-to-0.8 V converter targets the final, most loss-prone step between the rack bus and the chip itself. Together, they represent a layered attack on the same problem: too much electricity is wasted before it ever performs a useful computation.
What remains uncertain
The 96.2% figure, while present in the patent citation trail, has not been independently verified in a production data center setting based on available sources. Patent filings routinely cite prior academic and industry publications to establish technical context, but a citation is not the same as a validated benchmark under real-world thermal and load conditions. No primary research paper with full test methodology and peer review has been identified in the available record to confirm the number outside of patent documentation, and the patent itself does not include a full-scale deployment study.
NVIDIA’s role also requires careful framing. TI’s announcement connects the 800 VDC architecture to NVIDIA’s reference design, indicating some level of collaboration on how power is delivered to GPUs. However, no direct statement or official record from NVIDIA on integration testing with the hybrid dual-path converter has surfaced in the sources reviewed. The relationship appears to be a design partnership around system architecture rather than a confirmed product endorsement of any specific efficiency claim, including the 96.2% figure from the patent trail. Readers should therefore treat the NVIDIA connection as evidence of ecosystem alignment, not as independent validation.
For AmberSemi, the 50 VDC-to-0.8 VDC solution remains pre-commercial. The company has provided claimed baselines and efficiency targets for rack-to-chip delivery, but no regulatory filings, volume shipping dates, or third-party test reports have been disclosed in the referenced material. Whether the solution can scale to the hundreds-of-kilowatt rack densities that next-generation AI training clusters require is an open question. Likewise, no primary source quotes from the patent holders on scalability limits of the dual-path design for hyperscale environments appear in the available record, leaving a gap between lab-scale demonstrations and full data center deployment.
A broader uncertainty is the absence of any independent study comparing these approaches head-to-head under identical conditions. Each company’s efficiency claims use different input voltages, output voltages, and load profiles, making direct comparison difficult without standardized testing protocols. For example, an 800 V-to-6 V bus converter operating at one power level cannot be cleanly compared to a 50 V-to-0.8 V point-of-load device at another power level without a common methodology. The lack of such a comparison means that any claim about combined end-to-end efficiency gains from stacking these technologies remains speculative.
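One reason the comparison is so slippery is that converter efficiency varies with load, and each vendor is free to quote its peak wherever that peak happens to occur. The sketch below uses invented curves and invented numbers purely to illustrate the problem; it does not model any of the actual devices.

```python
# Toy illustration of why peak-efficiency claims are hard to compare.
# Converter efficiency varies with load, and each vendor quotes its peak
# wherever it occurs. The curve shape and all numbers are invented.

def toy_efficiency(load, peak_eta, peak_load, sag=0.3):
    """Made-up efficiency curve that falls off away from its peak load."""
    return peak_eta - sag * (load - peak_load) ** 2

for load in (0.3, 0.5, 0.8, 1.0):
    bus = toy_efficiency(load, peak_eta=0.975, peak_load=0.5)  # hypothetical bus converter
    pol = toy_efficiency(load, peak_eta=0.920, peak_load=0.8)  # hypothetical point-of-load stage
    print(f"load {load:>4.0%}: bus {bus:.1%}, point-of-load {pol:.1%}")
```

Two devices with impressive but differently located peaks can rank in either order at the load a real cluster actually presents, which is why standardized test conditions matter.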
How to read the evidence
The strongest evidence here is structural, not experimental. A patent filing with application number 17/123,417 cites specific publications on the hybrid dual-path converter and its 96.2% peak efficiency. That citation trail confirms that the number exists in technical literature and that the design has been considered credible enough to anchor a patent application. However, patent citations serve a legal function, establishing prior art and scope of claims, rather than a scientific one. The efficiency figure should therefore be treated as a plausible engineering target or lab result under controlled conditions, not as a guaranteed specification in the demanding, thermally constrained environment of an AI data center.
Corporate announcements require similar caution. TI’s release is a primary source for what the company has publicly committed to build and how it frames its collaboration with NVIDIA. It establishes that an 800 VDC architecture is part of TI’s roadmap and that it is being positioned for future AI data centers. But as with most corporate communications distributed through services like PR Newswire, the document is written from the company’s perspective and is not a substitute for independent testing or regulatory review.
AmberSemi’s statement falls into the same category. The Business Wire release shows that the company is developing a 50 V-to-0.8 V converter and intends to demonstrate it at a major power electronics conference, which is a meaningful signal of technical ambition. Yet until third-party labs, hyperscale operators, or standards bodies publish test data, the performance of that converter under realistic workloads remains a claim rather than an established fact. Internal benchmarks, even when shared with partners under non-disclosure agreements, do not carry the same weight as publicly verifiable results.
Another layer of context comes from how these announcements are disseminated and accessed. Wire services such as PR Newswire and Business Wire make it easier for companies to broadcast technical claims directly to investors, journalists, and potential customers. That convenience can blur the line between marketing language and engineering reality. Readers should distinguish between the existence of an announcement (which is verifiable) and the performance claims within it, which may require independent confirmation.
What makes these developments noteworthy is not any single efficiency number but the convergence of multiple actors attacking the same bottleneck from different angles. Traditional data center power delivery uses a chain of conversions, from AC utility power to high-voltage DC, then to intermediate DC buses, and finally down to the sub-1 V levels that processors need. Each conversion stage introduces losses, often a few percentage points at a time, which compound over the full chain. As AI workloads drive rack power from tens of kilowatts toward hundreds, those compounded losses translate into substantial wasted energy and additional cooling overhead.
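The compounding is easy to make concrete. In the sketch below, end-to-end efficiency is just the product of per-stage efficiencies; every stage figure is an assumption chosen for illustration, not a measured number from TI, AmberSemi, or the cited patent.

```python
# A minimal sketch of how per-stage losses compound across a power chain.
# Every stage efficiency below is an illustrative assumption.

from math import prod

# Hypothetical traditional chain: AC-to-DC rectification, an intermediate
# bus conversion, and a final point-of-load step-down to sub-1 V.
traditional = [0.96, 0.96, 0.90]

# Hypothetical consolidated chain with one fewer conversion stage.
consolidated = [0.962, 0.92]

print(f"traditional:  {prod(traditional):.1%}")   # ~82.9%
print(f"consolidated: {prod(consolidated):.1%}")  # ~88.5%
```

In this toy chain, dropping one stage recovers several percentage points, which is the system-level logic behind consolidating conversion steps, though the gain obviously depends on the efficiencies assumed for each stage.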
In that context, the hybrid dual-path converter can be viewed as an attempt to push the theoretical efficiency of a single stage as high as possible, while TI’s 800 VDC architecture reduces the number of stages required to move power from the building entrance to the rack. AmberSemi’s focus on 50 V-to-0.8 V conversion, meanwhile, targets the last step closest to the chip, where currents are highest and resistive losses are most acute. If even a fraction of the claimed gains from each layer prove out under independent testing, the combined effect could meaningfully reduce the energy footprint of AI infrastructure.
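The point about resistive losses can also be made with straightforward arithmetic. The sketch below assumes a hypothetical 1 kW processor fed at 0.8 V; the resistance values are invented, but the underlying physics, conduction loss scaling with the square of current, is standard.

```python
# Why the final step is the most loss-prone: at sub-1 V outputs, even tiny
# path resistances dissipate serious power. The power level and resistance
# values are invented; the I^2 * R physics is standard.

power_w = 1000.0    # hypothetical draw of one AI processor
voltage_v = 0.8     # output level named in AmberSemi's announcement
current_a = power_w / voltage_v  # 1250 A

for resistance_ohm in (0.0002, 0.0001, 0.00005):
    loss_w = current_a ** 2 * resistance_ohm  # conduction loss, I^2 * R
    print(f"{resistance_ohm * 1000:.2f} mOhm path: {loss_w:.1f} W lost")
```

At 1250 A, every tenth of a milliohm in the delivery path burns roughly 156 W, which is why shortening the high-current run between converter and chip matters as much as the converter’s own efficiency.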
For now, though, the public record stops short of that confirmation. The patent documentation and press releases establish that these technologies exist, that they have attracted corporate investment, and that they are being positioned as solutions for AI data center power delivery. They do not yet establish how the devices perform when exposed to the variability, heat, and uptime demands of production-scale AI clusters. Until standardized benchmarks and third-party evaluations are available, the most responsible reading is to see these efforts as promising steps toward higher efficiency, rather than as settled answers to the power challenges of AI.
*This article was researched with the help of AI, with human editors creating the final content.