Morning Overview

Google’s AI chip plans could boost Marvell’s custom silicon role

Google has spent years building its own AI processors, and each new generation of its Tensor Processing Units demands more complex engineering than the last. That trajectory is creating a widening opportunity for Marvell Technology, the chipmaker that has quietly assembled one of the deepest custom silicon design operations in the semiconductor industry. With Marvell’s fiscal year 2026 now on the books and Google pushing ahead on next-generation TPU hardware, the relationship between the two companies is becoming a focal point for investors trying to gauge where AI chip spending goes next.

Marvell’s custom silicon foundation

Marvell filed its Form 10-K with the SEC on March 11, 2026, covering the fiscal year ended January 31, 2026. The filing and its associated exhibits trace a corporate structure that has been deliberately reshaped around data center and AI opportunities over the past several years.

That reshaping did not happen overnight. Marvell’s acquisition of Inphi in 2021 brought high-speed electro-optics critical to data center interconnects. Its earlier purchase of Cavium added Arm-based processor design teams, and the Innovium deal delivered switching silicon for cloud networking. An Exhibit 21 from a prior annual report lists the subsidiaries forming Marvell’s operational backbone, while a subsidiary schedule from 2017 shows the foundational IP portfolio the company held before AI-driven demand reshaped the market. Comparing the two reveals how aggressively Marvell expanded its design capabilities in the intervening years.

A quarterly exhibit from 2019 captures a midpoint in that expansion, documenting Marvell’s growing data center footprint at a time when AI workloads were just beginning to shift procurement decisions at the largest cloud operators. Today, the company’s subsidiary network spans advanced process node design, high-speed interconnects, and complex system-on-chip integration, exactly the toolkit a hyperscaler needs from a long-term custom chip partner.

Why Google’s TPU roadmap matters

Google’s Tensor Processing Units have evolved from internal experiments into production-scale AI accelerators deployed across the company’s cloud infrastructure. The sixth-generation TPU, known as Trillium, began rolling out in late 2024 and represented a significant leap in performance per watt over its predecessor. Industry analysts expect Google to continue this cadence, with future TPU generations requiring increasingly specialized design work at leading-edge process nodes.

Custom silicon, where a chip designer builds processors tailored to a single customer’s workload, has become one of the semiconductor industry’s fastest-growing segments. Research firm Gartner projected that AI chip revenue would surpass $70 billion in 2025, and a meaningful slice of that spending is flowing toward custom ASICs rather than off-the-shelf GPUs. Google, Amazon (with its Trainium and Inferentia chips), and Microsoft (with its Maia accelerator) have all committed to proprietary silicon strategies, each requiring deep partnerships with design houses capable of executing at the bleeding edge.

Marvell has positioned itself squarely in that lane. CEO Matt Murphy has spoken on multiple earnings calls about the company’s “custom AI accelerator programs” with top-tier cloud customers, describing a pipeline that extends several years into the future. While Marvell does not name individual partners in its SEC filings, the company disclosed in its fiscal Q4 2026 earnings call that its data center segment generated record revenue, driven in large part by custom silicon and electro-optics. Murphy has described the custom AI opportunity as a “multi-billion-dollar” revenue stream for Marvell over the coming years.

What the filings confirm and what they don’t

SEC filings carry legal weight that press releases and analyst estimates do not. When Marvell lists a subsidiary in an Exhibit 21 or an officer signs a certification under Sarbanes-Oxley, the company is making a legally binding representation about its corporate structure and the accuracy of its disclosures. Officer certifications associated with Marvell’s filings from recent years confirm executive accountability for strategic disclosures, including statements about partnerships and revenue concentration.

That said, no primary SEC filing from Google directly discloses the terms, scope, or dollar value of any contract with Marvell for custom AI chip design. The connection between the two companies is widely discussed by analysts and in trade press, and Marvell’s own public commentary strongly implies major hyperscaler engagements, but the specific financial relationship has not been confirmed through Google’s regulatory documents. Readers should treat claims about the size or exclusivity of any Google-Marvell arrangement with that caveat in mind.

Revenue concentration is another open question. Annual reports typically flag when a single customer accounts for 10% or more of total sales, and Marvell has acknowledged significant customer concentration in past filings. But the specific breakdowns for fiscal year 2026 are not detailed in the exhibits reviewed here. Without that granularity, it is difficult to measure how exposed Marvell would be if Google shifted more chip design work in-house or turned to a competing partner like Broadcom, which runs its own large custom ASIC business serving hyperscalers including Google.

The competitive landscape

Marvell is not the only company chasing custom AI silicon revenue. Broadcom has been the dominant player in this space, with its custom chip division generating billions in annual revenue from partnerships with Google, Meta, and others. Broadcom CEO Hock Tan has described the company’s custom AI chip pipeline as a major growth driver, and some analysts estimate Broadcom’s addressable market in custom accelerators could reach $60 billion to $90 billion by the end of the decade.

For Marvell, the competitive question is whether it can carve out a durable share of that market or whether Broadcom’s scale and incumbency will limit its upside. Marvell’s advantage lies in its integrated portfolio: the company can offer not just ASIC design but also the optical interconnects, switching silicon, and SerDes (serializer/deserializer) technology that surround a custom chip inside a data center. That bundled capability could make Marvell a more attractive partner for hyperscalers looking to simplify their supply chains.

There is also the question of whether the custom ASIC boom is sustainable. If AI spending normalizes or cloud providers revert to more standardized hardware to control costs, the economics of bespoke chips could shift. A company like Marvell, which has invested heavily in specialized design resources, might face underutilization risk unless it can reassign engineering teams to new programs quickly. The same hyperscalers that embraced custom silicon to reduce dependence on Nvidia’s GPU dominance are unlikely to lock themselves into another concentrated supplier relationship without maintaining optionality.

Where the debate stands for investors

The bull case for Marvell rests on structural positioning. The company has spent years and billions of dollars assembling a design operation purpose-built for the custom AI chip era. Leading-edge ASIC projects take two to three years from design start to production, creating high switching costs that make revenue streams relatively sticky once a program is underway. If Google and other hyperscalers continue expanding their proprietary chip efforts, Marvell’s pipeline should convert into sustained revenue growth through fiscal 2027 and beyond.

The bear case centers on concentration risk and visibility. Marvell’s biggest customers have enormous leverage, and their internal roadmaps remain largely opaque to outside investors. A decision by Google to consolidate more design work with Broadcom, or to build deeper in-house capabilities, could materially alter Marvell’s trajectory. The regulatory record supports the claim that Marvell has been expanding its data center and design capabilities for years, but it does not confirm which cloud providers are driving that growth, how revenue is distributed among them, or how long any individual program is contractually guaranteed to last.

Navigating the tension between verified structural strength and unresolved customer concentration risk will define how Marvell’s stock trades as the AI hardware cycle matures through the rest of 2026. For now, the company’s filings paint a picture of deliberate, well-documented preparation for a market that shows no signs of slowing down. Whether that preparation translates into the kind of durable, diversified revenue stream that justifies Marvell’s valuation is the question the next few quarters should begin to answer.

*This article was researched with the help of AI, with human editors creating the final content.