Morning Overview

Report: Goldman blocks Anthropic’s Claude for Hong Kong bankers

Goldman Sachs has blocked its Hong Kong staff from using Anthropic’s Claude, cutting off an AI coding assistant that, according to Bloomberg’s late-April 2026 report, had become a daily tool for the bank’s software engineers in the city.

The restriction is geographic, not role-based. Any Goldman employee physically present in Hong Kong loses access to Claude, including engineers who normally use the tool at other offices and happen to be visiting. The bank has not publicly explained the decision, Anthropic has not commented, and neither company responded to requests for clarification from the outlets that have covered the story.

The timing is striking. Hong Kong's banking regulator, the Hong Kong Monetary Authority (HKMA), has spent the past year actively encouraging financial institutions to adopt generative AI. A survey conducted by the regulator and published in April 2025 found that 75% of institutions under its oversight had already implemented or piloted at least one generative AI use case. The HKMA report does not disclose the exact number of institutions surveyed or the methodology in detail, so readers should treat the figure as directional rather than precise. That one of Wall Street's most prominent firms is now pulling a specific AI product out of the same jurisdiction creates an awkward split screen.

What Goldman did and what it did not do

According to Bloomberg’s reporting, the block targets a single product in a single city. Goldman has not banned Claude firmwide, nor has it announced restrictions on other AI tools in Hong Kong. Engineers at the bank’s New York, London, or Bangalore offices can apparently still use Claude as before. Bloomberg’s report does not specify which other AI tools, if any, Goldman continues to permit in Hong Kong, so it is unclear whether staff there retain access to alternatives such as OpenAI’s products or internally built models.

That narrow scope matters. It suggests Goldman’s compliance team flagged something specific to Claude’s operation in Hong Kong rather than a blanket concern about generative AI. But without a public statement from the bank, the precise trigger remains unknown.

Bloomberg’s report does not specify how many Hong Kong-based engineers relied on Claude, what tasks they used it for most, or whether Goldman has approved a substitute tool. For affected staff, the practical fallout is immediate: anyone who depended on Claude for code generation, debugging, or documentation needs a workaround now.

Why Hong Kong, and why now

Goldman has not stated its reasons, and no legal expert, regulator, or company official has spoken on the record about the decision. Several possible explanations have circulated in coverage of the story, though none has been confirmed.

Cross-border data flows. Hong Kong’s Personal Data (Privacy) Ordinance governs how personal data can be transferred outside the jurisdiction. Anthropic’s servers are based in the United States, and the company has not publicly detailed whether it routes or stores data locally for Hong Kong-based enterprise clients. If Goldman’s risk team concluded that Claude’s data-handling architecture posed unacceptable exposure under Hong Kong privacy law, a location-based block would be a straightforward fix. No source has confirmed this reasoning.

U.S. export controls. Washington has steadily tightened restrictions on advanced AI technology reaching China. Hong Kong operates under a separate legal framework, but its proximity to the mainland has drawn increasing scrutiny from U.S. policymakers. No available reporting ties Goldman’s decision to any specific export rule or advisory.

Internal risk appetite. Large banks routinely restrict specific vendor tools in specific jurisdictions based on internal assessments that never become public. Goldman may simply have concluded that the governance framework around Claude in Hong Kong did not yet meet its own standards, independent of any external regulatory pressure.

None of these explanations can be verified with the information currently available. Bloomberg’s report does not attribute the decision to any particular legal or regulatory concern.

What the HKMA’s data actually shows

The HKMA's 75% figure deserves context. The regulator published its "Financial Services in the Era of Generative AI" report in April 2025, a full year before Bloomberg's Goldman story. The survey captured a snapshot of adoption at that moment, and the number has likely shifted since. Because the HKMA has not published the survey's sample size, response rate, or selection criteria, the figure should be read as an indicator of broad momentum rather than a statistically rigorous benchmark. Even so, it remains the most authoritative public measure of how deeply generative AI has penetrated Hong Kong's banking sector.

The HKMA has not commented on Goldman’s specific restriction. Its report addresses the industry broadly, emphasizing “responsible adoption” without singling out any firm or product. Whether the regulator views Goldman’s move as consistent with that framework or as an unwelcome signal to the market is an open question.

What peer banks have said

JPMorgan, Morgan Stanley, Citigroup, and HSBC all operate significant technology and banking teams in Hong Kong. As of May 2026, none has publicly announced a comparable restriction on Claude or any other generative AI tool in the city, and none has publicly commented on Goldman’s decision. That does not mean internal reviews are absent. Large banks rarely telegraph compliance decisions before they take effect, and Goldman’s move could prompt peer institutions to re-examine their own AI vendor arrangements in the jurisdiction. No reporting to date has detailed what any of these banks has said about Claude specifically.

The broader pattern across global finance is one of uneven adoption. Banks have moved quickly to deploy generative AI tools for coding, research summarization, and client communication, but the legal and regulatory landscape governing those tools varies sharply from one market to the next. Data protection rules, model-training transparency requirements, and cross-border transfer restrictions all differ by jurisdiction, creating a patchwork that compliance teams must navigate tool by tool and city by city.

How Goldman’s Claude block tests Hong Kong’s fintech ambitions

For Hong Kong, the episode exposes a vulnerability in its push to become a regional hub for AI-driven financial services. The HKMA can publish frameworks and encourage experimentation, but it cannot control the internal risk calculations of global banks. If more firms quietly restrict specific AI products in the city, Hong Kong’s appeal as an innovation-friendly financial center could erode not because of heavy-handed regulation but because of accumulated caution among the institutions the city most wants to attract.

Goldman’s geographic approach to the restriction also raises a practical question for multinational teams. An engineer who relies on Claude in New York loses the tool the moment a flight lands at Hong Kong International Airport. That kind of fragmented experience complicates collaboration, slows projects, and forces teams to maintain parallel workflows. If the restriction persists, Goldman will need to decide whether to standardize its AI toolkit globally or accept the productivity cost of location-by-location variation.

Until Goldman or Anthropic breaks its silence, the story will be defined as much by what is missing as by what is known. A widely used AI assistant has gone dark for one of the world’s most powerful investment banks in one of Asia’s most important financial capitals. The regulator wants more AI in banking. The bank, for reasons it has not shared, wants less of this particular AI in this particular city. Other institutions, regulators, and technologists are watching that gap closely, because how it closes will say a great deal about where generative AI can and cannot operate in global finance.


*This article was researched with the help of AI, with human editors creating the final content.