
OpenAI is reportedly preparing to impose new restrictions on its artificial intelligence model, ChatGPT, that would prevent it from dispensing medical, legal, and financial advice. The move is part of a broader safety initiative prompted by increasing legal risk and regulatory scrutiny of tech companies. The changes underscore OpenAI’s recognition of the model’s limitations, as the company seeks to stop it from providing specific advice in areas that require professional expertise.

Background on ChatGPT’s Evolution

Since its inception, ChatGPT has seen a rapid rise in popularity, increasingly handling sensitive queries that have led to calls for restrictions on advice in regulated fields such as medicine and law. There have been instances where the AI’s responses blurred the lines between general information and professional advice, setting the stage for the upcoming policy shift.

In contrast to the current focus on limiting advisory outputs, the perspective in 2023 was more about integrating ChatGPT into education without outright bans. This approach highlighted the potential of AI as a tool for learning, but also underscored the need for careful management and oversight, especially when dealing with sensitive topics.[source]

Details of the Proposed Bans

The proposed restrictions on ChatGPT include a specific prohibition on medical advice. Instead of generating health-related recommendations, the AI will redirect users to qualified professionals. This measure is designed to ensure that users receive accurate and reliable health information from certified experts.

The ban also extends to legal advice, with the AI refraining from interpreting laws or offering case-specific guidance. This is to prevent the dissemination of misinformation and to protect users from potential legal repercussions. Financial advice is also included in the restrictions, addressing risks in areas such as investment tips or money management queries.[source]

Reasons Behind the Safety Clampdown

Legal risk is a primary driver behind the new rules: lawsuits from users who relied on inaccurate AI-generated advice in critical areas pose a significant threat. Regulatory pressure from authorities concerned about AI overstepping into professional domains has also prompted OpenAI to act.[source]

OpenAI’s acknowledgment of ChatGPT’s inherent limitations in providing reliable, context-specific advice for medical, legal, and financial matters is a key factor in this decision. This move reflects a responsible approach to AI development, prioritizing user safety and regulatory compliance over unrestricted functionality.

Implementation of New Rules

The bans will reportedly be enforced through prompt engineering and response filters designed to detect and block advisory content. This approach is intended to reduce the chance that the AI inadvertently provides advice in restricted areas.
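OpenAI has not published how these filters work, but the general pattern of a response filter can be sketched in a few lines. The categories, trigger patterns, and redirect message below are all illustrative assumptions, not OpenAI's actual implementation:

```python
import re

# Hypothetical trigger patterns for each restricted category.
# A production system would use a trained classifier, not keywords.
RESTRICTED_PATTERNS = {
    "medical": re.compile(r"\b(diagnos\w*|dosage|prescri\w*|symptom\w*)\b", re.I),
    "legal": re.compile(r"\b(lawsuit|liabilit\w*|sue|contract dispute)\b", re.I),
    "financial": re.compile(r"\b(invest\w*|stock tip\w*|portfolio|tax advice)\b", re.I),
}

REDIRECT_MESSAGE = (
    "I can share general information, but for {category} advice "
    "please consult a qualified professional."
)

def filter_response(user_query: str, draft_response: str) -> str:
    """Return the draft response, or a redirect if the query looks advisory."""
    for category, pattern in RESTRICTED_PATTERNS.items():
        if pattern.search(user_query):
            return REDIRECT_MESSAGE.format(category=category)
    return draft_response
```

The key design point is that the filter sits between the model's draft output and the user: a matched query never reaches the user as advice, only as a redirect to a professional.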

Reports from early November 2025 indicate that the rollout of these new rules is imminent, suggesting that users will soon see changes in how ChatGPT responds to certain queries.[source] User notifications and interface changes will flag the restrictions when someone attempts a restricted query, helping to manage expectations and guide user behavior.

Implications for Users and Developers

Everyday users will need to adapt to these changes, seeking human experts for medical, legal, or financial needs rather than relying on ChatGPT. This shift may require some adjustment, but it ultimately serves to protect users from potentially harmful misinformation.

Developers building on OpenAI’s API will also be affected, with guidelines to avoid integrating banned advisory features. This could influence the development of future AI applications, steering them away from areas that require professional expertise. The school integration example from 2023 illustrates evolving attitudes toward controlled AI use, highlighting the need for careful management and oversight.[source]
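For developers, one practical consequence is adding a client-side guard so their applications do not forward advisory requests in the first place. The sketch below is a hypothetical pre-flight check, assuming a simple topic blocklist; it does not reflect any published OpenAI API or guideline text:

```python
# Hypothetical topics an application might refuse to forward to the model.
BANNED_TOPICS = ("medical advice", "legal advice", "financial advice")

def is_allowed(prompt: str) -> bool:
    """Crude guard: reject prompts that explicitly ask for restricted advice."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BANNED_TOPICS)

def safe_submit(prompt: str) -> str:
    """Forward a prompt only if it passes the guard."""
    if not is_allowed(prompt):
        return "This application cannot provide professional advice."
    # Placeholder for the actual API call the application would make.
    return f"[forwarded to model]: {prompt}"
```

Checking on the client side is redundant with any server-side filter, but it lets the application show its own tailored refusal message instead of a generic one.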

Broader Industry Reactions

The industry’s reaction to these changes will be crucial in shaping similar policies at other AI firms facing regulatory scrutiny. Tech competitors may follow suit, adopting similar restrictions to mitigate legal risks and comply with regulatory standards.

Expert opinions on whether these bans enhance safety or overly limit AI’s potential in educational and informational roles will also shape the discourse around AI ethics. Media coverage plays a significant role in this, with outlets framing the story as ChatGPT ‘admitting its limit’ amid growing concerns.[source]
