Google scraps crowdsourced AI health advice feature in Search

Google has pulled the plug on a Search feature that collected and displayed health advice from everyday users, a decision that arrives as the company faces intensifying criticism over the accuracy and safety of AI-generated medical information. The feature, known as “What People Suggest,” surfaced crowdsourced tips from users claiming lived experience with health conditions. Google has framed the removal as part of a broader simplification of the Search page, but the timing aligns with a growing body of evidence that the company’s AI tools are failing users on sensitive health queries.

What the “What People Suggest” Panel Actually Did

The discontinued panel appeared alongside health-related search results and invited users to share personal experiences with medical conditions. Someone searching for advice on managing a chronic illness or interpreting symptoms could encounter suggestions from anonymous contributors who claimed to have dealt with similar issues. The feature essentially turned Google Search into a hybrid of a medical reference tool and an unmoderated health forum, blurring the line between peer support and clinical guidance.

Google told reporters that the removal was part of broader efforts to streamline results, not a direct response to safety concerns. That framing deserves scrutiny. The company did not point to internal data showing the feature was underused or redundant. Instead, the stated rationale sidesteps the more uncomfortable question: whether unvetted crowdsourced medical advice was putting users at risk in the first place.

A Pattern of AI Health Failures

The removal did not happen in a vacuum. Earlier this year, investigations found that Google’s AI Overviews, the automated summaries that appear at the top of search results, were delivering dangerously inaccurate health information. Test queries about reference ranges for liver blood tests, among others, produced summaries with incorrect values that could mislead patients and clinicians alike. A Google spokesperson acknowledged the errors at the time and referenced “broad improvements” the company was making, though the specifics of those improvements remained vague.

Health organizations responded sharply. The British Liver Trust was among the groups that criticized the outputs, and broader expert analysis described the problem as one of “confident authority,” where AI-generated summaries present uncertain or wrong information with the same visual weight and tone as verified medical guidance. Reporting on how these tools project an aura of certainty has underscored a core danger: users who trust Google as a first stop for health questions may not recognize when the answers are wrong, especially when the interface offers no visible cues that the information is unverified.

Independent Research Confirms the Risks

Academic work has reinforced these concerns with systematic evidence. A preprint on AI search performance titled “Auditing Google’s AI Overviews and Featured Snippets: A Case Study on Baby Care and Pregnancy” applied a structured audit framework to examine how Google’s AI features handle queries in sensitive health-adjacent domains. The researchers, affiliated with Cornell University, found measurable inconsistencies in how AI Overviews and featured snippets presented information about baby care and pregnancy, two areas where inaccurate advice can carry serious consequences for vulnerable populations.

The study’s focus on baby care and pregnancy is telling. These are topics where users are often anxious, time-pressured, and searching for reassurance rather than clinical precision. When an AI system returns inconsistent or poorly sourced answers in that context, the potential for harm is not abstract. A new parent acting on a flawed AI snippet about infant feeding or fever thresholds faces real physical risk. The audit framework the researchers developed offers a replicable method for testing whether AI search tools meet basic accuracy standards in high-stakes domains, and the early results suggest Google’s tools fall short.
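To make the shape of such an audit concrete, the sketch below shows, in Python, one way a consistency check of this kind could work: issue the same health query, capture the AI Overview and the featured snippet returned for it, and flag cases where the two surfaces disagree. This is a hypothetical illustration, not the study’s actual code; the queries, canned answers, and similarity threshold are all invented stand-ins.

```python
# Hypothetical audit sketch: compare the two AI surfaces shown for the same
# query and flag disagreement. Canned dictionaries stand in for live page
# captures so the example is self-contained and runnable.
from difflib import SequenceMatcher

# Illustrative queries in a sensitive domain (not the study's query set).
QUERIES = [
    "normal temperature for a newborn",
    "how often should a 2 month old feed",
]

# Stand-ins for scraped AI Overview text.
AI_OVERVIEW = {
    "normal temperature for a newborn": "A normal range is 97.5 to 99.5 F.",
    "how often should a 2 month old feed": "Feed roughly every 2 to 3 hours.",
}

# Stand-ins for the featured snippet captured on the same results page.
FEATURED_SNIPPET = {
    "normal temperature for a newborn": "Around 98.6 F; call a doctor above 100.4 F.",
    "how often should a 2 month old feed": "Feed roughly every 2 to 3 hours.",
}

def consistency(a: str, b: str) -> float:
    """Crude textual similarity between the two surfaces, from 0 to 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for query in QUERIES:
    score = consistency(AI_OVERVIEW[query], FEATURED_SNIPPET[query])
    # The 0.6 cutoff is arbitrary; a real audit would calibrate it.
    verdict = "INCONSISTENT" if score < 0.6 else "consistent"
    print(f"{query!r}: similarity={score:.2f} -> {verdict}")
```

A real audit would replace the canned answers with live page captures and, crucially, score both surfaces against clinically verified reference answers rather than only against each other.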

The preprint itself sits within a broader ecosystem of open research infrastructure: it is hosted on arXiv, the long-running repository whose mission is the rapid, free dissemination of scientific work. That system is sustained by institutional members and individual donors who contribute funding to keep access free, and its operators publish detailed guidance for authors and readers, helping ensure that even early-stage research can be evaluated and challenged by the wider community. In this case, that openness has allowed external experts to scrutinize how a dominant commercial platform handles health information.

Google’s Explanation Does Not Match the Evidence

The company’s insistence that the removal of “What People Suggest” is about page simplification rather than safety exposes a credibility gap. Google faces mounting scrutiny over its use of AI to deliver health information to millions of users, and coverage of its recent missteps has highlighted systemic weaknesses rather than isolated glitches. Framing the change as a design cleanup, rather than a correction, allows the company to avoid admitting that a feature it built and shipped was fundamentally flawed in a high-risk domain.

This matters because the underlying problem has not been resolved. Removing the crowdsourced panel eliminates one source of unvetted health advice, but AI Overviews and featured snippets continue to operate across health queries. Those features draw from web sources algorithmically rather than from user submissions, yet the audit evidence and investigative reporting both show they produce their own errors. The difference is that AI-generated summaries carry even greater apparent authority than peer suggestions, because they appear at the top of the page in a format that mimics expert consensus.

Google has emphasized that it is constantly updating its systems and claims that safety filters and quality checks are in place. But neither the public statements around AI Overviews nor the explanation for removing “What People Suggest” have been accompanied by detailed disclosures about error rates, evaluation protocols, or escalation procedures when harmful advice is detected. Without that transparency, it is difficult for outside experts to assess whether the company is meaningfully reducing risk or merely adjusting the interface.

What Changes for People Searching for Health Answers

For the millions of users who turn to Google with health questions every day, the practical effect of this change is narrow. One panel disappears. The AI-generated summaries that sit above traditional search results remain in place, and those summaries have their own documented accuracy problems. Users searching for information about pregnancy symptoms, medication interactions, or diagnostic test results will still encounter AI-curated answers that may or may not reflect current medical evidence.

The deeper issue is one of trust architecture. Google has spent years positioning itself as a reliable first source for health information, integrating knowledge panels, health condition cards, and now AI Overviews into the search experience. Each layer adds convenience but also adds risk, because users rarely cross-check what Google tells them against primary medical sources. When the company quietly removes a feature like “What People Suggest” without acknowledging the safety dimension, it misses an opportunity to be transparent about the limits of AI-mediated health information.

In practice, people searching for help with new symptoms or chronic conditions will still face a complex mix of professional resources, commercial content, and now algorithmically generated prose. The disappearance of amateur suggestions may reduce some of the most obviously informal advice, but it does not solve the core challenge of distinguishing between robust medical consensus and plausible-sounding but incorrect claims. That burden continues to fall on users, who may lack the expertise or time to evaluate the quality of what they see.

The Bigger Question Google Has Not Answered

The most significant gap in Google’s response is not about the crowdsourced panel itself. It is about what responsibility the company is willing to assume for the medical implications of its ranking and generation systems. By presenting health answers with a unified, authoritative design, Google encourages people to treat Search as a quasi-clinical resource. Yet when those answers are wrong, the company falls back on language about experimentation, simplification, and ongoing improvements.

If Google is going to keep AI Overviews and similar features at the center of health search, it will need to confront a set of unresolved questions: How will it measure and publicly report error rates in high-stakes domains? What mechanisms will allow clinicians and researchers to flag harmful outputs and see them corrected? And how will the company communicate uncertainty to users in ways that do not simply bury caveats in fine print?

Shutting down “What People Suggest” may reduce one visible symptom of the problem, but it does not address the diagnosis emerging from independent audits and investigative reporting. As long as the company treats safety issues in health search as design problems to be quietly patched rather than structural risks to be openly managed, the tension between convenience and care will remain unresolved, and users will continue to navigate that uncertainty largely on their own.

This article was researched with the help of AI, with human editors creating the final content.