
Google Shuts Down AI Health Feature Following Safety Concerns

Google discontinued its experimental "What People Suggest" Search feature, which used AI to curate health-related insights from online community discussions.

Google has quietly discontinued "What People Suggest," an AI-powered search feature that aggregated health advice from online forums and social media posts. The company framed the decision as part of routine interface streamlining, but the timing tells a more complicated story about the collision between artificial intelligence, medical information, and user trust.

The feature launched just a year ago at Google's March 2025 Check Up event with a seemingly helpful premise: use AI to synthesize scattered online discussions into organized themes. Someone searching for arthritis management tips, for example, might see curated perspectives from patients describing their exercise routines or pain relief strategies. The tool was mobile-only and limited to US users.

The Fundamental Design Problem

The issue wasn't that Google was surfacing community health discussions—forums like Reddit and patient support groups have long served as valuable spaces for people navigating medical conditions. The problem was presentation. By wrapping anecdotal advice in AI-generated summaries within the authoritative context of Google Search, the feature blurred the line between peer support and medical guidance.

When health information appears in a polished, algorithmically organized format on a platform billions trust for factual answers, users may not distinguish between "what worked for someone on a forum" and "what medical evidence supports." That distinction matters enormously when the topic is cancer treatment, chronic disease management, or medication decisions. The feature essentially took forum posts, informal and clearly subjective by nature, and repackaged them into something that looked more definitive than it was.

Google's explanation to The Guardian emphasized that the removal was about simplifying search results, not addressing safety concerns. But context matters. The shutdown follows mounting criticism of how the company's AI handles medical queries across its search products.

A Pattern of Medical AI Missteps

In January, The Verge documented troubling examples from Google's AI Overviews feature—the AI-generated answer boxes that appear above traditional search results. One case involved advice telling pancreatic cancer patients to avoid high-fat foods, directly contradicting clinical guidance that these patients often need calorie-dense diets to maintain weight during treatment. Another involved incorrect information about liver function tests.

Google responded by stating it invests heavily in health information quality and updates results when needed. But these incidents reveal a structural challenge: AI systems trained on broad internet data will inevitably encounter conflicting medical information, outdated guidance, and context-dependent advice that doesn't translate well into universal summaries. The models don't inherently know which sources reflect current medical consensus or which recommendations require individualized clinical judgment.

The "What People Suggest" feature amplified this problem by design. It specifically surfaced non-expert perspectives, which can be valuable for understanding patient experiences but dangerous when mistaken for medical recommendations. Someone reading that "many people with arthritis find relief by stopping their medication during flare-ups" might not realize that's anecdotal experience contradicting standard treatment protocols.

Why Health Search Demands Different Standards

AI-assisted search works reasonably well for many domains. When Google's algorithms organize product reviews, travel recommendations, or restaurant suggestions, the stakes are relatively low. A suboptimal hotel choice or mediocre restaurant meal doesn't carry the same consequences as following incorrect medical advice.

Health information operates under different rules. Medical guidance depends on individual circumstances—age, comorbidities, medication interactions, disease stage—that generic AI summaries can't account for. What helps one person's condition might harm another's. Professional medical training exists precisely because these nuances matter.

Regulatory frameworks reflect this reality. The FDA regulates medical devices and diagnostic tools. Professional licensing boards oversee who can provide medical advice. These systems exist because health misinformation carries real potential for harm. AI features that present health information need to navigate this landscape carefully, distinguishing between general education, peer support, and actual medical guidance.

Google isn't abandoning AI in health search—the company continues developing AI Overviews and other features that touch medical topics. But the "What People Suggest" removal suggests the company is recalibrating which experiments are worth the risk. Features that aggregate community health discussions may simply be too difficult to implement safely at scale within a search interface that users already trust for authoritative information.

What This Means for AI Health Tools

The broader challenge extends beyond Google. As AI companies race to make their systems more helpful for health queries, they're discovering that medical information doesn't compress neatly into chatbot responses or search summaries. The same AI capabilities that work well for coding assistance or writing help become liability risks when applied to health topics.

This creates a tension for search companies. Health queries represent a significant portion of search traffic—people naturally turn to Google when symptoms appear or after receiving a diagnosis. Ignoring this use case means missing a major user need. But serving it poorly creates reputational risk and potential legal exposure.

The solution likely involves clearer boundaries. AI tools can help users find authoritative medical sources, understand complex terminology, or locate relevant clinical trials. They're less suited for synthesizing treatment recommendations or organizing crowdsourced health advice into seemingly authoritative summaries. The line between those use cases isn't always obvious during product development, but it becomes clear when features reach millions of users making real health decisions.

The Road Ahead

Google's decision to pull "What People Suggest" won't resolve the fundamental tension between AI capabilities and medical information standards. The company will continue experimenting with AI in health search, and some experiments will inevitably surface problematic results. The question is whether these incidents lead to more cautious product design or simply reactive feature removals after public criticism.

For users, the episode reinforces an uncomfortable reality: the same search tools we rely on for factual information are increasingly mediated by AI systems that can't always distinguish good medical advice from dangerous misinformation. The polished presentation of AI-generated summaries can make information seem more reliable than it actually is. That gap between appearance and reality is where real harm can occur, and it's a problem the tech industry hasn't yet solved.
