Mind Launches Global Inquiry Into AI Risks Following Guardian Investigation Into Google AI Overviews

  • The mental health charity Mind is launching a year-long commission to examine the impact of AI on vulnerable people.
  • A Guardian investigation discovered “dangerously incorrect” medical advice in Google’s AI-generated search summaries.
  • Experts warned that AI Overviews provided harmful guidance on psychosis and eating disorders, potentially putting lives at risk.

The mental health charity Mind has announced a groundbreaking global inquiry into the relationship between artificial intelligence and mental health. This move follows a year-long investigation by The Guardian that exposed significant risks within Google’s AI Overviews. The commission will bring together medical professionals, policymakers, tech companies, and individuals with lived mental health experiences to develop necessary safeguards.

The Guardian’s reporting revealed that Google’s AI-generated summaries frequently provided false or misleading health information. These summaries appear at the top of search results and are seen by an estimated 2 billion people every month. In several instances, the AI provided advice that experts labeled as “very dangerous,” particularly regarding sensitive conditions like psychosis and eating disorders.

Dr. Sarah Hughes, the chief executive of Mind, stated that the charity believes AI has immense potential but must be deployed responsibly. She emphasized that “dangerously incorrect” guidance could prevent people from seeking essential treatment. In the most severe cases, the charity warns that such misinformation reinforces stigma and places human lives at risk.

Mind’s research team conducted its own tests to verify the risks posed by these automated systems. Within minutes of searching common mental health queries, the charity was served AI Overviews claiming that starvation was healthy. Other results incorrectly attributed mental health issues solely to chemical imbalances or confirmed a user’s delusions during a crisis.

Google has defended its AI Overviews as “helpful” and “reliable,” claiming they cite reputable medical sources. However, a recent study suggested that YouTube—a platform owned by Google—is cited more frequently than any hospital network or government health portal. While Google has removed AI summaries for some medical searches, critics argue the company is playing a “whack-a-mole” game instead of fixing structural flaws.

The new commission aims to shape a safer digital ecosystem with stronger regulations and standards. It will gather evidence for one year to understand how AI increasingly influences those affected by mental health issues. The goal is to ensure that technology supports public services rather than creating new dangers for those in distress.

Patient advocates have also criticized Google for downplaying safety warnings. Disclaimers often only appear after a user clicks for more information, and even then, they are displayed in small, inconspicuous fonts. Experts argue that this design choice gives an “illusion of definitiveness” that can mislead users into trusting inaccurate data.

As AI becomes more deeply embedded in daily life, Mind insists that safeguards must be proportionate to the risks. The findings of this commission are expected to influence future policy and tech development worldwide. For now, experts advise users to treat AI-generated medical guidance with extreme caution and always consult a healthcare professional.