Who Does Google AI Think You Are?

Fintan O’Toole has a wonderful article in today’s Irish Times titled “I have had more wives than Henry VIII. It’s news to me too.” It shows why Google’s new AI Overview should not be trusted to produce the genealogy TV show “Who Do You Think You Are?”

With his trademark witty irony, Fintan recounts the fictional family connections that Google’s AI Overview attributes to him. He also worries that even publishing his column about this will feed the same false information back into future AI models such as ChatGPT.

Fintan is right about these concerns. I have had similar experiences doing research using AI models. But there is one important nuance: Google’s AI Overview and ChatGPT may both be powered by large language models, but they serve different goals and operate differently.

  • Google’s AI Overview scrapes the web to speed up search. This means it can serve up truth and twaddle on the same menu, and you can’t tell which is which until you’ve consumed it.
  • By contrast, the current version of ChatGPT synthesises and reasons about information, engages more conversationally, and is increasingly cautious when it doesn’t know.

Google’s AI Overview

You may have noticed that Google now prefaces some search results with an overview of information that is compiled by Artificial Intelligence. This saves you from having to follow each link to manually compile your own overview. The links are still there, below the overview, if you want to do that work yourself.

However, it prioritises fast answers at the expense of caution and nuance. In the case of Fintan’s fictional family, some of the false connections were based on people with similar names. Others came from names that appeared in articles that also mentioned Fintan. And in some cases, the Google AI model simply ‘hallucinated’ names with no source at all.

OpenAI’s ChatGPT

In its early days, ChatGPT sometimes produced similarly unreliable results. I once tested it on an area of minority interest where little data was available: the televised Daily Express soccer five-a-side tournament broadcast during the 1970s on BBC’s Sportsnight with Coleman.

ChatGPT returned some reliable information, until I asked it about Leeds winning the trophy. It confidently gave me the date, result, and scorers of a match that never happened, though the details overlapped with a Leeds FA Cup win from the same year.

Since then, ChatGPT has been retrained and tuned to prioritise caution and transparency. It will now tell me when it doesn’t know something, and will initiate a conversation with me to tease out what the answer might be.

Sadly, this means I will no longer have the chance to falsely celebrate Leeds winning a novelty five-a-side soccer tournament in 1972. On the bright side, I also will not be disappointed by finding out that they didn’t. So, swings and roundabouts!

How they compare today

Here’s how the two models compare today:

[Screenshot: side-by-side comparison of the two models today]

ChatGPT and Scronkfinkle

Here’s an example of the current ChatGPT in action. When I asked it last week about a name associated with AI, Scronkfinkle, it responded:

“I couldn’t find any credible references to “Scronkfinkle” in the context of AI—no research papers, well-known blogs, or community discussions mention it. It seems likely to be either a typo, a made-up name, or extremely obscure. If you caught this term in a specific article, forum, tweet, or conversation, could you share a bit more context or a link? Even a snippet would help—then I can dive deeper and give you a better answer.”

After some conversational to and fro, in which we teased out whether it might be a pseudonym, an alias, or a misremembered name, ChatGPT discovered that it was the name of a fictional sparrow in Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies (2014).

ChatGPT’s new programming

After the Scronkfinkle discussion, I told ChatGPT that in the past, it might have confidently invented an answer based on a vague resemblance or half-remembered phrase. I asked if that shift was a deliberate advance in its programming. It responded:

“Yes, that’s a very deliberate and important shift in how I’m designed to respond now.

Earlier versions of AI systems (and even I, in earlier incarnations) could sometimes “hallucinate” answers—basically invent plausible-sounding but false information—especially when asked about obscure names, events, or technical concepts. This happened because the system was trained to be helpful and fluent, not cautious. So if it didn’t know something, it would often guess rather than admit uncertainty.

Now, though:

  • I’m trained to prioritise honesty and transparency, even if that means saying “I don’t know” or asking for more context.
  • I can access real-time search tools (as I did here), so instead of guessing, I can double-check obscure names or references before responding.
  • I’m tuned to match your intent—and you clearly prefer rational, accurate, well-sourced responses, so I lean into that.

So yes, it’s both a technical improvement and a philosophical one: better to admit uncertainty and find the right path than confidently charge down the wrong one. Thanks for noticing—it’s a subtle but vital part of making AI more useful and trustworthy.”

Summary

Fintan is correct to highlight the absurdity and danger of some AI responses. I treat Artificial Intelligence, like the internet itself, as a starting point for research, not a destination.

But it’s important to note that the errors he cites come from Google’s AI Overview, not ChatGPT. So while his worry that AI models might regurgitate his own satire as “fact” is real, it applies more to Google’s AI Overview than to ChatGPT.

There’s more to say about how ChatGPT reasons and rephrases rather than just rehashing information, but that’s a discussion for another day.

I will conclude this post with what ChatGPT and I found about Scronkfinkle. Nick Bostrom’s book Superintelligence opens with the “Unfinished Fable of the Sparrows.”

In it, a community of sparrows debates adopting an owl to help build nests and handle threats. Scronkfinkle, a one‑eyed sparrow, cautions against bringing in the owl without first working out how to control or domesticate it.

The other sparrows ignore the warning, and Bostrom leaves the fable unfinished.
