You may not be familiar with the phrase “peanut butter platform heels,” but it apparently originates from a scientific experiment in which peanut butter was transformed into a diamond-like structure under very high pressure (hence the “heels” reference).
Except this never happened. The phrase is complete nonsense, but it was given a definition and backstory by Google AI Overviews when writer Meaghan Wilson-Anastasios asked about it, as per this Threads post (which contains some other amusing examples).
The internet picked this up and ran with it. Apparently, “you can’t lick a badger twice” means you can’t trick someone twice (Bluesky), “a loose dog won’t surf” means something is unlikely to happen (Wired), and “the bicycle eats first” is a way of saying that you should prioritize your nutrition when training for a bike ride (Futurism).
Google, however, is not amused. I was keen to put together my own collection of nonsense phrases and apparent meanings, but it seems the trick is no longer possible: Google will now refuse to show an AI Overview, or will tell you that you’re mistaken, if you try to get an explanation of a nonsensical phrase.
If you go to an actual AI chatbot, it’s a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots attempt to explain these phrases logically, while also flagging that they appear to be nonsensical and aren’t in common use. That’s a much more nuanced approach, with context that has been lacking from AI Overviews.
Now, AI Overviews are still labeled as “experimental,” but most people won’t take much notice of that. They’ll assume the information they see is accurate and reliable, built on information scraped from web articles.
And while Google’s engineers may have wised up to this particular kind of mistake, much like the glue-on-pizza one last year, it probably won’t be long before another similar issue crops up. It speaks to some basic problems with getting all of our information from AI, rather than from references written by actual humans.
What’s going on?
Fundamentally, these AI Overviews are built to provide answers and synthesize information even if there’s no exact match for your query, which is where this phrase-definition problem starts. The AI feature is also perhaps not the best judge of what is and isn’t reliable information on the internet.
Looking to fix a laptop problem? Previously you’d get a list of blue links from Reddit and various help forums (and maybe Lifehacker), but with AI Overviews, Google sucks up everything it can find from those links and tries to patch together a sensible answer, even if no one has had the specific problem you’re asking about. Sometimes that can be helpful, and sometimes you might end up making your problems worse.
Anecdotally, I’ve also noticed that AI bots tend to want to agree with prompts, and to confirm what a prompt says, even when it’s inaccurate. These models are eager to please, and essentially want to be helpful even when they can’t be. Depending on how you phrase your query, you can get AI to agree with something that isn’t right.
I didn’t manage to get any nonsensical idioms defined by Google AI Overviews, but I did ask the AI why R.E.M.’s second album was recorded in London: that was down to the choice of producer Joe Boyd, the AI Overview told me. But in fact, R.E.M.’s second album wasn’t recorded in London; it was recorded in North Carolina. It’s the third LP that was recorded in London and produced by Joe Boyd.
The actual Gemini app gives the correct response: that the second album wasn’t recorded in London. But the way AI Overviews attempt to combine multiple online sources into a coherent whole seems a little suspect in terms of accuracy, especially if your search query makes some confident claims of its own.

With the right encouragement, Google gets its music chronology wrong.
Credit: Lifehacker
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” Google told Android Authority in an official statement. “This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”
We seem to be barreling toward search engines that always respond with AI rather than with information compiled by actual people, but of course AI has never fixed a faucet, tested an iPhone camera, or listened to R.E.M.; it’s just synthesizing vast amounts of data from people who have, and trying to compose answers by figuring out which word is most likely to go in front of the previous one.