Summer is the perfect time to catch up on reading, and the Chicago Sun-Times (May 18th issue) has some extraordinary suggestions in its Summer Reading List for 2025.
Never heard of such literary treasures as Isabel Allende’s “Tidewater Dreams” or Andy Weir’s “The Last Algorithm”? That’s probably because no one has. Those books simply don’t exist.
This happens when you let ChatGPT pick your summer reads. It’s a classic case of hallucination: AI sounding super smug and completely wrong.
What makes this incident even more bizarre is that the fake titles were mixed in with real books, further muddying the waters. And where the titles on the list did exist, the accompanying blurbs often bore little resemblance to the books’ actual content. In some cases, they described entirely different plots.
This isn’t an isolated mistake. It’s a trend. AI writes the copy, and apparently, no one bothers to read it before hitting publish.
The problem isn’t the AI itself. Large Language Models were never a substitute for editorial judgment, research, and fact-checking. The problem is that the people using these tools are too lazy to do any of that work, so readers end up with this kind of bullshit.
Journalism will probably be automated soon anyway, so maybe none of this matters much. Instead of ten journalists, there will be a single specialist running AI agents.
AI Will Keep Hallucinating – Get Used to It
Get used to hallucinations. Large Language Models will keep producing them, because hallucination is a byproduct of next-token prediction.
Models don’t know anything. They don’t have a database of facts, a memory of verified sources, or even the ability to check whether a book actually exists. What they do have is a statistical model trained to generate the most likely next word based on mountains of text, which may or may not be truthful. That training data can contain biases, misinformation, outdated facts, and conflicting claims, and the model learns all of these patterns, including the incorrect ones.
So when prompted to generate a summer reading list, the AI doesn’t look up real books. It stitches together plausible-sounding combinations of author names and invented titles based on linguistic patterns. That’s how you get “The Last Algorithm”: a title that sounds like a real novel by a real writer but is pure fiction – fiction about fiction.
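To see why fluency and truth come apart, here is a minimal sketch of next-token generation: a toy bigram model whose tokens and weights I invented for illustration. It strings words together purely by statistical plausibility, and nothing in the loop ever asks whether the result names a real book.

```python
import random

# Toy bigram "language model": each token maps to candidate next tokens
# with weights that, in a real model, would be learned from mountains of
# text. All tokens and weights here are invented for illustration.
BIGRAMS = {
    "<start>": [("The", 5), ("Tidewater", 2)],
    "The": [("Last", 4), ("Silent", 3)],
    "Last": [("Algorithm", 5), ("Summer", 2)],
    "Silent": [("Algorithm", 2), ("Tide", 3)],
    "Tidewater": [("Dreams", 5)],
    "Algorithm": [("<end>", 1)],
    "Summer": [("<end>", 1)],
    "Tide": [("<end>", 1)],
    "Dreams": [("<end>", 1)],
}

def sample_next(token: str) -> str:
    """Pick a next token in proportion to its weight."""
    candidates, weights = zip(*BIGRAMS[token])
    return random.choices(candidates, weights=weights, k=1)[0]

def generate_title() -> str:
    """Emit a fluent-sounding title one token at a time.

    Note what is missing: no lookup against a catalog of real books,
    no notion of truth, only "which word tends to follow which".
    """
    token, words = "<start>", []
    while (token := sample_next(token)) != "<end>":
        words.append(token)
    return " ".join(words)

if __name__ == "__main__":
    for _ in range(3):
        print(generate_title())  # e.g. "The Last Algorithm": fluent, fabricated
```

A real LLM does the same thing at incomparably larger scale, which is exactly why its inventions sound so convincing.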
Truthfulness is a secondary (and much harder) objective to instill. Sometimes fabricating information produces a more “fluent” or “complete-sounding” answer than admitting ignorance or telling the truth. AI models don’t have a world model or common-sense reasoning the way humans do. They can’t verify information against external knowledge or check it for logical consistency the way we can. They can’t “think” critically about the information they generate.
Models are also trained to sound authoritative. They may not have a built-in mechanism to say “I don’t know” effectively, unless specifically fine-tuned for it. This is what happened here.
Producing this “Reading List” was also a skill issue on the part of the person who wrote the article. Vague or poorly phrased prompts can lead the model down paths where it has to make more assumptions, increasing the likelihood of generating information that isn’t grounded in any specific factual input. If you prompt for:
Produce an article about best books to read in 2025
then you get nonsense like this. Correct prompting is a nuanced skill, one that both OpenAI and Google Gemini cover in their cookbooks.
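For contrast, here is a sketch of a more constrained request, using OpenAI’s Python client. The model name is an assumption; substitute whatever you have access to. It explicitly tells the model to skip anything it can’t vouch for. That doesn’t eliminate hallucinations, nothing does, but it lowers the odds, and a human still has to verify every title before hitting publish.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A more constrained prompt: real, verifiable books only, and explicit
# permission to leave gaps instead of inventing titles.
prompt = (
    "Recommend 10 books for a 2025 summer reading list. "
    "Only include books you are confident actually exist, with the "
    "correct author and a one-sentence description of the real plot. "
    "If you are unsure a book exists, skip it rather than guess."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```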
We shouldn’t be surprised that AI came up with “The Last Algorithm”. Maybe the LLM is trying to tell us something. I even made a cover; someone should probably write the book now, using AI of course.