“Sure, I Can Generate That For You”: Science Journals Are Flooded with ChatGPT Fake “Research”

May 9, 2024

Generative AI, as offered by ChatGPT and other large language models, is pretty useful for many tasks, such as programming, prototyping, or web design. Unfortunately, it’s also prone to abuse, which is evident in the case of researchers publishing in peer-reviewed science journals who are too lazy, or downright stupid, to edit what they submit. Their papers include whole leftover phrases that give away that ChatGPT generated the article.

Phrases such as “Sure, I can generate that for you,” “Certainly, here’s an example of…,” and “As a large language model…” are now widespread in published research articles.

Especially prone to this are Indian “scholars,” who publish both in international journals and in their local outlets, from smaller academic institutions to the largest universities – but even US academics aren’t without fault.

Here are some of the more peculiar cases of practicing this kind of “science” you can come across, along with the links to their groundbreaking papers:

1. “Comprehensive Review of the Book Study of Language by George Yule”

A social science “article” published in the Saudi Journal of Humanities and Social Sciences at Dubai University. It was written by Indian researchers from Uttar Pradesh, including professors Muneer Alam, Munawwar Mushtaque, and Mohd Rizwanullah.

Or rather, it was written by a certain Professor Language Model:

Link to the paper: https://saudijournals.com/media/articles/SJHSS_85_103-107.pdf

2. “Synergistic Effect of Phosphatic Fertilizer and Biofertilizers on Soil Enzyme Activity and Yield of Finger Millet (Eleusine coracana L.)”

Another article, this time about fertilizers, published in “Biological Forum – An International Journal,” entertains the reader with phrases such as this:

The paper was written by four supposed authors, one signing off as an “Associate Dean” and another as a “Senior Scientist.”

Link to the paper: researchgate.net.

3. “Investigating The Molecular Aspects of Theileria Annulata In Naturally Infected Animals, Alongside A Mention of Tick Distribution In Hyderabad And Karachi”

This article has a lengthy title and was written by Pakistani researchers, who should probably “investigate” their own paper first. It was published in a journal of Karachi University, the largest school in the country.

4. “Exploring the Nature of Christ: An Insightful Examination of Seventh-day Adventist”

American scientists aren’t without fault either. ChatGPT creeps its way even into theology. Christian Joseph Pacoli from Mountain View College, School of Theology, “explored the nature of Christ” with a bit of help from ChatGPT, but was too lazy to remove its references.

Mountain View College, located near the Google campus, has been named one of the top US schools for low-income students.

Link to the paper: academia.edu.

5. “A Unique Approach to Noise to Electricity Generation”

This is a particularly nasty example, because the “scholar” needed ChatGPT’s assistance to visualize a basic electrical circuit. That’s high-school-level physics. Moreover, it appears the software produced a diagram, which the “Professor in Department of Mechanical Engineering, Hyderabad Institute of Technology” later replicated by hand (as pictured above) to make it more believable that he had made it himself.

Does he even know CAD software exists? The irony is that the article is titled “A unique approach.”

Link to the paper: academia.edu.

6. “Unveiling the Digital Armor: Harnessing the Social Media Symphony to Promote COVID-19 Vaccines”

The title is so painfully cringe, it’s hard to look at. For some odd reason, ChatGPT especially likes musical metaphors: everything is a “fiery crescendo,” a “harmonious melody,” or a “heavenly symphony.” “Harnessing” and “unveiling” are words no sane person uses outside of prose, and along with “tapestry” they are probably the most widespread ChatGPT buzzwords. The “digital armor” is just the icing on the cake.

This one was published in “A Journal for New Zealand Herpetology.” What does it even have to do with reptiles? There are 6 (six) authors of this groundbreaking piece of “research.”

As expected, it has unedited fragments and a copy-pasted ChatGPT transcript. This one is so sloppy that one of its paragraphs is suddenly cut off.

Despite having six supposed authors, it looks like no one (including journal reviewers) has read it.

Link to the paper: researchgate.net.

But does it matter?

These are just a tiny fraction of the articles published in peer-reviewed science journals and conference proceedings. You can find thousands of pieces of badly edited ChatGPT drivel in science periodicals; I presented only some of the cases. The prevalence of the phenomenon is astounding.

Below is a breakdown of the papers I came across while writing this article, sorted by the lead “researcher’s” nationality:

  • 13 Indian,
  • 3 Pakistani,
  • 1 Indonesian,
  • 1 Chinese,
  • 1 American.

These people are too stupid or too lazy, or perhaps both, to even bother reading their papers before submitting them to journal reviewers, who, in turn, don’t read the articles either.

Not that it matters. No one will read garbage such as “Unveiling the Digital Armor: Harnessing the Social Media Symphony to Promote COVID-19 Vaccines,” a ChatGPT-generated paper on Twitter hashtags. They churn out these articles with minimal effort: just Ctrl+C, Ctrl+V some sloppily constructed thesis and you’re done; don’t even bother reading what you just pasted.

All these articles serve only as a means to rack up academic credits and meet publication quotas to stay afloat. Academia nowadays seems to attract a lot of mediocrity. It’s ironic how these people don’t even bother to read what they’ve churned out with ChatGPT before slapping their names on it. I bet they’re the first to accuse their students of using ChatGPT to write assignments.

There are, of course, dedicated and hardworking people left in science too, but they’re few and far between.

Victor Davis Hanson wrote an excellent book on the subject, “Who Killed Homer?”, about the decay and degeneration of today’s academia and the waning quality of the people working in university environments. It was published over two decades ago and predicted many of the developments we witness today. I could rave about this book, but that’s a discussion for another time. If you have the opportunity to read it, I highly recommend Hanson’s work.

Anyway, I can’t wait until the new generation, trained on ChatGPT from their earliest years, enters academia or joins the regular workforce in the private sector. Such highly educated men will take us on a wild ride, that’s for sure.

Maciej Wlodarczak

My Book Is on Kickstarter Now!
Check out the Kickstarter for my book – the campaign is live now. There’s lots of stuff inside and a pretty attractive package for people wanting to dip their toes into generative AI. Get it now!

4 Comments

  1. We recently received an obviously AI-generated meta-review for one of our papers at a pretty prestigious AI conference. According to Liang et al., up to 15% of reviews at top AI conferences are already generated via LLMs. I think this will break the concept of double-blind peer review in the long run.

    Liang, Weixin, et al. “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews.” arXiv preprint arXiv:2403.07183 (2024).

