Fake GPT-written studies are flooding Google Scholar. Here's why taking them down could make things worse.
- Research papers suspected of using AI are showing up in Google Scholar, according to a study.
- Many discuss controversial topics that are susceptible to disinformation.
- Researchers said removing the GPT-written studies could fuel conspiracy theories.
Scientific papers suspected of using artificial intelligence are appearing in Google Scholar, one of the most popular academic search engines.
A study published this month in the Harvard Kennedy School's Misinformation Review said, "Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI."
"They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing," the study said.
ChatGPT is a chatbot developed by OpenAI that launched in 2022. The chatbot quickly went viral as users began drafting everything from workout routines to diet plans. Other companies like Meta and Google now have their own competing large language models.
Researchers gathered data by analyzing a sample of scientific papers pulled from Google Scholar that showed signs of GPT use. Specifically, they searched for papers containing phrases considered common responses from ChatGPT or similar programs, such as "I don't have access to real-time data" and "as of my last knowledge update."
From that sample, researchers identified 139 "questionable" papers listed as regular results on Google Scholar.
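The screening step described above boils down to simple phrase matching. A minimal sketch of that idea (the function name and data format are illustrative, not from the study):

```python
# Illustrative sketch, not the researchers' actual code: flag papers whose
# text contains boilerplate phrases that ChatGPT-style models often emit.
TELLTALE_PHRASES = [
    "i don't have access to real-time data",
    "as of my last knowledge update",
]

def flag_questionable(papers):
    """Return titles of papers whose text contains a telltale GPT phrase.

    `papers` is assumed to be a list of dicts with "title" and "text" keys.
    """
    flagged = []
    for paper in papers:
        text = paper["text"].lower()  # case-insensitive matching
        if any(phrase in text for phrase in TELLTALE_PHRASES):
            flagged.append(paper["title"])
    return flagged
```

In practice the researchers ran such phrase queries against Google Scholar itself and then manually reviewed the hits, so a script like this only captures the first, coarse filtering stage.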
"Most of these GPT-fabricated papers were found in non-indexed journals and working papers, but some cases included research published in mainstream scientific journals and conference proceedings," the study said.
Many of the research papers involved controversial topics like health, computing, and the environment, which are "susceptible to disinformation," according to the study.
While researchers acknowledged that the papers could be removed, they warned doing so could fuel conspiracy theories.
"As the rise of the so-called anti-vaxx movement during the COVID-19 pandemic and the ongoing obstruction and denial of climate change show, retracting erroneous publications often fuels conspiracies and increases the following of these movements rather than stopping them," the study said.
Representatives for Google and OpenAI did not respond to Business Insider's request for comment.
The study also identified two main risks from the "increasingly common" decision to use GPT to create "fake, scientific papers."
"First, the abundance of fabricated 'studies' seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record," the study said.
The second risk involves the "increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar."
"However small, this possibility and awareness of it risks undermining the basis for trust in scientific knowledge and poses serious societal risks," the study said.