A postdoctoral researcher at a European university was asked last summer to investigate why one of his supervisor’s papers from 2017 had suddenly become one of the most cited of the supervisor’s career. The paper, which analyzed statistical methods in epidemiological data, had drawn only a handful of citations a year since publication. Then, without warning, references to it surged: hundreds of citations arrived within weeks. Instead of celebration, the team faced suspicion: Were these citations legitimate, or were they generated by an invisible hand at work in academic publishing?
The hidden automation behind rising citation counts
The answer lies in the rise of AI-generated research papers and the tools that cite them. Studies show that many modern AI papers cite each other automatically, creating a self-referential loop that inflates citation metrics. A 2023 analysis by Nature found that papers with AI-assisted writing tools had citation rates 25% higher than manually written ones. These tools, designed to suggest relevant literature, often pull from a narrow pool of preprints and already-circulated drafts, inadvertently reinforcing a closed loop of influence.
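The simplest symptom of such a self-referential loop is pairs of papers that cite each other. As a minimal sketch, assuming a hypothetical citation graph stored as a dictionary mapping paper IDs to the IDs they cite, mutual citations can be found like this:

```python
def mutual_citation_pairs(cites):
    """Find pairs of papers that cite each other -- the simplest
    form of a self-referential citation loop."""
    pairs = set()
    for paper, refs in cites.items():
        for ref in refs:
            # A mutual pair exists when the cited paper cites back.
            if paper in cites.get(ref, []):
                pairs.add(tuple(sorted((paper, ref))))
    return sorted(pairs)

# Toy graph: X and Y cite each other; Z only receives a citation.
graph = {"X": ["Y"], "Y": ["X", "Z"], "Z": []}
print(mutual_citation_pairs(graph))  # [('X', 'Y')]
```

Real loops are usually longer chains rather than direct pairs, so a production tool would enumerate cycles in the full directed graph, but mutual pairs are a cheap first-pass signal.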
One postdoctoral researcher, who asked not to be named, described reviewing a paper that referenced 47 other AI studies—all published within the same six-month window. "The citations felt like they were copied from a template," they said. "None of them added real context to the methodology."
How citation inflation distorts academic credibility
Citations are the foundation of academic reputation, funding decisions, and career advancement. But when they are artificially inflated, the system breaks down. Universities use citation counts to rank departments, grant agencies rely on them to evaluate proposals, and researchers compete for positions based on perceived impact. A 2024 report by the Journal of Informetrics warned that citation inflation in AI could lead to a "credibility bubble"—where high-profile papers gain influence not through scientific rigor, but through algorithmic reinforcement.
The problem is exacerbated by preprint servers like arXiv, where AI papers are uploaded daily without peer review. A 2022 study in PLOS ONE found that nearly 30% of AI-related preprints on arXiv cited at least one other preprint from the same server, often within days of publication. This creates a feedback loop where early, unvetted work gains disproportionate visibility.
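The pattern the PLOS ONE study describes, preprints citing other preprints from the same server within days, is straightforward to screen for. Below is a minimal sketch under assumed, hypothetical data: each record holds an upload date and a list of cited paper IDs from the same server; the field names and the 14-day window are illustrative choices, not taken from the study:

```python
from datetime import date

# Hypothetical minimal records for papers on one preprint server.
papers = {
    "A": {"uploaded": date(2022, 3, 1), "cites": ["B", "C"]},
    "B": {"uploaded": date(2022, 3, 5), "cites": ["A"]},
    "C": {"uploaded": date(2021, 6, 1), "cites": []},
}

def rapid_same_server_citations(papers, window_days=14):
    """Flag citations where the citing and cited papers appeared on the
    same server within `window_days` of each other."""
    flagged = []
    for pid, rec in papers.items():
        for cited in rec["cites"]:
            if cited not in papers:
                continue  # cited work lives elsewhere
            gap = abs((rec["uploaded"] - papers[cited]["uploaded"]).days)
            if gap <= window_days:
                flagged.append((pid, cited, gap))
    return flagged

print(rapid_same_server_citations(papers))
# [('A', 'B', 4), ('B', 'A', 4)]
```

Flagging is not proof of misconduct; fast-moving subfields legitimately cite fresh work. The point is to surface clusters for human review rather than to judge automatically.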
What researchers and institutions can do to restore trust
Scientists are beginning to push back. Some journals now require disclosures when AI tools are used in writing or literature review. Others are experimenting with post-publication peer review, where papers are evaluated continuously by the community rather than in a single review before publication.
A growing number of researchers advocate for stricter citation guidelines, such as requiring authors to justify each reference in their methodology sections. "If a citation doesn’t add analytical value, it shouldn’t be there," said a senior AI ethics researcher at MIT. "We need to treat citations like data points—not just empty metrics."
Some institutions are also adopting citation audits, where bibliometrics teams manually review papers flagged for suspicious citation patterns. While labor-intensive, these audits help identify clusters of inflated influence before they skew funding or promotions.
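One heuristic an audit team might use, echoing the reviewer's impression of citations "copied from a template," is to measure how heavily two papers' bibliographies overlap. As a minimal sketch with hypothetical reference lists and an illustrative threshold (nothing here reflects any institution's actual method):

```python
def reference_overlap(refs_a, refs_b):
    """Jaccard similarity between two reference lists (0.0 to 1.0)."""
    a, b = set(refs_a), set(refs_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def audit(bibliographies, threshold=0.6):
    """Flag pairs of papers whose bibliographies overlap heavily --
    a crude proxy for template-copied reference lists."""
    ids = sorted(bibliographies)
    flagged = []
    for i, p in enumerate(ids):
        for q in ids[i + 1:]:
            sim = reference_overlap(bibliographies[p], bibliographies[q])
            if sim >= threshold:
                flagged.append((p, q, round(sim, 2)))
    return flagged

bibliographies = {
    "P1": ["r1", "r2", "r3", "r4"],
    "P2": ["r1", "r2", "r3", "r5"],  # near-identical to P1
    "P3": ["r9", "r10"],
}
print(audit(bibliographies))  # [('P1', 'P2', 0.6)]
```

High overlap is expected within a niche subfield, which is why such scores are a trigger for the manual review the institutions describe, not a verdict on their own.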
The road ahead: AI in science without the distortions
The challenge ahead is to harness AI’s potential to accelerate discovery without letting it erode the integrity of academic communication. Tools that streamline literature reviews can be valuable, but only if they are used transparently and responsibly. The scientific community must rethink how it measures impact, shifting from raw citation counts to qualitative assessments of influence and reproducibility.
For now, the surge in AI citations shows no signs of slowing. But if left unchecked, it risks turning the currency of academic reputation into a devalued asset—one where visibility trumps validity.