Letters to the editor (letters) of journals are a common means to advance scholarly conversations, and most biomedical journals include letters as publications that are indexed in Medline and related indices. With the advent of artificial intelligence (AI), and its ability to rapidly summarize text and compose “human-sounding” manuscripts, it is now possible to generate many letters by systematically combing the literature for new papers and having AI critique the text and compose a letter to the editor. Anecdotally, concern about this phenomenon has been a topic of discussion among journal editors, and a recent story in The New York Times,1 along with a study published on a preprint server,2 has highlighted the scope of the issue.
At PRiMER, the editorial team encountered such an instance in the summer of 2025 and believes it is important to report it to the field in advance of what may be an emerging problem. As a group, the editorial management team of PRiMER (the editor in chief [EiC], associate editors [AEs], and the production manager) wishes to share our recent experience with an AI-generated letter.
A letter was submitted commenting on a paper published a few weeks prior, and it immediately caught the attention of the EiC, as it was critical and incisive beyond what is normally seen following peer review, editorial review, and publication. While many letters draw out a specific issue or add a perspective, this submission read more like a critical appraisal exercise and commented on study limitations that were already noted in the original publication. The EiC then checked the manuscript for AI content using AI detectors.3–5 The authors had included an attestation statement that they had not used generative AI tools to create the content (as is required at PRiMER6). Nevertheless, three different AI detection tools estimated the submitted letter to be between 46% and 100% AI-generated. Upon subsequent investigation and discussion within the editorial management team, we found that the authors’ ORCID profiles contained nearly 100 highly critical letters to the editor across a broad range of fields, disciplines, and research methods.
We are concerned that what we observed is the start of a troubling new issue in scholarly editing and publishing: to generate large numbers of publications to add to their curricula vitae (CVs), some authors are using AI tools to produce both critical assessments and the letters based on them. This approach potentially enables an author to publish large numbers of letters in a short period of time.
Why is this important? Undoubtedly, AI allows authors to cut some corners in writing, and perhaps to save some time. However, based on what we observed, we have several concerns:
- AI-generated critical letters may call out flaws in a particular paper, but as most editors and scholars understand, all papers are flawed. A careful critique by a human scholar that highlights study flaws not immediately apparent or disclosed in a statement of limitations is a valuable addition to the literature. An AI-generated critique that ignores context, and that is unchecked by the expertise of the “author” group submitting the letter, may be incorrect; it risks being, at best, a useless addition to the literature and, at worst, a misleading or unjustified critique that undermines readers’ trust in the original publication.
- Large numbers of letters that slip relatively easily past editorial desks, perhaps with less scrutiny than an original research article receives, crowd an already high volume of scientific information in the indices that chronicle the literature. Allowing a flood of AI-generated, easily published, poorly monitored letters increases the noise and erodes the signal-to-noise ratio of any literature search.
- The ability to rapidly generate large numbers of potentially meaningless or erroneous letters, which then endure in the scholarly literature for generations to come, also degrades the value of publication. While many debate the merits of gauging scholars by various publication metrics,7 peer-reviewed publication is nevertheless a de facto part of academic culture.7,8 A proliferation of AI-generated letters potentially makes the process of peer review and publication less meaningful for readers.
While legitimate criticism of published literature is vital to scientific discourse, the ability to rapidly generate critical letters to the editor without human scholarly expertise, potentially for the sole purpose of filling CVs, may pose a challenge for journal editors and for disciplines as a whole. We urge our fellow editors and scholarly publications to be alert to this new, challenging use of AI.
