RadOnc Publishing Is A Disaster
Why radiation oncologists should spend more time asking and answering questions that have relevance to cancer patients.
This is my first post on Dollars and Sense, and it’s something I’ve thought a lot about over the last five years. I love our specialty, but I find that the research has changed for the worse.
In radiation oncology today, the question “Why don’t you write that up?” often leads to papers that serve little purpose beyond padding CVs. Academic publishing has become an exercise in quantity over quality, and it’s time to ask if we’re truly advancing patient care - or just playing the game. When asked to do this, you may want to respond: “Why should I write this up? Will it help patients?”
Getting something published in a journal used to mean something. If you review older journals, you’ll see that the articles were much more meaningful. There are many seminal papers in the Red Journal, the flagship journal in radiation oncology, including randomized trials like RTOG 9003 (altered fractionation for head and neck cancer) and EORTC 22845 (postoperative RT for low-grade gliomas). There were retrospective pathologic studies that identified risk factors, and studies that allowed us to fine-tune our OAR constraints to reduce radiation pneumonitis. These are not isolated examples - back in the 1980s and 1990s, this was common in the Red Journal. It was, in fact, what a flagship journal should be.
But, for many reasons that others have described - computers and word processors, the internet as a whole, big data, the growth of our numbers, the competitive nature of our specialty, having a quantitative way to evaluate students, residents, and faculty - the number of publications has skyrocketed. And let me be clear: the growth is not because of an abundance of high-quality publications. This is inflation with an academic flavor, with each publication worth less and less.
Here are the “Big 4” types of publications we see now that add almost no value to the medical community at large.
Demographic / identity / agenda-based research - this will not be a rant about DEI. Demographic studies are essential when they uncover actionable disparities or guide systemic change, but we should critically evaluate whether each study advances those goals or simply reaffirms known inequities without offering solutions. Studies concluding with vague calls for “increased awareness” fail to meet the high bar we should set for impactful research. DEI itself is not the problem. Though it is controversial, I remain a cautious supporter of DEI efforts; however, the majority of these academic publications are a misuse of noble ideas that simply do not facilitate the goal of improving health care outcomes - which is what the goal of all physicians should be. Here is an example about who treats what sites. Nothing will come from this - no department will make changes, no patient’s care will be improved. I am an environmentalist, but I am a humanist and oncologist first. If we publish articles like this, we have to consider what the goals are. Carbon neutrality is laudable, but curing cancer is pretty damn important. If an article does not have the goal of improving health outcomes in cancer patients, physicians should not be spending their limited time producing it. Listing other examples would occupy the majority of this post, so I will leave it to the good reader to confirm my findings. Except one more - I can’t help myself. This study surveyed 26 people and asked them “how DEI could be improved within RO departments by creating a more inclusive organizational culture.” Goodness gracious. This made the Red Journal.
Big Dataset retrospective research. Retrospective research, done well, undoubtedly shaped and transformed medical care - identifying unexpected toxicities or real-world disparities, for instance. This was particularly crucial in radiation oncology, where there were so many unknowns; we needed a framework for turning this information into clinical trials. A hypothesis was generated, a retrospective analysis helped affirm or negate that hypothesis, and then a prospective trial could be run. The outcome would help us determine the best course forward. What happens now is “p-hacking,” where researchers mine datasets for statistically significant results without meaningful context. If you look at enough variables, you will find something with p < 0.05. No hypothesis is chosen - “investigators” simply run regressions, find some positives, and write a paper. These findings almost never lead to a prospective trial or a change in practice. They are simply unserious, and exist to boost CVs. No, friends, sending EPIC messages does not improve survival; take the time to think about why such a spurious finding may occur. Take, for example, this study that promotes ultrasound guidance for skin cancer RT. On closer inspection, it’s less a scientific breakthrough and more an industry-sponsored push to justify overutilization. These articles undermine academic integrity and inflate CVs without advancing care. And I’m not just picking on random people - my friend published a SEER database analysis on T3N0M0 breast cancer and PMRT. Does PMRT really improve OS by 10-15%? I hope we are not telling patients this, and I hope it does not guide your discussions with them (T3N0 patients were included in the original PMRT studies).
“Water is Wet” articles. These are articles that tell you something we already know. Many are familiar with the tongue-in-cheek article about the value of parachutes when jumping out of planes. For example, if we have multiple randomized trials showing that breast IMRT is superior to non-IMRT, then multiple retrospective / real-world data publications showing the same thing do not add to our understanding of the world around us. If we already know that poor people have worse health outcomes, then disaggregating a poorer subgroup from a larger group and showing that they have worse health outcomes does not add any value. When a cost-of-care study compares a longer fractionation scheme to a shorter one in a fee-for-service system, showing that the longer scheme is more expensive adds no value; more expensive things are more expensive whether you write it up or not. These findings, while intuitive, don’t contribute new insights or actionable changes in practice. Investigators recently noted that homeless patients have worse outcomes - did money and time have to be spent to figure this out? That time and labor could have been spent actually caring for this vulnerable population. While replication is critical to science, repeatedly confirming what is already well established yields diminishing returns. Journals must balance the need for validation with the imperative to publish new and actionable insights.
“Soft publications.” These are non-scientific articles that make up a significant proportion of the articles in the lower-tier radiation oncology journals. An academic journal is not a good place to publish about “ringing the bell” after radiation treatment. But if you want to write about this, I’m happy to host a guest post on the topic, which I do find interesting but firmly believe is not science. Here is one about “sacred moments” in medicine. Pretty words. Not science. Take, for example, a study on marriage improving cancer outcomes. Is marriage beautiful? Absolutely. Is it actionable? Not really. What are we supposed to do - require patients to get married? Prescribe more TLC for singles? What if they are happily divorced or asexual? Don’t get me wrong - I love narrative oncology, but these pieces can be disseminated on other platforms rather than sit in the middle of a peer-reviewed science journal. I frequently wonder how much time the authors spent going through a “peer review” process and how much money was spent to get these into print. The labor hours plus the costs are a drain on everyone involved and do not improve patient outcomes.
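A quick aside for the quantitatively inclined: the “look at enough variables and something will hit p < 0.05” problem is easy to demonstrate. Here is a toy Python simulation (my own sketch, not drawn from any of the studies above) that tests many purely random “variables” against two groups drawn from the same distribution, so every “significant” result is a false positive by construction - and roughly 5% of them come up significant anyway.

```python
import math
import random

def two_sample_p_value(a, b):
    """Two-sided p-value from a simple two-sample z-test (normal approximation)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    z = abs(mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

def count_spurious_hits(n_variables=100, n_per_group=100, alpha=0.05, seed=0):
    """Mine pure noise: both 'groups' are sampled from the SAME distribution,
    so any p < alpha result is a false positive."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_variables):
        group_a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        group_b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        if two_sample_p_value(group_a, group_b) < alpha:
            hits += 1
    return hits
```

Run `count_spurious_hits(n_variables=1000)` and you will typically see on the order of fifty “significant” findings - each one a publishable-looking result about nothing. This is what an unfocused regression sweep over a big registry dataset does, just with less honest labeling.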
Now, I understand that to many this may seem like a personal attack, as many readers may find themselves guilty of publishing one of the Big 4 types of articles. My own publication list is not only unremarkable but also uninspiring; I am also guilty of producing retrospective drivel. I have tried to be as charitable as I can, so I do not blame the authors. Who is to blame? 1) Elsevier and others that have monetized this into a multi-billion-dollar business, using the free labor of researchers and reviewers to get very, very rich. 2) Academic departments that use publications as a metric for evaluation and advancement. 3) Competitive specialties that use them as a benchmark of application quality. 4) Reviewers who do not push back hard on the scientific validity of these papers. 5) The authors themselves (fine, I lied). I don’t entirely blame them - many are simply responding to the signals they’ve been given by mentors, departments, and the broader academic system. When productivity is defined by the number of papers, even brilliant minds can find themselves chasing metrics rather than meaning. The challenge today is to shift this focus so that independent, impactful research is celebrated over superficial volume.
So, I’ve spent time discussing what shouldn’t be published. In a future post, I will talk about the types of questions we should be asking and answering.
And, if you enjoyed this, you may say, “Hey, Simul, why don’t you write this up and submit it to the Red Journal?” Smh. I most certainly will not. 1) It will not improve patient care. 2) It will get desk rejected and I will feel sad.
Over the course of this Substack, I’ll occasionally spend time identifying articles that fall into one of these categories, showing how to avoid the trap of being asked to write up things you have no interest in writing up, and how to come up with better ideas. Hint: always start with a hypothesis and ask what would change if it were confirmed or negated. That will serve as your North Star on whether to write something up.
Love you all,
Sim
Interesting post, Simul. I tend to agree with a lot of what you say, although, as the deputy editor in chief of ACRO’s new online journal, CURiE (Contemporary Updates: Radiotherapy Investigations & Evidence, on the Cureus platform), I think there are also challenges in strictly avoiding the four least valuable manuscripts as you have defined them. As always, and as I’m sure you’d agree, such things are nuanced. So while I’m sure we will accept some of the “forbidden four,” I assure you our approach with authors is to ensure that our manuscripts, when not publication-worthy when received, are guided to make them more relevant to the everyman (or everyperson) practicing radiation oncologist. At the same time we’ve already begun to publish some things that are off the beaten path, but timely and highly relevant nonetheless. For instance (pardon the blatant self-promotion!), we recently published details of a clinical pathway for the radiotherapy management of oligometastatic disease (https://www.cureus.com/articles/290108-acropath-oligometastases-the-american-college-of-radiation-oncology-clinical-pathway#!/). Perhaps a better example of our publication’s diversity and relevance is an article by Thomas et al, which is an editorial emphasizing the need to remove high-cost supplies and equipment from the Medicare Physician Fee Schedule (https://www.cureus.com/articles/282828-the-physician-fee-schedule-was-not-built-for-high-cost-supplies-and-equipment#!/). So as you are a radiation oncologist who cares about the quality and relevance of published works, I want to offer you the ability to help ensure exactly that. I believe you would be a very helpful part of the CURiE editorial team, and I’d very cordially invite you to consider joining us! Let me know if you’re interested, and I’ll gladly put your name into consideration at the first opportunity to do so. Thanks for this pertinent, relevant post!