The Toxic Culture of Rejection in Computer Science
Unlike most disciplines, in Computer Science, conference publications dominate over journals, and program committees (PCs) carry out the bulk of the peer reviewing. Serving on a PC is yeoman’s work, and the community owes its members a debt of gratitude. However, I believe that a toxic culture has emerged. This blog is a call for PCs to change their priorities.
We have come to treat a low acceptance rate as a quality metric for conferences. This feeds a culture of shooting each other down rather than growing and nurturing a community. The goal of a PC has become to destroy rather than to develop. Many of our venues are proud of their 10% acceptance rates. Are such low acceptance rates justified?
The conferences I have served on do not attract many submissions from charlatans. Most papers we reject are serious efforts, with a great deal of background work, often conducted by our most promising young minds. Just picking from a recent program committee I served on, we rejected papers that expert reviewers found to be “well-written … on an interesting topic that is very relevant to [the conference],” with high “interest of the approach,” and a “remarkable engineering effort.” But since we are conditioned to find reasons to reject rather than reasons to accept, those papers are thrown out due to a single dominating negative review.
One favorite reason to reject is “lack of novelty.” We certainly want papers that inform rather than just repeat. But there are several problems with the novelty criterion and how it is applied. First, too many of us use a notion of science, outdated since Thomas Kuhn, which views progress of a discipline as the accretion of new facts. The purpose of publication becomes to archive facts rather than to illuminate, educate, and inform the community.
I have written quite a bit about how technology develops as an evolutionary process. In a Darwinian evolutionary process, most mutations (novelty) are deleterious and do not persist in an ecosystem. The same is true of technology. Most truly new ideas are bad ones that will not survive. Why, then, do we hold novelty as our highest goal? In contrast, most good ideas get reinvented multiple times before they catch on. They need the reinforcement of repetition to become established in a culture. Our process kills them off instead.
Worse, we reject nearly all systems papers because building any system requires integrating a lot of prior art, and all that prior art looks familiar to the reviewers. Reviewers can’t point to a delta, an identifiable fact that they now hold in their heads that they did not hold before. But they don’t realize that such papers can teach us how to pursue excellence in engineering and design, how to critically evaluate alternative approaches, and how to design experiments that evaluate those approaches. Such learning defies any effort to single out the novelty, so we reject.
The emphasis on novelty has deep roots in academic publishing. It used to be that publishing was expensive, and any repetition came at the expense of other things that could have been published. Today, however, publishing is essentially free.
In addition, intellectual disciplines used to be smaller and used to share a canon, a collection of ideas known to all worthy practitioners. We now instead have a discipline where there is no shared canon. We accept papers that, by accident, appear novel to the three random PC members who happen to be unfamiliar with the prior art.
A more defensible reason for an emphasis on novelty is that we don’t want to waste the time of our readers and conference attendees by telling them something they already know. However, they don’t already know, and with most novel papers, they don’t even want to know. Consider how different conferences would be if we accepted interesting papers instead of novel ones. A paper that reinforces a prior idea may be more valuable than a paper that introduces a new idea. We should be focused on learning as a community, not on the accretion of an archive of incremental facts.
A second favorite criterion for rejection is obviousness. However, most good ideas are obvious in retrospect, and humans are very bad at realizing that they have just learned something when presented with such an idea. We all react with “oh yeah, of course, that’s true,” not realizing that it wasn’t true to us an hour ago. Instead, we favor the esoteric, the difficult to read, and the obscure. If a paper is easy to read, it is deemed obvious.
Our double-blind reviews combined with high rejection rates put us in a morally indefensible position. Even if we don’t realize it, we often learn from the papers we reject. In fact, the bidding process used in many conferences encourages PC members to pick papers to review that they expect to learn from. But the reviewers have no way to give proper attribution for what they learn. Without malice, they will go on to use what they learned. It’s impossible not to. We have institutionalized an unethical practice.
In PC discussions, we ask for “champions” to fight for a paper before we will accept it. But being a champion is hard work, and if no champion steps forward, the paper gets rejected. This is wrong. Instead, we should be obligated to accept any paper that a reviewer learns from, whether they champion it or not.
Another problem is the built-in conflict of interest that arises from the combination of the low acceptance rates with the fact that many (if not most) program committee members also have papers in the pool being considered. They have an extra incentive to reject (or to not champion) papers to improve the chances of their own papers in the same pool.
Another problem with the double-blind review process is that it creates a position of power with no accountability. Reviewers who will not be identified need not be so sure of their statements. If you have published papers, you have certainly seen criticisms that are arrogant and wrong. But because our papers go to conferences, not journals, your opportunity to respond is limited. Some conferences have a “rebuttal” phase of the review process, but, in my experience, this is a sham, and serious dialogue rarely emerges.
Our culture of rejection has serious detrimental effects on our community. I have seen promising young researchers in our field get rejection after rejection. If they stubbornly persist, their results appear years after their shine has dulled. Sometimes, I see them give up and leave academia or leave the field altogether. This is a travesty.
Our culture rewards stubbornness and persistence, not quality, and it is self-reinforcing. After themselves surviving multiple rounds of rejection, young researchers eventually become PC members, and they see no reason why their own younger peers shouldn’t suffer similarly. Is this selectivity or hazing?
Our low acceptance rates exacerbate the flood of paper submissions that overwhelms PCs. The “publish or perish” culture of academia is full of strategies to maximize the number of publications rather than their quality. Combine this force with low acceptance rates, and the problems multiply (literally). They would not multiply if each rejected paper simply died, but rejected papers usually do not die. They get slightly revised and resubmitted until they are accepted. As a result, it is not uncommon for a PC member to be asked to review so many papers within a few days that the task simply cannot be done well. Rejecting most of those papers just leads to more papers for the next committee to review.
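To see why the load multiplies rather than merely adds, here is a back-of-envelope sketch (mine, not part of the original argument) under simplifying assumptions: every rejected paper is resubmitted until it is eventually accepted, acceptance decisions are independent across rounds, and each submission receives three reviews. Under those assumptions, a 10% acceptance rate consumes roughly thirty reviews per published paper, compared with about eight at a 40% rate.

```python
# Back-of-envelope model of community-wide review load when rejected
# papers are resubmitted until accepted. Illustrative assumptions only:
# every rejection leads to a resubmission, rounds are independent, and
# each submission gets the same number of reviews.

def reviews_per_published_paper(acceptance_rate: float,
                                reviews_per_submission: int = 3) -> float:
    """Expected number of reviews a paper consumes before it is accepted.

    With independent rounds, the number of submissions is geometric with
    mean 1 / acceptance_rate, so review load scales inversely with the
    acceptance rate.
    """
    expected_submissions = 1.0 / acceptance_rate
    return expected_submissions * reviews_per_submission

for rate in (0.40, 0.25, 0.10):
    print(f"acceptance rate {rate:.0%}: "
          f"{reviews_per_published_paper(rate):.0f} reviews per published paper")
```

The model is crude, but the direction is clear: halving the acceptance rate roughly doubles the reviewing burden the community carries for each paper that ultimately appears.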
Culture is hard to change. We need a coordinated effort to seek papers that are interesting, instructive, useful, or impactful in our community. This could certainly include papers introducing brand-new epsilons, but it is not restricted to such papers. We should publish only high-quality papers, but, today, we reject many high-quality papers.
Other communities have reviewing practices that mitigate some problems. These include disclosure of the reviewers’ names upon acceptance or rejection of a paper; archiving each submission, the reviews, and the authors’ rebuttals; and publishing reviews and rebuttals along with papers. This latter policy, for example, could enable publishing papers that have minor flaws without compromising the quality of the venue. The discussion of the possible flaws would become part of the archive.
If you serve on a PC, you should assume the anonymized authors are your colleagues, PhD students, and friends, not strangers. The double-blind review process has eliminated natural biases that may have influenced our reviews in the past, including tendencies to reject based on gender or to accept from the “better” institutions. But we have replaced these biases with an across-the-board tendency to reject.
You should also use “lack of novelty” as a criterion with caution. Almost nothing of value is ever truly new, and many good ideas need repeated reinforcement to catch on. Let’s focus on what the paper contributes to the community, not to the archive of facts.
I owe thanks for concrete suggestions on an earlier draft to Jan Beutel, Enrico Bini, Alain Girault, Marten Lohstroh, and Frank Mueller. Nevertheless, the author takes full responsibility for all opinions expressed here.
Author: Edward Lee is Professor of the Graduate School and Robert S. Pepper Distinguished Professor Emeritus in Electrical Engineering and Computer Sciences (EECS) at the University of California at Berkeley, where he has been on the faculty since 1986. He is the author of seven books, some with several editions, including two for a general audience, and hundreds of papers and technical reports. Lee has delivered more than 200 keynote and other invited talks at venues worldwide and has graduated 40 PhD students. Professor Lee’s research group studies cyber-physical systems, which integrate physical dynamics with software and networks. His focus is on the use of deterministic models as a central part of the engineering toolkit for such systems. He is the director of iCyPhy, the Berkeley Industrial Cyber-Physical Systems Research Center. From 2005 to 2008, he served as Chair of the EE Division and then Chair of the EECS Department at UC Berkeley. He has led the development of several influential open-source software packages, notably Ptolemy and Lingua Franca.
Lee received his BS degree in 1979 from Yale University, with a double major in Computer Science and Engineering and Applied Science, an SM degree in EECS from MIT in 1981, and a Ph.D. in EECS from UC Berkeley in 1986. From 1979 to 1982 he was a member of technical staff at Bell Labs in Holmdel, New Jersey, in the Advanced Data Communications Laboratory. He is a co-founder of BDTI, Inc., where he is currently a Senior Technical Advisor, and has consulted for a number of other companies.
Lee is a Fellow of the IEEE, was an NSF Presidential Young Investigator, won the 1997 Frederick Emmons Terman Award for Engineering Education, received the 2016 Outstanding Technical Achievement and Leadership Award from the IEEE Technical Committee on Real-Time Systems (TCRTS), the 2018 Berkeley Citation, the 2019 IEEE Technical Committee on Cyber-Physical Systems (TCCPS) Technical Achievement Award, and the 2022 European Design and Automation Association (EDAA) Achievement Award.
Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.