

The Downside of Scale for Journal Publishers: Quality Control and Filtration

Fast, good, and cheap. Pick any two. Image via Cosmocatalano.

Scale remains a defining factor in the current age of scholarly publishing. Economies of scale are driving the consolidation of the industry under a few large players and pushing toward an end to the small, independent publisher. When we think about scale, we tend to think about big, commercial publishers gathering together thousands of journals, but there are other ways to achieve scale in scholarly publishing. Megajournals (and entire publishing houses) are tapping into economies of scale by decentralizing the editorial process. The benefits of this decentralization, however, come with costs, at least in terms of quality control and filtration.

The economic benefits of scale for publishers are obvious: you pay lower prices for services, materials, and personnel when you buy in bulk. Consolidation is the state of the market, and the big publishers keep getting bigger, benefiting more and more from the resulting scale. But scale also tends to exacerbate the complexity of journal publishing platforms and processes. We saw a good example of this complexity last year when a society-owned journal moved from publishing with Wiley to publishing with Elsevier, and some articles that were meant to be open access were not immediately made so. When even a single journal moves to a new platform, there are countless moving parts to manage. At this level of scale, things fall through the cracks, mistakes get made, and, one hopes, they eventually get corrected.

There are, however, other approaches to scale beyond just being a really big publisher with lots and lots of titles. Some of the more interesting experiments in the journals market have been geared toward decentralization: seeking to benefit from scale by spreading the editorial work of a journal (or an entire portfolio of journals) broadly. PLOS and Frontiers have both had success with these approaches, but both have also recently had setbacks that expose some of the cracks inherent in ceding editorial control to thousands of independent editors.

PLOS ONE is, without a doubt, the greatest success story in journal publishing of the current century. What started as an experiment with a new approach to peer review turned out to fill an unserved market need. It has been successful enough to carry the entire PLOS program into profit, freeing it from relying on charitable donations. But publishing 30,000 articles (and, with rejections, fielding at least another 10,000) presents a major editorial challenge. How do you handle that many articles? PLOS, at least according to its Form 990 declarations, has approached this problem through an enormous amount of outsourcing. Most strikingly, the outsourced functions include hiring third-party managing editors through firms that offer such services. These outsourced managing editors are responsible for making sure submissions are complete and for coordinating peer review.

But that only covers the administrative parts of article handling; what about editorial decision-making? PLOS ONE has some 6,100 editors. Rather than funneling everything through an Editor-in-Chief, the peer review and decision-making process is spread broadly. These editors “oversee the peer review process for the journal, including evaluating submissions, selecting reviewers and assessing their comments, and making editorial decisions.” But even with that many editors on hand, a journal as broad as PLOS ONE still runs into occasional issues with editorial expertise. It is harder and harder for everyone to find good peer reviewers in a timely manner, and the sheer bulk of PLOS ONE sometimes leads to papers being handled by editors without expertise in the research covered.

Without a central point where “the buck stops,” the quality of the review process can be quite variable. Mistakes happen, such as the recently published “Creator” paper, whose abstract included the sentence, “Hand coordination should indicate the mystery of the Creator’s invention.” The authors claimed this was a mistranslation (they are not native English speakers), and the paper was subsequently retracted.

Let’s be clear: this was not a typical paper, and the vast majority of what PLOS ONE publishes is rigorously reviewed to meet the journal’s standards. But without the Sauronic Eye of an Editor-in-Chief to enforce standards and provide quality control, you’re going to run into papers where someone took a shortcut, didn’t quite do the work, or had an agenda beyond the journal’s stated vision for publication. There is no consistent level of quality control because there are 6,100 different sets of standards in use and no central point where they come together.

Frontiers, the open access publisher, has seen its own share of controversy lately. It was recently declared a “predatory publisher” by Jeffrey Beall, and journalist Leonid Schneider has written extensively about its various issues. Like PLOS ONE, Frontiers has recently had a nonsense paper of its own published and retracted, and this is part of a larger pattern: the publisher’s 55,000 editors (covering 55 journals) vary enormously in how well they uphold the stated standards for publication.

I don’t think the term “predatory” is accurate for Frontiers, which continues to run some superb journals. The problem is not that Frontiers is making a deliberate attempt to deceive, but rather that its institutional structure makes quality control very difficult. The editorial strategy chosen by Frontiers is oriented toward crowdsourcing and away from careful curation and scrutiny. When you deal with such large numbers, you get into bell curves and averages: some of the 55,000 editors are very good at their jobs, others not so much. As with PLOS ONE, a broad net is cast for editorial talent, and the resulting performance is wildly inconsistent.

Crowdsourced editorial management is a deliberate strategy: it cuts costs and likely speeds the review process. The gospel of digital disruption has supposedly taught us that the “good enough” product usually wins over the high-quality (but more expensive to produce and higher-priced) product. The question that must be asked, then, is whether these decentralized approaches are “good enough” for the research literature. Is the success-to-failure ratio acceptable? Given the bulk of the journals in question, do we even have an accurate picture of that ratio? The Creator paper sat around for two months before a prominent blogger happened to notice it and fired up the internet’s outrage machine. What other time bombs are lurking in the enormous archives of these publications?

All journals make mistakes and have to issue corrections and retractions, to be sure, but are we willing to accept mistakes that stem from a fundamental lack of oversight, with no one really checking to see that an article was indeed properly reviewed (or reviewed at all)? From a psychological perspective, it sometimes doesn’t even matter if your ratio of quality to mistakes is 5,000 to 1, if that one case is egregious. The PLOS ONE editor who let through a sexist peer review comment suggesting that a paper could have benefited from a male author made front-page headlines and genuinely harmed the journal’s brand. Put another way, it takes years of hard work to build a reputation for quality, but quality is a fragile attribute and can be destroyed quickly when something like #CreatorGate surfaces. One prominent researcher went so far as to declare the journal “a joke,” wiping out years of reputation building.

Validation is a key service that journals provide, and it is endangered by decentralization. Another key offering from journals is filtration:

…the reputations of journals are used as an indicator of the importance to a field of the work published therein. Some specialties hold dozens of journals—too many for anyone to possibly read. Over time, however, each field develops a hierarchy of titles…This hierarchy allows a researcher to keep track of the journals in her subspecialty, the top few journals in her field, and a very few generalist publications, thereby reasonably keeping up with the research that is relevant to her work.

Ask any researcher in any field and they can tell you which journals publish the best work most relevant to their own research. When faced with an enormous stack of reading, it’s really helpful to be able to prioritize, to know which papers to read first. A good Editor-in-Chief or Editorial Board sets a clear standard for quality and gives a journal its “personality,” which enables that sort of filtering. When you have thousands of independent editors, each following their own set of rules, the personality of the journal gets diluted, if not lost altogether, and the researcher loses a valuable tool.

Given the number of papers published through such decentralized approaches, there is clear market demand for the services these journals offer. But the furor that arises around blatant editorial errors makes it equally clear that such mistakes are unacceptable to the community. Editors have a solemn responsibility to strive for quality in all efforts, and a journal’s reputation rests on someone setting standards and consistently enforcing them. Turn that over to a crowd of editors and the resulting articles are likely to be all over the place. Does reputation still matter? Is this “good enough” for the scholarly literature?

About David Crotty

I am the Editorial Director, Journals Policy for Oxford University Press. I oversee journal policy and contribute to strategy across OUP’s journals program, drive technological innovation, serve as an information officer, and manage a suite of research society-owned journals. I was previously an Executive Editor with Cold Spring Harbor Laboratory Press, creating and editing new science books and journals, and was the Editor in Chief for Cold Spring Harbor Protocols. I received my Ph.D. in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing. I have been elected to the STM Association Board and serve on the interim Board of Directors for CHOR Inc., a not-for-profit public-private partnership to increase public access to research.

Discussion

2 thoughts on “The Downside of Scale for Journal Publishers: Quality Control and Filtration”

  1. I’m fascinated by the “Outrage Machine”. I wonder to what extent it has an effect in reality. 140 chars over a glass of wine is easy. I wonder how far it actually penetrates. Like the articles that lie there waiting to be read, I suspect that most folk will only perceive these papers dimly, if at all.

    I also wonder how one squares the approach of the megajournals (e.g., PLOS) with the whole reproducibility issue. If you crowdsource the standard setting (and it’s a perfectly valid approach!) then surely that acts in opposition to any attempts to get higher ‘reproducibility’ across a given field of study, because there will always be an outlet for that poor-quality paper.

    I expect that the phrase “Post Publication Peer Review” will show up as a solution to the issue. The problem is, that’s just another term for crowdsourced quality control, isn’t it?

    Posted by David Smith | Mar 22, 2016, 6:21 am
  2. David, you pose a reasonable question, but is the rate of documented serious errors/retractions that much higher than in traditional publishing? Even the most prestigious journals have their share of retractions. These journals publish a very large number of articles, so is there a higher error rate in the review process, or just more errors because so many articles are published? I don’t know the answer, but I think it is a fair question.

    Posted by David Solomon | Mar 22, 2016, 7:40 am
