
The money behind academic publishing


    The academic publishing industry earns high profits and shapes how we undertake medical research. With the increasing demand for free access to articles, academic publishing is now changing, but is it changing for the better?

    Illustration: Helene Brox

    Most doctors relate to the pharmaceutical industry with a healthy dose of scepticism. Academic publications are also something that all doctors and researchers deal with on a daily basis, but knowledge of and scepticism towards the academic publishing industry appear to be less widespread. The topic is increasingly relevant, since publication practices have changed radically over recent decades. Like research funders in 14 other countries, the Research Council of Norway has approved Plan S. Under this plan, all research supported through funding announced by the Research Council of Norway after 2021 must be published in open-access academic journals (1–3). How will this change academic publishing, and is the industry really willing to change? My objective with this article is to draw attention to the existing problems of academic publishing and to the new problems created by open access and Plan S.

    Huge profits


    The academic publishing industry has a large financial turnover. Its worldwide sales amount to more than USD 19 billion, which positions it in size between the music industry and the film industry (4). The market is largely dominated by five large publishing houses: Elsevier, Wiley-Blackwell, Taylor & Francis, Springer Nature and SAGE, which between them control more than 50 % of the market. Elsevier is the largest, with approximately 16 % of the total market and more than 3000 academic journals. As an industry, these publishing houses are uniquely profitable, generating large net profits. Elsevier has a profit margin approaching 40 %, higher than that of companies such as Microsoft, Google and Coca-Cola, and the trend is pointing upwards (4–6).

    These huge profits are not altogether surprising. The reason can be illustrated by a comparison with a traditional newspaper, whose profit margin tends to be in the range of 10–15 % (4). A newspaper incurs wage costs for its journalists, editors and graphic designers, as well as expenses for research, fact-checking, printing and distribution. All of this must be paid for through sales and advertising. Academic journals have cleverly managed to turn this situation on its head. The production of content is paid for by research funds, which cover both the salaries of the researchers and the substantial costs of undertaking the research itself. My own experience is that most academic editors work for merely symbolic pay, and quality control and fact-checking are done through peer review, which is unpaid voluntary work. Because nearly all access is now digital, not even printing needs to represent a cost. What remains is essentially only the cost of the graphic layout of the article.


    It is interesting to note how all this is funded. As in many other countries, most research funding in Norway comes from the government. The government thus funds all stages of research production, but must then pay again to access the research results. And such access does not come cheap. From the publishing houses referred to above, a single article costs USD 30–50. Norwegian public institutions pay approximately NOK 330 million for subscriptions, and the figure for Europe as a whole has been estimated at EUR 420 million (7, 8). In view of the low costs incurred by the publishing houses, these sums are completely unreasonable (4, 9).

    Impact factor – quality indicator or marketing ploy?


    To earn money, the publishing houses depend on selling a product, and how well this product sells depends on its quality. Traditionally, the quality of academic journals has been measured in terms of their 'impact factor', a measure that journals trumpet loudly to attract good studies and more subscribers. The impact factor is calculated from the number of citations that the journal's articles receive over a two-year period.
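    In simplified terms, a journal's impact factor for a given year is the average number of citations received that year by the articles the journal published in the two preceding years. Using 2020 purely as an illustrative year, this can be written as:

    \[
    \text{IF}_{2020} = \frac{\text{citations received in 2020 by articles published in 2018 and 2019}}{\text{number of citable articles published in 2018 and 2019}}
    \]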

    This raises a number of fundamental problems (10). First, it assumes that citation counts equate to quality, which is a major assumption. Citation rates also vary considerably between disciplines. For example, the average impact factor of journals in clinical endocrinology is more than twice that of journals in surgery, even though one discipline cannot be weighted as more important than the other. Another problem is that self-citations, whether by the authors themselves or by the journal, are included.

    Through the so-called DORA declaration, many have chosen to disregard the impact factor completely when evaluating research quality, and Norwegian research institutions have endorsed this (11). The idea is good, but it leaves us without any method for assessing the quality of research, and failing to recognise that we need an objective quality assessment is naive. Although the impact factor has obvious flaws, it has practical value, and so far none of its critics has proposed a better alternative. The problem is not the impact factor per se, but how it is interpreted and used.

    How does this affect research?


    For a journal, the impact factor is crucial for financial success, and it is therefore important that its articles are frequently cited. This influences what gets published, and what is frequently cited is unfortunately not the same as what benefits research. For example, negative studies and replication studies that test the results of previously published research are crucial for further development. Such studies have less news value and lower citation potential, which means less opportunity for publication in high-ranking journals.


    As professionals, we play along, and to some extent we need to in order to survive as researchers. Few of us can afford to pursue negative findings, positive results are often published quickly and uncritically, and too few replication studies are undertaken. This has consequences: in a survey published in Nature, more than 70 % of the medical and biological researchers who responded reported having failed to reproduce other researchers' results (12). The most important reasons cited were selective publication of data, pressure to publish, and poor statistical and analytical methods.

    Open access – one step forwards or two steps back?


    Open access emerged in the early 2000s and has been promoted as the solution to these problems of access, funding and distribution of research results. With open access, articles are freely available and the publishing costs are covered by the researcher. Publication is then not limited by the impact factor to the same extent as before, and it becomes easier to get negative studies and replication studies published. Purely open-access journals exist, but some traditional subscription journals also offer open access for an extra charge.

    The flipside is that open access has paved the way for a completely new way of earning a profit. It also means that a journal does not necessarily have any financial incentive to ensure proper peer review or quality control – or to pay attention to its impact factor at all – as long as it can make the researchers pay. This type of publishing also comes at a cost: in a purely open-access journal, the price of publishing an article is often in the range of USD 1500–3000, and in traditional subscription-based journals it can reach USD 6000 (5).

    In 2013, John Bohannon published the article 'Who's afraid of peer review?', which pointed to the core problem (13). He fabricated academic articles whose content was devoid of scientific meaning and contained obvious errors and omissions, and submitted them to more than 300 open-access journals. More than 150 of the journals accepted the paper for publication with virtually no sign of quality control or peer review. Worryingly, half of these journals were registered in the Directory of Open Access Journals (DOAJ), a registry whose objective is to list quality-assured open-access journals and distinguish them from unscrupulous operators, so-called 'predatory journals' (14).

    To make research more available, publishing in open-access journals is now encouraged in Norway, and many Norwegian universities and hospitals have established funding schemes to cover the costs. The main requirement is that the journal must be indexed in the Norwegian Register for Scientific Journals, Series and Publishers, which cooperates closely with and draws on the Directory of Open Access Journals (15). If we are to believe Bohannon, there are grounds to question whether this quality assessment is sufficient to justify the use of public funds on such a scale. Seen in light of researchers' perhaps exaggerated belief in their own results and their eagerness to publish widely, open access may result in the government funding research publications of limited academic value that undergo inadequate quality control and are read by only a few.

    Plan S – the solution or the emperor's new clothes?


    The Journal of the Norwegian Medical Association has previously described Plan S, which will entail radical changes to our publishing practices (1, 2). The Research Council of Norway has endorsed Plan S, the intention of which is that all publicly funded research should be published in open-access channels. The reform has now started with the establishment of collective agreements with the publishers. For example, Norway recently entered into an agreement with Elsevier that ensures open access and publication in their journals (16). However, many journals were excluded from this agreement, some of them highly ranked. Renowned Norwegian researchers have criticised Plan S, not the initiative as such, but its implementation, which may permanently exclude researchers from publishing in relevant channels (17).


    Although access will now be open, there is no evidence to suggest that the price paid by the government will in fact decline. In my opinion, this goal should be equally or even more important. Many have criticised the new agreements and Plan S precisely for their lack of focus on cost reduction (3), and it is naive to believe that Elsevier and the others will give up their golden goose without a fight. Despite increasing pressure on the industry and public-sector demands for open access in recent years, the publishing houses' profit margins continue to grow (5, 6). Nor does Plan S in its present form offer a good solution to the problems referred to above: that open access may increase the quantity, lower the quality and entail insufficient peer review. Sweden, Denmark and the United States have already rejected Plan S, fully or in part, because of these pitfalls.

    What can be done?


    The most important thing we, as users of the system, can do is to be aware of these realities and treat the publishing houses, the journals and the academic articles we read with healthy scepticism. With increasing awareness, the academic communities can exert pressure on the industry and the authorities; this has already led to amendments to Plan S (18). Editors and peer reviewers should work to standardise the requirements for the reporting of research and the publication of negative results, for the use of statistics and methods, and for access to source data. Plan S must be improved to prevent it from exacerbating the situation or silencing researchers, while ensuring that the strategy demonstrably reduces the public funding of the publishing houses' profits.

    It is also unfortunate that we are now establishing a system that places less emphasis on objective quality control of research. Although uncritical use of the impact factor is not the solution, objective quality criteria are required. Rather than abolishing the impact factor immediately, the focus should be on replacing it with better and fairer quality assessments.
