Academic publishing and scholarly communications: Good reads, July 2017

Editage Insights | Jul 31, 2017

If the month of July seemed to whizz past while you were hard at work, we have you covered. This was an interesting month with several noteworthy happenings: an author exposed predatory journals’ workings by submitting a fake Star Wars paper, a team of researchers revealed how many articles Sci-Hub hosts, a group of scientists challenged P values, and more. Here is a curated list of articles that will keep you abreast of the goings-on in the scholarly publishing world this month. Happy reading!

1. A peek into the workings of predatory publishers: In an attempt to explore the workings of the predatory publishing world, the pseudonymous neuroscientist behind the popular blog Neuroskeptic tricked four journals into accepting a bogus manuscript. The author wanted to check whether ‘predatory’ journals would publish a paper that was obviously absurd. He explains that he “created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars [and] filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.” The author also included a line in the methods section stating, “The majority of the text of this paper was Rogeted [7].” Reference 7 cited an article on Rogeting and stated, “The majority of the text in the current paper was Rogeted from Wikipedia: https://en.wikipedia.org/wiki/Mitochondrion Apologies to the original authors of that page.” Four journals accepted the paper; one demanded a $360 fee from the author, while three others went ahead and published it. The author admits that even a five-minute screening would have been enough to see that the paper was fake and had no merit whatsoever. Reporting this sting, the author asserts that the exercise highlights the absence of any real peer review in predatory publishing.

2. Can Sci-Hub skew traditional publishing models? Data scientist Daniel Himmelstein at the University of Pennsylvania and colleagues published a preprint on PeerJ that assessed Sci-Hub’s repository. They report that Sci-Hub can provide access to more than two-thirds of all scholarly articles, and that more than 97% of Elsevier journal articles are available on the site for free download. In an interview, Himmelstein mentioned that his team first determined how many scholarly articles exist using data from Crossref, compiling a list of 81.6 million articles. They then compared this list against the papers available on Sci-Hub and noted that about 69% of these articles, largely from Elsevier and the American Chemical Society, are on the piracy site. Most of these are paywalled articles, Himmelstein stated. Interestingly, Sci-Hub is able to serve requested papers about 99% of the time. He added, “I think the larger picture of this study is that this is the beginning of the end for subscription scholarly publishing. I think it is at this point inevitable that the subscription model is going to fail and more open models will be necessitated. One motivation for doing the study is that I want to bring that eventuality into reality more quickly.”

3. Are P values to blame for the reproducibility crisis? Science is currently grappling with a reproducibility crisis, and the scholarly community is worried that the scientific literature is littered with unreliable results. A group of 72 prominent scientists has concluded that weak statistical evidence is a root cause of irreproducible results. The culprit seems to be P values, which are used to judge the significance of findings in many disciplines and to test the null hypothesis. The smaller the P value found for a set of results, the less likely it is that the results are due purely to chance. Results are deemed 'statistically significant' when this value is below 0.05. However, many scientists are concerned that the 0.05 threshold has let too many false positives into the literature. The problem has apparently been aggravated by a practice called P-hacking, in which researchers gather data without first creating a hypothesis to test and then look for patterns in the results that can be reported as statistically significant. In an interesting preprint, the researchers argue that the P value threshold should be lowered to 0.005 for the social and biomedical sciences. Daniel Benjamin, one of the paper’s co-lead authors, believes that claims with P values between 0.05 and 0.005 should be treated merely as “suggestive evidence” rather than established knowledge. However, reducing the threshold to 0.005 could be problematic, as it may increase the odds of false negatives, that is, concluding that effects do not exist when in fact they do. To counter this problem, the authors of the preprint suggest that researchers increase sample sizes by 70%, which would avoid inflating the false negative rate while still dramatically reducing the false positive rate. But this would be an expensive solution, one that only well-funded researchers would be able to afford.
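To see why the proposed threshold matters, here is a minimal, purely illustrative simulation (not taken from the preprint itself): when the null hypothesis is true, P values are uniformly distributed between 0 and 1, so the long-run false positive rate simply equals the significance threshold.

```python
import random

random.seed(42)
N = 100_000

# Under a true null hypothesis, P values are uniform on [0, 1].
null_pvals = [random.random() for _ in range(N)]

# The false positive rate is the fraction of null P values below the threshold.
fp_at_05 = sum(p < 0.05 for p in null_pvals) / N
fp_at_005 = sum(p < 0.005 for p in null_pvals) / N

print(f"False positive rate at P < 0.05:  {fp_at_05:.3%}")   # roughly 5%
print(f"False positive rate at P < 0.005: {fp_at_005:.3%}")  # roughly 0.5%
```

Moving the threshold from 0.05 to 0.005 cuts false positives roughly tenfold, which is precisely the benefit the preprint's authors weigh against the increased risk of false negatives and the larger sample sizes needed to offset them.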

4. A step toward rewarding transparent research: Tenure and promotion season is underway, and around this time of year, promotion committees at U.S. universities send portfolios of candidates to scholars at other institutions for independent assessment. However, advocates of OA and reproducibility worry that as long as tenure and promotion depend exclusively on publication volume, prestige, and success in obtaining grants, researchers have little incentive to publish OA. Institutions, therefore, need to reward scholars for conducting open, rigorous, and reproducible research. In this interesting article, Brian Nosek, a senior faculty member at the University of Virginia and co-founder and director of the Center for Open Science, reports what could be a small step in this direction. Nosek receives promotion review requests from other universities every year, and this year he noticed a marked shift in their focus. Committees expressed interest in evaluating candidates’ work and impact relevant to open science, and wished to weigh quality over quantity in the review. Instructions to reviewers emphasized quality, and each package contained a sensible combination of content: the candidate’s CV, three to five representative articles to read, and sometimes personal research and teaching statements. While counting heuristics, prestige signals from journal names, and total grant amounts could continue to dominate decision-making, for the most part the provided packages give external reviewers enough information to conduct an in-depth assessment without overwhelming them into falling back on counting heuristics as the sole criterion. Let's hope to see more progress in this direction in the coming years.

5. How research funding and open access are connected: In this thought-provoking blog post, open access advocate and independent journalist Richard Poynder examines the connection between research funding and open access (OA). Sponsorship is a major aspect of research and publishing, and scholarly publishers are one source of sponsorship for research and libraries. As Poynder puts it, "This generosity comes at a time when scholarly communication is in sore need of root-and-branch reform." He argues that publishers' interests (making profits, perhaps) are not aligned with those of researchers (publishing for personal and scientific advancement), and it is not uncommon for publishers to have a vested interest in the legacy system. Thus, it might be advisable for researchers and libraries to avoid seeking out publishers as sponsors. Sponsoring a library, for example, might give publishers the leverage, or "soft power," to control the flow and direction of scholarly communication routed through the library. On a large scale, publisher sponsorships could change the global scholarly publishing landscape altogether. Poynder suggests that sponsorship can function as a form of lobbying and might have helped legacy publishers co-opt open access, resulting in a pay-to-publish model in which publishers adapt OA to suit their needs. The post offers a useful perspective on the rapidly evolving OA publishing ecosystem.

6. Does open peer review attract lower quality reviews? Phil Davis, who writes for the popular blog Scholarly Kitchen, discussed the paper “A prospective study on an innovative online forum for peer reviewing of surgical science,” which found that open peer review attracted lower quality reviews. The authors of the study compared the quality of open online reviews with that of conventional reviews for manuscripts submitted to the British Journal of Surgery (BJS). A set of 110 manuscripts was posted online with email invitations to review, and the same manuscripts were sent to reviewers for conventional evaluation. The reviews were then scored by editorial assistants using validated quality instruments: online reviews averaged 2.35 compared with 3.52 for conventional reviews. Moreover, uptake of open peer review was extremely low; invitations were sent to 7000 reviewers, but the study received only 59 online reviews. The authors noted that “A personal email from an editor targeting a researcher with known expertise may have greatly improved participation over a mass impersonal email,” and that having no way to verify the online reviewers’ competence could also have played a role in the poor uptake. While the study has its limitations and cannot be generalized, it gives an idea of how open online peer review may suffer in uptake and quality compared with conventional peer review.

Did you enjoy reading the posts we recommended this month? Have you read any of them already? Do you have an opinion on these stories? Feel free to share your thoughts in the comments section below. You can also follow our Industry News segment, where we share regular updates on what the academic publishing industry is talking about.
