Academic publishing and scholarly communications: Good reads, February 2018



Although February is the shortest month, there has been no dearth of buzz in the scholarly world. From trending discussions around the definition of “excellence” and the presence of “disability bias” in peer review to concerns about transparency in research communication and more, a lot of issues were written about. If you have been too busy to follow all the latest tidbits, here is a list of articles that cover the latest goings-on in the industry. Happy reading!

1. The convoluted path of scientific research: The research landscape is often viewed as disciplined, clear, and systematic. In truth, it is full of bumps, with “miscalculations; wrong assumptions or conclusions; and sociological intrigue,” writes Abraham Loeb in this post. He adds, “The path to finding the truth is often convoluted; competition is fierce and both experiments and theories could be wrong.” If this sounds interesting, the rest of Loeb’s article is sure to keep you hooked. The post discusses how universities can play a major role in conveying the reality of research to the lay public, which would help cultivate a culture of innovation and ensure healthy expectations of, and greater trust in, scientific research. In today's research landscape, where increased emphasis on research communication coexists with considerable suspicion about research outcomes, such outreach would go a long way toward increasing transparency. Universities could be at the forefront of this effort and encourage students to develop an array of skills that will help them contribute to science more meaningfully.

2. Redefining the concept of “excellence”: A recent editorial in Nature expresses the growing need for science to redefine the concept of “excellence.” Excellence seems to mean going beyond being merely “good,” and a researcher, an institution, or even a country that manages to achieve this ideal accrues a host of benefits such as grants and political support, amongst other things. This has raised some important yet tricky questions, such as “What does excellence mean?” and “How is it measured?” Attempting to answer these questions, a paper in Science and Public Policy came to an agreement on two points: (1) using excellence as a measure of research quality makes many people uncomfortable; and (2) despite their discomfort, people are unable to suggest a better alternative since “science and scientists must meet political demands of accountability and assessment.” While some researchers would prefer that the system did away with excellence metrics altogether, others suggest that the term be reformed, since it can be defined in many ways.

3. Why astronomers do not publish: Researchers are expected to publish their studies, particularly when huge investments are involved in their projects. While most researchers scramble to publish papers, the field of astronomy is experiencing a different problem. A new survey by the European Southern Observatory (ESO) indicates that after spending time on the world’s best telescopes, many astronomers are not publishing their observations. Sophisticated telescopes are expensive to build and maintain, so the lack of output from as many as 50% of the teams that were awarded telescope time has become a matter of concern for ESO. The institution surveyed astronomers who were awarded time on any of ESO’s telescopes between 2006 and 2013 but had published no studies by 2016. The top three reasons for not publishing were: (1) “Still working with data;” (2) “Data of insufficient quality;” and (3) “Inconclusive results.” Some of the problems underlying this trend seem to be an aversion to publishing negative results and projects that are not well thought out. It is a given that not all projects will produce desirable results. However, if researchers planned their projects better and published their results even when they are negative, it could contribute significantly to the field.

4. Is there a disability bias in peer review? This post narrates the case of an author, Lisa Iezzoni, who shared a strong commentary about Reviewer #2's comments on her rejected paper. She says the comments contain "explicitly disparaging language and erroneous derogatory assumptions.” Iezzoni was studying Massachusetts Medicaid recipients with either serious mental illness or significant physical disability, and her work included a questionnaire about their experience with Medicaid. Reviewer #2 questioned her choice of participants, stating that they "may have no competence to self-assess themselves quality of life or the medical service quality…since the respondents have signification physical disability and serious mental disability, how can they complete the questionnaire survey by themselves without a qualify investigators assistant?" Iezzoni feels these statements show the reviewer's ignorance of and inability to understand "the lived experiences of individuals with a psychiatric diagnosis or significant physical disability who reside in the community, as study participants did." Sharing his view of the case, the author of this blog post suggests that this could be a case of reviewer incompetence rather than disability bias.

5. Can duplicated images in papers be spotted accurately? An algorithm that sifts through thousands of papers to screen images may just be the answer to detecting duplicate images in scientific publishing. Daniel Acuna, a machine learning researcher at Syracuse University in New York, led the team that reported using this algorithm in a study. The current process of detecting duplicates is labor-intensive and time-consuming; it involves random spot checks of manuscripts or a manual screening of all images submitted along with manuscripts, so an automated alternative has been long overdue. To screen images easily with image-checking software, publishers will need to create a database of all published images across the literature. The accuracy of the algorithm is still being gauged, because testing the tool is difficult in the absence of a database of known duplicate and non-duplicate scientific images, but it’s commendable that existing techniques can be applied in this way to screen for duplication.
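
The article does not detail how the team’s algorithm works, but to give a flavor of automated duplicate screening, here is a minimal, hypothetical sketch in Python that compares two figures using a simple perceptual (average) hash. This is not the method reported in the study; the file names and the similarity threshold are illustrative assumptions.

# Minimal sketch of near-duplicate image screening with an average hash.
# Illustration only, not the algorithm from the study discussed above;
# file names and the threshold are assumptions.
from PIL import Image  # requires Pillow

def average_hash(path, size=8):
    """Downscale to a size x size grayscale image and hash each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(hash_a, hash_b):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

if __name__ == "__main__":
    # Hypothetical figure files from two submitted manuscripts.
    h1 = average_hash("figure_from_paper_A.png")
    h2 = average_hash("figure_from_paper_B.png")
    # Flag the pair for manual review if the hashes are very close (threshold is arbitrary).
    if hamming_distance(h1, h2) <= 5:
        print("Possible duplicate: flag for manual inspection")

A hash like this tolerates rescaling and compression but not crops, rotations, or partial reuse, which is why production tools rely on more robust image features and, as the article notes, would benefit from a database of published images to compare against.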

Have you come across something you’d like to share with other researchers or publishing professionals? We’d love to read it too! Please share your recommendations in the comments section below.

If you like these recommendations, you might also like our previously published Scholarly Communications Good Reads collections.

And if you’d like to stay tuned to important happenings in the journal publishing industry, visit our Industry News section.


Published on: Feb 28, 2018
