What makes for "good science"? Stacy Konkiel on open access, impact metrics, and more

This interview is part of a Series

Impact factor

Impact factor, an index based on the frequency with which a journal's articles are cited in scientific publications, is the most widely used citation metric for evaluating the influence of published research and the prestige of researchers. However, the reliance on impact factor to assess a researcher's worth has frequently been called into question. This series covers the buzz around the latest impact factor release and delves into deeper questions: Why is the journal impact factor alone not enough to evaluate research? What makes for good science? Can any other metric replace the impact factor?



Meet Stacy Konkiel, Outreach & Engagement Manager at Altmetric, a company that tracks online activity relating to scholarly literature. Previously, Stacy was the Director of Marketing and Research at Impactstory, a nonprofit organization that helps researchers use altmetrics through a web-based tool, and Indiana University Bloomington's first Science Data Management Librarian. Before that, Stacy worked at PLOS as a Marketing Associate. Stacy holds two Master's degrees, in Information Science and Library Science, from Indiana University. An active blogger and tweeter, Stacy is a well-known name in open access and altmetrics circles.

“Dynamic” is the word that comes to mind when we think about scholarly publishing today. Everything seems to be about transformation or reformation: old systems are being questioned, and new systems are being proposed with the promise of revolutionizing academic publishing. Given this situation, how progressive is academia, and how has it responded to movements such as open access and altmetrics? And where will these tides of change lead us? In this interview with Editage Insights, Stacy Konkiel answers these questions and shares her views on how things are shaping up in the academic research and publication landscape.

You started out as a librarian and are now an open access advocate. What caused this transition?

I think that I’ve always been an open access advocate, even before I had heard the phrase. Like a lot of librarians, I was called to work in the field because of a sense of social justice, a belief that everyone should have access to information. But it was my first job out of library school, working under Marilyn Billings on the University of Massachusetts Amherst institutional repository, when I began to understand the negative consequences of paywalled scholarship and started to really consider myself an “open access advocate.”

In 2006 (nearly 10 years ago), Mike Eisen, one of the co-founders of PLOS, remarked, “Scientists are eager to apply the awesome power of the Internet revolution to scientific communication, but have been stymied by the conservative nature of scientific publishing.” How far do you think the scientific community has progressed in this regard?

We’ve seen a huge shift in attitudes towards open access since Eisen made that statement—more researchers are choosing to make their articles open access, there’s been a marked rise in the number of researchers practicing open science by making their code, data, and other scholarly outputs openly accessible online, and even some funders are now requiring open access and data sharing as a condition for grantmaking.

That said, academia still has a long way to go (at least here in the States we do). At many institutions, tenure and promotion committees are still very conservative. Generally, they put much more weight on high citation counts or h-indices than on whether research has actually made a difference in the world. And these committees often fail to recognize the important, diverse contributions that researchers make: creating software that changes the way analysis is done in their discipline, for example, or allowing their data to be reused by others in their field and beyond.

I’m hopeful that in the near future, promotion & tenure committees will start to become more nuanced in how they review the contributions that researchers have made to science. Once more incentives for researchers to publish open access are in place (especially those related to career advancement and funding), I believe we’ll see the conservative nature of publishing fully change, as well, based on an increased demand for open access publishing services.

Today, almost everyone is talking about open science and open data. How would you define or describe open science and open data? And how open do you think the scientific and publishing communities are to open access as a concept today?

Open science and open data both exist on a spectrum, in my opinion. They are not only about making research available in an open access publication. For example, you can adopt certain “open” practices like licensing your data with an Open Data Commons license, while still choosing to archive it in a repository like ICPSR (Interuniversity Consortium for Political and Social Research) that can require a subscription for access. (Though I’d definitely encourage all Editage Insights readers to make their data freely available in a local repository or on Figshare, barring data sensitivity concerns!) Or you can choose to publish your work in a toll access journal while archiving your preprint in your institutional repository: this is merely a different “flavor” of open access than the one that journals like BioMed Central and PLOS have made popular.

I believe the scientific and publishing communities are both more open than they ever have been before to open access (though we definitely have a way to go, as there are still many conservative scientists and publishers out there!).

Measuring the impact of research has gained more importance and newer altmetrics have made an appearance. While the impact factor is still used as an indicator of journal prestige, more and more people are talking about doing away with it altogether or using it in conjunction with newer altmetrics. What are some of these altmetrics, how have they emerged, and what benefits do they offer various stakeholders of the scientific community?

It’s a sad (but surprisingly little-known) fact that a journal’s impact factor (or JIF) has little to do with the quality of the individual articles it publishes. Yet, the JIF is still used as a shorthand for understanding the quality of scholars’ work in many scenarios, including in hiring and promotion decisions. In some countries, scholars are even rewarded for publishing in high impact factor journals. Talk about perverse incentives!

I’ve exchanged views with researchers from all over the world, and when I ask them, “What makes for ‘good science’?” their answers are always the same: to be “good science,” research must (a) be high-quality work and (b) make a difference in the world. Citation-based metrics like the JIF can’t tell us much about either of those things, but some altmetrics can.

  • If you want to understand the impact of research on society, the holy grail tends to be whether your work has been referenced by policy makers or has received extensive media coverage. You can also look at how your papers are being talked about on social media platforms like Twitter and Facebook to understand how the public perceives your work, and whether it’s making a difference in their lives.
  • If you want to understand your research’s impact on your own discipline or on other disciplines, you can look at others’ peer reviews of your work and at what other scientists are writing about it on their blogs, and read the contexts in which you’ve been cited.

It’s important to note that altmetrics and citation counts themselves don’t tell us much about the impact of scholarship, aside from the volume of attention it’s received. What’s really useful to scientists is being able to share who’s saying what about their papers. (Is a Nobel Laureate praising your study on Publons? Are the only tweets about your papers coming from you and your publisher, or are NGOs and patient advocacy organizations also sharing your work online? And so on.)

How aware are researchers/authors about the emerging altmetrics? How do you go about making them more aware of alternative research evaluation metrics?

Researchers are becoming increasingly aware of the concept of altmetrics, if not the phrase. Most tend to “get it” when you explain how altmetrics can help you find out if your work’s being talked about online, and many want to share their own altmetrics with others.

To anyone looking to help other researchers “buy in” to altmetrics, I’d suggest having a lot of one-on-one discussions (with solid examples of useful altmetrics), as well as creating websites and tools like LibGuides, which are handy ways for researchers to find curated content about a subject.

Given the global focus on open data and alternative ways of measuring impact, what role do repositories play in enabling effective scientific communication and in making information available?

Repositories play one of the largest roles in making scientific communication faster and more efficient. Subject-specific repositories like ArXiv (a repository of preprints of journal articles in mathematics, physics, computational biology, and other fields) and SSRN (which enables the rapid dissemination of scholarly research in the social sciences and humanities) can be more relevant to entire disciplines than journals. They function as the go-to source for the most up-to-date research, often in an easier-to-access format than toll access publishers provide, and at a much lower cost. (ArXiv is free to end users and costs supporting institutions only around $6 per article uploaded, for example.) They can also be an important resource for scholars in developing countries, who often don’t have access to costly subscription journals.

There’s also an incredible amount of exposure researchers can gain by uploading their work to a repository. For example, Nature saw around 3 million readers a month in 2012, whereas ArXiv saw around twice that number (and now sees around 12 million readers a month)!

In the course of your work, have you noticed any regional differences in the acceptance of open access or the use of altmetrics? To elaborate, are these concepts more widely accepted by researchers from specific regions? If so, what steps can be taken to educate a wider audience about these trends?

I’d say that acceptance of altmetrics depends less on region and more on one’s discipline. For example, researchers who practice “web native” science like bioinformatics have by and large been more receptive to altmetrics than, say, traditional chemists.

That said, there are differences in the amount of coverage that scholars from different parts of the world tend to receive for their research, and that might play a part in whether those scholars find altmetrics useful. If you can’t find altmetrics for your work because altmetrics aggregators aren’t tracking platforms like Sina Weibo that are more regionally important than Twitter or Facebook, I imagine you’d be less likely to accept altmetrics. (That’s one reason why at Altmetric we do try to track such sources—we want our data to be as representative as possible!)

The (recent) past of scientific communication was all about traditional forms of peer review and the reigning impact factor. The present is about questioning existing systems and the emergence of newer publishing models, open access, data sharing, and alternative methods of research evaluation. What, according to you, does the future hold?

I’ve recently wagered some bets about the future of scholarship, which I’ve excerpted here:

  • Better understanding: We’ll start to see more nuanced conversations about what “impact” really means, as well as an increased acceptance of more varied flavors of impact.
  • Better dissemination: Publishers will continue to experiment with new ways to make research consumable online, building on important work like eLife’s Lens and PeerJ’s PaperNow.
  • Better bottom lines for OA publications: Publishers, societies, and libraries will also invent and test new open access financial models, moving academia away from the idea of “one size fits all” OA publishing.
  • Better recognition: The many varied scholarly contributions of individuals will finally be recognized by the powers that be, whether related to data curation, designing protocols, or scholarly service activities (which create discrete, important, but currently undervalued outputs like peer reviews and blog posts). Perhaps we’ll even be more nuanced in our recognition, seeing those activities as merely different from (not lesser than) traditionally valued scholarly activities.

I’d welcome your readers to weigh in on what they think the future might hold!

Thank you, Stacy!

So where do you think we are headed? Do you have any specific thoughts about altmetrics? Share your thoughts and comments below.

[This interview was conducted by Jayashree Rajagopalan.]

Published on: Sep 15, 2015
