Timeless tips for authors from an experienced journal editor and academic trainer


What happens when you meet with a researcher who is passionate about sharing his knowledge and instructing young researchers and faculty about the best practices of academic publishing? You get a treasure trove of information and advice about all aspects of research – from planning a career to publishing your paper in a journal! This conversation with Dr. Caven Mcloughlin – Ph.D., Professor, Kent State University, Ohio USA – is full of advice for researchers at all stages in their academic career.

A qualified school psychologist, Caven works at Ohio’s largest school psychology preparation program and instructs students in early childhood school psychology. He has been a special education classroom teacher and administrator as well as a school counselor. For over 25 years, he has been conducting federally funded training programs for interdisciplinary leadership personnel who work with newborns, infants, and toddlers. Caven is also a Fulbright Specialist, and as part of the program, travels to different parts of the world to conduct informative and instructive workshops on academic publishing. In particular, he travels to the BRICKS nations and instructs authors and faculty there about best publishing practices. Caven is a prolific researcher who has published over 100 research papers and chapters and has written, edited, or contributed to 10 books. He is also the Editor of the journal School Psychology International.

Caven’s rich and varied experience, along with his passion for sharing tips with researchers, makes him a great go-to person for all sorts of advice related to academic publishing. His experience working with authors from different countries also enhances his understanding of the struggles researchers face in different parts of the world. In this interview, Caven shares his views on some of the most common challenges faced by researchers, especially those in developing countries, and how they can overcome these obstacles. He also shares some timeless advice on writing and publishing a research paper in an academic journal.

Could you tell us more about your work as a Fulbright Specialist?

Certainly! There has never been an academic year for me in the last three decades in which I haven’t made overseas presentations to faculty colleagues. It was natural for me to become associated with the Fulbright organization. I’ve been a Fulbright Specialist for just 3 years, and in that time I have visited two different universities in South Africa, each on two occasions, as well as a university in South India, to which I will be returning very soon for the third time. My role has always been to assist faculty in getting their work published in prestigious, international, high-impact, English-language journals by providing them with insider tips based largely on my 20+ years of experience as a journal editor and, of course, as an academician in my own fields.

Over those years I’ve come to the conclusion that while university faculty are expected to become prolifically published authors, they are largely untutored and unsupported in the basic steps required in both designing their research so that it will be eventually publishable and in articulating their findings in ways that make the Results and Implications valuable. I target my presentations to authors in under-resourced countries, in mainly BRICKSA locations — Brazil, Russia, India, China, South Africa, and Korea (though increasingly China is thought of as ineligible to be part of this group).

Your work requires you to travel and interact with researchers from across the globe, especially in third world and developing countries. Do you think there is a sort of East-West gap among researchers in terms of their awareness of best publication practices?

In the past two years, in addition to making presentations in South Africa and India, I’ve spoken to faculty groups in Turkey, South Korea, and China as well as in my home country of the USA, while also serving as a journal editor and as a program administrator at my home university. Thus, I have had several opportunities to understand the impact of different academic cultures, expectations, and styles of academic preparation and training on prospective authors in several continents.

I’m unconvinced that Western authors are inherently smarter. Rather, I believe that non-Western academics suffer from six different hurdles:

  1. There are very few prestigious journals in any discipline edited by non-Western personnel, so there are relatively few role models and tutors in the art of publication in developing countries.
     
  2. There is almost no specific instruction at the undergraduate and graduate levels in the steps for designing research investigations that have a high chance of attracting the interest of a journal editorial team.
     
  3. Authors in BRICKS countries don’t always know how to frame their Results and the consequent Conclusions so as to emphasize the socio-economic or person-enhancing implications of their investigations. Editors want more than an exercise in admiring the data. They want to see a rationale showing why it was important to collect those particular data in the first place.
     
  4. It’s sad to say, but the culture of cut-and-paste from others’ work (i.e., duplication-of-content or plagiarism) has earned all investigators from several non-Western countries a reputation such that their work is viewed as suspect by editors.
     
  5. The academic culture supporting promotion at most BRICKS universities incentivizes quantity over quality. The publication of insignificant, scarcely valuable, and practically irrelevant articles in great numbers is mindlessly valued over the development of high-quality, relevant, impactful research.
     
  6. It is possible to accumulate a good number of publications that are of high quality; but to do so requires thoughtful career planning – another element missing for many BRICKS faculty.

In summary, YES, there is a big gap between researchers in the resourced versus the under-resourced locations of this world. However, my experience working with indigenous faculty across the globe has taught me that there are lessons that can quickly and easily be learned from the sharing of insider tips, which is, incidentally, something I love to do!

You’ve also played an advisory role in university-level tenure-related decisions (as part of Kent State University’s Advisory Board for its promotion and tenure committee). A majority of our readers are early-career researchers who would like to consider tenure as a natural career progression. For their benefit, how are decisions on tenure made? Also, do you have any tips for researchers who might be interested in moving up via the tenure route?

Whenever I consult with non-Western faculty, I always pose a question about career planning with emphasis on how far into the future young academics are planning for their own professional development. Then, I routinely get a blank gaze, and eventually, a comment signaling “maybe a few months.” That’s not the case for most comparable Western academics, who generally have a discernible horizon many years out.

I generally urge non-Western academics to follow the pattern of my junior faculty colleagues and prepare an annual ‘Contextual Statement.’ This serves as a ‘career-plan’ statement that (a) predicts what the next year’s research products will include (objectively stated as “goals”); (b) offers predictions on the research trajectory that is being planned for at least the next three years (“what’s in the works?”); and (c) defines the intellectual space for those researchers, showing how their proposed research products align with the priorities valued by their discipline (“where does their research fit in the discipline?”).

I’ve observed that mentorship is another element that is more valued in the West. Most Western academics can identify primary and secondary mentors and guides, sometimes even in different dimensions of their work (e.g., discipline content, methodology, technical writing). But this has not been the case with most of the researchers I have interacted with during my visits abroad.

All Western academics understand that tenure, which brings the option of lifelong employment, is earned as a result of research, teaching, and service. Each of these dimensions needs to meet or exceed an ‘Adequate’ evaluation, and at least one (preferably research) needs to be ‘Exemplary.’ Research is generally the most misunderstood element in this trio. In Western universities that I visit, faculty are valued for being all-rounders with a particular research focus or expertise.

It startles me when I come across authors — and I must be candid and say that this next issue is a particular problem in the developing world — who knowingly invest in attempting to buy their way into the hallowed halls of academia by paying for publications in dubious, look-alike, fake journals. Everyone in the administration of universities everywhere I have traveled knows that this is a problem, but most don’t know how to handle it. This is quite serious: not only does it encourage predatory publishing but it also calls the credibility of a researcher’s work into question.

Having a Curriculum Vitae (CV) tainted by the inclusion of publications in predatory journals is what I call the “kiss of death” to building considerable international recognition as a researcher. It’s toxic to an academic reputation. It’s what colleagues will chortle about behind your back. It is also a certain path to being relegated to the lowest ranks in the university system. Practitioners of this sort of professional misconduct seem to forget that their CV items will continue to be reviewed long into the future, perhaps to determine eligibility for a full professorship. What will people think when they see that the prime publishing years in a researcher’s life have been contaminated by fake entries? There is a price to pay for publishing in predatory outlets. And it’s far more than the cost of the money-transfer to a counterfeit journal’s bank account.

Recently, replication and reproducibility have become a topic of discussion, especially in psychology. As a journal editor, what are your views on how serious the problem is, and do you have any suggestions/ideas to improve the situation?

Let’s be blunt. When you ask about replication and reproducibility, what you’re really asking about is plagiarism. Replication is, in fact, an honorable and dignified activity when what a scholar is attempting is to reproduce in different circumstances, with a different sample, a finding that has achieved eminence in a field of study. Basically, a replication study seeks to support (or alternatively to debunk) a seminal idea. That is something that journal editors want to see. However, what they do not want to see is plagiarism.

I am dumbfounded by the number of authors who affirm that they understand their work will be scrutinized for plagiarism or duplication-of-content at the initial stage of the evaluation process, yet don’t seem to believe it will actually happen.

Every prestigious, high-impact journal from commercial publishers uses a plagiarism-screening device. In my own case, before I can actually view a submitted article at the online portal, it has already been scrutinized by iThenticate, a plagiarism detection program. This extraordinarily complex piece of artificial intelligence software powered by Boolean analytics compares words and phrases with every published article, dissertation, and online academic entry since the start of the last century. What is most appealing to an editor is that this software can even sniff out the plagiarism of ideas (such as when an author paraphrases text from an existing publication through the careful insertion of synonyms to try to cloud the fact that it is copied). As an editor, I get line-by-line color-coded documentation signaling every location where particular phrases/ideas were previously published. There is no fooling this software! Those who engage in duplication are highly unlikely to overcome this initial hurdle.

The proliferation of plagiarism is the major concern for editor colleagues with whom I correspond. And yes, we do share with one another when we see patterns of submission from settings or individuals where plagiarism appears commonplace. Many EFL (English-as-a-foreign-language) or ESL (English-as-a-second-language) authors lean on other writers’ explanations – even using the original wording because they find it difficult to articulate their own ideas clearly. Unfortunately, such duplication alone can be the reason why an author’s work gets declined.

Here, let me share something that is probably not widely known. Most editors don’t want to get into extended correspondence with authors who have engaged in ethical misconduct ― authors who will try to justify, or offer to remedy, their falsehood. Frankly, it is difficult to write a letter declining an article stating that the author is a cheat! So the editor finds an unrelated issue on which to pin the blame. As a consequence, the author turns around and sends on the same tainted-text to another journal, and so the cycle of rejection is repeated multiple times.

Authors should studiously avoid duplicating others’ words, phrases, and ideas. I suggest you always test your own work with whatever plagiarism software you can locate, especially for co-authored reports, prior to journal submission. When I address this issue with groups, I bluntly remind authors that they may never have a second chance for making a good first-impression.
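For authors without access to a commercial screening tool, even a crude self-check can surface verbatim overlap before submission. The sketch below is only illustrative: it uses a five-word window and made-up sentences, and real detectors such as iThenticate are far more sophisticated. It simply lists every five-word phrase a draft shares word-for-word with a source text:

```python
def ngrams(text, n=5):
    # Lowercase and split on whitespace; real screening tools
    # normalize punctuation, stemming, and synonyms far more aggressively.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(draft, source, n=5):
    # Every n-word phrase appearing verbatim in both texts.
    return ngrams(draft, n) & ngrams(source, n)

draft = "the results clearly indicate that early intervention improves outcomes for young children"
source = "prior work found that early intervention improves outcomes for young children in most settings"
overlap = shared_phrases(draft, source)
# 'overlap' now contains the five-word phrases the two texts share verbatim.
```

Any non-empty result is a cue to rewrite the flagged passage in your own words (or quote and cite it properly) before submitting.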

To what extent are the fields of Psychology and Education affected/influenced by the impact factor?

As we are all aware, a journal’s Impact Factor is measured by the number of times, on average, that a journal’s articles are cited by others in a two-year window following the year of publication. The presumption is that the most valuable journals will include articles that are cited most often, immediately following publication. In the physical sciences where discovery research is more cumulative than in education and psychology, journals generally have higher Impact Factors than in the social sciences and humanities.
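The two-year calculation Caven describes is simple arithmetic: citations received in a given year to the articles a journal published in the previous two years, divided by the number of those articles. A minimal sketch with hypothetical figures (the numbers below are invented for illustration):

```python
def impact_factor(citations_this_year, items_prev_two_years):
    # Two-year Impact Factor: citations received this year to articles
    # from the previous two years, divided by the number of citable
    # items the journal published in those two years.
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 150 citations in 2016 to its 2014-2015 articles,
# of which there were 100 in total -> Impact Factor of 1.5.
jif = impact_factor(150, 100)
```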

So, many authors and their employers forget that an Impact Factor is a measure of the credibility of the journal, and not the credibility of individual papers it contains or the authors who prepared each paper. There is a separate index that appraises the credibility of scholars and it is called the h-index. This metric measures both the productivity and the citation impact of a scholar’s publications. The h-index is based on the author’s most cited papers and the total number of citations that he/she has received in other publications. It serves as a scholar-to-scholar comparator, rather than a journal-to-journal matchup.
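The h-index itself is easy to compute from a list of per-paper citation counts: it is the largest h such that h of the scholar’s papers have at least h citations each. A short sketch with made-up counts:

```python
def h_index(citations):
    # Sort citation counts in descending order; h is the largest rank at
    # which the paper holding that rank still has at least that many citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])  # 4 papers with >= 4 citations each, so h = 4
```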

At the point of promotion, what becomes more important than a journal’s Impact Factor is whether your senior colleagues recognize and value the journals in which your work was disseminated. Therefore, rather than worrying about the Impact Factor, I suggest to new faculty that in conversation with their senior colleagues (and others who will appraise their eventual promotion), they should raise the topic of which journals are primary for dissemination in their shared field. The appraisal committee will not care about a journal with an alleged high Impact Factor if they themselves have never heard of it, or would never consider publishing in that obscure title (which is perhaps just another counterfeit journal with a fictitious Impact Factor metric).

Another question specific to Psychology and Education: To what extent have these fields embraced the offshoots of the open science concepts – open access, open data, and data sharing?

It sometimes seems that every author wants his or her work to be uncritically accepted without delay, published without any costs involved, disseminated immediately, and made available without restriction to the whole world. But high quality anything has a cost.

Open Access (OA) is part of the answer. But keep in mind that OA publications are never truly ‘free.’ Someone has to pay for the behind-the-scenes publication costs, and those costs can be enormous. Getting work through the steps required by peer-review standards, technical and copy editing, legal review, setting the text, and mounting it on the internet and sometimes into paper journals that require delivery — all this involves costs. Someone, somewhere must pay for this service because publishing houses are commercial enterprises and not charities!

There is no easy way to strike a balance between authors’ desires to publish their work and succeed in their careers and the need to subsidize the considerable cost of this service on the publishing side.

You also have considerable experience training faculty members and publishing professionals. In your view, what is the most critical area of training for academic faculty?

Methodology, methodology, methodology. Shall I say it again? Methodology!

No study can ever have Results that are superior to the quality of the methodology and statistical analysis that guided the selection of the sample, the gathering of the data, and analysis of the patterns in the data. If the research design is rudimentary, unsystematic, simplistic, or naïve then that’s the ultimate ceiling for the quality and usefulness of the Results. To achieve considerable international recognition, it is crucial for researchers to become conversant with modern, cutting-edge research tools. My advice is to take a workshop in research methodology or advanced statistics rather than go to a conference to learn more about your content-area.

Top-level journals are no longer willing to accept articles that are based on simple descriptive data displays, correlational rather than causal analyses, or undergraduate-level statistical analysis. And it never works to bring in a statistician when the data have already been collected and ask: “Can you tell me what these data mean?” There are no statistical data-manipulations that can resurrect an inappropriately collected or ill-designed data-set. In that case, it’s never a solution to ask, “Well, what other analyses can be conducted?” The time to bring in a research design consultant/statistician is BEFORE the vital elements of sample selection and data collection have been initiated. Training research faculty in the art of designing the right methodology for research would help solve a lot of problems.

From your experience as a journal editor, what are the top submission mistakes authors make? How can they avoid them?

Let me approach this positively and attempt to answer a slightly different question: “What four elements should an aspiring researcher focus on when framing their reports for submission to a strong journal?”

  1. The quality of your writing and the organization of your manuscript will determine whether the ideas/content of your article will be taken seriously. Since no top-ranking journal sends every submission out for review, and instead relies on a filter conducted by the editorial staff to determine which submissions get reviewed, you MUST catch the eye of the editor. If you don’t pay attention to the textual elegance and accuracy of your writing as well as the organization of your manuscript, you will put yourself ‘out of the competition’ for getting an acceptance letter. Most authors overemphasize content and underemphasize polishing and organization.

    Focus on HOW your scientific ideas are packaged and don’t simply list the scientific details within those ideas. To qualify your manuscript as relevant for their journal, you must attend to what it is that Editors focus on and value. Only rarely can authors step back from their final version and evaluate for themselves whether they’re getting their ideas across elegantly and accurately. This is because by that stage they’re too close to see the holes in the text or identify instances of ambiguity and duplication. An independent editor who has mastery over technical English and experience in the publication-process can make the difference between acceptance and the dreaded letter declining the opportunity for review.
     
  2. Most authors spend a great deal of time making sure that the text of their article is in formal English. But, those same authors will then dash off a paltry submission letter in questionable English, entirely forgetting to include the elements and assurances an editor needs to see. BRICKS authors particularly seem to have a hard time preparing a confident, convincing, persuasive Letter-to-the-Editor sharing the good news about their submitted research article. It’s crucial to understand that your submission letter is your only sales pitch for getting the editor to send your work out for full peer review, rather than declining it with a cursory bench decision. Unless you promote the value of your work, you miss a vital chance to boost the probability of acceptance.
     
  3. What matters the most in scientific writing is clarity. Strong scientists avoid fanciful and ornamental language. Rather, the focus should be on explaining yourself clearly, and the best way to do so is by using short, simple sentences. Don’t bury your central thesis in a mass of detail that hides your main messages. Be explicit, direct, and straightforward. Don’t try to write and edit at the same time. These are separate tasks requiring different skills. Ask your most critical colleagues to check your work. Engage a native-speaker editor when you’re not writing in your mother tongue. The question is NOT whether you have mastery over conversational spoken English. It is: Can you write in technically sharp and unambiguous English? Seek help if that step isn’t your forte.
     
  4. Always follow the journal’s prescribed submission procedures diligently, completely, and without complaint! The journal will have specific guidance posted as Instructions-for-Authors, or some such title. Read them carefully. They were prepared to help authors increase the chances for getting their submissions accepted.

    Following the journal’s instructions is the price-of-entry into the publishing competition. Failing to follow the formatting/organizational/procedural guidelines is itself sufficient reason to earn a refusal letter from many editors. All top-tier journals get far more high-quality submissions than they ever can accept. So quite naturally, one heavily weighted filter influencing acceptance/rejection is the degree to which the manuscript deviates from the journal’s house-style. Remember: If all else fails, FOLLOW THE INSTRUCTIONS!

Thanks, Caven! There’s a lot of priceless advice in this interview. I hope our readers find this useful!

Caven: Jayashree, thanks for this chance to share some publication-related information. At this point in my career, I search for every opportunity to give back and share what I’ve learned about scholarly publication. 


Published on: Nov 07, 2016
