# Statistical reporting doesn’t have to be scary: Tips for the statistically challenged researcher


In our current world, filled with a constant stream of information and opinion, trust and integrity are becoming more elusive. As scientists, we are committed to the truth, and we must work to keep these values alive and relevant.

On a global scale, the misrepresentation of the results of a drug study could theoretically harm someone taking the drug. Although this would require an egregious and systemic error, the possibility exists [1].

On a more realistic scale, misinterpretation of research results wastes time, money, and human resources and further erodes public trust in science and scientists. It doesn't help the researcher's career either.

While the proper use of statistics can be confusing, it should not be scary. This article provides some simple suggestions to help you be more thoughtful and accurate in analyzing and reporting your study results, even if statistics isn’t one of your strengths.

# Reasons researchers may make mistakes

Statistics is a strange discipline that promotes apprehension in a lot of people. Those who have an aversion to math are scared by all the numbers, while at the same time, math proponents are put off by the inexact nature of statistics. Statistics involves a lot of numbers but does not offer definite right answers, such as one finds in algebra or calculus.

A fear of statistics is one of the major reasons researchers make errors in analyzing and reporting their study data and results. This aversion typically breeds a desire to avoid all things statistical and a reluctance to self-educate, which in turn leads to sloppy work and results that are neither accurate nor confidently reported. Of course, fear of the numbers is not the only reason researchers make statistical mistakes. Other causes include the following:

• Lack of formal training – Some researchers have never received sufficient formal training in statistics, although numerous online courses and tutorials now make it possible to close this gap through self-learning.
• Lack of support – If you're not strong in statistics, you need someone who is to check your work; when that support is unavailable, errors can slip through unnoticed.
• Lack of resources – Researchers with limited time or funding may find it difficult to devote enough attention to data collection and analysis, which can lead to errors. For example, a lack of time may mean verifications are skipped, and a lack of funding may mean too few personnel for the project [1].

# 8 avoidable statistical errors made by researchers

Statistical errors can occur in various steps of the research process, including during the data collection, analysis, and reporting. Here are some fundamental errors the statistically challenged researcher should be aware of to avoid inaccurate reporting.

• Not using clean data

This error is usually not obvious to readers unless they examine the actual data, which is one reason journals increasingly require data transparency. Datasets that contain outliers and missing values need careful treatment prior to analysis. If you're reporting the average height of college freshmen, and a quarter of the records are blank, or several of the students are on the basketball team, you likely won't get an accurate average. The solution is simply to get to know your data. Create some graphs: scatter plots, histograms, whatever helps you see what your data actually look like.
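As a minimal sketch of the freshman-height example (all numbers invented), the effect of blanks and outliers on an average can be seen directly; the 1.5 × IQR fence used here is just one common rule of thumb, not the only way to screen outliers:

```python
from statistics import mean, quantiles

# Hypothetical heights in cm for ten freshmen; None marks a blank record,
# and two basketball players sit far above the rest. All numbers are invented.
heights = [168, 172, None, 165, 171, None, 214, 218, 169, 170]

# Step 1: drop the missing values rather than letting them distort the count.
present = [h for h in heights if h is not None]

# Step 2: fence off outliers with the common 1.5 * IQR rule before averaging.
q1, _, q3 = quantiles(present, n=4, method="inclusive")
iqr = q3 - q1
cleaned = [h for h in present if q1 - 1.5 * iqr <= h <= q3 + 1.5 * iqr]

print(mean(present))  # pulled upward by the two tall players
print(mean(cleaned))  # a more representative average
```

The same screening is usually done visually first (a histogram or box plot makes the two tall values jump out) before any automated rule is applied.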

• Adopting a poor sampling method

This issue shows up frequently in studies involving survey data. The underlying mistake is assuming that the data collected represent the population of interest. However, with surveys, it is very difficult to get a true representative sample. For instance, to obtain the required number of surveys, researchers often opt for the most convenient method. That may mean standing outside a supermarket or calling home phone numbers. Using either of these common methods will skew the resulting data. When collecting your data, think carefully about your population of interest and what population you are actually sampling.
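The cost of a convenience sample can be made concrete with a toy simulation. Everything here is invented, including the assumption that daytime landline calls mostly reach an older group; the point is only that a biased selection mechanism shifts the estimate while a simple random sample does not:

```python
import random

random.seed(42)

# Toy population: 10,000 adults, 70% younger (mean age 35) and 30% older
# (mean age 65). The mixture and the sampling mechanism are both invented.
young = [random.gauss(35, 8) for _ in range(7000)]
older = [random.gauss(65, 8) for _ in range(3000)]
population = young + older

# A convenience sample that mostly reaches the older group, mimicking
# daytime landline calls, versus a simple random sample of the same size.
convenience = random.sample(older, 400) + random.sample(young, 100)
simple_random = random.sample(population, 500)

true_mean = sum(population) / len(population)
conv_mean = sum(convenience) / len(convenience)
rand_mean = sum(simple_random) / len(simple_random)

print(true_mean, conv_mean, rand_mean)
```

No amount of extra data fixes this: enlarging the convenience sample shrinks the noise but leaves the bias untouched.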

• Reporting spurious correlations

Be wary of reporting significant correlations between variables that are not actually related. For example, a strong correlation was found between the number of master's degrees conferred in the US and box office revenue [2]. Obviously, these variables have no meaningful relationship.
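A quick way to see how such correlations arise is to correlate two invented series that share nothing but an upward trend, in the spirit of the degrees-vs-revenue example. Pearson's r is computed by hand here to keep the sketch dependency-free:

```python
import random

random.seed(0)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two made-up annual series: both merely grow year over year,
# with no connection whatsoever to each other.
years = range(20)
degrees = [100 + 5 * t + random.gauss(0, 3) for t in years]
revenue = [200 + 8 * t + random.gauss(0, 5) for t in years]

print(pearson(degrees, revenue))  # high, driven entirely by the shared trend
```

Any two series that trend over time will correlate strongly, which is why time-series analysts detrend or difference data before interpreting a correlation.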

• Assuming that correlation implies causation

Evidence of a significant positive or negative correlation between variables is not evidence that a change in one causes a change in the other. A popular example of this is the strong positive correlation between shark attacks and ice cream consumption [3]. Does this correlation mean that shark attacks are caused by eating ice cream? Common sense will tell you that both are caused by a third factor, namely, summer.
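The "third factor" can be simulated directly. In this invented setup, temperature drives both ice cream sales and shark encounters, and neither affects the other; the raw correlation is sizeable, but the partial correlation, which controls for the confounder, collapses toward zero:

```python
import random

random.seed(1)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented daily data: temperature drives both variables; there is no
# direct link between ice cream and sharks in this simulation.
temp = [random.uniform(10, 35) for _ in range(365)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temp]
sharks = [0.1 * t + random.gauss(0, 0.5) for t in temp]

r_is = pearson(ice_cream, sharks)
r_it = pearson(ice_cream, temp)
r_st = pearson(sharks, temp)

# Partial correlation: the association left after controlling for temperature.
partial = (r_is - r_it * r_st) / ((1 - r_it**2) * (1 - r_st**2)) ** 0.5

print(r_is)     # sizeable raw correlation
print(partial)  # near zero once the confounder is accounted for
```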

• Using a wrong analysis method

You don't need to be an expert in statistics to avoid this mistake. Unless you have help, you probably won't be using any complex analysis methods on your data. However, you need to understand the methods you do use. For instance, can a simple linear regression be run directly on categorical variables? Can repeated t-tests safely find significant differences among more than two groups? It takes a little work, but some reading can spare you some embarrassing mistakes.
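The t-test question can be checked with a short simulation. When three groups are truly identical, a valid pairwise test still yields a uniform p-value, so running three such tests at α = 0.05 inflates the chance of at least one false positive well past 5%. This sketch simulates the uniform p-values directly rather than running real t-tests:

```python
import random

random.seed(7)

# When the null hypothesis is true, a valid test's p-value is uniform on
# [0, 1]. Three pairwise comparisons among three identical groups therefore
# give a family-wise error rate near 1 - 0.95**3 (about 14%), not 5%.
alpha, trials = 0.05, 100_000
hits = sum(
    any(random.random() < alpha for _ in range(3))  # any pairwise test "significant"?
    for _ in range(trials)
)
print(hits / trials)
```

This inflation is exactly why comparisons among several groups call for a single ANOVA (or a multiple-comparison correction) rather than a pile of t-tests.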

• Failing to check analysis assumptions

This is related to the first error: you must know your data. All analysis methods rest on assumptions about the data to which they are applied. For example, many methods assume that the underlying data are normally distributed; if your data follow a different distribution, the results may not be accurate. The solution is to make sure you fully understand the variables and data involved. Again, graphs can help tremendously here.
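One lightweight check, sketched here with simulated data, is to compute sample skewness before reaching for a normal-based method; a formal test such as Shapiro–Wilk would be the next step, but even this crude number flags an obviously non-normal variable:

```python
import random
from statistics import mean, pstdev

random.seed(3)

def skewness(data):
    # Sample skewness: roughly 0 for symmetric, normal-ish data.
    m, s = mean(data), pstdev(data)
    return mean([((x - m) / s) ** 3 for x in data])

normal_ish = [random.gauss(0, 1) for _ in range(2000)]
skewed = [random.expovariate(1.0) for _ in range(2000)]  # theoretical skew = 2

print(skewness(normal_ish))  # near zero: normality is plausible
print(skewness(skewed))      # far from zero: rethink normal-based methods
```

A histogram or Q-Q plot of the same two samples tells the same story visually, which is usually the first thing to draw.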

• Not correctly reporting results

A frequent error made by statistically challenged researchers is misinterpreting and overstating the results they get. They may write that they proved something when the results only indicate that it may be true in one small, specific case. Statistics is a precise and conservative language. If your test fails to reject the null hypothesis, that does not prove the null hypothesis true; it means only that your data do not provide sufficient evidence against it. Likewise, rejecting the null hypothesis supports, but does not prove, the alternative. This distinction trips up even statistically minded researchers.

• Cherry-picking data and results

It is never a good idea to conduct a study already knowing what results you need. This can lead to manipulating the data or shopping for a test that produces the desired result. Another ethically questionable practice is to develop the hypothesis only after the data have been collected and examined, sometimes called HARKing (hypothesizing after the results are known) [4].

Most of these errors can be avoided by applying common sense and paying attention. However, many more complex mistakes frequently appear in research studies. As always, the best solution is to consult a friend or colleague who has a lot of statistical knowledge and experience.

References

1. Brown AW, Kaiser KA, and Allison DB. Issues with data and analyses: Errors, underlying themes, and potential solutions. PNAS. 2018, 115, 2563–70. https://doi.org/10.1073/pnas.1708279115

2. Statology. 5 examples of spurious correlation in real life. https://www.statology.org/spurious-correlation-examples/ [Accessed 29 July 2022]

3. Statology. Correlation does not imply causation: 5 real-world examples. https://www.statology.org/correlation-does-not-imply-causation-examples/ [Accessed 29 July 2022]

4. Gray K. Statistical mistakes even scientists make. KDnuggets. https://www.kdnuggets.com/2017/10/statistical-mistakes-even-scientists-make.html [Accessed 29 July 2022]


Published on: Aug 04, 2022

Jennifer Ulz – Extensive experience in education with a strong STEM background; passionate about lifelong learning, for myself and others.