# Bayesian hierarchical models: An overview for biomedical researchers


In biomedical research, we often deal with data that has multiple levels of organization. Bayesian hierarchical models provide a way to model and understand these hierarchies effectively.

Imagine you’re studying the effects of a new drug on cholesterol levels in patients in different hospitals. Each hospital might have its own unique characteristics and treatment protocols. A Bayesian hierarchical model allows you to analyze the data while considering both the hospital-level differences and the individual patient data simultaneously. This blog post will give you an overview of Bayesian hierarchical models and outline some best practices you can follow while using them.

## Uncertainty Is Key

One of the fantastic aspects of Bayesian hierarchical models, and Bayesian statistics in general, is their ability to handle uncertainty gracefully. In many biomedical studies, we’re not always 100% certain about our measurements. With Bayesian methods, we can incorporate this uncertainty into our models. It’s like saying, “We think the drug might reduce blood pressure by 10 points on average, but we’re not completely sure; it could be a little more or a little less.” This level of transparency is invaluable in biomedical research.
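To make this concrete, here is a minimal sketch of how that kind of statement is usually quantified: a posterior mean plus a 95% credible interval computed from posterior samples. The samples here are simulated stand-ins for what an MCMC run would produce, and all numbers are illustrative assumptions, not results from any real trial.

```python
import numpy as np

# Hypothetical posterior samples for the drug's average blood-pressure
# reduction; in practice these would come from your fitted model.
rng = np.random.default_rng(42)
posterior_samples = rng.normal(loc=10.0, scale=1.5, size=5000)

mean_effect = posterior_samples.mean()
lower, upper = np.percentile(posterior_samples, [2.5, 97.5])

print(f"Estimated reduction: {mean_effect:.1f} points "
      f"(95% credible interval: {lower:.1f} to {upper:.1f})")
```

The interval is the precise version of "about 10 points, but it could be a little more or a little less."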

Okay, this all sounds great, but how do you get started with Bayesian hierarchical models?

## Data Collection

When collecting data for Bayesian hierarchical models, there are several key considerations to keep in mind to ensure the success and validity of your analysis:

1. Hierarchy Identification: Identify the hierarchical structure within your data. Determine the levels of data organization and the relationships between them. For example, in a clinical trial, you might have patients within hospitals, creating a two-level hierarchy.

1. Data Quality: Ensure the quality and reliability of your data. Biomedical data can be sensitive, so maintaining data integrity is critical. Implement rigorous data collection protocols and quality control measures.

1. Sample Size: Consider the number of data points at each level of the hierarchy. Having sufficient data at each level is essential for accurate parameter estimation. A too-small sample can lead to unreliable results.

## Model Specification: Building the Blueprint

Think of model specification as creating the blueprint for your statistical analysis. Here’s what it involves:

1. Defining Your Variables: Start by identifying the variables in your study. In our example of testing a new drug, these might include patient characteristics (like age, gender, and pre-treatment cholesterol levels), hospital characteristics, and the drug’s effectiveness (biomarker levels, patient performance on standardized measures, scores on questionnaires related to adverse effects, quality of life, etc.).
2. Structure Your Model: Now comes the interesting part. You need to define how these variables are related to each other. For instance, you might suspect that the drug’s effectiveness varies depending on the type of hospital where the trial was conducted. This suspicion is the basis for your hierarchical structure.
3. Choose Probability Distributions: In Bayesian modeling, you’ll need to decide what probability distributions best describe the relationships between your variables. For instance, you might use a normal distribution to model cholesterol levels and a gamma distribution to model quality of life scores.
4. Set Priors: Priors are your initial beliefs about the parameters of your model. This is where your prior knowledge or best guesses come into play. If you think the drug is likely to reduce cholesterol levels by 10 points on average, you’d set a prior reflecting this belief.
5. Specify Relationships: Define how variables at different levels interact. In our drug example, you’d specify how patient-level variables (like socioeconomic status) interact with hospital-level variables (like the hospital’s treatment protocol).
6. Hierarchical Structure: This is the core of Bayesian hierarchical models. You’ll specify how individual patient data relate to hospital-level data. This structure captures the idea that patients within the same hospital might be more similar to each other due to shared hospital practices.
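The steps above can be sketched as a small generative simulation of the two-level structure (hospitals containing patients). Every parameter value below is an illustrative assumption chosen only to show the shape of the hierarchy, not a real estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hospitals = 8
patients_per_hospital = 50

# Hospital level: each hospital's average cholesterol reduction is drawn
# around a shared population mean, so hospitals are similar but not identical.
population_mean = 10.0      # assumed average drug effect
between_hospital_sd = 3.0   # assumed variation between hospitals
hospital_effects = rng.normal(population_mean, between_hospital_sd, n_hospitals)

# Patient level: individual outcomes are noise around their hospital's effect,
# capturing the idea that patients in the same hospital resemble each other.
within_hospital_sd = 8.0
data = {
    h: rng.normal(hospital_effects[h], within_hospital_sd, patients_per_hospital)
    for h in range(n_hospitals)
}

for h, obs in data.items():
    print(f"Hospital {h}: observed mean reduction = {obs.mean():.1f}")
```

Fitting a Bayesian hierarchical model essentially runs this generative story in reverse: given the patient data, it infers the hospital effects and the population-level parameters simultaneously.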

## Parameter Estimation: Uncovering the Secrets

Now that you have your model blueprint, it’s time to estimate the parameters, the values that make your model work. This is where the Bayesian magic happens:

1. Posterior Distribution: In Bayesian modeling, we’re not just interested in point estimates of parameters. Instead, we want the full picture, which is the posterior distribution. Think of it as a probability distribution that tells you how likely different values of your parameters are given your data and your prior beliefs.
2. Markov Chain Monte Carlo (MCMC): The MCMC method explores the parameter space of your model. It’s like searching for the best-fitting parameters while considering all the uncertainties and interactions you’ve specified in your model.
3. Simulation: MCMC generates samples from the posterior distribution. These samples represent different possible parameter values. You can think of it as creating a range of scenarios that could explain your data.
4. Parameter Estimates: By analyzing the samples from your MCMC simulation, you can estimate the parameters of your model. This is where you get your drug’s average cholesterol-lowering effect and the uncertainty around it.
5. Uncertainty Quantification: Bayesian modeling gives you not just point estimates but also a measure of uncertainty. You’ll get a range of possible values for your parameters, which is incredibly valuable in biomedical research where certainty is often hard to come by.
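To illustrate steps 1 through 5, here is a toy Metropolis sampler (one of the simplest MCMC algorithms) for a single parameter: the drug's mean effect. The data are simulated and the prior is an illustrative assumption; real analyses would use dedicated software rather than a hand-rolled sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(10.0, 5.0, size=100)  # simulated patient outcomes

def log_posterior(mu):
    # Prior: we believe the effect is around 10 points, N(10, 5^2);
    # likelihood: outcomes are normal around mu with a known sd of 5.
    log_prior = -0.5 * ((mu - 10.0) / 5.0) ** 2
    log_lik = -0.5 * np.sum(((observed - mu) / 5.0) ** 2)
    return log_prior + log_lik

samples = []
mu = 0.0  # deliberately poor starting value
for _ in range(5000):
    proposal = mu + rng.normal(0.0, 0.5)  # random-walk proposal
    # Accept the proposal with the Metropolis acceptance probability.
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

posterior = np.array(samples[1000:])  # discard burn-in
print(f"Posterior mean: {posterior.mean():.2f}, "
      f"95% CI: {np.percentile(posterior, 2.5):.2f} "
      f"to {np.percentile(posterior, 97.5):.2f}")
```

The retained samples are draws from the posterior distribution: summarizing them gives both the point estimate and the uncertainty around it.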

## Making Inferences

Making inferences from Bayesian hierarchical models can be complex. Here are some tips to keep in mind to ensure that your inferences are meaningful and reliable:

1. Check Convergence: When using Markov Chain Monte Carlo (MCMC) methods for parameter estimation, ensure that your chains have converged. This means that the algorithm has explored the parameter space sufficiently, and the results are stable. Visualizations like trace plots and the Gelman-Rubin statistic can help assess convergence.

1. Monitor Chain Mixing: Your MCMC chains should mix well, meaning they explore the parameter space efficiently. Poor mixing can indicate problems with your model or data.

1. Choose Appropriate Summary Statistics: When summarizing the posterior distribution, consider whether the mean, median, or other statistics are most appropriate for your research question. Also, provide credible intervals (the Bayesian equivalent of confidence intervals) to quantify uncertainty.

1. Posterior Predictive Checks: Perform posterior predictive checks to assess whether your model’s predictions match observed data. This helps validate the model’s fit and identifies potential issues.

1. Prior Sensitivity Analysis: Assess the impact of your choice of prior distributions on the results. Conduct sensitivity analyses by varying priors within reasonable ranges to see how they affect your inferences.
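As a concrete example of the first two tips, here is a sketch of the Gelman-Rubin statistic (R-hat) computed on simulated chains. The chains here are synthetic; in practice you would pass in the chains produced by your MCMC software, which typically reports this diagnostic for you.

```python
import numpy as np

rng = np.random.default_rng(2)

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for chains of shape (n_chains, n_samples)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

# Well-mixed chains: all four sample the same distribution.
good = rng.normal(10.0, 1.0, size=(4, 1000))
# Poorly mixed chains: one chain is stuck in a different region.
bad = good.copy()
bad[0] += 5.0

print(f"R-hat (converged): {gelman_rubin(good):.3f}")      # close to 1
print(f"R-hat (not converged): {gelman_rubin(bad):.3f}")   # well above 1.1
```

A common rule of thumb is to treat R-hat values above about 1.1 as a sign that the chains have not converged and the results should not be trusted yet.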

## Conclusion

Bayesian hierarchical models are a powerful tool for handling large and complicated datasets: they allow you to embrace the complexity of your data, consider uncertainty, and make more informed decisions. By following the tips outlined in this article, you can make more robust and informative inferences from Bayesian hierarchical models in your study and draw meaningful conclusions from your data.

Looking for support as you explore the exciting world of Bayesian statistics? Partner with an experienced biostatistician through Editage’s Statistical Analysis & Review Services.

### Marisha Fonseca

An editor at heart and perfectionist by disposition, providing solutions for journals, publishers, and universities in areas like alt-text writing and publication consultancy.
