Tuesday, October 2, 2018

Is Some Injury Risk Factor Research Worthless?


I recently read an interesting 2016 paper by Clifton et al. in the Journal of Athletic Training called Predicting Injury: Challenges in Prospective Injury Risk Factor Identification.

The premise of the paper is that some researchers screw up when concluding they've identified risk factors for injury. In the paper, the authors describe two of the most common screwups and how to address them.

Before I get to the screwups, a quick explanation is in order regarding the correct way to identify risk factors for injury, which is through a prospective study. It can be summed up in three steps:
Step 1. Examine a group of uninjured athletes at baseline (i.e. in the pre-season). 
Step 2. Track their injuries prospectively (i.e. over a period of time, usually a competitive season). 
Step 3. At the end of the study, break the sample into two groups -- the athletes who got injured and the athletes who didn’t -- and look for differences between the groups in their baseline measures.
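To make those steps concrete, here's a minimal sketch (in Python) of what the end-of-season comparison might look like. Everything in it -- the simulated data, the variable names like baseline_strength, and the choice of a simple t-test -- is my own illustrative assumption, not something taken from Clifton et al.

# Minimal sketch of a prospective risk-factor analysis (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_athletes = 100

# Step 1: a baseline measure taken on uninjured athletes in the pre-season
baseline_strength = rng.normal(loc=100, scale=15, size=n_athletes)

# Step 2: injuries tracked prospectively over the season
# (simulated here so that weaker athletes are slightly more likely to get hurt)
p_injury = 1 / (1 + np.exp(0.05 * (baseline_strength - 90)))
injured = rng.random(n_athletes) < p_injury

# Step 3: split the sample into injured vs. uninjured and compare baselines
injured_group = baseline_strength[injured]
uninjured_group = baseline_strength[~injured]
t_stat, p_value = stats.ttest_ind(injured_group, uninjured_group)

print(f"Injured mean:   {injured_group.mean():.1f}")
print(f"Uninjured mean: {uninjured_group.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

The key feature is the timing: because the baseline measure was collected before any of the injuries occurred, a difference between the two groups can reasonably be read as a prospective risk factor -- exactly the temporal ordering the retrospective design below can't guarantee.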
And now for the common screwups, as per Clifton et al.:

Common Screwup #1) Retrospective Study Design

Prospective injury studies are difficult to do because they require careful and consistent follow-up regarding injury. A much easier approach is a “retrospective” one. With this type of study, you simply assess a group of athletes, split them into those with and without a history of injury, and compare the two groups on your measures of interest. With this design, there’s no need to follow the athletes over time.

The mistake that’s commonly made comes with the interpretation of the retrospective study. Researchers will often state that measures on which the injured athletes performed worse are risk factors for injury. The trouble is, there’s no way to know whether those factors were present prior to the injury or whether they are actually the result of the injury.

This isn’t to say retrospective research is worthless. It’s just that follow-up studies with prospective designs are needed to determine whether the differences seen in retrospect actually predict injury.

Common Screwup #2) Surrogate Risk Factors

The other common screwup is putting too much stock in surrogate risk factors, which are basically “risk factors for risk factors” for injury. (No, that's not a typo!) These studies look for measures that are associated with known risk factors for injury.

For example, there’s research out there showing that certain measures of shoulder rotational range of motion (i.e. internal and external rotation) are risk factors for injury in overhead athletes (e.g. baseball players). Because range of motion takes a few minutes to measure, a researcher might look for a quicker, easier test that tracks closely with it. For instance, the researcher might look at the association between rotational range of motion and whether an athlete can touch their hands together behind their back (the back scratch test).

[Image: the back scratch test]

Suppose the researcher finds a good correlation between these two measures. The mistake some researchers go on to make is reasoning that because being able to touch your hands behind your back is associated with rotational range of motion, it must also be a risk factor for injury.

The trouble is, unless the ability to touch your hands behind your back is perfectly correlated with rotational range of motion, there’s going to be some error involved. Depending on its magnitude, that error can greatly diminish the degree of association between the surrogate risk factor and actual injury incidence.
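If you want to see how that error dilutes things, here's a rough simulation sketch. The correlation values (0.7 between the surrogate and the true risk factor, 0.4 between the true risk factor and injury) are made up purely for illustration; under these simple assumptions the surrogate's association with injury comes out near the product of the two, about 0.28.

# Rough simulation of how a surrogate risk factor's association with
# injury gets diluted (all numbers and variable names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True risk factor (think: shoulder rotational ROM), standardized
true_risk_factor = rng.normal(size=n)

# Surrogate measure correlated ~0.7 with the true risk factor
r_surrogate = 0.7
surrogate = r_surrogate * true_risk_factor + np.sqrt(1 - r_surrogate**2) * rng.normal(size=n)

# Injury "score" driven by the true risk factor (~0.4), not by the surrogate
r_injury = 0.4
injury_score = r_injury * true_risk_factor + np.sqrt(1 - r_injury**2) * rng.normal(size=n)

print("true risk factor vs. injury:", round(np.corrcoef(true_risk_factor, injury_score)[0, 1], 2))
print("surrogate vs. injury:      ", round(np.corrcoef(surrogate, injury_score)[0, 1], 2))
# The surrogate-injury correlation lands near 0.7 * 0.4 = 0.28,
# noticeably weaker than the true risk factor's 0.4.

The weaker the link between the surrogate and the real risk factor, the more the surrogate's apparent relationship with injury shrinks -- which is why the follow-up study described next matters.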

Once again, surrogate risk factor research isn’t worthless. But if something is found to be a surrogate risk factor, a follow-up study is essential to determine whether the newly proposed risk factor is indeed associated with injury prospectively.



In a nutshell, the above screwups are good examples of why we have to be super careful when reading research. We have to be skeptical, look carefully at the methodology and the data, and consider whether the results of the study align with the authors' conclusions. Researchers aren’t perfect, and they sometimes (knowingly or not) overstate their findings.
