Tuesday, February 2, 2016

What It Means to Be Evidence-based [ExerciseGeeks.Com]


“Do you have a study to back up that claim?”

These are the oft-repeated words of know-it-alls the world over. As someone trained as a biomechanist, I get it. When it comes to scientific writing, you can't just throw statements out there without justification. When you make a claim, you have to back it up with evidence.

I mean, imagine a scientist claiming gait training cures Parkinson's because it supposedly worked for his great-uncle, twice removed, back in the '40s. That wouldn't be very credible.

In general, being evidence-based is a good thing, albeit painstaking. For scientific writing, the evidence must take the form of previously published peer-reviewed scientific papers (oftentimes the more you cite, the better), such as case studies, experimental studies, reviews, and meta-analyses.

This is where things get interesting, because not all evidence is created equal. For example, only weak conclusions can be drawn from a case study of one subject. The strongest research designs include large numbers of subjects, randomization of those subjects to experimental and control groups, and blinding (subjects don't know which group they're in). Systematic reviews and meta-analyses represent the strongest forms of evidence, since they amalgamate the results of many similar studies.

The problem is that science isn’t perfect. Try as they may, researchers are not without their biases, and statistics can be made to bend the truth. Moreover, there can’t be a systematic review that precisely pertains to every unique real-life situation and population.


In fact, the real world is rarely as cut-and-dried as a scientific laboratory. Striking the balance between internal validity and external validity is actually one of the greatest challenges in science. The more variables a study controls (for internal validity), the less generalizable it is to the real world (external validity). On the flip side, the more a study seeks to mimic real-world conditions, the less we can derive causal explanations from it due to possible confounding variables.

Thus, as fitness professionals, we often have to draw inferences from the existing scientific literature when there isn't a paper that answers our exact question (e.g., what would be better for fat loss for an elderly female amputee with bilateral rotator cuff tears: cardio or strength training?). We must also draw on other types of evidence, besides the ones listed above, in order to guide our practice.

Two types of evidence that often get pooh-poohed by the aforementioned know-it-alls are expert opinion and anecdotal evidence (i.e. experience). That’s right: if Albert Einstein says E = McLIFT, then it’s okay to rely on his expert opinion — at least tentatively, until new and better evidence is presented. Likewise, if you found that E = McLIFT for your clients Tom, Dick, and Harry, then that’s evidence, too. Sure, it’s weak evidence, but it’s evidence nonetheless, especially if you have nothing else to go on.



As a more concrete example, take the practice of static stretching. Most of the science indicates that static stretching reduces strength and power (see my review on the subject). Taken at face value, you could make the case not to static stretch before lifting weights. Or, having tried both stretching and not stretching and evaluated the consequences, you could conclude that the performance impairments are transient and negligible for your purposes, and that static stretching prior to lifting works well for you from a logistics standpoint.

In the end, neither science nor experience alone can provide a definitive answer to any question. Instead, they must complement each other.

Clearly, being evidence-based isn't as simple as just citing a study or two. To learn more about What It Means to Be Evidence-based, follow the link to ExerciseGeeks.com, my new collaborative project with Marc Lewis and Chris Leib:
